This page lists currently available resources for benchmark experiments in MobiBench.
Contact: William Van Woensel
In this section, we list all currently available resources for OWL reasoning benchmarks.
Matentzoglu et al. [1] extracted this corpus from general-purpose repositories including the Oxford Ontology repository and the Manchester OWL Corpus (MOWLCorp). It contains ontologies from clinical and biomedical fields (ProPreo, ACGT, SNOMED), linguistic and cognitive engineering (DOLCE), and the food & wine domain (Wine), as well as from BioPortal, a comprehensive repository of biomedical ontologies. To suit the constrained resources of mobile platforms, we extracted ontologies with 500 statements or less from this corpus, resulting in 189 benchmark ontologies (total size: ca. 9 MB). These ontologies can be found here, in their original form (0-188.nt) as well as with embedded materialized schema inferences (mat-schema/), materialized schema and instance inferences (mat-inst/), and materialized schema inferences with the inst-entailed ruleset applied (mat-schema_inst-ent/).
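Because N-TRIPLE is a line-based format (one statement per line), the 500-statement cutoff described above can be checked without a full RDF parser. The sketch below is illustrative only; the directory layout, file naming, and the `max_statements` parameter are our own assumptions, not part of MobiBench.

```python
from pathlib import Path

def count_statements(path):
    """Count statements in an N-TRIPLE file by counting
    non-empty, non-comment lines (one triple per line)."""
    with open(path) as f:
        return sum(1 for line in f
                   if line.strip() and not line.lstrip().startswith("#"))

def small_ontologies(corpus_dir, max_statements=500):
    """Select corpus files small enough for constrained mobile platforms."""
    return [p for p in sorted(Path(corpus_dir).glob("*.nt"))
            if count_statements(p) <= max_statements]
```

Counting lines rather than parsing keeps the filter fast over a large corpus; it is only valid because N-TRIPLE forbids multi-line statements.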
Using our web service (MobiBenchWebService), custom benchmark rule and axiom sets can easily be generated by applying any of the OWL2 RL ruleset selections. The full OWL2 RL ruleset (and the selection criteria) used by MobiBench can be found here. For ease of reference, we also supply the purpose- and reference-based rule subsets (which can be generated using the web service) here.
Conformance tests
A benchmark can easily check the conformance of generated inferences (see here for an example configuration; our automation support fills in these conformance files automatically if that option is selected). In particular, the system automatically compares the generated inferences with the conformance test file.
Conformance test files for the OWL2 RL Benchmark Corpus can be found here. These were taken from the inference output of a conformant rule engine, loaded with the full OWL2 RL ruleset.
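Assuming both the inference output and the conformance file are ground N-TRIPLE graphs (no blank nodes), such a conformance check reduces to a set comparison of triple lines. The sketch below is our own illustration of that idea, not MobiBench's actual comparison code; it normalizes whitespace so that equivalent serializations still match.

```python
def load_ntriples(path):
    """Load an N-TRIPLE file as a set of whitespace-normalized triple lines."""
    with open(path) as f:
        return {" ".join(line.split()) for line in f
                if line.strip() and not line.lstrip().startswith("#")}

def check_conformance(inferred_path, conformance_path):
    """Return (conformant, missing): the engine conforms if every
    expected triple occurs in the inference output."""
    inferred = load_ntriples(inferred_path)
    expected = load_ntriples(conformance_path)
    return expected <= inferred, expected - inferred
```

Note that this syntactic comparison only works because blank nodes were replaced by concrete resources; with blank nodes present, a graph-isomorphism check would be needed instead.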
We note that resources are currently only available to check the conformance of OWL2 RL reasoning output. However, other kinds of reasoning can be supported by adding suitable conformance files under res\owl\conf\. To facilitate comparing service-matching output, code from the MobiBenchUtils project can be used.
This test suite was published by Schneider et al. [2] (a main contributor to the W3C OWL2 RL specification). The original test suite can be found here (the original archive is no longer online). We converted this test suite into "input" data files and conformance-testing files that allow MobiBench to automatically check conformance. However, in our version, we had to leave out some test cases, either due to the limitations of our OWL2 RL ruleset (e.g., lack of datatype support) or due to difficulties testing conformance. We list these cases here. We further note that we had to replace blank nodes in the premise & conclusion graphs with concrete resources. This was done to facilitate automatic conformance testing, since any RDF system will keep distinct blank nodes for two different graphs. To facilitate replacing these blank nodes, we converted the premise & conclusion graphs to N-TRIPLE format.
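Replacing blank nodes by concrete resources (skolemization) can be done per N-TRIPLE line, since blank node labels take the form `_:label`. The sketch below illustrates the idea; the base IRI is a placeholder of our own choosing, and the naive regex assumes no `_:` sequence occurs inside a literal, which may not hold for arbitrary data.

```python
import re

# Simplified blank-node label pattern (full N-TRIPLE grammar allows more)
_BNODE = re.compile(r'_:([A-Za-z0-9]+)')

def skolemize_line(line, base="http://example.org/skolem/"):
    """Replace each blank node _:label with a concrete IRI under `base`.
    Naive: assumes '_:' never occurs inside a literal on this line."""
    return _BNODE.sub(lambda m: '<' + base + m.group(1) + '>', line)
```

Applying the same label-to-IRI mapping to both the premise and the conclusion graph makes their triples directly comparable as plain lines, which is exactly what automated conformance testing needs.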
Semantic service matching
This section lists all available resources for semantically-enhanced service matching benchmarks.
For benchmarking semantic service matching, we based ourselves on the OWL-S Service Retrieval Test Collection [3] (also available here). For the purpose of our benchmark, we extracted pre- and post-conditions / effects (originally in SWRL) as SPIN rules and RDF data (N-TRIPLE format), also including the types of input and output variables. Since not all descriptions contained these conditions, this resulted in a final set of 17 goals and 152 services.
Further, we generated an extended version of this dataset that includes all related ontology elements, allowing for self-contained, semantically-enhanced service matching. This was done by manually analyzing the conditions and the referenced ontologies, and including only the elements affecting OWL2 RL inferences. Since on average only ca. 5 ontology terms are referenced per condition, it would have been excessive to include the referenced ontologies in their entirety (avg. ca. 2100 statements, ranging from ca. 30 to ca. 40k statements).
The resulting dataset contains the conditions found in the user queries (queries/) and service descriptions (services/). Both are available as RDF data (target/ subfolder) and semantic rules (source/ subfolder), allowing rule-based matching in both directions. Further, each condition is available both without (precond|effect) and with ((precond|effect)_schema) related ontology elements.
In this file, we list the extra service matches generated by semantically enhancing rule-based service matching.