This page details a number of entailments that enable the identification of equivalent OWL2 RL rule subsets. In particular, these entailments indicate that, aside from two selections, each rule subset selection retains full conformance with OWL2 RL (i.e., they do not result in loss of expressivity). For "Equivalence with instance-based rules", we show that, although certain schema inferences will be missing, these do not result in missing instance inferences. Further, for "Removal of inefficient rules", we show that only a particular consistency-checking case will no longer be supported.
MobiBench allows automatically selecting such rule subsets.
Contact: William Van Woensel
This section lists logical entailments between OWL2 RL rules.
These rules are entailed by the #cax-sco rule, each time combined with a second inference rule.
$$T(?c_{1}, subClassOf, ?c_{2}),T(?x, type, ?c_{1}) \rightarrow T(?x,type,?c_{2})$$Applying #scm-uni on two premises from #cls-uni returns inference (1). Then, #cax-sco is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #cls-uni consequent.
$$T(?c, unionOf, ?x), T(?x, hasMember, ?cl) \rightarrow T(?cl, subClassOf, ?c) \hspace{10px} (1)\\ T(?cl, subClassOf, ?c), T(?y, type, ?cl) \rightarrow T(?y,type,?c) \hspace{10px} (2)$$Applying #scm-int on two premises from #cls-int2 returns inference (1). Then, #cax-sco is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #cls-int2 consequent.
$$T(?c, intersectionOf, ?x), T(?x, hasMember, ?cl) \rightarrow T(?c, subClassOf, ?cl) \hspace{10px} (1)\\ T(?c, subClassOf, ?cl), T(?y, type, ?c) \rightarrow T(?y, type, ?cl) \hspace{10px} (2)$$Applying #scm-eqc1 on the first premise from #cax-eqc1 returns inference (1). Then, #cax-sco is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #cax-eqc1 consequent.
$$T(?c_{1}, equivalentClass, ?c_{2}) \rightarrow T(?c_{1}, subClassOf, ?c_{2}) \hspace{10px} (1)\\ T(?c_{1}, subClassOf, ?c_{2}), T(?x, type, ?c_{1}) \rightarrow T(?x, type, ?c_{2}) \hspace{10px} (2)$$Applying #scm-eqc1 on one premise from #cax-eqc2 returns inference (1). Then, #cax-sco is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #cax-eqc2 consequent.
$$T(?c_{1}, equivalentClass, ?c_{2}) \rightarrow T(?c_{2}, subClassOf, ?c_{1}) \hspace{10px} (1)\\ T(?c_{2}, subClassOf, ?c_{1}), T(?x, type, ?c_{2}) \rightarrow T(?x, type, ?c_{1}) \hspace{10px} (2)$$These rules are entailed by the #prp-spo1 rule, each time combined with rules indicating equivalence between owl:equivalent[Class|Property] and rdfs:sub[Class|Property]Of.
$$T(?p_{1}, subPropertyOf, ?p_{2}), T(?x, ?p_{1}, ?y) \rightarrow T(?x, ?p_{2}, ?y)$$Applying #scm-eqp1 on the first premise of #prp-eqp1 returns inference (1). Then, #prp-spo1 is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #prp-eqp1 consequent.
$$T(?p_{1}, equivalentProperty, ?p_{2}) \rightarrow T(?p_{1}, subPropertyOf, ?p_{2}) \hspace{10px} (1)\\ T(?p_{1}, subPropertyOf, ?p_{2}), T(?x, ?p_{1}, ?y) \rightarrow T(?x, ?p_{2}, ?y) \hspace{10px} (2)$$Applying #scm-eqp1 on the first premise of #prp-eqp2 returns inference (1). Then, #prp-spo1 is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #prp-eqp2 consequent.
$$T(?p_{1}, equivalentProperty, ?p_{2}) \rightarrow T(?p_{2}, subPropertyOf, ?p_{1}) \hspace{10px} (1)\\ T(?p_{2}, subPropertyOf, ?p_{1}), T(?x, ?p_{2}, ?y) \rightarrow T(?x, ?p_{1}, ?y) \hspace{10px} (2)$$When executing the #eq-rep-o rule on relevant data, the ?p variable will be instantiated with owl:sameAs, thus covering each possible inference of #eq-trans.
$$T(?y, sameAs, ?z), T(?x, sameAs, ?y) \rightarrow T(?x, sameAs, ?z)$$This section lists possibilities for introducing extra axioms that allow us to leave out certain rules.
These rules are entailed by #prp-symp, each time combined with an extra axiom and, potentially, a second rule.
$$T(?p, type, SymmetricProperty), T(?x, ?p, ?y) \rightarrow T(?y, ?p, ?x)$$Typing the sameAs property as a SymmetricProperty and applying #prp-symp on the #eq-sym premise yields inference (1), which equals the #eq-sym consequent.
$$T(sameAs, type, SymmetricProperty), T(?x, sameAs, ?y) \rightarrow T(?y, sameAs, ?x) \hspace{10px} (1)$$Typing the inverseOf property as a SymmetricProperty and applying #prp-symp on the first premise of #prp-inv2 returns inference (1). Then, #prp-inv1 is applied on the remaining premise, together with (1). This yields the inference in (2), which equals the #prp-inv2 consequent.
$$T(inverseOf, type, SymmetricProperty), T(?p_{1}, inverseOf, ?p_{2}) \rightarrow T(?p_{2}, inverseOf, ?p_{1}) \hspace{10px} (1) \\ T(?p_{2}, inverseOf, ?p_{1}), T(?x, ?p_{2}, ?y) \rightarrow T(?y, ?p_{1}, ?x) \hspace{10px} (2)$$These rules are entailed by #prp-trp, each time combined with an extra axiom.
$$T(?p, type, TransitiveProperty), T(?x, ?p, ?y), T(?y, ?p, ?z) \rightarrow T(?x, ?p, ?z)$$Typing the subClassOf property as a TransitiveProperty and applying #prp-trp on the two premises of #scm-sco returns inference (1), which equals the #scm-sco consequent.
$$T(subClassOf, type, TransitiveProperty), T(?c_{1}, subClassOf, ?c_{2}), T(?c_{2}, subClassOf, ?c_{3}) \rightarrow T(?c_{1}, subClassOf, ?c_{3}) \hspace{10px} (1)$$Typing the subPropertyOf property as a TransitiveProperty and applying #prp-trp on the two premises of #scm-spo returns inference (1), which equals the #scm-spo consequent.
$$T(subPropertyOf, type, TransitiveProperty), T(?p_{1}, subPropertyOf, ?p_{2}), T(?p_{2}, subPropertyOf, ?p_{3}) \\ \rightarrow T(?p_{1}, subPropertyOf, ?p_{3}) \hspace{10px} (1)$$This section lists possibilities for introducing new generalized rules and supporting axioms, allowing us to leave out certain specialized rules.
Rules #eq-rep-p and #prp-spo1 are structurally very similar:
$$T(?p, sameAs, ?p_{2}), T(?s, ?p, ?o) \rightarrow T(?s, ?p_{2}, ?o)$$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?p_{1}, ?p, ?p_{2}), T(?p, type, SubLink), T(?s, ?p_{1}, ?o) \rightarrow T(?s, ?p_{2}, ?o) \\ T(sameAs, type, SubLink) \\ T(subPropertyOf, type, SubLink)$$Instantiating ?p with subPropertyOf (satisfying the second premise through the corresponding axiom) reduces the generalized rule to #prp-spo1; analogously, instantiating ?p with sameAs reduces it to #eq-rep-p. The same check applies to each of the generalizations below. Rules #scm-hv and #scm-svf2 are structurally very similar:
$$T(?c_{1}, hasValue, ?i), T(?c_{1}, onProperty, ?p_{1}), \\ T(?c_{2}, hasValue, ?i), T(?c_{2}, onProperty, ?p_{2}), \\ T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?c_{1}, subClassOf, ?c_{2})$$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?p, type, ValueRestriction), \\ T(?c_{1}, ?p, ?y), T(?c_{1}, onProperty, ?p_{1}), \\ T(?c_{2}, ?p, ?y), T(?c_{2}, onProperty, ?p_{2}), \\ T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?c_{1}, subClassOf, ?c_{2})\\ T(hasValue, type, ValueRestriction)\\ T(someValuesFrom, type, ValueRestriction)$$Rules #scm-avf1 and #scm-svf1 are structurally very similar:
$$T(?c_{1}, allValuesFrom, ?y_{1}), T(?c_{1}, onProperty, ?p), \\ T(?c_{2}, allValuesFrom, ?y_{2}), T(?c_{2}, onProperty, ?p), \\ T(?y_{1}, subClassOf, ?y_{2}) \rightarrow T(?c_{1}, subClassOf, ?c_{2})$$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?p_{1}, type, MultiValueRestriction), \\ T(?c_{1}, ?p_{1}, ?y_{1}), T(?c_{1}, onProperty, ?p), \\ T(?c_{2}, ?p_{1}, ?y_{2}), T(?c_{2}, onProperty, ?p), \\ T(?y_{1}, subClassOf, ?y_{2}) \rightarrow T(?c_{1}, subClassOf, ?c_{2})\\ T(someValuesFrom, type, MultiValueRestriction)\\ T(allValuesFrom, type, MultiValueRestriction)$$Rules #eq-diff2 and #eq-diff3 are structurally very similar:
$$T(?x, type, AllDifferent), T(?x, members, ?list), \\ T(?list, hasMember, ?y_{1}), T(?list, hasMember, ?y_{2}), \\ ?y_{1} != ?y_{2}, T(?y_{1}, sameAs, ?y_{2}) \rightarrow false$$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?x, type, AllDifferent), \\ T(?p, type, GenericMembersProperty), T(?x, ?p, ?list), \\ T(?list, hasMember, ?y_{1}), T(?list, hasMember, ?y_{2}), \\ ?y_{1} != ?y_{2}, T(?y_{1}, sameAs, ?y_{2}) \rightarrow false \\ T(members, type, GenericMembersProperty) \\ T(distinctMembers, type, GenericMembersProperty)$$Rules #prp-npa1 and #prp-npa2 are structurally very similar:
$$T(?x, sourceIndividual, ?i_{1}), T(?x, assertionProperty, ?p), T(?x, targetIndividual, ?i_{2}), \\ T(?i_{1}, ?p, ?i_{2}) \rightarrow false$$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?x, sourceIndividual, ?i_{1}), T(?x, assertionProperty, ?p), \\ T(?p_{1}, type, NpaTargetProperty), T(?x, ?p_{1}, ?i_{2}), T(?i_{1}, ?p, ?i_{2}) \rightarrow false \\ T(targetIndividual, type, NpaTargetProperty) \\ T(targetValue, type, NpaTargetProperty)$$Rules #cls-com and #cax-dw are structurally very similar:
$$T(?c_{1}, complementOf, ?c_{2}), T(?x, type, ?c_{1}), \\ T(?x, type, ?c_{2}) \rightarrow false $$Both may be generalized into a single new rule, with accompanying axioms:
$$T(?p, type, GenericDisjointClassProperty), T(?c_{1}, ?p, ?c_{2}), \\ T(?x, type, ?c_{1}), T(?x, type, ?c_{2}) \rightarrow false \\ T(complementOf, type, GenericDisjointClassProperty) \\ T(disjointWith, type, GenericDisjointClassProperty)$$This section lists entailments showing that certain inferred schema elements will not lead to new instance inferences. As such, they can be considered redundant at the instance level. However, we note that these schema elements could still be valuable additions to the ontology itself, regardless of whether they yield new instance inferences.
Rule #scm-dom1 generates a new schema element:
$$T(?p, domain, ?c_{1}), T(?c_{1}, subClassOf, ?c_{2}) \rightarrow T(?p, domain, ?c_{2})$$Based on the new element T(?p, domain, ?c2), rule #prp-dom can then generate new instance inferences:
$$T(?s, ?p, ?o), T(?p, domain, ?c_{2}) \rightarrow T(?s, type, ?c_{2})$$Below, we show that these instance inferences are already covered by rules #prp-dom (1) and #cax-sco (2):
$$T(?s, ?p, ?o), T(?p, domain, ?c_{1}) \rightarrow T(?s, type, ?c_{1}) \hspace{10px} (1)\\
T(?s, type, ?c_{1}), T(?c_{1}, subClassOf, ?c_{2}) \rightarrow T(?s, type, ?c_{2}) \hspace{10px} (2)$$
Rule #scm-rng1 generates a new schema element:
$$T(?p, range, ?c_{1}), T(?c_{1}, subClassOf, ?c_{2}) \rightarrow T(?p, range, ?c_{2})$$Based on the new element T(?p, range, ?c2), rule #prp-rng can then generate new instance inferences:
$$T(?s, ?p, ?o), T(?p, range, ?c_{2}) \rightarrow T(?o, type, ?c_{2})$$Below, we show that these instance inferences are already covered by rules #prp-rng (1) and #cax-sco (2):
$$T(?s, ?p, ?o), T(?p, range, ?c_{1}) \rightarrow T(?o, type, ?c_{1}) \hspace{10px} (1)\\
T(?o, type, ?c_{1}), T(?c_{1}, subClassOf, ?c_{2}) \rightarrow T(?o, type, ?c_{2}) \hspace{10px} (2)$$
Rule #scm-dom2 generates a new schema element:
$$T(?p_{2}, domain, ?c), T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?p_{1}, domain, ?c)$$Based on the new element T(?p1, domain, ?c), rule #prp-dom can then generate new instance inferences:
$$T(?s, ?p_{1}, ?o), T(?p_{1}, domain, ?c) \rightarrow T(?s, type, ?c) $$Below, we show that these instance inferences are already covered by rules #prp-spo1 (1) and #prp-dom (2):
$$T(?s, ?p_{1}, ?o), T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?s, ?p_{2}, ?o) \hspace{10px} (1) \\
T(?s, ?p_{2}, ?o), T(?p_{2}, domain, ?c) \rightarrow T(?s, type, ?c) \hspace{10px} (2)$$
Rule #scm-rng2 generates a new schema element:
$$T(?p_{2}, range, ?c), T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?p_{1}, range, ?c)$$Based on the new element T(?p1, range, ?c), rule #prp-rng can then generate new instance inferences:
$$T(?s, ?p_{1}, ?o), T(?p_{1}, range, ?c) \rightarrow T(?o, type, ?c) $$Below, we show that these instance inferences are already covered by rules #prp-spo1 (1) and #prp-rng (2):
$$T(?s, ?p_{1}, ?o), T(?p_{1}, subPropertyOf, ?p_{2}) \rightarrow T(?s, ?p_{2}, ?o) \hspace{10px} (1) \\
T(?s, ?p_{2}, ?o), T(?p_{2}, range, ?c) \rightarrow T(?o, type, ?c) \hspace{10px} (2)$$
Purpose- and reference-based subsets
Since ABox (instance) statements are the only statements likely to be added at runtime, an option is to perform TBox (schema) reasoning only once at startup, or whenever the ontology changes, and ABox reasoning each time new instances are added. Further, we note that data automatically generated by the system has a lower likelihood of being inconsistent, thus removing the need for continuous consistency checking.
Consequently, we divided our OWL2 RL ruleset into 2 major subsets: 1) an inference ruleset, comprising inference rules (53 rules), and 2) a consistency-checking ruleset, containing consistency-checking rules (18 rules). The inference ruleset is further subdivided into 1.1) an instance ruleset, consisting of rules referring to both instance and schema elements (32 rules); and 1.2) a schema ruleset, comprising rules referencing only schema elements (23 rules). Since the consistency-checking ruleset only contains rules referring to both instance and schema elements, it cannot be further subdivided.
To ensure OWL2 RL-conformant inferences, the inference-instance rule subset (1.1) needs to be applied to an ontology in which schema-based inferences have already been materialized. For this purpose, the inference-schema subset (1.2) is applied to the initial ontology; for instance, this may be done at design time or install time, and on ontology updates. (For proof that this sequential application of rule subsets is equivalent to applying the entire ruleset, we refer to [1].) The utility of separating these subsets clearly depends on the frequency of ontology updates, since each update requires re-materializing the ontology (depending on the concrete scenario, ontology updates may be infrequent, at least compared to instance data updates). Similarly, the consistency-checking ruleset needs to be applied to data in which all inferences have been materialized by applying the inference rule subset.
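To make this staged application concrete, below is a minimal Python sketch of the idea. Everything in it (the triple and rule encodings, the two example rules, the helper apply_until_fixpoint, and the sample data) is purely illustrative and is not MobiBench's actual interface: schema rules are materialized once up front, and only the instance ruleset is applied when new instance data arrives.

```python
# Illustrative only: triples are plain tuples with abbreviated terms, and
# rules are functions from a set of triples to the set of inferred triples.

def scm_sco(triples):
    """#scm-sco: T(?c1, subClassOf, ?c2), T(?c2, subClassOf, ?c3) -> T(?c1, subClassOf, ?c3)"""
    return {(c1, "subClassOf", c3)
            for (c1, p1, c2) in triples if p1 == "subClassOf"
            for (c2b, p2, c3) in triples if p2 == "subClassOf" and c2b == c2}

def cax_sco(triples):
    """#cax-sco: T(?c1, subClassOf, ?c2), T(?x, type, ?c1) -> T(?x, type, ?c2)"""
    return {(x, "type", c2)
            for (c1, p1, c2) in triples if p1 == "subClassOf"
            for (x, p2, c) in triples if p2 == "type" and c == c1}

def apply_until_fixpoint(rules, triples):
    """Naive forward chaining: apply every rule until no new triples are inferred."""
    triples = set(triples)
    while True:
        inferred = set().union(*(rule(triples) for rule in rules)) - triples
        if not inferred:
            return triples
        triples |= inferred

schema_ruleset = [scm_sco]    # 1.2) rules referencing only schema elements
instance_ruleset = [cax_sco]  # 1.1) rules referencing instance and schema elements

# Design/install time (and on ontology updates): materialize schema inferences once.
ontology = {("Student", "subClassOf", "Person"), ("Person", "subClassOf", "Agent")}
ontology = apply_until_fixpoint(schema_ruleset, ontology)

# Runtime: apply only the instance ruleset whenever new instance data is added.
data = ontology | {("alice", "type", "Student")}
print(apply_until_fixpoint(instance_ruleset, data))
```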
Further, certain rules that are not referenced by the schema or instance dataset may be left out as well. For this purpose, we implemented a domain-based ruleset selection algorithm, which is available from our web service (see Tai et al. [2] for details on this algorithm). Similar to separating the instance and schema rulesets, its applicability depends on the frequency (and also the significance) of ontology updates, since each update requires re-calculating the ruleset.
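The sketch below illustrates the general idea of such reference-based selection under strongly simplifying assumptions; the RULE_VOCABULARY mapping, the select_rules helper, and the selection criterion are hypothetical simplifications rather than the actual algorithm, which is described in Tai et al. [2].

```python
# Hypothetical mapping from rule name to the OWL/RDFS vocabulary its premises require.
RULE_VOCABULARY = {
    "cax-sco": {"subClassOf"},
    "prp-trp": {"TransitiveProperty"},
    "cls-uni": {"unionOf"},
}

def dataset_vocabulary(triples):
    """Collect all predicates, plus all objects of rdf:type statements."""
    terms = set()
    for s, p, o in triples:
        terms.add(p)
        if p == "type":
            terms.add(o)
    return terms

def select_rules(triples):
    """Keep only the rules whose required vocabulary actually occurs in the dataset."""
    vocab = dataset_vocabulary(triples)
    return [name for name, required in RULE_VOCABULARY.items() if required <= vocab]

dataset = {("Student", "subClassOf", "Person"), ("alice", "type", "Student")}
print(select_rules(dataset))  # ['cax-sco'] -- #prp-trp and #cls-uni are never triggered
```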
Our web service allows creating custom OWL2 RL rule subsets by automatically applying (individually or together) any of the presented ruleset selections.
The #eq-ref rule has the potential to greatly bloat the dataset:
$$T(?s,?p,?o) \rightarrow T(?s,sameAs,?s), T(?p,sameAs,?p),T(?o,sameAs,?o)$$
For each unique resource, this rule creates a new statement indicating the resource's equivalence to itself. Consequently, 3 new triples are generated for each triple consisting of unique resources, resulting in a worst-case 4x increase in dataset size (!). One could argue that there is limited practical use in materializing these self-equivalency statements, especially since leaving them out does not influence other inferences, although it can affect consistency checking in one particular case.
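As a small worked example of this worst case (the numbers are purely illustrative): a dataset of 1,000 triples in which every subject, predicate, and object is a distinct resource mentions 3,000 resources; #eq-ref then adds one self-equivalency statement per resource, growing the dataset from 1,000 to 4,000 triples.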
To see why leaving out these self-equivalency statements does not influence other inferences, we need to look at rules that reference owl:sameAs statements.
Instantiating the #eq-sym rule with a self-equivalency statement T(X, sameAs, X) yields the following inference:
$$T(X, sameAs, X) \rightarrow T(X, sameAs, X)$$The same statement is again inferred, meaning no additional inferences are made.
All possible instantiations of the #eq-trans rule with a self-equivalency statement T(X, sameAs, X) result in the following inferences:
$$T(X, sameAs, X), T(?x, sameAs, X) \rightarrow T(?x, sameAs, X) \\ T(X, sameAs, ?z), T(X, sameAs, X) \rightarrow T(X, sameAs, ?z)$$In all cases, one of the instantiated premises is simply inferred again, meaning no additional inferences are made.
Instantiating the #eq-rep-s rule with a self-equivalency statement T(X, sameAs, X) yields the following inference:
$$T(X, sameAs, X), T(X, ?p, ?o) \rightarrow T(X, ?p, ?o)$$This rule simply infers the second premise again, meaning no additional inferences are made.
Instantiating the #eq-rep-p rule with a self-equivalency statement T(X, sameAs, X) yields the following inference:
$$T(X, sameAs, X), T(?s, X, ?o) \rightarrow T(?s, X, ?o)$$This rule simply infers the second premise again, meaning no additional inferences are made.
Instantiating the #eq-rep-o rule with a self-equivalency statement T(X, sameAs, X) yields the following inference:
$$T(X, sameAs, X), T(?s, ?p, X) \rightarrow T(?s, ?p, X)$$This rule simply infers the second premise again, meaning no additional inferences are made.
On the other hand, leaving out the #eq-ref rule could have an impact on consistency-checking performed by the #eq-diff1 rule:
$$T(?x, sameAs, ?y), T(?x, differentFrom, ?y) \rightarrow false$$For self-referential statements such as T(X, differentFrom, X), this rule will only be triggered if X's self-equivalency statement (i.e., T(X, sameAs, X)) is also present. Therefore, when #eq-ref is left out, this consistency check will no longer fire in this particular case.
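Concretely, the only instantiation of #eq-diff1 that detects this type of inconsistency is the following, whose first premise is exactly the self-equivalency statement produced by #eq-ref:
$$T(X, sameAs, X), T(X, differentFrom, X) \rightarrow false$$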
Regarding the remaining two consistency-checking rules that reference sameAs statements (#eq-diff2 is shown below; #eq-diff3 differs only in using distinctMembers instead of members): these cannot fire for self-equivalency statements due to the condition ?y1 != ?y2. We note that this condition always needs to be present; otherwise, the two clauses T(?list, hasMember, ?y1) and T(?list, hasMember, ?y2) would fire for a single hasMember triple.
$$T(?x, type, AllDifferent), T(?x, members, ?list), T(?list, hasMember, ?y_{1}), T(?list, hasMember, ?y_{2}),\\ ?y_{1} != ?y_{2}, T(?y_{1}, sameAs, ?y_{2}) \rightarrow false$$