
The availability of inference services is fundamental in the Semantic Web context for performing tasks such as checking the consistency of an ontology, constructing a concept taxonomy, and retrieving concepts.

Currently, the main approach to performing inferences is deductive reasoning. In traditional Aristotelian logic, deductive reasoning is defined as inference in which the (logically derived) conclusion is of no greater generality than the premises. Other logical theories define deductive reasoning as inference in which the conclusion is just as certain as the premises. The conclusion of a deductive inference is necessitated by the premises: the premises cannot be true while the conclusion is false. These characteristics explain the use of deductive reasoning in the SW: computing a class hierarchy and checking ontology consistency require certain, correct results, and do not call for conclusions more general than the premises.

Conversely, tasks such as ontology learning, ontology population with assertions, ontology evaluation, and ontology mapping and alignment require inferences that can return conclusions more general than the premises. To this end, inductive learning methods, based on inductive reasoning, could be used effectively. Inductive reasoning generates conclusions of greater generality than the premises, although, unlike deductive reasoning, those conclusions are less certain than the premises. In contrast to deduction, the starting premises of induction are specific (typically facts or examples) rather than general axioms. The goal of the inference is to formulate plausible general assertions that explain the given facts and can predict new ones; that is, inductive reasoning attempts to derive a complete and correct description of a given phenomenon, or of part of it.
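To make the contrast concrete, here is a minimal sketch of inductive generalization in Python. The facts, attribute names, and wildcard convention are all invented for illustration; real inductive learners over ontologies are far more sophisticated.

```python
# A minimal sketch of inductive generalization: from specific facts
# (attribute tuples) we induce the most specific pattern covering all
# of them, replacing disagreeing values with a wildcard "?".
# All names here are illustrative, not from any SW standard.

WILDCARD = "?"

def generalize(fact_a, fact_b):
    """Least general generalization of two equal-length fact tuples."""
    return tuple(a if a == b else WILDCARD for a, b in zip(fact_a, fact_b))

def induce(facts):
    """Fold pairwise generalization over a list of facts."""
    hypothesis = facts[0]
    for fact in facts[1:]:
        hypothesis = generalize(hypothesis, fact)
    return hypothesis

# Specific premises: observed (species, covering, reproduction) facts.
facts = [
    ("sparrow", "feathers", "eggs"),
    ("eagle",   "feathers", "eggs"),
    ("penguin", "feathers", "eggs"),
]

# The induced conclusion is more general than any single premise and,
# unlike a deductive conclusion, may be refuted by a new fact.
print(induce(facts))  # -> ('?', 'feathers', 'eggs')
```

Note how the result covers all the premises but asserts strictly more than any of them, which is exactly what makes it useful for ontology learning and uncertain at the same time.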

It is important to note that, of the two aspects of inductive inference (the generation of plausible hypotheses and their validation, i.e. the establishment of their truth status), only the first is of primary interest to inductive learning research, since the generated hypotheses are assumed to be judged by human experts and tested by known methods of deductive inference and statistics.


Controlled vocabularies provide a way to organize knowledge for subsequent retrieval. In library and information science, a controlled vocabulary is a carefully selected list of words and phrases used to tag units of information (a document or work) so that they may be retrieved more easily by a search.

The fundamental difference between an ontology and a controlled vocabulary lies in the level of abstraction and in the relationships among concepts. A formal ontology is a controlled vocabulary expressed in an ontology representation language. Such a language has a grammar for using vocabulary terms to express something meaningful within a specified domain of interest. The grammar imposes formal constraints (e.g., it specifies what it means to be a well-formed statement, assertion, or query) on how terms in the ontology's controlled vocabulary can be used together.
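The idea of a grammar constraining how vocabulary terms combine can be sketched very roughly in Python. The vocabulary, the class/property split, and the well-formedness rule below are toy assumptions, not taken from any real ontology language.

```python
# Toy illustration of "formal constraints on a controlled vocabulary":
# a statement is well formed only if its terms come from the vocabulary
# AND are used in the roles the grammar allows. All names are invented.

vocabulary = {"Person", "Organization", "worksFor", "name"}
properties = {"worksFor", "name"}          # terms usable as predicates

def well_formed(statement):
    """(class, property, value) is well formed iff the class and the
    property are vocabulary terms and the middle term is a property."""
    cls, prop, _value = statement
    return cls in vocabulary and prop in properties

print(well_formed(("Person", "worksFor", "ACME")))   # -> True
print(well_formed(("Person", "Organization", "x")))  # -> False: a class
                                                     # used as a property
```

A plain controlled vocabulary would accept both statements, since both use only listed terms; it is the grammar that rejects the second one.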

Controlled vocabularies are used in building ontologies not only to reduce the duplication of effort involved in constructing an ontology from scratch, by reusing existing vocabulary, but also to establish a mechanism that allows differing vocabularies to be mapped onto the ontology.

Here is a list of well-known controlled vocabularies:

  • FOAF: Friend Of A Friend—the most well known vocabulary for modeling people (and one of the most well known RDF vocabularies), FOAF can represent basic person information, such as contact details, and basic relationships, such as who a person knows.
  • SKOS: Simple Knowledge Organization System — it provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other similar types of controlled vocabulary. Useful for describing models that have some hierarchy and structure but are not sufficiently discrete and formal to map directly into OWL.
  • AIISO: Academic Institution Internal Structure Ontology—effectively models organizational relationships, such as Institution->School->Department->Faculty with the property part_of and defines courses taught by those Departments with the teaches property. AIISO was developed within the past year by Talis, a software company dedicated to semantic technologies, for their academic resource list management system, Talis Aspire.
  • University Ontology—University ontology is undergoing active development and is currently unstable, but does a good job of modeling the details of course scheduling. It is being developed by Patrick Murray-John at University of Mary Washington, who is in touch with the developers of the AIISO ontology at Talis.
  • SWRC: Semantic Web for Research Communities—there is much overlap between AIISO and SWRC. While there is a text on the development of SWRC, it is hard to find a clear documentation of the ontology itself, so a comparison of the two would take more time.
  • DC: Dublin Core—One of the original and most widely used vocabularies, Dublin Core can be used for cataloging publications.
  • bibTeX.owl—bibTeX is a format description for source citation. bibTeX.owl is the bibTeX ontology chosen by Nick Matsakis to use in his BibTeX RDFizer that is part of MIT’s SIMILE project. Depending on whether bibTeX data is prevalent and used throughout the community, this may be another option for cataloging publications.
  • Bibliography ontology—the Bibliography ontology reuses many existing ontologies, such as Dublin Core and FOAF properties. Its goal is to be a superset of legacy formats like BibTeX. It has multiple levels: level one covers simple bibliographic data, while level three can aggregate many kinds of sources, such as writings, speeches, and conferences. It is used in the University ontology.
  • SIOC: Semantically-Interlinked Online Communities—the SIOC Core Ontology Specification is an RDFS/OWL vocabulary/ontology for describing the main concepts and properties of online communities.
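As a concrete taste of the first entry above, here is a tiny FOAF description held as plain (subject, predicate, object) triples in Python. The FOAF and RDF term IRIs are real; the people, the example.org IRIs, and the lookup helper are invented for this sketch.

```python
# Illustrative only: a tiny FOAF description as a set of RDF triples.
# The vocabulary IRIs are the real FOAF/RDF namespaces; everything at
# example.org is made up.

FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

alice = "http://example.org/people#alice"
bob = "http://example.org/people#bob"

triples = {
    (alice, RDF_TYPE, FOAF + "Person"),
    (alice, FOAF + "name", "Alice"),
    (alice, FOAF + "mbox", "mailto:alice@example.org"),
    (alice, FOAF + "knows", bob),
}

def objects(subject, predicate):
    """Retrieval by tag: all objects for a given subject/predicate."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

print(objects(alice, FOAF + "knows"))
# -> {'http://example.org/people#bob'}
```

Because every predicate comes from a shared vocabulary, any FOAF-aware consumer can query this data with the same lookup, which is precisely the retrieval benefit controlled vocabularies aim at.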

This topic will be discussed in the Webinar on 5 March 2009. There are two well-known technical issues when reasoning with ontologies that contain hundreds of thousands of classes and subclasses and where change happens frequently.

 

The first problem is that materializing type information takes far too much time. In some triple stores, materialization takes almost as long as loading the data, and once the ontology changes, the entire materialization process has to start over.
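For readers unfamiliar with the term, here is a rough sketch of what materializing type information means: forward chaining over rdfs:subClassOf so that every entailed rdf:type fact is stored explicitly. The class hierarchy and instances below are invented, and real stores materialize many more entailments than this.

```python
# Sketch of type materialization: given subClassOf links and explicit
# rdf:type assertions, compute every entailed type for each instance.
# The hierarchy and instance names are made up for the example.

sub_class_of = {            # child class -> its direct superclasses
    "GradStudent": {"Student"},
    "Student": {"Person"},
    "Professor": {"Person"},
}

explicit_types = {"ann": {"GradStudent"}, "bo": {"Professor"}}

def materialize(types, hierarchy):
    """Return each instance's full (entailed) set of types."""
    full = {}
    for inst, classes in types.items():
        closed, frontier = set(), set(classes)
        while frontier:                     # walk up the hierarchy
            c = frontier.pop()
            if c not in closed:
                closed.add(c)
                frontier |= hierarchy.get(c, set())
        full[inst] = closed
    return full

print(sorted(materialize(explicit_types, sub_class_of)["ann"]))
# -> ['GradStudent', 'Person', 'Student']
```

The pain point the post describes follows directly: any edit to `sub_class_of` invalidates the stored closure, so with this strategy the whole computation must be rerun over the entire store.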


The second problem is that optimizing a SPARQL engine for a reasoning triple store is more challenging than optimizing SPARQL as a plain retrieval language. In a non-reasoning SPARQL engine, optimization is relatively straightforward: given the statistics of the database, the engine reorders the query appropriately and applies the right hash and sort joins. When SPARQL is used on top of a reasoner, however, additional considerations are required: in practice, you only know the statistics of each clause after you have done the reasoning.
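A toy version of the non-reasoning case makes the difficulty easier to see. The triple patterns, the namespace prefixes, and the cardinality numbers below are all invented; real optimizers use much richer cost models than a single estimate per pattern.

```python
# Toy statistics-driven join reordering, as a non-reasoning SPARQL
# engine might do it: evaluate the most selective pattern first.
# Patterns and cardinality estimates are invented for illustration.

pattern_stats = {
    ("?x", "rdf:type", ":Professor"): 50,       # few professors
    ("?x", ":teaches", "?c"): 400,
    ("?c", ":title", "?t"): 10_000,             # every course has one
}

def reorder(patterns, stats):
    """Order triple patterns by estimated result size, smallest first."""
    return sorted(patterns, key=lambda p: stats[p])

plan = reorder(list(pattern_stats), pattern_stats)
print(plan[0])  # -> ('?x', 'rdf:type', ':Professor')
```

With a reasoner underneath, the estimate for a pattern like `?x rdf:type :Professor` depends on inferred types that do not exist yet at planning time, which is exactly why this simple strategy breaks down there.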


This Webinar will discuss a new solution that mitigates, or nearly solves, both problems. We will discuss indexing techniques that do not require materialization, and we will cover how an ordinary backtracking technique can be very fast with the right reordering.


Register for this webinar at:

https://www2.gotomeeting.com/register/494147427

Since the inception of the Semantic Web, the development of languages for modelling ontologies has been seen as a key task. The initial proposals focused on RDF and RDF Schema; however, these languages were soon found to be too limited in expressive power.

 

OWL Web Ontology Language became a W3C recommendation in February 2004. OWL is actually a family of three language variants (often called species) of increasing expressive power: OWL Lite, OWL DL, and OWL Full.

 

The standardization of OWL has sparked the development and/or adaptation of a number of reasoners, including FaCT++, Pellet, RACER, and HermiT, and of ontology editors, including Protégé and Swoop.

 

Practical experience with OWL 1 has shown that OWL 1 DL, the most expressive but still decidable language of the OWL 1 family, lacks several constructs that are often necessary for modelling complex domains.

 

Why OWL 2?

Although, or perhaps even because, OWL 1 has been successful, certain problems have been identified in its design. None of these problems is severe, but, taken together, they indicate a need for a revision of OWL 1.

 

One important limitation of OWL 1 is the lack of a suitable set of built-in datatypes, because OWL 1 relies on XML Schema (xsd) for its list of built-in datatypes. OWL 2 is a new version of OWL that considerably improves datatype support. Apart from addressing acute problems with expressivity, a goal in the development of OWL 2 was to provide a robust platform for future development.

 

OWL 2 extends the W3C OWL Web Ontology Language with a small but useful set of features that have been requested by users, for which effective reasoning algorithms are now available, and that OWL tool developers are willing to support. The new features include extra syntactic sugar, additional property and qualified cardinality constructors, extended datatype support, simple meta-modelling, and extended annotations.

 

Considerable progress has been achieved in the development of tool support for OWL 2. The new syntax is currently supported by the new version of the OWL API. The widely used Protégé system has recently been extended with support for the additional constructs provided by OWL 2. The commercial tool TopBraid Composer also currently supports OWL 2. Support for OWL 2 has also been included in the FaCT++ and Pellet systems.

 

References: “OWL 2: The Next Step for OWL” and “OWL 2.0: W3C Working Draft 02 December 2008”.