The Web is increasingly understood as a global information space consisting not just of linked documents, but also of linked data. The term Linked Data was coined by Tim Berners-Lee in his Linked Data Web architecture note. The goal of Linked Data is to enable people to share structured data on the Web as easily as they can share documents today. More specifically, Wikipedia defines Linked Data as “a term used to describe a recommended best practice for exposing, sharing, and connecting pieces of data, information, and knowledge on the Semantic Web using URIs and RDF”.

More than just a vision, the Web of Data has been brought into being by the maturing of the Semantic Web technology stack, and by the publication of an increasing number of datasets according to the principles of Linked Data. Today, this emerging Web of Data includes data sets as extensive and diverse as DBpedia, Geonames, US Census, EuroStat, MusicBrainz, BBC Programmes, Flickr, DBLP, PubMed, UniProt, FOAF, SIOC, OpenCyc, UMBEL and Yago. The availability of these and many other data sets has paved the way for an increasing number of applications that build on Linked Data, support services designed to reduce the complexity of integrating heterogeneous data from distributed sources, as well as new business opportunities for start-up companies in this space.

The basic tenets of Linked Data are to:

  • use the RDF data model to publish structured data on the Web
  • use RDF links to interlink data from different data sources

Applying both principles leads to the creation of a data commons on the Web, a space where people and organizations can post and consume data about anything. This data commons is often called the Web of Data or Semantic Web.

In summary, Linked Data is simply about using the Web to create typed links between data from different sources. It is important to note that Linked Data is not the Semantic Web itself; rather, it is the foundation on which the Semantic Web is built.

For more information, you may refer to Tim Berners-Lee's Linked Data architecture note mentioned above.

2008 is just about to end, and looking back clearly reveals that it was an exciting year for the Semantic Web. Remember Yahoo’s BOSS strategy and SearchMonkey initiative, Microsoft’s struggle over semantic search, the release of Calais, Cuil, Zemanta, and Twine, the silent decline of Web 2.0, and the sudden appearance of a “Gimme-A-Break” Web 3.0 …

And what can we expect for 2009? Media industries and advertisers will jump on the Semantic Web, we will see many more open-data bubbles on the Web, and smartly designed applications will bring “semantics to the home”.

We see the Semantic Web as an enabler of the Relationship Web. What metadata, annotation, and labeling are to the Semantic Web, relationships of all forms (implicit, explicit, and formal) are to the Relationship Web. The primary goal of the Semantic Web has been described (by Tim Berners-Lee and many others) as the integration of data, or the labeling of Web resources, for more precise exploitation by both machines and humans. At the next level, the Relationship Web organizes Web resources for analysis that goes beyond integration to trailblazing, leading to deeper insights and better decision making.

The Relationship Web takes you from asking “which document” could have the information I need to asking “what is in the resources” that gives me the insight and knowledge I need for decision making.

For more information, see the article “Relationship Web: Blazing Semantic Trails between Web Resources”.

The Semantic Deep Web integrates Semantic Web components with ontology-aware browsers to squeeze information out of the Deep Web: the non-indexable, invisible, concealed online content that is accessible only via Web services or Web-form interfaces, write New Jersey Institute of Technology professor James Geller and colleagues. “The primary goals of the Semantic Deep Web are to access Deep Web data through various Web technologies and to realize the Semantic Web’s vision by enriching ontologies using this data,” the authors note. To access the Deep Web with Semantic Web technologies, the Semantic Deep Web uses ontology plug-in search, a method for enriching a domain ontology with Deep Web data semantics so that it can refine user search queries processed by a conventional search engine. Another key Semantic Deep Web process is Deep Web service annotation, in which Deep Web services are annotated with Deep Web data semantics so that they can be searched by a Semantic Web search engine. From a semantic perspective, it is simpler to obtain ontologies from Deep Web data sources, especially well-structured relational back-end databases, than from unstructured natural-language text documents. Activities Geller lists as necessary for fusing Semantic Web and Deep Web technologies include:

  • development of ontology-aware, high-quality Web search engines
  • construction of large ontologies from Deep Web sites, beginning with all e-commerce subdomains
  • acceptance of an “open source attitude” in the e-commerce space, to simplify the building of Deep Web ontologies by accessing securely locked data sources
  • creation of libraries of semantic crawlers designed to extract back-end database information
  • assembly of comprehensive index structures for Deep Web sites


Conventional information retrieval techniques follow a document-centric search paradigm: users must examine a document, or lists of documents, in order to obtain an answer. Semantic search, by contrast, is an object-, entity-, or knowledge-centric approach based on Semantic Web technologies.

It aims to find precise and abundant information about the objects under consideration and their related objects. Semantic search technology enables accurate retrieval of information via concept/meaning matching. It is particularly effective, and perhaps the only viable method, for credible and dynamic content, because most such content is statistically flat (infertile), so popularity algorithms cannot work effectively beyond common queries.
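To make the contrast with keyword matching concrete, here is a toy sketch of concept-level matching. The documents, the query, and the concept map are all invented for illustration; a real semantic search engine such as hakia relies on ontologies and linguistic analysis rather than a hand-written synonym table.

```python
# Two tiny "documents" that describe the same thing with different words.
docs = {
    1: "physicians in new york",
    2: "doctor listings for nyc",
}

# Hypothetical concept map: many surface words -> one canonical concept.
concepts = {"physicians": "doctor", "physician": "doctor", "doctor": "doctor",
            "nyc": "new_york", "new": "new_york", "york": "new_york"}

def to_concepts(text):
    """Normalize each word to its concept; unknown words map to themselves."""
    return {concepts.get(w, w) for w in text.split()}

def semantic_search(query):
    """Rank documents by how many query *concepts* (not keywords) they share."""
    q = to_concepts(query)
    return sorted(docs, key=lambda d: -len(q & to_concepts(docs[d])))

# A plain keyword search for "doctor" would miss document 1 entirely;
# concept matching finds both and ranks the closer one first.
print(semantic_search("doctor in nyc"))
```

The design point: matching happens in concept space, so documents that never contain the literal query terms can still be retrieved and ranked.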

hakia (http://www.hakia.com/) is a general-purpose “semantic” search engine, dedicated to a quality search experience.

For more information, you may refer to An Overview of Semantic Search Engines.

The Resource Description Framework (RDF) is a W3C Recommendation for the formulation of metadata on the World Wide Web. RDF Schema (RDFS) extends this standard with the means to specify domain vocabulary and object structures. These techniques will enable the enrichment of the Web with machine-processable semantics, thus giving rise to what has been dubbed the Semantic Web.

The basic building block in RDF is a subject-predicate-object triple, commonly written as P(S, O). That is, a subject S has a predicate (or property) P with value O. Another way to think of this relationship is as a labeled edge between two nodes: [S] –P–> [O].
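The edge-labeled-graph reading can be shown in a few lines of plain Python, grouping triples by subject into a labeled adjacency list. The node and edge names here are made up for illustration.

```python
# Each triple P(S, O) is a labeled edge [S] --P--> [O].
triples = [
    ("Goethe", "type", "Writer"),
    ("Goethe", "hasWritten", "Faust"),
    ("Faust",  "type", "Play"),
]

# Group edges by subject node: subject -> list of (predicate, object) edges.
graph = {}
for s, p, o in triples:
    graph.setdefault(s, []).append((p, o))

# Outgoing labeled edges of the "Goethe" node.
print(graph["Goethe"])
```

Stores like Sesame and Jena index triples in exactly these three positions (and combinations of them) so that any edge pattern can be looked up quickly.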

The RDF data model itself provides no mechanism for declaring the vocabulary that is to be used. RDF Schema is a mechanism that lets developers define a particular vocabulary for RDF data (such as the predicate hasWritten) and specify the kinds of resources to which predicates can be applied (such as the class Writer). RDFS does this by pre-specifying some terminology, such as Class, subClassOf, and Property, which can then be used in application-specific schemata.
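To see why such schema terminology is useful, here is a toy sketch of the entailment that subClassOf licenses, using the Writer/hasWritten examples from the text plus an invented Person superclass. Real RDFS entailment covers more rules and requires transitive closure over subClassOf; a single application of one rule is shown.

```python
# Schema triples: every Writer is also a Person (invented superclass).
schema = {("Writer", "subClassOf", "Person")}

# Instance data using the schema's vocabulary.
data = {("goethe", "type", "Writer"),
        ("goethe", "hasWritten", "Faust")}

def infer_types(data, schema):
    """Apply the rdfs:subClassOf entailment rule once:
    (x type C) and (C subClassOf D)  =>  (x type D)."""
    inferred = set(data)
    for x, p, c in data:
        if p == "type":
            for c1, q, d in schema:
                if q == "subClassOf" and c1 == c:
                    inferred.add((x, "type", d))
    return inferred

# The schema lets a consumer conclude that goethe is a Person,
# even though no triple says so explicitly.
print(("goethe", "type", "Person") in infer_types(data, schema))
```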

RDFa was first proposed by Mark Birbeck in a W3C note entitled XHTML and RDF, which was then presented to the Semantic Web Interest Group at the W3C’s 2004 Technical Plenary. RDFa (originally written RDF/A) is a set of attributes used to embed RDF in XHTML.
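The idea can be sketched with a hand-rolled extractor: attributes such as `about` and `property` sit on ordinary XHTML elements, and each `property`-bearing element yields one triple. The fragment and the extraction below are simplified for illustration (the Dublin Core terms and the example URI are assumptions); a real RDFa processor also resolves prefixes, nesting, `typeof`, and datatypes.

```python
import xml.etree.ElementTree as ET

# An XHTML fragment carrying RDF via RDFa-style attributes.
xhtml = """<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.org/post/1">
  <span property="dc:title">Bridging the Web of Documents and Data</span>
  <span property="dc:creator">Mark Birbeck</span>
</div>"""

root = ET.fromstring(xhtml)

# `about` names the subject of the triples nested inside this element.
subject = root.get("about")

# Each element with a `property` attribute contributes one triple:
# (subject, property, element text).
triples = [(subject, el.get("property"), el.text)
           for el in root.iter() if el.get("property")]
print(triples)
```

The same markup that a browser renders as plain text is thus, at the same time, machine-readable data: that is the bridge between the Web of Documents and the Web of Data.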

For more information, watch this video : http://www.youtube.com/watch?v=ldl0m-5zLz4

Moreover, have a look at the presentation “RDFa: Bridging the Web of Documents and the Web of Data”, which will be presented at ISWC 2008.