ISWC2005 Notes: Uncertainty Reasoning for the Semantic Web 2

The next presenter is talking about Discovery and Uncertainty in Semantic Web Services. He refers to TBL's comment last year that the semantic web doesn't need a facility for uncertainty reasoning (a claim refuted by a later questioner!).

Web service discovery scenario - user wants to fly to a conference from Austria and needs to find web services to assist. Use a broker (F-Broker). Services are defined in terms of capabilities such as preconditions, input roles, output roles and external capabilities. Then a process of semantic matchmaking starts: goal capabilities are compared with web service capabilities. Several matching notions: exact (goal and service capabilities match in all respects), plugin (missed this one...?), subsumption (goal is completely within service), intersection (goal and service overlap in some way), non-match (no overlap between service and goal).
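A rough sketch of these matching notions, modelling goal and service capabilities as plain sets of concepts. This is a hypothetical simplification for illustration only; the actual matchmaker would use description-logic subsumption rather than set containment, and the "plugin" notion is omitted since its definition was missed in the talk.

```python
def match_degree(goal: set, service: set) -> str:
    """Classify how a goal's capabilities relate to a service's,
    using set containment as a stand-in for DL subsumption."""
    if goal == service:
        return "exact"           # capabilities match in all respects
    if goal < service:
        return "subsumption"     # goal is completely within the service
    if goal & service:
        return "intersection"    # goal and service overlap in some way
    return "non-match"           # no overlap between service and goal
```

With this ordering, a broker like F-Broker could rank candidate services by the strength of the match before returning them to the user.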

Solution for uncertain situations: use incidence calculus, a probabilistic calculus developed by Alan Bundy in 1985 in which sets of possible worlds (incidences), rather than numeric probabilities, are combined truth-functionally. Experiments with more than 1000 synthetic services showed that performance was not much affected by the incidence extensions. Still some open issues, such as quality of service, since changes in the environment are not immediately reflected in F-Broker.
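A minimal sketch of incidence calculus, assuming a small finite set of equiprobable possible worlds (the worlds and propositions below are made up for illustration). Each proposition is assigned an incidence, the set of worlds in which it holds; incidences combine truth-functionally even though the derived probabilities do not.

```python
# Ten equiprobable possible worlds.
WORLDS = frozenset(range(10))

def prob(incidence):
    """Probability of a proposition = fraction of worlds where it holds."""
    return len(incidence) / len(WORLDS)

def i_and(a, b):   # i(A and B) = i(A) ∩ i(B)
    return a & b

def i_or(a, b):    # i(A or B) = i(A) ∪ i(B)
    return a | b

def i_not(a):      # i(not A) = WORLDS \ i(A)
    return WORLDS - a

# Example propositions (hypothetical):
rain  = frozenset({0, 1, 2, 3, 4})        # holds in worlds 0..4
cloud = frozenset({0, 1, 2, 3, 4, 5, 6})  # holds in worlds 0..6
```

Here P(rain) = 0.5 and P(cloud) = 0.7, yet P(rain and cloud) = 0.5 rather than the 0.35 a naive product would give: intersecting the incidence sets preserves the dependence between the two propositions, which is exactly what makes the calculus attractive for uncertain matchmaking.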

Next up is Peter Haase, talking about Ontology Learning and Reasoning. Trying to automatically extract domain ontologies from natural language text. Challenges: knowledge in documents is imperfect in the first place (inconsistencies, imprecision etc); the ontology learning algorithm generates uncertain knowledge; naive translation to logic-based ontologies results in highly inconsistent ontologies.

Consistency of ontologies: an ontology is consistent iff it is satisfiable. Some approaches disallow inconsistencies. Some reason with inconsistent ontologies. A final approach is fuzzy reasoning.

LOM: Learned Ontology Model. An extensible collection of modelling primitives for different types of ontology elements. Includes confidence and relevance annotations for capturing uncertainty. Then transform into any reasonably expressive language such as RDFS or OWL. For OWL, the goal is to obtain the "best" ontology that is consistent and captures the most certain information. Start with an empty ontology; incrementally add all elements with certainty greater than a threshold; detect inconsistencies; generate new changes; remove the elements that are least certain. The approach is applicable to incremental learning too.
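The construction loop above can be sketched greedily: add learned axioms in decreasing order of certainty, keeping only those that leave the ontology consistent, so the least certain axiom in any conflict is the one dropped. This is an illustrative approximation, not Haase's actual algorithm, and `is_consistent` is a hypothetical stand-in for a real reasoner check.

```python
def build_ontology(learned_axioms, is_consistent, threshold=0.5):
    """Greedily build a consistent ontology from (axiom, confidence) pairs.

    learned_axioms: list of (axiom, confidence) pairs from the learner.
    is_consistent:  callable taking a list of axioms, True if satisfiable
                    (in practice, a call out to an OWL reasoner).
    """
    ontology = []
    # Keep only axioms above the certainty threshold, most certain first.
    candidates = sorted(
        (pair for pair in learned_axioms if pair[1] >= threshold),
        key=lambda pair: pair[1], reverse=True)
    for axiom, conf in candidates:
        if is_consistent(ontology + [axiom]):
            ontology.append(axiom)
        # else: drop this axiom -- it is less certain than everything
        # already accepted, so it is the "least certain" conflict member.
    return ontology
```

Because each accepted axiom only had to be checked against the ontology built so far, the same loop serves incremental learning: newly learned axioms can be fed through it as they arrive.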

