Introduction and Motivation

SmartWeb: Mobile Broadband Access to the Semantic Web

Recent progress in mobile broadband communication and semantic Web technology is enabling innovative internet services that provide advanced personalization and localization features. The goal of the SmartWeb project (duration: 2004–2007) is to lay the foundations for multimodal user interfaces to distributed and composable semantic Web services on mobile devices. The SmartWeb consortium brings together experts from various research communities: mobile services, intelligent user interfaces, language and speech technology, information extraction, and semantic Web technologies.

SmartWeb is based on two parallel efforts that have the potential of forming the basis for the next generation of the Web. The first effort is the semantic Web [1], which provides the tools for the explicit markup of the content of Web pages; the second effort is the development of semantic Web services, which results in a Web where programs act as autonomous agents that become the producers and consumers of information and enable the automation of transactions.

The appeal of being able to ask a question to a mobile internet terminal and receive an answer immediately has been renewed by the broad availability of information on the Web. Ideally, a spoken dialogue system that uses the Web as its knowledge base would be able to answer a broad range of questions. In practice, the size and dynamic nature of the Web, and the fact that the content of most Web pages is encoded in natural language, make this an extremely difficult task. SmartWeb therefore exploits the machine-understandable content of semantic Web pages for intelligent question answering as a next step beyond today's search engines. Since semantically annotated Web pages are still very rare, owing to the time-consuming and costly manual markup, SmartWeb is using advanced language technology and information extraction methods for the automatic annotation of traditional Web pages encoded in HTML or XML.

SmartWeb deals not only with information-seeking dialogues but also with task-oriented dialogues, in which the user wants to perform a transaction via a Web service (e.g. buy a ticket for a sports event or program a navigation system to find a souvenir shop).

SmartWeb is the follow-up project to SmartKom (www.smartkom.org), carried out from 1999 to 2003. SmartKom is a multimodal dialogue system that combines speech, gesture, and facial expressions for input and output [2]. Spontaneous speech understanding is combined with the video-based recognition of natural gestures and facial expressions. One version of SmartKom serves as a mobile travel companion that helps with navigation and point-of-interest information retrieval in location-based services (using a PDA as a mobile client). The SmartKom architecture [3] supports not only simple multimodal command-and-control interfaces, but also coherent and cooperative dialogues with mixed initiative and a synergistic use of multiple modalities. Although SmartKom works in multiple domains (e.g. TV program guide, tourist information), it supports only restricted-domain question answering. SmartWeb goes beyond SmartKom in supporting open-domain question answering using the entire Web as its knowledge base.

SmartWeb provides a context-aware user interface, so that it can support the user in different roles, e.g. as a car driver, a motor biker, a pedestrian, or a sports spectator. One of the planned demonstrators of SmartWeb is a personal guide for the 2006 FIFA World Cup in Germany that provides mobile infotainment services to soccer fans, anywhere and anytime. Another SmartWeb demonstrator is based on P2P communication between a car and a motor bike. When the car's sensors detect aquaplaning, a motor biker following behind is warned by SmartWeb: "Aquaplaning danger in 200 meters!" The biker can interact with SmartWeb through speech and haptic feedback; the car driver can input speech and gestures.

SmartWeb is based on two new W3C standards for the semantic Web, the Resource Description Framework (RDF/S) and the Web Ontology Language (OWL), for representing machine-interpretable content on the Web. OWL-S ontologies support semantic service descriptions, focusing primarily on the formal specification of inputs, outputs, preconditions, and effects of Web services. In SmartWeb, multimodal user requests will lead not only to automatic Web service discovery and invocation, but also to the automatic composition, interoperation, and execution monitoring of Web services.
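The OWL-S view of a service as a set of inputs, outputs, preconditions, and effects (IOPEs) is what makes automatic composition possible: a planner can chain services by matching one service's outputs against another's required inputs. The following minimal Python sketch illustrates that matching idea only; the service names and the `ServiceProfile` class are illustrative assumptions, not SmartWeb or OWL-S APIs:

```python
# Minimal sketch of OWL-S-style service profiles (inputs/outputs only;
# preconditions and effects are omitted for brevity) and a naive
# chaining check used for automatic composition. All names are made up.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    inputs: set   # concepts the service requires
    outputs: set  # concepts the service produces

def can_chain(first: ServiceProfile, second: ServiceProfile) -> bool:
    """The second service is invocable after the first if the first's
    outputs cover all of the second's required inputs."""
    return second.inputs <= first.outputs

# Hypothetical services for the ticket-buying transaction above:
find_match = ServiceProfile("FindMatch", {"TeamName"}, {"MatchID", "Venue"})
buy_ticket = ServiceProfile("BuyTicket", {"MatchID"}, {"TicketConfirmation"})

print(can_chain(find_match, buy_ticket))  # FindMatch -> BuyTicket composes
print(can_chain(buy_ticket, find_match))  # the reverse order does not
```

A real OWL-S matchmaker would additionally reason over the ontology (e.g. accept a subclass of a required input concept) and check preconditions, but the subset test captures the core of output-to-input matching.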

The academic partners of SmartWeb are the research institutes DFKI (consortium leader), FhG FIRST, and ICSI together with university groups from Erlangen, Karlsruhe, Munich, Saarbrücken, and Stuttgart. The industrial partners of SmartWeb are BMW, DaimlerChrysler, Deutsche Telekom, and Siemens as large companies, as well as EML, Ontoprise, and Sympalog as small businesses. The German Federal Ministry of Education and Research (BMBF) is funding the SmartWeb consortium with grants totaling 13.7 million euros.


References
  1. Fensel, D., Hendler, J.A., Lieberman, H., Wahlster, W. (eds.): Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential, MIT Press, Boston (2003)
  2. Wahlster, W.: Towards Symmetric Multimodality: Fusion and Fission of Speech, Gesture, and Facial Expression. In: Günter, A., Kruse, R., Neumann, B. (eds.): KI 2003: Advances in Artificial Intelligence, Lecture Notes in Artificial Intelligence, Vol. 2821, Springer-Verlag, Berlin Heidelberg New York (2003) 1-18
  3. Wahlster, W. (ed.): SmartKom: Foundations of Multimodal Dialogue Systems. Springer-Verlag, Berlin Heidelberg New York (2004)
