Outcome & Impact
The most important results from the TALK project are the following scientific contributions:
A methodology for developing efficient and natural dialogue systems based on behaviour automatically learned from human communication. This is needed to make dialogue systems effective enough to be commercially viable. These data-driven methods also promise to reduce development times and costs.
A reconfigurable dialogue system design which separates application-specific information from generic communicative behaviour. This provides the foundations for reconfigurable systems that are not tied to a single application, but rather have the ability to handle many dynamically changing devices and services. This will solve a major bottleneck in (currently very expensive) deployment of multimodal systems.
A dialogue system design which isolates modality- and language-independent information from modality- and language-specific information. This makes it easier to port dialogue systems to new languages and to extend the interaction with new modalities without redesigning the entire system. As with the previous problem, extending a dialogue system to new languages is currently an expensive process.
The project's end results are also particularly strong in dialogue system development, integration, and evaluation; in reconfigurable dialogue systems; and in combining learning with ISU dialogue management. For example, in terms of automatic learning of dialogue strategies, we developed a system which outperforms all the (hand-programmed) DARPA COMMUNICATOR dialogue systems.
In addition, several new multimodal dialogue systems were developed (one is the first ISU system to employ learned dialogue strategies). We have shown that users of the prototype in-car showcase system achieve 80% task completion and increased dialogue efficiency when compared to a "Command and Control" system.
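To illustrate the kind of automatic strategy learning referred to above, a dialogue policy can be learned by reinforcement learning over dialogue states and system actions. The following is a minimal sketch using tabular Q-learning on an invented toy slot-filling task; the states, actions, and rewards are illustrative only and do not reflect the project's actual systems:

```python
# Minimal sketch of learning a dialogue strategy with tabular Q-learning.
# The environment, states, and rewards are illustrative toys, not the
# TALK project's actual setup.
import random

ACTIONS = ["ask_slot", "confirm", "close"]

def step(filled, confirmed, action):
    """Toy slot-filling dialogue: fill a slot, confirm it, then close."""
    if action == "ask_slot" and not filled:
        return (True, confirmed), -1, False          # small per-turn cost
    if action == "confirm" and filled and not confirmed:
        return (filled, True), -1, False
    if action == "close":
        reward = 20 if (filled and confirmed) else -20
        return (filled, confirmed), reward, True     # episode ends
    return (filled, confirmed), -2, False            # useless action penalty

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.95, 0.2
for episode in range(500):
    s, done = (False, False), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q(s, x))
        s2, r, done = step(s[0], s[1], a)
        best_next = 0.0 if done else max(q(s2, x) for x in ACTIONS)
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
        s = s2

# Extract the learned greedy policy for each reachable state.
policy = {s: max(ACTIONS, key=lambda a: q(s, a))
          for s in [(False, False), (True, False), (True, True)]}
print(policy)
```

In this toy setting the learner converges to the intuitive strategy (ask, then confirm, then close); in the project's systems the state space was a rich ISU dialogue context rather than two booleans.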
All of the project's objectives (both central and secondary) were reached, with some final outcomes surpassing our expectations at the project outset. The single exception was the task of combining multiple modalities with statistical language models (SLMs). Here we found that mouse clicks are not sufficiently coordinated with speech to enable the creation of n-gram based SLMs. We therefore conducted an additional investigation of methods for parsing the output of SLMs.
In the four main research themes the initial overall objectives were achieved, both in terms of the theoretical advances that were required:
integrated multimodal and multilingual processing with dialogue management
reconfigurable dialogue systems using ontologies
multimodal turn planning strategies
combining statistical learning with ISU dialogue management
and also in terms of the showcase and prototype dialogue systems that were developed:
SAMMIE in-car dialogue system installed in BMW test car
GODIS multimodal and multilingual dialogue systems
MIMUS in-home multimodal and multilingual dialogue system
TownInfo learned policy dialogue system
HIS POMDP dialogue system for tourist information
Linguamatics in-home ontology-based dialogue system
In addition, a variety of valuable dialogue corpora were collected and annotated (SACTI, SAMMIE, ISU-COMMUNICATOR), and released in the TALK data archive. This data is available upon request from the consortium.
The TALK project has achieved a number of advances over the state-of-the-art in dialogue systems, in both industrial deployment and research labs. All of these advances share the same basic advantages over the current industrial state-of-the-art, deriving from the inherent flexibility, mixed-initiative processing, and modularity of the ISU approach. Beyond these, the project has also delivered a number of novel conceptual and technological firsts:
first dialogue systems combining multimodality and multilinguality
first ISU system using CPS (Collaborative Problem Solving) dialogue management
first full dialogue system combining Reinforcement Learning with complex dialogue contexts
first full dialogue system showing POMDP learning
first voice-programmable dialogue systems
first dialogue systems using ontologies for reconfigurability
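To illustrate what POMDP-based dialogue management involves, such a system maintains a probability distribution (belief state) over hidden user goals and updates it from noisy speech-recognition observations. The sketch below shows only the Bayesian belief update at the heart of the approach, with invented goals and confusion probabilities; the HIS system's actual state space and models are far richer:

```python
# Schematic belief update for a POMDP dialogue manager. The goals,
# observation model, and probabilities are invented for illustration.

GOALS = ["hotel", "restaurant", "bar"]

def belief_update(belief, observation, p_correct=0.7):
    """Bayes update: P(goal | obs) is proportional to P(obs | goal) * P(goal).
    Assumes the recogniser outputs the true goal with probability p_correct
    and confuses it uniformly with the other goals otherwise."""
    p_wrong = (1 - p_correct) / (len(GOALS) - 1)
    unnormalised = {}
    for g in GOALS:
        likelihood = p_correct if g == observation else p_wrong
        unnormalised[g] = likelihood * belief[g]
    total = sum(unnormalised.values())
    return {g: v / total for g, v in unnormalised.items()}

# Start with a uniform belief over user goals.
belief = {g: 1 / len(GOALS) for g in GOALS}
# Two noisy ASR hypotheses both suggesting "hotel" sharpen the belief.
for obs in ["hotel", "hotel"]:
    belief = belief_update(belief, obs)
print(belief)  # belief in "hotel" now dominates
```

The dialogue policy (itself learned, in the POMDP setting) then chooses actions such as confirming or asking again based on this belief, rather than on a single best recognition hypothesis.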