Our objectives fall under four main research themes:
1. Unifying multimodality and multilinguality
Develop an abstract, modality-independent representation of information.
Develop criteria for presentation of information in different modalities.
Extend statistical language models to robustly map multimodal inputs into internal representations in the Information State Update (ISU) approach.
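As a rough illustration of the ISU approach, the sketch below shows a modality-independent information state updated by interpreted user moves. The attribute-value representation, move format, and all names here are hypothetical, not a committed design.

```python
# Minimal sketch of an Information State Update step, assuming a simple
# attribute-value information state; all names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class InfoState:
    """Modality-independent information state: shared beliefs plus open questions."""
    common_ground: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)


def update(state: InfoState, user_move: dict) -> InfoState:
    """Apply an interpreted input (from speech, gesture, GUI, ...) to the state.

    `user_move` is already abstracted away from its input modality,
    e.g. {"act": "inform", "slot": "destination", "value": "Paris"}.
    """
    if user_move.get("act") == "inform":
        state.common_ground[user_move["slot"]] = user_move["value"]
    elif user_move.get("act") == "request":
        state.pending.append(user_move["slot"])
    return state


# The same update rule applies whether the value came from speech or a map click.
s = InfoState()
update(s, {"act": "inform", "slot": "destination", "value": "Paris"})
update(s, {"act": "request", "slot": "departure_time"})
print(s.common_ground)  # {'destination': 'Paris'}
print(s.pending)        # ['departure_time']
```

The point of the abstraction is that language-model or gesture-recogniser outputs are normalised into the same move representation before any update rule fires.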
2. Automatic generation and reconfiguration of multimodal interfaces
Reconfiguration by “plugging in” task and domain descriptions.
Can we reuse existing domain ontologies?
Explore the suitability of different knowledge representations for generation of multimodal dialogue systems.
Plug-and-play technology for devices and services.
Explore the relationship between domain processes and dialogue processes.
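One way to picture reconfiguration by "plugging in" a domain description: the dialogue logic stays generic while a declarative task specification supplies the slots and prompts. The description format and slot names below are purely hypothetical.

```python
# Hedged sketch: a generic dialogue loop parameterised by a plug-in domain
# description. Swapping the description reconfigures the system without
# touching the dialogue code. Format and names are illustrative assumptions.
FLIGHT_DOMAIN = {
    "task": "book_flight",
    "slots": ["origin", "destination", "date"],
    "prompts": {
        "origin": "Where are you flying from?",
        "destination": "Where would you like to go?",
        "date": "On what date?",
    },
}


def next_prompt(domain: dict, filled: dict):
    """Generic policy: ask for the first unfilled slot in the domain description."""
    for slot in domain["slots"]:
        if slot not in filled:
            return domain["prompts"][slot]
    return None  # all slots filled: task complete


print(next_prompt(FLIGHT_DOMAIN, {"origin": "Edinburgh"}))
# Where would you like to go?
```

A domain ontology, if one can be reused, would play the role of `FLIGHT_DOMAIN` here, supplying slots and their value types rather than having them hand-coded.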
3. Multimodal presentation in the Information State Update approach
Generating user-tailored textual, tabular, or graphical presentations of information, sometimes in parallel with other modalities.
What is the best abstract representation of information committed to during a dialogue?
For each user and task, what information should be presented and what is its best mode of presentation?
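The mode-of-presentation question can be made concrete with a toy decision rule over the result set and a user model. The heuristics and thresholds below are illustrative assumptions, not proposed answers.

```python
# Illustrative sketch: choosing a presentation modality for retrieved items,
# conditioned on the user model. All thresholds are hypothetical.
def choose_mode(results: list, user_prefs: dict) -> str:
    """Pick a presentation mode for a set of retrieved items."""
    if len(results) == 1:
        return "text"     # a single item reads best as a sentence
    if user_prefs.get("expert") and len(results) <= 10:
        return "table"    # experts can scan structured attributes
    return "graphic"      # larger sets are summarised visually


assert choose_mode([{"flight": "BA123"}], {}) == "text"
assert choose_mode([{}] * 5, {"expert": True}) == "table"
assert choose_mode([{}] * 50, {}) == "graphic"
```

In a learned system this hand-written rule would be replaced by a policy trained per user and task, which is exactly what the question above asks.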
4. Adaptivity and learning
Adapting to different users – their knowledge and preferences.
Multiple dialogue strategies available to the system, chosen depending on context.
Reinforcement Learning applied to the problem of automatic strategy optimisation.
What representations are most suited to adaptivity and learning?
What reward functions can be developed for learning about dialogue management?
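To make the strategy-optimisation idea concrete, here is a toy tabular Q-learning loop over abstract dialogue states. The states, actions, transition model, and reward function are all hypothetical stand-ins for a user simulation; the sketch only shows the shape of the learning problem.

```python
# Toy sketch of Reinforcement Learning for dialogue strategy optimisation:
# tabular Q-learning with an epsilon-greedy policy. The per-turn penalty and
# task-success reward are an assumed, illustrative reward function.
import random
from collections import defaultdict

random.seed(0)
ACTIONS = ["ask_open", "ask_slot", "confirm"]
Q = defaultdict(float)                 # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2


def reward(state, action):
    """Hypothetical reward: every turn costs 1; confirming a complete task pays 10."""
    return 10.0 if (state == "all_filled" and action == "confirm") else -1.0


def step(state, action):
    """Toy transition model standing in for a (simulated) user."""
    if state == "empty":
        return "all_filled" if action == "ask_open" else "partial"
    if state == "partial":
        return "all_filled" if action == "ask_slot" else "partial"
    return "done"                      # terminal


for _ in range(2000):                  # training episodes
    state = "empty"
    while state != "done":
        if random.random() < epsilon:  # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = step(state, action)
        target = reward(state, action) + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ("empty", "all_filled")}
print(policy)  # {'empty': 'ask_open', 'all_filled': 'confirm'}
```

The open questions above are exactly where this toy breaks down: the state representation here is three symbols, and the reward function is hand-written, whereas realistic dialogue management needs principled choices for both.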