MINT Framework

The Multimodal Interaction Framework

Multimodal systems that combine speech, gesture, and graphics-driven interaction are becoming part of our everyday life; examples include in-car assistance systems and recent game consoles. Future interaction will be embedded into smart environments that let users choose and combine a heterogeneous set of interaction devices and modalities based on their preferences, realizing ubiquitous and multimodal access.

This project is concerned with implementing a platform for designing and running multimodal web applications. These applications allow the user to switch devices and modalities, or even combine them, on demand. The basic idea is a model-based design of custom interactors that are self-executable and can be distributed and synchronized across several devices and modalities. Unlike model-based user interface design (MBUID), which generates isolated interfaces for different platforms through structured abstract-to-concrete modeling of interaction, we design interactors that, once designed, are assembled by user interface builders into a multimodal interface. We thereby adapt classical user interface building instead of requiring developers to learn new languages and processes to create interfaces.
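To make the interactor idea concrete, here is a minimal TypeScript sketch. All names (ButtonInteractor, ModalityView, and so on) are hypothetical and are not the MINT API: a single state machine defines the interactor's behavior once, while modality-specific views render its state; attaching or detaching views at runtime corresponds to switching or combining modalities on demand.

```typescript
// Illustrative sketch only: a self-executable interactor modeled as a small
// state machine whose state changes are broadcast to every registered
// modality-specific view. Names are hypothetical, not the MINT API.

type ButtonState = "idle" | "focused" | "pressed";

// Each modality (graphical, speech, gesture, ...) implements this interface
// to present the interactor's current state in its own way.
interface ModalityView {
  render(state: ButtonState): void;
}

class GraphicalView implements ModalityView {
  render(state: ButtonState): void {
    console.log(`[GUI]    button is now ${state}`);
  }
}

class SpeechView implements ModalityView {
  render(state: ButtonState): void {
    console.log(`[Speech] announce: "button ${state}"`);
  }
}

// The interactor holds the behavioural model exactly once; views are
// attached or detached at runtime, which is what allows switching or
// combining modalities on demand.
class ButtonInteractor {
  private state: ButtonState = "idle";
  private views: Set<ModalityView> = new Set();

  attach(view: ModalityView): void {
    this.views.add(view);
    view.render(this.state); // bring the new modality in sync immediately
  }

  detach(view: ModalityView): void {
    this.views.delete(view);
  }

  // An event from any modality drives the same state machine, so all
  // attached views stay synchronized.
  handle(event: "focus" | "press" | "release"): void {
    const transitions: Record<string, ButtonState | undefined> = {
      "idle:focus": "focused",
      "focused:press": "pressed",
      "pressed:release": "focused",
    };
    const next = transitions[`${this.state}:${event}`];
    if (next !== undefined) {
      this.state = next;
      this.views.forEach((v) => v.render(this.state));
    }
  }
}

// Usage: events from different modalities drive the same interactor.
const button = new ButtonInteractor();
button.attach(new GraphicalView());
button.attach(new SpeechView());
button.handle("focus"); // e.g. pointer hover in the graphical modality
button.handle("press"); // e.g. a spoken "press" command
```

In this reading, the builder's job is merely to instantiate interactors and wire views to them, rather than to regenerate a separate interface per platform as in classical MBUID.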

Download and Install

We have just released the second version of the MINT framework (2012) as open source on GitHub; please check there for installation instructions. We welcome every comment, bug report, or message on GitHub. Please note that development of the framework is ongoing and tightly coupled with our research on the modelling of multimodal interfaces.