We are involved both in fundamental and applied research projects.
AI4EU is the European Union’s landmark Artificial Intelligence project, which seeks to develop a European AI ecosystem by bringing together the available knowledge, algorithms, tools and resources and making them a compelling offering for users. Involving more than 80 partners across 21 countries, the €20m project kicked off in January 2019 and will run for three years. AI4EU will unify Europe’s Artificial Intelligence community and facilitate collective work in AI research, innovation and business in Europe. By sharing AI expertise, knowledge and tools through the platform, AI4EU will make AI available to all.
AI4EU will foster a closer match between business needs and research results and thereby accelerate growth. It will help Europe become a global leader in both highly advanced and human-centred AI, promising cutting-edge breakthroughs in this pivotal technological arena.
Consortium of more than 80 partners
The aim of the project is to further develop the communication portal HalloPflege+ into an Android app with an innovative speech interaction module. This will increase usability for elderly residents of nursing homes and, ultimately, improve collaboration between them, their families and their caregivers.
In order to improve the functionality of our portal and to implement new communication technologies, we plan to migrate the portal from its web-based state to a native Android app. In the first version, the Android portal will contain the same functional modules as the web version. In a second step, new modules will be developed, such as food selection tailored to nursing homes, as well as other services such as ordering flowers or professional questionnaires. This will increase usage rates of the platform by offering more and easier ways to access and use HalloPflege+.

Partners:
Silent Speech Interfaces (SSI) are a revolutionary field of speech technology. The main idea is to record the soundless articulatory movement (e.g. of the tongue and lips) and to automatically generate speech from this movement information, while the subject is not producing any sound. This research area has a large potential impact in a number of domains, including communication aids for speech-impaired people and military or defense applications. Recently, deep neural networks have demonstrated accuracy better than or equivalent to human performance in several recognition tasks. Despite this, in the field of Silent Speech Interfaces only a few studies have investigated deep learning, especially in the case of Ultrasound Tongue Imaging. Together with the experts involved in this project, we aim to address the above challenges in the field of Silent Speech Interfaces using 1) 2D ultrasound, 2) lip video, and 3) electromagnetic articulography (EMA). We plan to model the articulatory-to-acoustic mapping in various ways, and finally to evaluate the systems in objective tests and subjective experiments with real users, resulting in a final SSI prototype.

Partners:
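To make the articulatory-to-acoustic mapping idea concrete, the sketch below shows one simple way it can be modelled: a small feed-forward network that maps a flattened ultrasound tongue image frame to a vector of vocoder spectral parameters, from which a vocoder would then synthesise audible speech. All sizes, the plain-MLP architecture and the function names are illustrative assumptions for this sketch, not the project's actual design.

```python
import numpy as np

# Assumed, illustrative dimensions (not the project's real values):
N_PIXELS = 64 * 64   # one flattened 64x64 ultrasound tongue image frame
N_HIDDEN = 128       # hidden layer size
N_SPECTRAL = 25      # e.g. 25 spectral (vocoder) parameters per frame

rng = np.random.default_rng(0)

# Randomly initialised weights; in practice these would be trained on
# parallel recordings of articulatory movement and the spoken audio.
W1 = rng.normal(0.0, 0.01, (N_PIXELS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.01, (N_HIDDEN, N_SPECTRAL))
b2 = np.zeros(N_SPECTRAL)

def articulatory_to_acoustic(frame: np.ndarray) -> np.ndarray:
    """Map one flattened ultrasound frame to predicted spectral parameters."""
    h = np.tanh(frame @ W1 + b1)   # hidden representation of the tongue shape
    return h @ W2 + b2             # predicted vocoder parameters

# One silent frame in, one spectral vector out; a sequence of such vectors
# would be fed to a vocoder to generate the speech waveform.
frame = rng.random(N_PIXELS)
params = articulatory_to_acoustic(frame)
print(params.shape)  # (25,)
```

The same frame-by-frame mapping applies to the other modalities named above (lip video, EMA); only the input representation changes, while deeper or recurrent networks can replace the single hidden layer.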