Generated User Interfaces (GenUIn)

What are Generated User Interfaces?

AI-generated user interfaces (GenUIn), also often called generative user interfaces, leverage artificial intelligence to create dynamic, adaptive, and personalized user experiences. By analyzing user behavior and preferences, generated user interfaces can select and adjust interface elements in real time. This approach enables the development of intelligent interfaces that anticipate user needs, streamline interactions according to user intentions, and provide context-aware assistance. The integration of AI into UI design is transforming traditional human-computer interaction paradigms, leading to more intuitive and effective systems. In our Human-AI Interaction department at Fraunhofer IOSB, we conduct and offer research on this technology. Our implementation of generated user interfaces is called GenUIn.

What are Multimodal Generated User Interfaces?

Multimodal generated user interfaces interact with users via multiple different input and output modalities. They are based either on a single multimodal foundation model or on a system of models with capabilities in different modalities, such as large language models (LLMs) for text understanding and generation, or visual language models (VLMs) for understanding and generating images and videos. They are particularly effective at understanding a situation holistically and providing coherent multimodal output. Input is not limited to active user prompts: the system may also read context information such as weather forecasts from the internet, data from a vehicle’s CAN bus together with on-board user data, and routing data from navigation devices. This rich context awareness makes multimodal GenUIn highly effective.
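The system-of-models idea above can be sketched as a simple dispatcher that routes each input to a model family by modality. This is an illustrative assumption, not the GenUIn implementation; all names (`route_input`, the model labels) are hypothetical.

```python
def route_input(item):
    """Pick a model family based on the input modality (illustrative)."""
    modality = item.get("modality")
    if modality == "text":
        return "llm"             # e.g. a large language model
    if modality in ("image", "video"):
        return "vlm"             # e.g. a visual language model
    if modality == "context":
        return "context_reader"  # e.g. weather, CAN bus, navigation data
    return "fallback"            # unknown modality

# One interaction may mix user prompts with passive context sources.
inputs = [
    {"modality": "text", "payload": "navigate home"},
    {"modality": "image", "payload": "<camera frame>"},
    {"modality": "context", "payload": {"source": "CAN", "speed_kmh": 87}},
]
assignments = [route_input(i) for i in inputs]
```

A real system would then fuse the per-model outputs into one coherent multimodal response.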

Why do we need Generated User Interfaces in passenger cars?

In the closed cabin of a passenger car, user intentions and needs are constrained, so recognizing user behaviour is easier than in open spaces. The car provides a rich source of sensor data about its exterior and interior. In manual driving mode, users need effective support, which calls for GenUIn. GenUIn can adapt to driving complexity and to the level of automation, supporting level-compliant driver behavior. Passengers, and drivers in automated mode, profit from hyper-individualized assistants. Multimodal GenUIn help minimize motion-sickness-inducing activities and avoid disturbing other passengers or interrupting conversations and activities. Unlike current static interfaces, generated user interfaces are highly adaptive to driving situations.
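The level-compliant adaptation mentioned above can be sketched as a mapping from the automation level to UI constraints. The levels follow the common SAE scale, but the concrete constraints are assumptions for demonstration only.

```python
def ui_constraints(sae_level):
    """Illustrative UI constraints per SAE automation level (assumed values)."""
    if sae_level <= 2:
        # Driver must monitor the road: keep visual load minimal.
        return {"reading_allowed": False, "max_visual_load": "low"}
    if sae_level == 3:
        # Conditional automation: reading possible, takeover may be requested.
        return {"reading_allowed": True, "max_visual_load": "medium"}
    # Level 4/5: occupants are passengers.
    return {"reading_allowed": True, "max_visual_load": "high"}
```

A GenUIn would consult such constraints before generating any output, so every generated interaction stays level-compliant by construction.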

GenUIn can adapt to different configurations of the car, use available user interface elements, and adapt if an element is currently unavailable or unsuitable. GenUIn can even adapt driver assistance systems, automated driving modes, or passive safety configurations once sufficient accuracy is proven. Beyond this practical advantage, generated user interfaces create highly individualized in-cabin experiences, with the potential of achieving more suitable results for each vehicle occupant. Users will soon expect this individual support, driven by their experience with generated user interfaces in other products, such as smartphones. Predefined and non-individualized menus will feel old-fashioned. GenUIn will be able to adapt to future user expectations, possibly even without a dedicated update, but inherently through an architecture that can generate new interaction philosophies. It is time to start investigating and implementing AI in automotive user interfaces.

GenUIn creating a large "do not read" recommendation, accompanied by a continuous sound that increases in loudness before curves. This generated user interface was designed to prevent motion sickness when drivers read during level 3 automation.

Are users ready for adaptive interactions?

Users prefer predictable interactions – in human-machine interaction as well as in human-human interaction. In the past, adaptive user interfaces have widely failed to gain user acceptance. Research and user testing are still needed to teach generated user interfaces the right balance between predictable interfaces and results on the one hand, and generated, unpredictable but effective interactions and results on the other.

Now is the time to design adaptive interactions

AI is shifting the capability of machines from carrying out predefined tasks to delivering individually generated, custom-made, and highly effective solutions. These generated results are often more suitable for the user’s needs. Due to this new level of effectiveness, AI interaction is currently being introduced to PCs, smartphones, and basically every new device. Within a short time, this trend will also change user expectations about the effectiveness of interaction in cars. If human-machine interaction in cars lags behind general and popular trends, we will witness a bring-your-own-device culture again, which is less integrated, less secure, and less safe than OEM interfaces. We can expect a shift of user expectations towards more helpful results from AI-driven systems.

What distinguishes generated user interfaces from traditional interfaces?

Generated user interfaces differ from traditional predefined interfaces by generating new output for every interaction – tailored to the user’s intent and condition. They provide a custom interaction and individual results. Instead of offering a predefined menu with a predefined interaction tree and eventually providing one of the predefined results, generated user interfaces initiate the interaction either as a reaction to user input or proactively through a context event. They generate suitable interaction steps, crafting menus or inquiries for more information, until they generate one or several custom-made results.
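The interaction flow just described can be sketched as a small loop: generate an inquiry step only for information that is still missing, then produce results. All names and the slot-filling framing are illustrative assumptions, not a description of the GenUIn internals.

```python
def interaction_loop(known_slots, required_slots, answer):
    """Sketch: generate inquiry steps only for missing information,
    then generate one or several results."""
    steps = []
    for slot in required_slots:
        if slot not in known_slots:
            steps.append(f"ask:{slot}")        # generated inquiry
            known_slots[slot] = answer(slot)   # user's reply
    steps.append("generate_results")           # custom-made result(s)
    return steps

# Example: the destination is already known from context,
# so only the arrival time has to be asked.
steps = interaction_loop(
    {"destination": "home"},
    ["destination", "arrival_time"],
    lambda slot: "18:00",
)
```

Note the contrast to a predefined interaction tree: the sequence of steps is not fixed in advance but generated from what is already known.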

Traditional UI | Generated User Interfaces (GenUIn)
User initiated or based on predefined event | User initiated or based on context reasoning
Predefined menu | Generated menu
Predefined (micro-)interaction tree | Generated interaction steps if needed
Predefined deterministic result | Generated probabilistic result(s)
Not context sensitive; if so, only sensitive to predefined contexts in a predefined way | Context and reasoning sensitive by architecture

Probabilistic versus deterministic UI

Generated user interfaces are probabilistic. Their interaction is based on multimodal and gradual input and on black-box AI reasoning about the user intent. The output shows variations, and there is no deterministic relation between input and output. Traditional user interfaces are deterministic: designers have predefined every possible input and its corresponding output.
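The contrast can be made concrete in a few lines: a deterministic UI is a fixed lookup from input to output, while a probabilistic UI samples from a set of plausible generated outputs. The commands and candidate outputs below are hypothetical examples.

```python
import random

# Deterministic: every input maps to exactly one predefined output.
DETERMINISTIC_MENU = {"navigation": "open_map", "media": "open_player"}

def deterministic_ui(command):
    return DETERMINISTIC_MENU[command]  # same input, same output, always

# Probabilistic: the same input may yield different plausible outputs.
GENERATED_CANDIDATES = {"navigation": ["route_home", "route_work", "show_map"]}

def probabilistic_ui(command, rng):
    return rng.choice(GENERATED_CANDIDATES[command])  # output varies
```

In a real GenUIn the sampling would of course come from a generative model conditioned on context, not from a random choice; the point is only the one-to-many relation between input and output.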

Choosing from different results vs. selecting predefined results

Generated user interfaces can generate several individualized results, from which users select the most suitable one. Traditional interfaces, in contrast, only guide the user to select one of the predefined generic results.

Solving the problem vs. helping to solve the problem

Generated user interfaces move more directly to a solution, or to several solutions, and act proactively, while traditional user interfaces assist users step by step in solving the problem by themselves.

Less interaction, faster results

Generated user interfaces reduce menu options and interaction steps because they infer the user’s intent and provide matching solutions. Traditional user interfaces do not infer user intent.

Generated user interfaces reduce the need for step-by-step information retrieval from the user. They infer the intent from a holistic interpretation of all available information in a prompt or from a proactive analysis of the situation. Due to this capability, generated user interfaces can proceed faster to a possible result in a one-shot or few-shot approach. Generating several results is easy, so the user may select from a variety of results rather than specify every detail (typical in image generation).
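The "select from a variety instead of specifying every detail" pattern can be sketched as a candidate generator: one inferred intent yields several variants at once. The function below is a hypothetical placeholder for a generative model call.

```python
def generate_candidates(intent, n=3):
    """Produce n result variants for one inferred intent (placeholder
    for a generative model call)."""
    return [f"{intent} (variant {i + 1})" for i in range(n)]

# One-shot: instead of a menu dialogue asking for every detail,
# the user just picks the most suitable variant.
options = generate_candidates("charging stop near route", n=3)
```

This mirrors the image-generation workflow mentioned above: generate first, let the user choose, refine only if needed.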

What are the benefits of Generated User Interfaces?

Generated User Interfaces are adaptive and responsive

Generated user interfaces are adaptive to the input they receive and to the context they automatically recognize. They can adapt to the user’s intention and cognitive capacity, respond through available and fitting output modalities, and handle ever-changing input modalities – even changing reliabilities of input.

Generated User Interfaces offer personalized interaction and customized results

Generated user interfaces are not only personalized and adaptive to each user, but also adaptive to the available information about user and context, and to the LLM’s reasoning about user intent and current needs.

Generated User Interfaces offer up-to-date responsiveness and learning ability

Generated user interfaces adapt to changing trends, moods, or needs, and to the interaction history of each individual user. They learn from user feedback, reactions, and previously preferred results to improve future interactions and solutions.
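One minimal way to sketch this learning from feedback is a running preference score per result kind, used to rank future suggestions. This is an illustrative mechanism only; the class name, scoring values, and ranking scheme are assumptions, and a real system would learn far richer preferences.

```python
from collections import defaultdict

class PreferenceStore:
    """Toy preference memory: reward accepted results, penalize rejected
    ones, and rank future candidates by accumulated score."""

    def __init__(self):
        self.scores = defaultdict(float)

    def feedback(self, result_kind, accepted):
        # Asymmetric update (assumed values): acceptance counts more
        # than a single rejection.
        self.scores[result_kind] += 1.0 if accepted else -0.5

    def rank(self, candidates):
        # Highest-scoring candidates first.
        return sorted(candidates, key=lambda c: self.scores[c], reverse=True)
```

After a few interactions, previously preferred result kinds surface first, which is the learning behavior the paragraph describes.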

Generated User Interfaces can be multimodal

Generated user interfaces can interpret multimodal input, select suitable multimodal output modalities, and create a coherent response. They can use modular output architectures, adapting the output to the available and suitable modalities.
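The modality selection in such a modular output architecture can be sketched as a preference-ordered fallback: pick the most preferred modality that is currently available and not unsuitable in the situation. The preference order and modality names are assumptions for illustration.

```python
# Assumed preference order for output modalities.
PREFERENCE = ["speech", "display", "haptic"]

def pick_modality(available, unsuitable=frozenset()):
    """Return the most preferred modality that is available and suitable,
    or None if no modality fits."""
    for modality in PREFERENCE:
        if modality in available and modality not in unsuitable:
            return modality
    return None

# Example: speech is unsuitable while a passenger sleeps,
# so the interface falls back to the display.
chosen = pick_modality({"speech", "display"}, unsuitable={"speech"})
```

This is the kind of adaptation the in-car examples above rely on, e.g. avoiding audio output that would disturb other passengers.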

What are the challenges of Generated User Interfaces?

Generated user interfaces still face several challenges.

First is the user’s acceptance of adaptive user interfaces and unpredictable outputs. Users will need to learn new interface philosophies, which may slow down interaction at first. The probabilistic nature may lead to inconsistencies that require iteration and refinement. While generated user interfaces tend to be more effective in achieving an individual user’s goal, they may be less efficient if results need to be refined several times.
Second are technological limitations. AI generation requires significant processing power, which is limited in many applications, and energy consumption may be a problem. The responsiveness of the interface may also be slower than predefined interactions. The good news is that all these issues are being researched and worked on, and prototypical solutions are already available for many applications.

How can I get GenUIn into my car, prototype, or user study?

Contact the Fraunhofer IOSB experts for generated user interfaces to integrate the latest generative AI models and innovative generated user interfaces into your proof of concept or demonstrator. Let us carry out your user evaluations to get it right from the beginning, and let us optimize your architecture and tools for a future-proof prototype and product.

Our Human-AI Interaction department is your ideal partner for AI research and development, especially for AI solutions for multimodal interfaces, multimodal perception, and user interfaces.
