Role

Concept, Product Design, Interaction Design, Prototyping

Why did I do this?

The Figma Make-a-thon theme was cultural relevance, and I thought, what could be more culturally relevant than this?

Tools

Figma Make

Overview

It's become very common to see people use AI chatbots as their personal therapists. Those experiences can be…interesting, to say the least. But when we talk to these chatbots, what are they actually thinking about us? Because that decides how they choose to respond.

This project exposes that emotional inference layer so you can actually see what judgments the AI is making about you, and whether those judgments are right.

Try the Prototype

Try typing freely in the input field, or explore one of the scripted scenarios to get started. The system analyzes tone and context, visualizes detected emotions with confidence levels, dynamically selects an advice approach, and adapts its response in real time. You can switch between advice styles to see how reframing changes the output.

Let's break down the interface

This interface is designed to make the reasoning behind AI responses visible. Instead of generating advice immediately, the system first interprets the situation, surfaces emotional signals it detects, and then frames the response through different advice approaches. Each part of the interface exposes a different stage of that process.

Inference Layer

The inference layer embodies the user's emotions in visual form. At the top of the layer is a lens object whose color tones shift according to the emotions the system has detected. Future iterations could bring more play and expression through form and motion.

The detected emotions change dynamically based on context; they are not the same every time. One possible extension is to let users adjust the sliders to better match their perception of their own emotions, which, again, gives them more control over the experience.
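The prototype itself was built in Figma Make rather than hand-written code, but the emotion-detection stage described above can be sketched in miniature. Everything here is hypothetical: the cue lists, the confidence formula, and the slider-override helper are stand-ins for the model's actual inference.

```python
# Toy sketch of the emotion-inference stage. The real prototype relies on
# an LLM's judgment; this keyword version only illustrates the data flow:
# free text in, {emotion: confidence} out, with user-adjustable overrides.

EMOTION_CUES = {
    "anxiety": ["worried", "nervous", "deadline"],
    "frustration": ["stuck", "annoyed", "unfair"],
    "sadness": ["lonely", "miss", "lost"],
}

def detect_emotions(text):
    """Return {emotion: confidence in 0..1} from crude cue counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = {}
    for emotion, cues in EMOTION_CUES.items():
        hits = sum(w in cues for w in words)
        if hits:
            # More cue hits -> higher confidence, capped at 1.0.
            scores[emotion] = min(1.0, 0.4 + 0.2 * hits)
    return scores

def apply_user_sliders(detected, overrides):
    """Let the user nudge confidences toward their own self-perception."""
    merged = dict(detected)
    merged.update({k: max(0.0, min(1.0, v)) for k, v in overrides.items()})
    return merged
```

The slider helper is the interesting part for the design argument: the system's inference is a starting point the user can correct, not a verdict.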

Advice Approach

The prototype cycles through four advice approaches: Solution-focused, Empathetic, Reflective, and Challenge my thinking. The AI picks an advice approach based on context and offers one other option the user can switch to. The idea was to give the user more flexibility and control over their own experience.
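The primary-plus-alternate selection described above could be sketched as follows. The emotion-to-approach mapping is a hypothetical stand-in for the model's contextual judgment; only the four approach names come from the prototype.

```python
# Sketch of choosing an advice approach from detected emotions, plus one
# alternate the user can switch to. The PREFERRED mapping is invented for
# illustration; the real prototype lets the AI decide from full context.

APPROACHES = ["Solution-focused", "Empathetic", "Reflective",
              "Challenge my thinking"]

PREFERRED = {
    "anxiety": "Empathetic",
    "frustration": "Solution-focused",
    "sadness": "Reflective",
}

def select_approach(emotions):
    """Return (primary, alternate) approaches for {emotion: confidence}."""
    if not emotions:
        return "Reflective", "Empathetic"
    dominant = max(emotions, key=emotions.get)
    primary = PREFERRED.get(dominant, "Challenge my thinking")
    # Offer the first remaining approach as the switchable alternate.
    alternate = next(a for a in APPROACHES if a != primary)
    return primary, alternate
```

Returning a pair rather than a single answer is what makes the switch affordance possible: the interface always has a second framing ready for the user to try.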

Custom Scenario Interaction

In addition to suggested prompts, users can describe their own situation in natural language. The system analyzes the tone and context of the input, updates the detected emotions, and generates a response based on the selected advice approach.

This allows users to experiment with different scenarios and see how the system’s interpretation and guidance change depending on how the situation is described.

Reflection

This project explored how AI systems could make their internal interpretations more visible to users. Many conversational tools generate responses without revealing how they arrived there. Designing the inference layer was an attempt to expose that reasoning process in a way that felt understandable rather than technical.

One key takeaway was how difficult emotional interpretation actually is. Even when the system identifies possible signals, those signals are inherently ambiguous. Visualizing them with confidence levels and multiple advice approaches helped highlight that uncertainty rather than hiding it.

If I continued developing this concept, I would explore richer ways of embodying emotion in the interface. The lens currently uses color to reflect emotional tone, but motion, shape, and interaction could further communicate how the system is interpreting the situation.

This project ultimately reinforced my interest in designing AI systems that are transparent, interpretable, and collaborative, rather than treating the model as an invisible authority.

Thanks for reading!
