
UX and AI: Designing Interfaces for the Future

User Experience Consultant

In this article, we’ll look at how UX and AI intersect, and how UX designers can stay ahead. We’ll explore how to ensure AI interfaces still feel intuitive and human, how to leverage AI personalisation, how to design for natural language and emotional responses, and the shift from designing interfaces to creating AI-powered experiences.

The challenge of keeping AI interfaces human and intuitive

Artificial intelligence is becoming more embedded in digital products and is reshaping how users experience technology. Traditionally, UX design has been about understanding human behaviour and using that understanding to build interfaces that are clear, efficient, and delightful. But purely static user interfaces are becoming a thing of the past: we’re now designing fluid, intelligent systems that adapt, learn, and even talk back – less predictable, more adaptive. What does this mean for the role of the UX designer and the future of UX? Designing for context, conversation, and cognition has become the focus, rather than thinking solely about screens and buttons.

Maintaining the human touch is one of the biggest challenges of integrating AI into user experiences. You’ve probably encountered an AI-powered product that, because it wasn’t designed thoughtfully, came off as robotic, confusing, or even intrusive. There are a few ways designers can make intelligent systems still feel human:

Transparency in responses

Ever had an app or website recommend something to you without knowing why? When AI makes a decision – recommending a product, say, or changing your user experience – it’s essential to communicate the “why”. For instance, Netflix shows viewers the logic behind content recommendations with the simple yet effective heading “Because you watched…”. Users may be more inclined to check those recommendations out because they know the suggestions are based on content they’ve already enjoyed, rather than being shown content whose relevance they have no way to judge.
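The pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and data model are my own, not Netflix’s actual API): every recommendation carries a human-readable “why” alongside the title itself.

```python
# Hypothetical sketch: pair each recommendation with the signal that produced it,
# so the interface can always surface the "why" next to the "what".
def explain_recommendation(recommended_title, source_title):
    """Return the recommendation together with a human-readable reason."""
    return {
        "title": recommended_title,
        "reason": f"Because you watched {source_title}",
    }

rec = explain_recommendation("Film B", "Film A")
print(rec["reason"])  # Because you watched Film A
```

The point of the sketch is structural: the reason is generated at the same moment as the recommendation, so the UI never has to show a suggestion it can’t explain.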

Specificity 

Let’s take the Netflix example again. If a film is recommended in the “Because you watched” section, but you only watched 15 minutes of a similar film before giving up out of boredom, you probably aren’t going to click on that recommendation. Evaluating which films you watched in their entirety, and basing recommendations on those instead, would make the AI feature far more effective. The same applies to online shopping: an accidental click on an item shouldn’t drive similar suggestions. It would be more useful to analyse which items the user actually spent time engaging with, such as lingering on a product description page.
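To make the idea concrete, here is a toy sketch (the field names and the 70% threshold are assumptions for illustration): only films the user watched most of the way through count as signals for recommendations, filtering out the 15-minute abandonments.

```python
def strong_signals(history, min_completion=0.7):
    """Keep only titles the user (nearly) finished, so recommendations
    aren't driven by films abandoned after 15 minutes."""
    return [
        entry["title"]
        for entry in history
        if entry["watched_min"] / entry["runtime_min"] >= min_completion
    ]

history = [
    {"title": "Film A", "watched_min": 15, "runtime_min": 120},   # abandoned early
    {"title": "Film B", "watched_min": 110, "runtime_min": 115},  # watched to the end
]
print(strong_signals(history))  # ['Film B']
```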

Tone and voice

Thoughtful micro-copy means everything. Even if AI is driving the interaction, the personality of the interface (its tone, voice, and overall behaviour) plays a huge role in whether users feel comfortable and understood. The more intelligent a system becomes, the more important it is to anchor it in human communication principles so that it doesn’t feel robotic or impersonal. Avoid jargon, use conversational, clear language, and keep a clear connection with the user’s emotional state and context. “Oops! Something went wrong. Shall we fix it together?” feels a lot better than “Error code: 305”. Polite phrasing such as “Would you like me to…” rather than “I’m doing this now” makes a difference, too.
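One practical way to enforce this is a micro-copy layer that sits between raw system errors and the user. A minimal sketch (the mapping and messages are hypothetical, borrowing the article’s own “error 305” example):

```python
# Hypothetical micro-copy layer: raw error codes never reach the user directly.
FRIENDLY_COPY = {
    305: "Oops! Something went wrong. Shall we fix it together?",
}

def friendly_error(code):
    """Translate a raw error code into human micro-copy, with a safe default
    for codes nobody has written copy for yet."""
    return FRIENDLY_COPY.get(code, "Something unexpected happened. Let's try that again.")

print(friendly_error(305))  # Oops! Something went wrong. Shall we fix it together?
```

The design choice here is the default branch: even an unmapped error produces conversational language rather than leaking a bare code.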

 

Personalisation and AI

AI can collect, analyse, and act on vast amounts of data in a split second – this opens the door to highly personalised, tailor-made user experiences that enhance usability, increase engagement, and make digital interactions much more meaningful. However, there’s a fine line between personalisation and intrusion – just because you can personalise something doesn’t always mean you should. The key is to ensure that personalisation is consensual, transparent, and relevant. Here are a few ways to achieve the balance between personalisation and respect for users’ boundaries:

Create content that learns and grows with the user

Personalisation isn’t just about adjusting in the moment; it’s also about evolving over time. User experiences should feel progressive and intelligent rather than repetitive or static, and the way to achieve this is by observing user behaviour and learning from patterns. For instance, Spotify’s ‘Discover Weekly’ playlist analyses a user’s listening history to generate a mix of songs they’re likely to enjoy but haven’t heard yet. It’s an exploratory and surprising feature rooted in personalisation!

Duolingo’s learning path also adjusts based on which words the user struggles with, how frequently they engage, and their preferred learning style, so that making progress feels rewarding. To ensure transparency in processes like these, it’s important to make the evolution visible. Let users know the system is learning by displaying progress bars, offering feedback like “We’ve updated your recommendations”, or visually highlighting new content tailored to them. It should also feel natural, so let users explore new topics or reset the algorithm if tailored recommendations become too narrow or repetitive. Overall, the user should feel like the product is growing alongside them, not putting them into a predefined box.
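“Make the evolution visible” can be as simple as only surfacing a notice when something has genuinely changed. A small sketch of that idea (the function and message format are assumptions, echoing the “We’ve updated your recommendations” copy above):

```python
def update_notice(old_recs, new_recs):
    """Return a visible notice only when recommendations actually changed,
    so the user can see the system learning - and silence when nothing did."""
    added = set(new_recs) - set(old_recs)
    if added:
        return f"We've updated your recommendations ({len(added)} new)"
    return None

print(update_notice({"jazz"}, {"jazz", "bossa nova"}))
```

Gating the notice on a real change matters: a banner that fires on every visit trains users to ignore it, while one that fires only on genuine updates keeps the “the product is learning” signal credible.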

Prioritise user control 

One of the core principles of ethical AI design is giving users agency over how their data is used, how their experience is shaped, and how much personalisation they want. If users don’t know why they’re seeing certain content, or worse, they can’t change it, they quickly lose trust. A great way to uphold that user trust is by offering granular control settings – for example, allowing users to choose if they want AI-generated suggestions, personalised emails, or dynamic interfaces. Providing reset and feedback options would also strengthen their experience by allowing them to correct bad suggestions, reset their preferences, or even retrain the system. This empowers users and helps to improve their experience over time. YouTube has ‘Not interested’ buttons and shows users why videos are being recommended. These kinds of tools don’t just refine the user experience but build long-term trust in a brand – because when people feel like they’re in control, they’re more likely to stay engaged and loyal. 
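The “granular control” idea above maps naturally onto a settings object where every AI-driven feature has its own toggle plus a one-step reset. This is a hypothetical sketch (the setting names come from the examples in this section, not from any real product):

```python
from dataclasses import dataclass, fields

@dataclass
class PersonalisationSettings:
    """Hypothetical granular controls: each AI-driven feature can be
    switched off independently, and everything can be reset at once."""
    ai_suggestions: bool = True
    personalised_emails: bool = True
    dynamic_interface: bool = True

    def reset(self):
        # Restore the defaults, discarding whatever the system has personalised.
        for f in fields(self):
            setattr(self, f.name, f.default)

settings = PersonalisationSettings(personalised_emails=False)
settings.reset()
print(settings.personalised_emails)  # True
```

Keeping each toggle independent is the point: users who want AI suggestions but not personalised emails shouldn’t be forced into an all-or-nothing choice.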

 

How can we design for natural language and emotion-aware responses?

Chatbots, voice assistants, and other AI companions are becoming part of everyday digital interactions, so users now expect natural language interfaces to be fluid, conversational, and emotionally intelligent. This means interfaces must go beyond mere command-response loops to create experiences that understand tone, intent, and emotional nuance without overstepping ethical boundaries or breaking trust. So, what is the recipe for designing a great conversational experience?

  • Clarity: Responses should be clear, concise, and easy to act on, and free of rambling or ambiguity 
  • Tone matching: The bot should speak in a way that suits the brand and the situation. For instance, a healthcare bot should sound calm and reassuring, while a fashion brand bot should sound playful and energetic
  • Predictability with flexibility: Users should know what kinds of responses or actions to expect, and the system should adapt its replies based on the user’s inputs to avoid sounding robotic or repetitive 
  • Account for the possibility of misinterpretation: Use gentle, non-assumptive language and always give users a way to correct the system 
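The last ingredient, non-assumptive language with a correction path, can be sketched as a confidence-gated reply. This is an illustrative toy (the function, the intent string, and the 0.6 threshold are assumptions): when the system isn’t sure it understood, it asks instead of acting.

```python
def reply(interpreted_intent, confidence, threshold=0.6):
    """Use gentle, non-assumptive language when confidence is low, and
    always leave the user a way to correct the system."""
    if confidence < threshold:
        return f"I think you're asking about {interpreted_intent} - did I get that right?"
    return f"Here's what I found on {interpreted_intent}. Not what you meant? Just let me know."

print(reply("refund policies", 0.4))
```

Note that even the high-confidence branch ends with an invitation to correct the system, so misinterpretation is recoverable in both paths.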

Beyond simple command recognition, emotion-aware AI takes things further – analysing user sentiment based on tone, text, or behaviour to offer more supportive and personalised responses. A chatbot’s sensitivity to a user’s inputs matters too: one that can detect frustration or confusion, and is intelligent enough to shift its tone or slow down, makes users even more trusting of a brand.
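In production this would be backed by a real sentiment model, but the shape of the logic can be shown with a deliberately simple keyword sketch (the cue list and responses are stand-ins I’ve invented for illustration):

```python
# Toy stand-in for a sentiment model: in practice frustration detection would
# use a trained classifier, not a keyword list.
FRUSTRATION_CUES = ("not working", "useless", "ridiculous", "give up")

def detect_frustration(message):
    text = message.lower()
    return any(cue in text for cue in FRUSTRATION_CUES)

def respond(message):
    if detect_frustration(message):
        # Soften the tone and slow down when the user sounds frustrated.
        return "I'm sorry this has been frustrating. Let's take it one step at a time."
    return "Got it! Here's the next step."

print(respond("This is useless, it's still not working"))
```

However the sentiment is detected, the important part is the branch: the detected emotional state changes *how* the system responds, not just *what* it responds with.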

 

Final thoughts

As UX professionals, our task is to make things go beyond looking good or flowing well – interactions need to be intelligent and responsive, yet trustworthy, empathetic, and aligned with human values. To thrive in this space, UX designers need to build a deep understanding of AI capabilities as they grow, advocate for ethics and user control, and design in a way that adapts. The overarching goal is to make AI work with people instead of against them, bridging the relationship between humans and intelligent systems.

