Most conversations about AI and UX are about process. How do we use AI to do research faster? How do we use it to generate design variants? How do we use it to automate the tedious, repetitive parts of our workflow?
These are reasonable conversations. They’re also fairly surface-level.
The more interesting question, and the one I’ve seen fewer people genuinely engage with, is what working alongside machine learning systems teaches you about user behaviour itself. Not how AI changes the design process. How it changes what you understand about the people you’re designing for.
The starting point is personalisation. When you’re working on a product that uses ML to personalise the experience, you have to confront something that UX research has always known but rarely had to operationalise at this scale: users are not a coherent group. They don’t have shared preferences that a single design can satisfy. The same pattern that works for one segment actively creates friction for another. The persona was always a simplification, but personalisation at scale makes the simplification undeniable.
Designing for ML-driven personalisation forces a level of nuance in user understanding that most design processes don’t naturally produce. You can’t design a single journey and call it done. You have to think in terms of conditions. Under what conditions does this experience work well? Under what conditions does it fail? What is the user trying to do, what do they already know, and how does the system’s model of them compare to the reality of them? The gap between those things is where the interesting design problems live.
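To make that concrete, here is a minimal Python sketch of designing in conditions rather than in a single journey. Everything in it is invented for illustration: the context fields, the variant labels, the 0.5 confidence threshold. The detail worth noticing is that the gap between the system’s model of the user and the reality of the user becomes an explicit condition with its own safe fallback.

```python
# A minimal sketch of designing in conditions rather than one journey.
# All names and thresholds here are hypothetical, not from a real product.

from dataclasses import dataclass

@dataclass
class Context:
    task: str                # what the user is trying to do right now
    familiarity: str         # "new" | "returning" | "expert"
    model_confidence: float  # how sure the system is about its model of this user

def choose_experience(ctx: Context) -> str:
    """Pick an experience variant per condition, with an explicit
    fallback for when the system's model of the user is weak."""
    # The gap between the system's model and the real user is a design
    # condition in its own right: low confidence means don't personalise.
    if ctx.model_confidence < 0.5:
        return "neutral-default"       # safe, unpersonalised journey
    if ctx.familiarity == "new":
        return "guided-onboarding"
    if ctx.task == "executing":
        return "streamlined-shortcut"
    return "personalised-browse"

print(choose_experience(Context("exploring", "new", 0.9)))     # guided-onboarding
print(choose_experience(Context("executing", "expert", 0.3)))  # neutral-default
```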
Working on these systems has also changed how I think about intent. Traditional UX tends to treat user intent as relatively stable and legible. You do research to understand what users want, and you design to support that. But behaviour data from ML systems tells a more complicated story. People’s stated preferences and their revealed preferences regularly diverge. What they say they want and what they actually engage with are often different. And what they engage with in one context is different from what they engage with in another.
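A toy way to see that divergence: compare the share of attention people say they would give each content type with the share they actually give it. The categories and numbers below are invented, and total variation distance is just one convenient score for the gap.

```python
# A toy comparison of stated vs revealed preferences.
# The category names and shares are invented for illustration.

stated = {"long-form": 0.50, "news": 0.30, "video": 0.20}   # survey shares
engaged = {"long-form": 0.15, "news": 0.25, "video": 0.60}  # engagement shares

# Total variation distance: 0 = perfect agreement, 1 = total divergence.
divergence = 0.5 * sum(abs(stated[c] - engaged[c]) for c in stated)
print(f"stated vs revealed divergence: {divergence:.2f}")  # 0.40

for c in stated:
    print(f"{c:10s} says {stated[c]:.0%}, does {engaged[c]:.0%}")
```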
This isn’t news to anyone who’s done behavioural research. But seeing it at scale, in the form of model outputs that are trying to predict behaviour and sometimes failing in instructive ways, makes the complexity more concrete. You stop thinking about users as having preferences and start thinking about them as being in states. Curious, distracted, time-pressured, exploring, executing. The design question becomes less “what do they want” and more “what state are they in and what does this experience need to do in that state.”
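A rough sketch of what states-not-preferences can look like in code. The signals, thresholds, and state names below are assumptions rather than a real classifier; the useful part is the mapping at the bottom, from each state to what the experience needs to do in it.

```python
# A sketch of treating users as being in states rather than having fixed
# preferences. Signals, thresholds, and state names are all assumptions.

def infer_state(dwell_seconds: float, actions_per_min: float, has_deadline: bool) -> str:
    """Guess a behavioural state from a few session signals."""
    if has_deadline or (actions_per_min > 20 and dwell_seconds < 5):
        return "time-pressured"
    if dwell_seconds < 3 and actions_per_min > 10:
        return "distracted"
    if dwell_seconds > 30:
        return "curious"
    return "executing"

# What the experience needs to *do* in each state, not what the user "wants".
STATE_GOALS = {
    "time-pressured": "shortest path to done; suppress discovery",
    "distracted": "one clear next step; reduce choices",
    "curious": "widen the view; surface adjacent content",
    "executing": "stay out of the way; preserve context",
}

state = infer_state(dwell_seconds=2.0, actions_per_min=25.0, has_deadline=False)
print(state, "->", STATE_GOALS[state])
```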
That’s a more useful frame for design, whether or not there’s a machine learning system involved.
The third thing working with ML teaches you is about feedback loops. Digital products create them. Most teams don’t design them intentionally. The product shows you certain things, you engage with certain things, the product learns from your engagement and shows you more of those things, your behaviour narrows. This is the recommendation engine problem writ large, but it applies to any experience that adapts to behaviour.
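A toy simulation makes the narrowing visible. Nothing in it resembles a real recommender: five invented categories, a show-more-of-what-was-clicked rule, and a click probability that rises with familiarity.

```python
# A toy narrowing loop: the system shows what you engaged with, you engage
# with what it shows, and variety shrinks. All values here are invented.

import random
from collections import Counter

random.seed(7)
CATEGORIES = ["music", "sport", "film", "science", "travel"]
history = Counter({c: 1 for c in CATEGORIES})  # seed each category once

def show_item() -> str:
    """The system over-serves whatever was engaged with before."""
    return random.choices(CATEGORIES, weights=[history[c] for c in CATEGORIES])[0]

def engages(category: str) -> bool:
    """Familiarity breeds clicks: click probability tracks past share."""
    return random.random() < 0.2 + history[category] / sum(history.values())

for step in range(3000):
    shown = show_item()
    if engages(shown):
        history[shown] += 1
    if step in (100, 1000, 2999):
        total = sum(history.values())
        shares = ", ".join(f"{c}:{history[c]/total:.0%}" for c in CATEGORIES)
        print(f"step {step:4d} -> {shares}")
```

Even in a model this crude, the default dynamics concentrate attention; if you want the loop to stay open, exploration has to be designed in deliberately.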
Designing for feedback loops means thinking about the long arc of the experience, not just the immediate transaction. What does this experience look like six months in for someone who uses it daily? How has the system’s model of them evolved? Is it converging on an accurate understanding of what they need, or is it reinforcing a narrow version of them that they’re now trapped in?
These are UX questions that most design processes aren’t equipped to ask, because they require thinking at a timescale and a level of systemic complexity that a journey map doesn’t capture. Getting comfortable with this kind of thinking, being able to trace the downstream experience implications of a model design decision made today, is one of the more genuinely new skills that the AI era requires of designers.
You don’t need to be a data scientist. You need to understand enough about how these systems work to ask the right questions and to recognise when the model’s behaviour is creating an experience problem that design needs to address.
The designers doing this well aren’t the ones who’ve learned to use AI tools most efficiently. They’re the ones who’ve let working with AI change how they think about users: more dynamic, more contextual, more honest about the gap between what people say and what they do.
That shift in thinking is harder to teach than a new tool. It’s also more valuable.