I’ve spent a significant part of my career working on digital products in the automotive sector. Finance journeys, customer-facing configurators, the systems that sit between a person trying to buy or understand a car and the organisation trying to sell or service one. It’s a space where the technology moves fast and the human implications tend to get discussed afterwards, if at all.
Writing Who Holds the Wheel gave me a chance to put into words something I’d been thinking about across those years of work, mostly in rooms where these conversations weren’t happening loudly enough.
The chapter is about automotive AI specifically, but the underlying questions apply anywhere AI is becoming embedded in products that people live with daily. The questions are about power. About who holds it, how it shifts, and what happens when the people using a system don’t fully understand what the system is doing on their behalf.
Most conversations inside automotive transformation programmes revolve around capability. Engineers talk about prediction models, sensing accuracy, automation, and performance optimisation. Those are important conversations. But almost nobody stops to ask the more human question: what does this system feel like to live with?
A car is not just another digital product. It’s a space people occupy every day. It carries their families. It holds moments of stress, quiet, routine, and sometimes real vulnerability. When intelligence gets embedded into that environment, it changes the relationship between the person and the machine in ways that go well beyond the feature list.
Modern vehicles are no longer mechanical machines with digital add-ons. They’re rolling software platforms. They sense their environment in real time. They collect behavioural data continuously. They make decisions alongside the driver, and increasingly they make decisions instead of the driver. Lane keeping, fatigue detection, predictive navigation, remote software updates that change how the car behaves after you’ve already bought it. All of this is already happening, largely in the background, largely without clear communication to the person sitting behind the wheel.
What cars are actually collecting about drivers tends to surprise people when it’s laid out plainly. It goes well beyond location. A vehicle can detect how sharply someone accelerates, how quickly they brake, how often they correct the steering wheel, whether their attention appears to drop, and how long they’ve been driving without a break. Some systems monitor posture and fatigue signals through interior sensors. When those signals are combined, the vehicle is effectively building a behavioural profile of the driver. Most people using these systems are never clearly told that. The data collection sits quietly in the background of the experience, and the trade being made is largely invisible.
That invisibility is one of the central problems I wanted to write about. The hidden cost of AI in automotive isn’t primarily financial. It’s informational. Drivers receive smoother experiences and improved safety features, but they often have very little visibility into how their behavioural data is analysed, stored, or shared across wider ecosystems. The technology understands the driver far better than the driver understands the technology. That asymmetry is worth naming, and it’s worth designing against.
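To make that asymmetry concrete, here is a minimal sketch, in Python, of the kind of profile a vehicle could assemble from routine telemetry. Every signal name, field, and rate in it is hypothetical, chosen only to show how individually mundane signals combine into something personal; it describes no real manufacturer's pipeline.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TripSignals:
    """One trip's worth of routine telemetry.
    All fields are illustrative; real systems vary by manufacturer."""
    harsh_brake_events: int      # sudden decelerations
    sharp_accel_events: int      # aggressive throttle inputs
    steering_corrections: int    # small, frequent wheel adjustments
    attention_drops: int         # gaze-off-road events from interior sensing
    minutes_driven: float

def behavioural_profile(trips: list[TripSignals]) -> dict[str, float]:
    """Combine per-trip signals into a per-driver profile.

    Each signal is mundane on its own; normalised and aggregated
    across trips, the result describes the person, not the car.
    """
    total_minutes = sum(t.minutes_driven for t in trips) or 1.0
    return {
        "braking_harshness_per_hr": 60 * sum(t.harsh_brake_events for t in trips) / total_minutes,
        "acceleration_aggression_per_hr": 60 * sum(t.sharp_accel_events for t in trips) / total_minutes,
        "steering_instability_per_hr": 60 * sum(t.steering_corrections for t in trips) / total_minutes,
        "mean_attention_drops_per_trip": mean(t.attention_drops for t in trips),
    }
```

Nothing in that sketch requires exotic hardware, and that's the point: the profile emerges from simple bookkeeping over signals the car already has, and the driver never sees it being built.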
Physically, the driver still holds the wheel. But decision authority is gradually becoming shared between the human, the software embedded in the vehicle, and the organisations that continue to update that software remotely. We are, in effect, negotiating a new relationship between human judgement and machine decision-making, and most of that negotiation is happening without the driver’s meaningful participation.
I also wanted to write about care, because it kept coming up as the thing missing from the conversations I was sitting in. Not care as a marketing concept. Care as a technical discipline. The idea that designing AI systems responsibly means building in very practical questions from the start. Can the driver understand what the system is doing? Can they intervene when something feels wrong? Does the system communicate clearly when it's uncertain? Does it return control when its own confidence drops? Are the people most likely to be affected by edge cases (vulnerable road users, older drivers, people in less familiar environments) considered in the design rather than treated as exceptions?
Those aren’t soft questions. They’re design requirements. And in my experience, they’re the ones most likely to get deprioritised when the pressure is on performance and capability.
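As one illustration of what care as a technical discipline can look like, here is a minimal sketch of a confidence-gated handover: the kind of explicit "return control when uncertain" logic those questions point at. The threshold, state names, and function are assumptions made for the example, not any production system's design.

```python
from enum import Enum, auto

class ControlState(Enum):
    SYSTEM_ASSISTING = auto()    # automation active, driver supervising
    HANDOVER_REQUESTED = auto()  # system uncertain, driver alerted
    DRIVER_IN_CONTROL = auto()   # automation disengaged

# Hypothetical threshold: below this confidence, the system must
# say so and hand control back rather than guess silently.
MIN_CONFIDENCE = 0.85

def next_state(model_confidence: float, driver_acknowledged: bool) -> ControlState:
    """Decide who holds the wheel for the next control cycle.

    The design requirement encoded here: uncertainty is communicated,
    not hidden, and low confidence always returns authority to the human.
    """
    if model_confidence >= MIN_CONFIDENCE:
        return ControlState.SYSTEM_ASSISTING
    if driver_acknowledged:
        return ControlState.DRIVER_IN_CONTROL
    # Keep alerting until the driver takes over; never drive on a guess.
    return ControlState.HANDOVER_REQUESTED
```

The specific threshold doesn't matter. What matters is that "does the system return control when its confidence drops?" stops being an abstract question the moment someone has to encode it as a behaviour the vehicle will actually execute.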
The accountability question sits underneath all of this. When a critical error occurs in an AI-driven vehicle, legal responsibility usually sits with the vehicle manufacturer as the entity that puts the product on the road. But ethically the picture is more distributed. Every organisation involved in designing the sensors, the models, and the decision logic contributes to how that system behaves. As vehicles become more software-defined, the industry will need governance frameworks that treat accountability as a shared responsibility across that ecosystem rather than a single point of blame.
I came into the automotive sector as an outsider in some respects, a digital transformation and experience strategist rather than a lifelong automotive specialist. What that gave me was a clear eye for the gap between what the industry was building and what customers were actually experiencing.
The car-buying journey now spans digital research, retailer interaction, connected vehicle onboarding, and an ongoing post-purchase relationship managed through apps and data agreements most customers never fully read. Getting that experience right (making it trustworthy, clear, and genuinely human) requires people who understand both the system and the person on the other side of it. Those two perspectives are not always in the same room.
This, in the end, is what the chapter tries to say plainly. AI in automotive isn't coming. It's already here, already shaping what people experience, already making decisions that most users don't know are being made. The question isn't whether to engage with that. It's whether the people building these systems are asking the right questions about power, cost, and care.
I’m glad this chapter exists. I’m glad to be part of a book with 25 other writers who are approaching AI with this kind of rigour and honesty. The chapter sits alongside work on inclusive UX design, cybersecurity accountability, AI in agriculture, education, healthcare, and creativity, each one bringing a different lens to the same underlying question: how do we shape AI so that it genuinely serves the people living with it?
If you read it, or if it sparks something in your own work, I’d genuinely like to hear about it.
You can find the book at shewritesai.org or on Amazon.
