INDUSTRY INSIGHTS
AX, EX, OX, and PX: Towards a Frictionless Publishing Experience
A reflection from our session at the London Book Fair 2026
Neelanjan Sinha
VP – Product & Technology
At a time when nearly every conversation begins (and sometimes ends) with AI, we wanted to anchor our talk at this year’s London Book Fair around experience instead. Not because AI is unimportant. In 2026, that hardly needs saying.
In real publishing workflows, how people experience the system might be the more urgent problem, and the more powerful lever for change.
A wrong map that works
The evening before our session, we were on the London Underground, staring at the Tube map above the doors, and a thought landed that ended up reshaping the next morning’s talk.
The Tube map is wrong. Distances are distorted, angles are invented. Laid over an actual map of London, the two would barely agree.
And yet, millions of people navigate one of the world’s most complex transit systems with it every day.
In 1933, an engineering draughtsman named Harry Beck made a radical choice. He threw out the geography, straightened the lines, and stripped the map down to what a traveller actually needs: how to get from one place to another. Technically wrong, experientially right.
In publishing too, the instinct is to make things more accurate, more detailed, more complete. Often, the better intervention is to redesign how people move through the work.
The friction that metrics miss
Publishing systems work. Often quite well. But underneath the surface, there’s a layer of effort that rarely shows up on any dashboard: copy editors working around inconsistent style guides, operations teams managing exceptions invisible to everyone else, and authors struggling with submission guidelines that were written for the system rather than for them.
The metrics look fine, but they don’t show how much effort it took to get there.
The useful kind of friction in this system, the kind that shows up when an editor questions a reference, catches an error, or exercises some kind of judgement? That friction should stay.
The dangerous kind is the friction that forces skilled people to absorb effort that systems should carry. And the better they are at it, the more invisible it becomes.
That is the gap between efficiency and experience. A system can look fast and still feel broken to the people inside it. When the people carrying that hidden effort leave, or when volume scales, or when AI changes the nature of the task, the fragility surfaces. By then, the problem becomes an expensive one to fix.
Four lenses, one system
We’ve been thinking about experience through four connected lenses. We call them AX, EX, OX, and PX.
AX: Can authors navigate the publishing process with ease?
EX: Can editors do the same?
OX: Can operations teams do the same?
PX: Can publishers see the system clearly enough to steer it?
No single persona sees the whole system. Each experiences a part. When you optimise one layer in isolation, friction doesn’t disappear. It simply moves elsewhere.
Shanthi saw this firsthand while trying to simplify a publisher’s 12-page author submission guidelines. From the author’s perspective, every section felt necessary. From an operations perspective, it should have been a one-pager. That tension between what’s right for one persona and what works across the system doesn’t get solved by better tools; it gets solved by understanding how the layers connect, and what each persona actually needs in order to do their work well. That’s a design problem, not a technology problem.
These layers reinforce each other. When author experience is poor, editorial inherits the mess. When editorial comes under strain, operations absorb it. And when operations are constantly patching invisible gaps, the publisher’s view of the system becomes less reliable.
It compounds the other way too. Improve the experience at one layer thoughtfully, and the benefits propagate across the others.
Why this matters even more with AI
AI can perform well on individual tasks. But inside a real workflow, with handoffs, edge cases, and judgement calls, the problem changes.
Think of a Formula One pit stop: twenty-one people, under two seconds. It works because everyone knows when to move and when to stay still. That’s orchestration, and it’s exactly what’s missing when AI enters most publishing workflows.
The question isn’t whether the model works. It’s when the system should proceed, pause, or involve a human. A confident answer isn’t always a safe one.
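To make that proceed-pause-escalate idea concrete, here is a minimal sketch of confidence-based routing. The function name, thresholds, and labels are all illustrative assumptions, not anything from our systems; the point is only that the decision lives in the workflow, not in the model.

```python
# Hypothetical sketch: routing an AI suggestion by confidence.
# All thresholds and labels here are illustrative assumptions.

def route(confidence: float, high: float = 0.95, low: float = 0.60) -> str:
    """Decide whether a workflow step proceeds, pauses, or goes to a human."""
    if confidence >= high:
        return "proceed"  # apply automatically, but keep an audit trail
    if confidence >= low:
        return "pause"    # hold the item for batch review
    return "human"        # route straight to an editor's judgement

# A confident answer still passes through the same gate as a shaky one.
print(route(0.99), route(0.70), route(0.20))
```

Even in a toy like this, the interesting design question is not the thresholds themselves but who sets them, and what happens to the items that land in the middle band.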
AI also changes the work itself. In copy editing, understanding builds as you go: the argument, the structure, the author’s voice. Reviewing AI output requires that same understanding, plus judging what’s right, what’s missing, and what context was lost. It’s a different task. Often harder.
So a human in the loop is not enough. Unless the workflow helps people apply judgement well, the loop becomes theatre. Orchestration is what builds trust: handling uncertainty, surfacing risk, and making human judgement count. That is why our position is experience-first, AI-enabled.
Before reaching for engineering, we ask whether the problem is really one of design, clarity, or workflow. It is also why we are investing in Responsible AI. Our ongoing assessment toward ISO 42001 is a way to formalise how the system behaves around transparency, bias, data handling, and human oversight. Trust has to be designed into a domain built on research integrity.
What we keep coming back to
Friction reveals where systems leak effort.
Experience shows where people absorb it.
Design helps remove the wrong kind of friction and keep the right kind.
AI doesn’t change these principles; it raises the stakes.
We don’t have it all figured out, but if any of this resonates, here’s a question worth sitting with:
Where in your workflow are people absorbing effort that the system should carry?
And is the fix you’re reaching for aimed at the right layer?
If any of this sparked a thought, a question, or a disagreement, we’d love to hear from you.
This post is based on our session at the London Book Fair 2026, presented by Shanthi, Jessica, and Neel.

