Beyond the screen - agentic AI lifts the lid on life without displays

Screens have come a long way and hold a firm grip on our interactions with technology, but AI assistants are offering a possible alternative… Apadmi’s co-founder and Chief Innovation Officer Adam Fleming takes a look.


It seems like a long time ago when I held my first mobile phone — a Nokia 5110 — an almost indestructible device with the ergonomics (and resilience) of a brick. The display was a dizzying 88×44-pixel monochrome postage stamp, and the battery lasted for days. It could make phone calls, but that was about it (unless you wanted to play Snake).

Fast forward to today: the iPhone on my desk has around 750 times more full-colour pixels, spread across a screen roughly 16 times bigger. So what’s the future for displays? They can’t just keep getting bigger, so how are they likely to evolve and what might be beyond the screen?

Innovation in displays is continuing apace, and has been for years. Folding and rollable screens have been at trade shows, and occasionally in the wild, for some time now. Holographic or volumetric displays have been shown and even deployed in limited contexts. Virtual reality rigs have gone from huge, ponderous installations to widely available consumer electronics. Smart glasses made a splash at this year’s CES, and smart contact lenses are inching ever closer to reality.

From viewing and swiping, to talking

But as visual displays approach saturation — of clarity, size, depth, and pervasiveness — the new frontier isn’t visual at all. It’s conversational.

The rise of LLMs and associated technologies means that language and voice are rapidly becoming a more direct interaction channel. Why search for an option, or even an app, when you can simply ask your assistant for what you want? Which raises the question: do you even need a screen?

There are already players exploring this direction; at CES in January 2025, there were seven smart glasses announcements, two of which featured built-in AI assistants.

But embracing this shift comes with real challenges.

In a modern smartphone, the screen is both an input and output device. Without it, something else has to carry that load. Voice is a natural fit until you’re in a situation where speaking out loud isn’t possible, or simply isn’t appropriate.

Apple’s Vision Pro, and other players in the VR/MR market, have demonstrated that gesture-based input is possible, although for now, that’s still not ideal in many situations. Could this point to a new model for interaction? One that’s more intuitive, embodied, and ambient?

The logical endpoint of this path is direct brain interfaces. Work is underway, but we’re still likely years, if not decades, away from meaningful implementations.

It’s also worth recognising that AI assistants themselves are still in their infancy. LLMs have come a long way since their earliest (and often hilariously creative) versions, but there are still challenges around cost, accuracy, reliability, and a slew of legal and cultural issues.

AI agents take on the screens

But an AI assistant is more than just an LLM. It needs reasoning, planning, and, most importantly, the ability to effect real change. It needs agency.

In a world optimised for human users, with interfaces built for eyes, fingers, and attention, AI agents remain second-class citizens. Given how much today’s user interfaces are designed, and monetised, around the human user, is there even an incentive for service providers or retailers to support, let alone welcome, AI agents?

Viewed one way, the AI agent is a perfect antidote to the ad-driven systems built to extract maximum value from every eyeball. That alone suggests it won’t be universally welcomed.

We’ve seen this pattern before. In the early days of the internet, many retailers and service providers were deeply sceptical. Selling online was seen as a threat to physical stores, a dilution of brand, or simply a technical distraction. In the UK, companies like HMV and Woolworths struggled to adapt, while others such as WHSmith treated e-commerce as an afterthought. In the US, Borders famously outsourced its online sales to Amazon, a decision that helped accelerate its own decline.

Disruptive technology doesn’t wait for permission

Meanwhile, the trailblazers blazed on. Amazon, born as a scrappy online bookseller, embraced the internet’s disruptive potential and went on to reshape global commerce. Ocado built a grocery business around logistics and software from day one and now licenses its technology to supermarkets around the world. Resistance didn’t stop the shift. It only reshaped where the opportunities landed, often in the hands of those who moved first and moved fast.

We may be approaching a similar moment. The rise of agentic systems (AI assistants that act on behalf of users) represents a fundamental shift in how digital services are discovered, evaluated, and used. It breaks with the visibility-first, engagement-driven logic that underpins today’s digital economy.

And like every disruptive wave before it, this one won’t wait for permission. For those willing to rethink their role in a world where the interface disappears and the user might not even be human, there’s opportunity. For everyone else, there’s disruption.

At Apadmi, we’ve spent years building exceptional mobile experiences for the world as it is, and we’re already thinking about what’s next. If you’re exploring the future of interaction, from voice to vision to AI-powered agents, we’d love to help you shape it. Contact us today.

