From AI to Z: Dissecting Google I/O 2024 from a mobile perspective
It’s no surprise there was a definite AI undercurrent at Google’s I/O event this past week. From enhanced search to live assistant, there was plenty to digest.
The question many are grappling with is how AI can be more seamlessly integrated into our daily lives. And it was hard not to be impressed with some of the live demos from the Alphabet folk.
But what is it likely to mean from a digital product perspective and where does that leave us with the big questions around how AI is making its presence felt in the world of mobile? Time to cast a critical eye over some of the key news.
Gemini takes over
One of Google’s standout announcements was the integration of Gemini across its entire suite of products. This marks a pivotal and unsurprising step forward for the tech giant.
Folding Gemini into as much of Google's estate as possible isn't just a tactical enhancement; it's a move that sets the stage for more intuitive and intelligent user experiences across all Google platforms, including mobile.
With Google already having a huge number of users in its ecosystem, getting them on board with Gemini as smoothly and quickly as possible has the potential to set up a flywheel effect, driving adoption and locking customers into Google's wider tooling.
It's a smart play to embed AI deeply within the everyday experiences of its users, making Gemini an integral part of their digital lives.
What does that mean for mobile?
Where that gets interesting is the potential to bring more of Gemini’s capabilities to the mobile space. The balance between edge and cloud inference will certainly be one to watch.
How Google navigates this will determine the efficiency and responsiveness of AI applications on mobile devices.
This also casts a spotlight on Apple’s AI strategy. Quite which way it will go on this is something the tech world awaits intently, with more details likely to be shared at the upcoming WWDC on 10 June.
Keeping up with the AI giants
OpenAI slightly upstaged Google by announcing GPT-4o, faster and free to use, just a day earlier. Even so, Google's live demonstrations, including AR glasses, still managed to showcase the real-world benefits of its AI advancements.
Exploring the synergies between AI and hardware could allow Google to maintain a slight edge in the market. Keeping up with OpenAI won't be easy, with ChatGPT attracting roughly 1.6 billion visits a month, but Google won't be taking this lying down.
AI for accessibility
Beyond these shinier, brassier announcements, Google’s focus on leveraging AI for accessibility was also interesting. This week (16th May) was Global Accessibility Awareness Day. Features such as TalkBack image description and Project Gameface could mark a more inclusive future for mobile products.
Project Gameface will allow developers to integrate a hands-free virtual mouse, letting users control the cursor with facial gestures or head movements. These innovations have the potential to significantly enhance the mobile experiences of users with accessibility requirements.
What are the immediate implications?
The AI debate rumbles on and we’re doing our bit to contribute to the discussion of current and future trends influencing the world of mobile experiences with a panel event on 22 May. There are still a few places left if you want to sign up here.