Apple’s virtual assistant Siri, which has long struggled with basic tasks, is expected to undergo significant updates as part of Apple’s upcoming iOS 18 release. The announcement comes as Apple aims to address Siri’s limitations in contextual awareness and natural conversation, raising the question of whether Siri will become more like ChatGPT.
A recent interaction highlighted Siri’s ongoing challenges. When asked about the dates for the Olympics, Siri correctly provided the information. However, when asked to add the event to the calendar, Siri faltered, asking unnecessary follow-up questions like, “What should I call it?” and “When should I schedule it?” This demonstrated Siri’s difficulty in understanding context and maintaining a conversation.
Apple’s annual Worldwide Developers Conference (WWDC) on June 10 is expected to showcase major improvements for Siri. These updates aim to make Siri more context-aware and capable of handling conversations more naturally, closer to its competitors, Amazon’s Alexa and Google Assistant.
The Evolution of Voice Assistants
Siri made a significant impact when it debuted with the iPhone 4S in 2011, offering a humanlike voice interaction experience. Although Android phones at the time had basic voice search and commands, Siri’s intuitive interface set a new standard. Over the years, however, Alexa and Google Assistant have surpassed Siri in understanding and answering questions, as well as in integrating with smart home systems.
In recent weeks, advancements in generative AI have introduced a new wave of virtual assistants with multimodal capabilities. OpenAI’s latest AI model, GPT-4o, can hold two-way conversations, understand tone, and even respond to images and videos. Google has also unveiled its multimodal assistant, Project Astra, which can identify objects and remember details about the user’s environment.
While Apple has not yet released a multimodal digital assistant, it has published research on a model called Ferret. This AI model can understand screen content and perform various tasks based on visual input. Apple’s focus on privacy remains a core value, with plans to process Siri’s requests on-device and use cloud processing for more complex tasks.
Apple is reportedly close to finalizing a deal with OpenAI to bring ChatGPT to the iPhone.
Siri’s Struggles and Competitive Landscape
Siri’s inability to understand context remains its most significant drawback. As the Olympics exchange above illustrates, Siri can answer a factual question accurately yet fail to grasp the obvious follow-up action of adding the event to the calendar. This difficulty in maintaining a coherent conversation diminishes its usefulness as a virtual assistant, and closing that gap is central to whether Siri can become more like ChatGPT.
Since Siri’s debut, competing assistants like Amazon’s Alexa and Google Assistant have advanced rapidly. They excel at understanding and answering complex questions and have expanded into smart home systems, while Siri lags in both contextual understanding and integration. By being more intuitive and versatile, Alexa and Google Assistant have set a higher standard that highlights Siri’s stagnant progress.
Looking Forward
Recent advancements in generative AI have introduced multimodal assistants that understand and respond to various inputs, such as text, images, and videos. OpenAI’s GPT-4o and Google’s Project Astra are notable examples. These assistants can engage in natural conversations, recognize objects through a camera, and remember details about the user’s environment. This technological leap underscores Siri’s current limitations and the need for significant updates to stay competitive.
As Apple adopts advanced multimodal capabilities, will Siri become more like ChatGPT? Apple is expected to unveil major updates for Siri with iOS 18, aimed at improving its contextual awareness and conversational abilities. Apple’s research on the Ferret model, which can understand screen content and perform tasks based on visual input, points to a potential shift toward multimodal capabilities.