Apple's iPhone AI Plans Confirmed With New Software Upgrade – Forbes

Apple iPhone 15 Pro on sale (Photographer: Dhiraj Singh/Bloomberg)
Updated April 27: article originally posted April 25.
How Apple will improve the next iPhone 16 and iPhone 16 Pro with artificial intelligence is one of 2024’s big questions. Now we know more about Apple’s plans for AI on the iPhone, its technical approach, and how it will pitch it to consumers.
Apple has submitted eight large language models to the Hugging Face hub, an online resource for open-source AI implementations. LLMs are neural networks trained on vast amounts of data; generative AI applications use them to process an input and generate a response, working through as many steps as necessary to arrive at a suitable output.
The larger the LLM, the more memory and processing power it demands, so it should not be surprising that these models were originally built to run in the cloud and be accessed as an online service. More recently, there has been a push to create LLMs with a small enough footprint to run on a mobile device.
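The arithmetic behind "small enough to run on a phone" is straightforward: a model's weight-storage footprint is roughly its parameter count multiplied by the bytes used per weight, which is why quantizing weights from 16-bit down to 4-bit is a common route onto mobile hardware. A minimal back-of-envelope sketch (the parameter counts and bit widths below are illustrative assumptions, not figures from Apple or Microsoft):

```python
def model_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in gigabytes (1 GB = 1e9 bytes)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 7B-parameter model at 16-bit precision needs ~14 GB
# just for its weights -- far beyond a phone's RAM budget.
print(model_footprint_gb(7, 16))  # 14.0

# The same model quantized to 4 bits drops to ~3.5 GB, and a ~3B-parameter
# model at 4 bits to ~1.5 GB -- plausible territory for a flagship handset.
print(model_footprint_gb(7, 4))   # 3.5
print(model_footprint_gb(3, 4))   # 1.5
```

This ignores activation memory and runtime overhead, but it shows why shrinking parameter counts and weight precision is the central engineering problem for on-device AI.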
This requires new software techniques, but it also places a demand on the hardware for more efficient processing. Android-focused chipset manufacturers such as Qualcomm, Samsung and MediaTek offer system-on-chip packages optimized for generative AI. Apple is expected to do the same with the next generation of Axx chips, allowing more AI routines to run on this year’s iPhone 16 family rather than in the cloud.
Running models on the device means user data need not be uploaded and copied away from the device for processing. As the public becomes more aware of the privacy concerns around AI, this will become a key marketing point.
Microsoft store is seen in Manhattan, New York (Photo by Beata Zawrzel/NurPhoto via Getty Images)
Update: Saturday, April 27: Apple is not the only company hard at work on smaller-scale yet effective language models for mobile devices. This weekend, Microsoft published details and developer guides for Phi-3. The smallest of the three generative AI models, Phi-3 Mini, is available through Microsoft’s Azure AI Studio, Ollama and Hugging Face. Phi-3 Small and Phi-3 Medium are still in development.
Phi-3 is a large language model that works within a small footprint. Microsoft claims it can outperform models twice its size “on key benchmarks” and draws a direct and favorable comparison to GPT-3.5T. Crucially, Phi-3 Mini will comfortably run on Apple’s A16 Bionic chip, which means third-party developers can target the iPhone 14 Pro and 14 Pro Max as well as the iPhone 15 family and any future models.
2024 will see the launch of many LLMs, from hobbyists right through to the majors of Silicon Valley (and Redmond). Some will be licensed out by their developers to hardware manufacturers; there is a realistic chance that Apple will work with AI models from Google and Microsoft for iOS 18 and the upcoming iPhones.
These models are readily available to third-party developers, who will have a wide choice of AI tools and will look for cross-platform support to ease the development process. As manufacturers lean into AI for marketing and differentiation, the apps that users crave can join the AI revolution without being locked into a single choice made by the manufacturer.
The Apple retail store in Grand Central Terminal (Photo by Drew Angerer/Getty Images)
Alongside the code of these open-source efficient language models, Apple has published a research paper (PDF Link) on the techniques used and the rationale behind the choices, including the decision to open-source all of the training data, evaluation metrics, checkpoints and training configurations.
This follows the release of another LLM research paper by Cornell University, working alongside Apple’s research and development team. That paper described Ferret-UI, an LLM that can understand a device’s user interface and what is happening on screen, and offer numerous interactions. Examples include using voice to navigate to a well-hidden setting, or describing what is shown on the display for those with impaired vision.
Three weeks after Apple released the iPhone 15 family in 2023, Google launched the Pixel 8 and Pixel 8 Pro. Proclaimed as the first smartphones with AI built in, the handsets signaled a rush to use and promote the benefits of generative AI in mobile devices. Apple has been on the back foot, at least publicly, ever since.
The steady release of research papers on new techniques has kept Apple’s AI plans visible to the industry if not yet to consumers. By providing the open-source code for these efficient language models and emphasizing on-device processing, Apple is quietly signaling how it hopes to stand out against the raft of Android-powered AI devices, even as it talks to Google about licensing Gemini to power some of the iPhone’s AI features.
Now take a closer look at the leaked design of the iPhone 16 and iPhone 16 Pro…

