ST Explainer: What will happen to my data when Apple AI comes to the iPhone?
![Apple WWDC](https://static1.straitstimes.com.sg/s3fs-public/styles/large30x20/public/articles/2024/06/18/412772281.jpg?VersionId=xCRNYD4NI9DdheDqVEpwMytUANNbbFgQ)
SINGAPORE - Starting later in 2024, Apple will be adding artificial intelligence (AI) to its iPhones, iPads and Macs. This includes a souped-up version of its Siri virtual assistant and the ability for Siri to forward questions and prompts to OpenAI’s ChatGPT.
The announcements, made at Apple’s 35th annual Worldwide Developers Conference (WWDC) from June 10 to June 14 in California, have raised questions about how users’ personal data will be handled by the company, which has made privacy a hallmark of its brand.
Could incorporating OpenAI’s technology result in users’ data being shared and used for AI model training? What is in Apple’s AI? The Straits Times answers these questions and others.
Q: What is in Apple’s AI?
At WWDC, Apple announced Apple Intelligence, a new suite of AI features that includes a more conversational and smarter Siri, as well as an image generator called Image Playground.
Siri will be able to interpret commands and carry them out across both Apple’s own and third-party apps. These include helping users dig up a photo taken on a particular holiday, finding relevant files to attach to an email, or scheduling a meeting mentioned in a text message.
ChatGPT will also come to Apple products as a chatbot that Siri can turn to for more complex tasks, such as drafting documents that call for general world knowledge rather than information tied to the user.
Siri will ask users for permission before it sends questions, documents or photos to ChatGPT. After ChatGPT responds, Siri will present its answers.
Meanwhile, Image Playground will embed metadata in its output indicating that the images were created by AI, in an effort to stop fake pictures being passed off as real ones. The Image Playground app will pop up in Messages or Notes to let users create cartoons and illustrations rather than photorealistic images, another safeguard against the tools being misused.
Q: Why does Apple need ChatGPT?
Market observers said that OpenAI is on Apple devices because Apple was slow to enter the AI chatbot race.
Free AI chatbots by Google and Meta have already appeared in popular productivity and communication tools.
For instance, Meta AI is now a fixture in most users’ WhatsApp accounts and accessible through search bars across Instagram and Facebook.
Google has also integrated its AI chatbot Gemini (formerly Bard) into the messaging apps of Android users, who can also set Gemini as the default AI assistant on their mobile devices.
Apple said it was starting with the best chatbot but that it would support other third-party chatbots down the road.
Q: Will my personal data be shared?
Apple’s suite of AI tools is meant to be a personal assistant that draws on users’ private data in emails, calendars, text messages, pictures and apps, with processing of the information done mostly on users’ devices.
OpenAI, on the other hand, processes users’ information in the cloud if users decide to use ChatGPT through Apple.
However, OpenAI made an important concession as part of its agreement with Apple, agreeing not to store any prompts from Apple users or to collect their IP addresses.
This protection will not apply if users connect and log in to an existing ChatGPT account on their Apple devices. Some users might want to do this to take advantage of their existing ChatGPT history or the benefits of ChatGPT’s paid account plans.
Q: Can Apple be trusted with data that leaves my device?
Apple has kept sensitive processing such as Face ID verification on users’ devices to limit exposure of personal data. Similarly, Apple Intelligence will process AI prompts directly on users’ devices using smaller AI models as much as possible.
But some AI tasks, such as those involving images, may need more processing power. In such cases, Apple Intelligence will send users’ queries and data to a cloud computing platform controlled by Apple, where a larger and more capable AI model will fulfil the request.
A major privacy breakthrough, announced at the 2024 WWDC, allows Apple’s cloud computing platform to run computations on sensitive data without that data being accessible to Apple or retained after processing.
Apple’s new platform, dubbed Private Cloud Compute, borrows privacy-preserving concepts from the iPhone. After fulfilling a user’s AI request, Private Cloud Compute scrubs itself of any user data involved in the process, Apple said.
Apple claims Private Cloud Compute is only possible because of its tight control over everything from specialised, proprietary computer chips to software.
Q: So how does Apple train its AI if it does not use my personal data?
In a technical document released on June 10, Apple said its AI models are trained on licensed data. Like the AI models of OpenAI, Meta and Google, Apple’s models are also trained on public data from the Internet, though Apple did not specify which sources.
Web publishers who object to their content being used for training can add specific code to their websites to prevent Apple’s web crawler from collecting their data.
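The article does not spell out the mechanism, but crawler opt-outs of this kind are conventionally expressed in a site’s robots.txt file, and Apple has named Applebot-Extended as the user agent publishers can block to keep their content out of AI training. A hypothetical robots.txt entry might look like this:

```
# Hypothetical robots.txt entry: opt the whole site out of
# Apple's AI-training crawler (Applebot-Extended) while leaving
# ordinary search indexing by Applebot unaffected.
User-agent: Applebot-Extended
Disallow: /
```

Blocking only Applebot-Extended, rather than Applebot itself, lets a site stay visible in features such as Siri and Spotlight search while withholding its content from model training.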
“We never use our users’ private personal data or user interactions when training our foundation models,” Apple said.
The firm also applies filters to remove personally identifiable information, such as social security and credit card numbers, that is publicly available on the Internet. Profanity and other low-quality content have also been filtered out of the training corpus.