
Apple hasn’t answered the most important question about its AI features

Apple Intelligence features. Image: Apple

During the debut of Apple Intelligence at WWDC 2024 yesterday, Senior Vice President of Software Engineering Craig Federighi repeatedly touted the new features’ security and careful handling of sensitive user data. To protect user privacy, Apple Intelligence performs many of its generative operations on-device, and for requests that exceed the device’s onboard capabilities, the system hands the work off to the company’s newly developed Private Cloud Compute (PCC).


However, as Dr. Matthew Green, associate professor of computer science at Johns Hopkins University in Baltimore, asked in a thread posted Monday: Apple’s AI cloud may be secure, but is it trustworthy?

Apple Intelligence promises to empower your iPhone, iPad, and Mac with cutting-edge generative models that can create images, edit your writing, and perform actions on your behalf across any number of apps. Apple devices have already performed machine learning tasks on-device for years; take the camera roll’s search function and optical character recognition (OCR), for example. However, as Green points out, even the latest generation of Apple processors isn’t yet powerful enough to handle some of the more complex and resource-intensive AI operations coming online, necessitating the use of cloud compute servers.
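Green’s point about on-device limits is essentially a routing decision: run what the local hardware can handle and offload the rest to the cloud. As a rough illustration only (Apple hasn’t published how Apple Intelligence makes this call, and every type and name below is hypothetical), such a decision might look something like this in Swift:

```swift
// Hypothetical sketch, not Apple's actual API: one way a system might decide
// whether a generative request fits on-device or must be handed to a
// hardened cloud back end such as Private Cloud Compute.
enum InferenceTarget {
    case onDevice
    case privateCloud
}

struct GenerativeRequest {
    let prompt: String
    let estimatedModelParameters: Int // rough size of the model the task needs
    let needsLongContext: Bool
}

struct DeviceCapabilities {
    let maxOnDeviceParameters: Int    // what the local hardware can serve
    let supportsLongContext: Bool
}

func route(_ request: GenerativeRequest,
           on device: DeviceCapabilities) -> InferenceTarget {
    // Prefer local inference for privacy; fall back to the cloud only when
    // the task exceeds what the device can handle.
    if request.estimatedModelParameters <= device.maxOnDeviceParameters,
       !request.needsLongContext || device.supportsLongContext {
        return .onDevice
    }
    return .privateCloud
}

// Example: a small rewriting task stays local; anything heavier goes to the cloud.
let phone = DeviceCapabilities(maxOnDeviceParameters: 3_000_000_000,
                               supportsLongContext: false)
let rewrite = GenerativeRequest(prompt: "Tighten this paragraph",
                                estimatedModelParameters: 2_000_000_000,
                                needsLongContext: false)
print(route(rewrite, on: phone)) // prints "onDevice"
```

The real system presumably weighs far more than a parameter count, but the shape of the trade-off is the same: the more a request exceeds the phone’s silicon, the more likely it is to leave the device.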

The problem is that these complex models also need unencrypted access to the user’s data in order to run inference, and transmitting that data to a public cloud leaves it at risk of being hacked, leaked, or outright stolen. Apple’s solution was to build its own standalone, hardened data centers specifically for processing Apple Intelligence data: the PCC. This “groundbreaking cloud intelligence system,” according to a recent Apple Security blog post, “extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple.”

The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/

— Matthew Green (@matthew_d_green) June 10, 2024

But ensuring that privacy is much harder in practice. “Building trustworthy computers is literally the hardest problem in computer security,” Green wrote. “Honestly, it’s almost the only problem in computer security.” He commended the company for applying many of the same security features built into its mobile and desktop devices to its new servers, including Secure Boot, “stateless” software, and a Secure Enclave Processor (SEP), as well as “throwing all kinds of processes at the server hardware to make sure the hardware isn’t tampered with.”

Apple has gone to great lengths to ensure that the software running on its servers is legitimate, automatically wiping all user data from a PCC node as soon as a request has been completed and enabling each node’s operating system to “attest” to the software image it’s running.
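That attestation step is the crux of the trust argument: before a device hands over data, it can check that the node claims to be running a software image outsiders have had a chance to inspect. The following Swift sketch is a loose approximation under that assumption, not the real PCC protocol; the types, the digest value, and the string comparison are all invented for illustration.

```swift
import Foundation

// Hypothetical sketch, not the real PCC protocol: a client that only sends a
// request after checking the node's attested software image, and a node that
// discards the user's data as soon as the reply is produced.
struct Attestation {
    let softwareImageDigest: String   // measurement of the booted OS image
}

// Digests of software images that have been published for outside review
// (placeholder value, invented for this example).
let trustedImageDigests: Set<String> = ["sha256:example-release-build"]

struct CloudNode {
    let attestation: Attestation

    // Produce a reply, then drop the plaintext the node was holding.
    func handle(_ requestData: inout Data) -> String {
        let reply = "generated response for \(requestData.count) bytes of input"
        requestData = Data()          // wipe the user data once the request is done
        return reply
    }
}

func sendIfTrusted(_ data: Data, to node: CloudNode) -> String? {
    // Refuse to transmit user data unless the node attests to a known-good image.
    guard trustedImageDigests.contains(node.attestation.softwareImageDigest) else {
        return nil
    }
    var payload = data
    return node.handle(&payload)
}

let node = CloudNode(attestation: Attestation(softwareImageDigest: "sha256:example-release-build"))
print(sendIfTrusted(Data("summarize my notes".utf8), to: node) ?? "rejected: untrusted image")
```

In the real system the measurement would come from the node’s boot chain and be verified cryptographically rather than by comparing strings; the sketch only shows the shape of the check Green is crediting Apple for building.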

“If you gave an excellent team a huge pile of money and told them to build the best ‘private’ cloud in the world, it would probably look like this,” Green wrote. “But now the tough questions. Is it a good idea? Is it as secure as what Apple does today,” and can users opt out? It doesn’t appear that users will even be asked to opt in to the new service. “You won’t necessarily even be told it’s happening,” he continued. “It will just happen. Magically. I don’t love that part.”

Green went on to argue that hackers don’t even pose the biggest threat to user data: it’s the hardware and software companies themselves. As such, “this PCC system represents a real commitment by Apple not to ‘peek’ at your data,” Green concluded. “That’s a big deal.”

Apple Intelligence will reportedly begin rolling out later this summer. And in the coming weeks, Apple plans to invite security researchers for a first look at PCC software and the virtual research environment.
