Apple says it made ‘AI for the rest of us’ — and it’s right

An Apple executive giving a presentation at WWDC 2024.
Apple

After many months of anxious waiting and feverish rumors, it’s finally happened: Apple has revealed its generative artificial intelligence (AI) systems to the world at its Worldwide Developers Conference (WWDC).

Yet unlike ChatGPT and Google Gemini, the freshly unveiled Apple Intelligence tools and features look like someone actually took the time to think about how AI can be used to better the world, not burn it down. If it works half as well as Apple promises it will, it could be the best AI system on the market.


Every aspect of Apple Intelligence bears the classic Apple hallmarks, from its intense focus on user privacy to its natural and seamless integration into the company’s devices and operating systems. Apple’s insistence on waiting until its AI was ready — rather than rushing some dangerous, half-baked product out of the door for maximum profit — is exactly what we’ve come to expect from Tim Cook and company. And we’re all better off for it.

Slow and steady

Apple Intelligence on iPhone pulling data from across apps.
Apple

I can’t blame Apple for being slow when it comes to launching generative AI tools. We’ve all seen the damage that unchecked AI can cause. From medical misinformation and deepfakes to job losses and revenge porn, getting it wrong comes with some serious side effects.

That means the race to the AI crown sometimes feels like a race to the bottom, with everyone so desperate to “win” that they resort to pumping out increasingly powerful and perilous tools with no oversight or thought beforehand.

Today, though, Apple got it spot-on. Apple Intelligence is baked into existing Apple apps — apps that you use every day and intimately understand how to use. Where others have tossed over the keys to the kingdom and said “go out there and have fun,” Apple has powered up your existing daily workflows with precisely the right tools in the right places.

There are several significant benefits to this method. For one thing, it brings the required learning curve right back down to earth. You already know how to write an email and edit a photo on your device — with Apple Intelligence, those same processes continue to exist, but now they have a few more generative bells and whistles to play around with.

Apple has also put its world-famous design sense to good use, integrating AI tools into apps that you use every day in ways that look totally natural. You don’t need to learn prompt engineering, you don’t need to load up any plug-ins, and you don’t need to pay for any new apps. In fact, you barely need to do anything different at all.

Privacy first

Apple talking about privacy with AI apps.
Apple

And there’s more. By constraining its generative AI tools within existing apps and operating system features, Apple can put a lid on dangerous and risky content that is far too easy to create in rival products.

But Apple’s not just looking to protect everyone else from what you might want to create in a dark moment — it’s looking to protect you as well. We’ve all seen what a privacy nightmare existing AI tools can be, with their propensity to leak the private data that they so voraciously vacuum up. Apple Intelligence takes a different approach.

For starters, Apple Intelligence processes most AI requests on your device, meaning no one else can get so much as a sniff of them: not Apple, not third-party app makers, not anyone. On-device processing has been the norm for other Apple features for years, but with AI it’s a must, and Apple has obliged.

When a cloud server really is required to process your queries and requests, Apple has tightly locked that down too. The cloud servers are Apple’s own, but the company has no access to your data. Better yet, it can all be reviewed by external experts to make sure Apple is keeping its word. Just try getting a similar promise from OpenAI or Google.

A better way

Apple's Craig Federighi talks about Apple Intelligence at the Worldwide Developers Conference (WWDC) 2024.
Apple

It’s unlikely that Apple’s approach is foolproof (nothing truly is, after all). But it’s a far more accessible approach than any of us are used to seeing. Not only does Apple Intelligence look more usable and understandable than anything we’ve seen before, but it looks safer and more private too. In taking this approach, Apple is showing that AI doesn’t have to mean the destruction of humanity. It could instead mean helpful everyday tools and fun little Genmoji. Who’d have thought?

Describing Apple Intelligence, the company’s software chief, Craig Federighi, summed it up as “AI for the rest of us.” I couldn’t have put it any better.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…