
Apple opens digital journal to showcase its machine learning developments

Jakub Jirsak/123RF
Apple has opened a new digital journal to showcase some of its developments in the field of machine learning. In the first entry, it explains how it improves the realism of synthetic images, which can in turn be used to teach algorithms to classify images without the need for painstaking manual labeling.

One of the biggest hurdles in artificial intelligence is teaching machines things that humans take for granted. You could conceivably hand-program an AI with every rule it needs, but that would take a very, very long time and be nearly impossible in practice. Instead, machine learning lets us train algorithms much as you would teach a person, though doing so requires specialist techniques.


When it comes to teaching an algorithm to classify images, synthetic images can be used, but as Apple points out in its first blog post, that can lead to poor generalization, because synthetic images often lack the detail of real ones. That is why it has been working on producing better, more detailed images for machines to learn from.

Although this is far from a new technique, it has traditionally been a costly one. Apple developed a much more economical “refiner,” which looks at unlabeled real images and uses them as a reference to refine synthetic images into something much closer to reality.

But how do you judge whether a refined image is convincing enough? That requires a second network, known as the discriminator, whose job is to tell real images apart from refined synthetic ones. The two go back and forth, with the refiner attempting to “trick” the discriminator by gradually adding realistic detail to the synthetic image. Once the discriminator can no longer reliably tell the refined image from a real one, the process halts and moves on to a new image.
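The back-and-forth described above can be sketched in a few lines. This is a deliberately tiny toy, not Apple's actual model: here the “discriminator” just thresholds mean brightness, and the “refiner” nudges a synthetic image toward a real reference until the discriminator is fooled. The function names, thresholds, and step sizes are all illustrative assumptions.

```python
def discriminator(pixels, threshold=0.5):
    """Crude stand-in: labels an image 'real' if its mean brightness passes a threshold."""
    return sum(pixels) / len(pixels) > threshold

def refine(synthetic, real_reference, step=0.05, max_iters=100):
    """Nudge synthetic pixels toward the real reference until the discriminator is fooled."""
    refined = list(synthetic)
    for _ in range(max_iters):
        if discriminator(refined):
            break  # discriminator can no longer tell it apart; halt and move on
        refined = [r + step * (t - r) for r, t in zip(refined, real_reference)]
    return refined

synthetic = [0.2] * 64   # dim, low-detail synthetic image
real = [0.8] * 64        # statistics borrowed from an unlabeled real image

refined = refine(synthetic, real)
print(discriminator(synthetic), discriminator(refined))  # False True
```

In the real system both parts are neural networks trained jointly, but the control flow is the same: refine until the discriminator stops winning.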


Because the two compete, both the discriminator and the refiner improve over time, gradually sharpening each tool while building up a strong library of detailed synthetic images.
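The mutual improvement can be sketched as alternating updates: each round, the refiner closes the gap to real-image statistics, and the discriminator re-tunes its decision boundary to keep distinguishing the two. The update rules and numbers below are illustrative assumptions, not Apple's training procedure.

```python
def train_round(refined_mean, real_mean, step=0.1):
    """One round of the competition, reduced to a single summary statistic."""
    refined_mean += step * (real_mean - refined_mean)   # refiner update: look more real
    threshold = (refined_mean + real_mean) / 2          # discriminator update: re-split the two
    return refined_mean, threshold

refined_mean, real_mean = 0.2, 0.8
for _ in range(50):
    refined_mean, threshold = train_round(refined_mean, real_mean)

# After enough rounds, the refined statistics are nearly indistinguishable from real ones.
print(round(real_mean - refined_mean, 3))
```

As the refiner improves, the discriminator's boundary is forced ever closer to the real images, which is exactly the competitive pressure that drives both networks forward.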

The learning process is a detailed one, with Apple going to great lengths to preserve original aspects of images while avoiding the artifacts that can build up during image processing. It is worth it though, as further testing has shown vastly improved performance for image categorization based on refined synthetic images, especially when they have been refined multiple times.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…