
Apple opens digital journal to showcase its machine learning developments

Apple has opened a new digital journal to showcase some of the developments it is making in the field of machine learning. In the first entry, it explains how it is improving the realism of synthetic images, which can in turn be used to teach algorithms to classify images without needing to painstakingly label them by hand.

One of the biggest hurdles in artificial intelligence is teaching it things that humans take for granted. While you could conceivably hand-program an AI to understand everything, that would take a very, very long time and would be all but impossible in practice. Instead, machine learning lets us teach algorithms much as you would teach a human, but that requires specialist techniques.


When it comes to teaching an algorithm to classify images, synthetic images can be used, but as Apple points out in its first blog post, that can lead to poor generalization because of the low quality of synthetic images. That is why it has been working on producing better, more detailed images for machines to learn from.


Although this is far from a new technique, it has traditionally been a costly one. Apple has developed a much more economical “refiner,” which looks at unlabeled real images and uses them as a reference to refine synthetic images into something much closer to reality.
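Apple’s post doesn’t include code, but conceptually the refiner is just a neural network that takes a synthetic image in and returns a more realistic version of the same scene. A minimal sketch of that idea in PyTorch might look like the following; the layer sizes are arbitrary choices for illustration, and the code assumes images normalized to [-1, 1]:

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Maps a synthetic image to a refined, more realistic version."""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 1),
        )

    def forward(self, x):
        # Same spatial size as the input, values squashed back to [-1, 1].
        return torch.tanh(self.net(x))
```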


However, how do you judge whether the refiner’s output is actually getting closer to the real thing? That requires a secondary image identifier, known as the discriminator, which is trained to tell real images apart from refined synthetic ones. The two go back and forth, with the refiner attempting to “trick” the discriminator by gradually building up the synthetic image until it possesses far more of the detail of the real images. Once the discriminator can no longer reliably tell them apart, the process halts and moves on to a new image.
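That back-and-forth is the classic adversarial training recipe. Continuing the sketch above (again, an illustration rather than Apple’s implementation), the discriminator and the alternating updates might look like this; the batches iterable, learning rates, and network shapes are all assumptions:

```python
class Discriminator(nn.Module):
    """Scores an image; a higher logit means it 'looks real.'"""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

refiner, disc = Refiner(), Discriminator()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for synthetic, real in batches:  # hypothetical loader of synthetic/real batches
    # Refiner step: try to make refined images score as "real."
    refined = refiner(synthetic)
    loss_r = bce(disc(refined), torch.ones(refined.size(0), 1))
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Discriminator step: learn to separate real images from refined ones.
    refined = refined.detach()
    loss_d = (bce(disc(real), torch.ones(real.size(0), 1)) +
              bce(disc(refined), torch.zeros(refined.size(0), 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
```

When the discriminator is reduced to guessing, the refined images have become hard to tell from real ones, which is the stopping point the post describes.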


This adversarial process trains both the discriminator and the refiner as they compete, gradually improving each tool while building up a strong library of detailed synthetic images.

The learning process is a detailed one, with Apple going to great lengths to preserve the original aspects of each image while avoiding the artifacts that can build up during image processing. It is worth it, though: in further testing, classifiers trained on refined synthetic images performed vastly better, especially when the images had been refined multiple times.
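That effort to preserve the original is often expressed as a self-regularization term: on top of the adversarial loss, the refiner is penalized for drifting too far from its synthetic input, so any labels attached to the synthetic image still describe the refined one. Added to the refiner step in the sketch above, it might look like this (the 0.5 weight is an arbitrary illustration):

```python
import torch.nn.functional as F

# Inside the refiner step: keep the refined image close to the synthetic
# original so its annotations remain valid and artifacts are discouraged.
self_reg = F.l1_loss(refined, synthetic)
loss_r = bce(disc(refined), torch.ones(refined.size(0), 1)) + 0.5 * self_reg
```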

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how-to guides, best-of lists, and…
I finally tried Apple Intelligence in macOS Sequoia to see if it lived up to the hype
The redesigned Siri user interface in macOS Sequoia.

For the last few years, Apple’s macOS releases have been interesting, if not particularly exciting. But that’s all set to change this year with the launch of macOS Sequoia, and it’s all thanks to one feature: Apple Intelligence.

Apple’s artificial intelligence (AI) platform has the potential to completely change how you use your Mac on a daily basis. From generating images, rewriting emails, and summarizing your audio recordings to revamping Siri into a much more capable virtual assistant, Apple Intelligence could be the most significant new macOS feature in years.

Read more
I tried Apple’s AI writing tools on my iPhone. Here’s how they work
Apple Intelligence on iPhone 15 Pro.

“Apple does things practically.” Or, “Apple is late because it’s perfecting the tech.” “Would you prefer being the first or the best?” These are just some of the recurring arguments you will find in any heated Reddit thread or social media post hunting for rage-bait clout.

Yet, there’s some truth to it, as well. And a whole lot of hidden tech that sometimes takes a decade to come out. Apple Intelligence is the best example of one such leap, and it’s being seen as Apple’s answer to the generative AI rush.

Read more
We just learned something surprising about how Apple Intelligence was trained
Apple Intelligence update on iPhone 15 Pro Max.

A new research paper from Apple reveals that the company relied on Google's Tensor Processing Units (TPUs), rather than Nvidia's more widely deployed GPUs, to train two crucial systems within its upcoming Apple Intelligence service. The paper notes that Apple used 2,048 Google TPUv5p chips to train its on-device AI models and 8,192 TPUv4 processors for its server AI models.

Nvidia's chips are highly sought after for good reason, having earned their reputation for performance and compute efficiency. Nvidia's products and systems are typically sold as standalone offerings, enabling customers to construct and operate them as they see fit.

Read more