Google unveils a slew of new and improved machine learning APIs

Google Cloud, the company’s eponymous cloud computing platform, is quite the capable set of services. Its algorithms can handle everything from language translation to the identification of objects and landmarks in images. And now, it’s getting even better. On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

Improved APIs

The group will be helmed by Jia Li, former head of research at Snapchat and a pioneer behind the feature that lets you attach emojis to real-world objects, and Fei-Fei Li, director of the Stanford Artificial Intelligence Lab. They will oversee a slew of upgrades to Google’s cloud services in the coming months, many of which involve Google Cloud’s hardware infrastructure. New graphics processing units (GPUs), which Google said are especially good at accelerating the sort of self-training machine learning software that runs on the company’s servers, will join the existing network’s CPUs. And a new security layer will better ensure that customers’ data stays isolated: GPU caches will be wiped before each new task begins, a practice Google said isn’t common among cloud platforms.

Google Cloud is improving in other ways, as well. Its Cloud Vision application programming interface (API), a system capable of identifying millions of logos, landmarks, and objects in images, now runs on Google’s custom “Tensor Processing Units,” the processors optimized for Google’s TensorFlow machine learning platform. (APIs, for the uninitiated, are interfaces that let developers plug third-party services like Cloud Vision into their own apps.) The developer tools are now unified, which Google said makes the API “simpler to implement,” and the company has cut the price of “large-scale deployments” by 80 percent.
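
To make that concrete, here is a minimal sketch of how a developer might call the Cloud Vision API’s REST endpoint to request label, logo, and landmark detection on a single image. The API key stored in a GOOGLE_API_KEY environment variable, the third-party requests library, and the image filename are assumptions for the example; error handling is kept to a minimum.

```python
import base64
import os

import requests

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"


def annotate_image(path):
    """Ask Cloud Vision for labels, logos, and landmarks in one image."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")

    body = {
        "requests": [{
            "image": {"content": encoded},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},
                {"type": "LOGO_DETECTION", "maxResults": 5},
                {"type": "LANDMARK_DETECTION", "maxResults": 5},
            ],
        }]
    }

    resp = requests.post(
        VISION_ENDPOINT,
        params={"key": os.environ["GOOGLE_API_KEY"]},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["responses"][0]


if __name__ == "__main__":
    # Placeholder image path for illustration only.
    result = annotate_image("eiffel_tower.jpg")
    for label in result.get("labelAnnotations", []):
        print(label["description"], round(label["score"], 2))
```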

Google is also introducing the Cloud Jobs API, a cloud-powered service that matches prospective employees with companies. “[The system] uses [AI] to understand how job titles and skills relate to one another and what job content, location, and seniority are the closest match to a [candidate’s] preferences,” Google said. It’s intended for job boards and career sites such as LinkedIn and Jobseeker, and is already in use by three companies: Jibe, tech job listing site Dice, and CareerBuilder.

Two of Google’s other machine learning APIs are maturing, as well. The Cloud Natural Language API, which identifies the names of things such as people and locations, parses the syntax of sentences, and analyzes morphology (the forms of and relationships between words), is now generally available after a months-long beta. And the Cloud Translation API now translates between English and eight other languages (Chinese, French, German, Japanese, Korean, Portuguese, Spanish, and Turkish) across 16 language pairs. Its new neural machine learning algorithms reduce translation errors by 55 to 85 percent, Google said, and represent some of the largest quality improvements machine learning has delivered in the past decade.
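
As a rough illustration, here is a minimal sketch of a call to the Cloud Translation API’s v2 REST endpoint, which returns a translation of a supplied string into a target language. The GOOGLE_API_KEY environment variable and the requests library are assumptions carried over from the sketch above.

```python
import os

import requests

TRANSLATE_ENDPOINT = "https://translation.googleapis.com/language/translate/v2"


def translate(text, target="de", source="en"):
    """Translate a string with the Cloud Translation API and return the result."""
    resp = requests.post(
        TRANSLATE_ENDPOINT,
        params={"key": os.environ["GOOGLE_API_KEY"]},
        json={"q": text, "source": source, "target": target, "format": "text"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["translations"][0]["translatedText"]


if __name__ == "__main__":
    print(translate("The weather in Mountain View is lovely today."))
```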

Google is also introducing a new Premium tier of the translation service, fit for “precise, long-form” applications like live-stream translations and “high volume[s] of emails.” It will debut in the coming weeks.

Fun experiments

Google also took the opportunity to showcase AI-powered tools and apps on a new website: AI Experiments.

AI Experiments taps Google’s TensorFlow, the company’s open-source machine learning platform. It’s the most popular machine learning framework on the code-hosting site GitHub, Google said, and one that has been used to transform images into psychedelic nightmares, teach computers to play Pong, and invent fake Chinese characters.
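
For readers who haven’t touched the framework, here is a minimal sketch of what a TensorFlow program can look like, written against the newer TensorFlow 2 Keras interface (which postdates this announcement). The toy data and single-layer model are purely illustrative.

```python
import tensorflow as tf

# Toy data sampled from the line y = 2x + 1.
xs = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0]])
ys = tf.constant([[1.0], [3.0], [5.0], [7.0], [9.0]])

# A single dense layer learns the slope and the intercept.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(xs, ys, epochs=500, verbose=0)

# Prints a value close to 21.0 (2 * 10 + 1).
print(float(model.predict(tf.constant([[10.0]]), verbose=0)[0, 0]))
```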

One app on the AI Experiments site, AI Duet, generates melodies that complement your own composition style, essentially acting as a sort of computer-driven musical partner. Another, Quick, Draw!, tasks you with depicting a written prompt in under 20 seconds. Google’s artificial intelligence attempts to identify it in real time, and, once the time has elapsed, shows which guesses it considered along the way.

Giorgio Cam identifies objects in rhyming form, pairing the result with an electronic soundtrack by Italian DJ and musician Giorgio Moroder. Bird Sounds organizes dozens of bird calls by such categories as tone and frequency. The Thing Translator identifies objects and gives the translated word for whatever you show it. And Infinite Drum Machine uses machine learning to sort everyday sounds into similar families.

Google is hoping to grow the website into a veritable collection of AI-powered utilities — and it’s accepting submissions starting today.

Kyle Wiggers