
Gmail blocks 100 million spam messages daily with its A.I., Google says

Effective spam blocking is yet another thing we can add to the ever-growing list of uses for artificial intelligence.

In a Google Cloud blog post published Wednesday, February 6, Google announced that it has been using an A.I. platform to bolster its spam-blocking efforts, with significant results.

The platform, known as TensorFlow, was developed by Google and is “an open-source machine learning (ML) framework.” (ML is a form of artificial intelligence in which software learns from example data how to carry out a task, rather than relying solely on rules written by hand.) And while it may sound like a brand-new tool, TensorFlow was actually launched and open-sourced back in 2015.
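To make the idea concrete, here is a minimal, hypothetical sketch of how a TensorFlow model can learn to score messages as spam from labeled examples. The tiny training set, model shape, and vocabulary size are illustrative assumptions, not details from Google’s blog post:

import tensorflow as tf

# A toy, hypothetical spam classifier built with TensorFlow's Keras API.
# This is illustrative only and is not Google's actual Gmail pipeline.

# Tiny made-up training set: 1.0 = spam, 0.0 = legitimate mail.
texts = [
    "win a free prize now",
    "claim your reward today",
    "meeting notes attached",
    "lunch tomorrow at noon",
]
labels = [1.0, 1.0, 0.0, 0.0]

# Turn raw text into fixed-length sequences of token ids.
vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=8)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs a spam probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=20, verbose=0)

# Score an unseen message: values near 1.0 suggest spam.
print(model.predict(tf.constant(["claim a free prize"]))[0, 0])

The basic idea, though, is the one the blog post describes: the model learns spam patterns from example data rather than relying only on hand-written rules.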

According to Google, TensorFlow is allowing the technology company to block 100 million more spam messages from reaching the inboxes of Gmail users on a daily basis. This is in addition to the 99.9 percent of spam messages Google already claims Gmail blocks.

Google is apparently able to do this because the platform helps it better detect several harder-to-spot kinds of spam: mail from newly created domains, image-based messages, and even messages with hidden embedded content.

Although an extra 100 million spam messages per day sounds like an enormous number, as The Verge points out, it isn’t all that much once it’s put in perspective. Google estimates that Gmail has 1.5 billion users, so spreading 100 million blocked messages across that base works out to roughly “one extra blocked spam email per 10 users.”
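The arithmetic behind that perspective is simple enough to write out, using Google’s own figures:

\[
\frac{1 \times 10^{8}\ \text{extra blocked messages per day}}{1.5 \times 10^{9}\ \text{users}} \approx 0.07\ \text{extra blocked messages per user, per day}
\]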

But that caveat doesn’t lessen TensorFlow’s overall impact on blocking spam for Gmail users. Blocking an additional 100 million messages is still a notable achievement, and it suggests that the ML behind TensorFlow enhanced Gmail’s spam-blocking by working in tandem with Gmail’s existing rule-based filters.
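As a rough illustration of what “in tandem” could look like (purely hypothetical; Gmail’s real filtering logic isn’t public), a hand-written rule check can be combined with an ML model’s spam score into a single blocking decision:

# Toy, hypothetical example of pairing a rule-based check with an ML score.
# Neither the phrase list nor the threshold comes from Google.

BLOCKED_PHRASES = {"free prize", "claim your reward"}  # example rule list

def rule_based_flag(message: str) -> bool:
    """Return True if the message trips a simple keyword rule."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def is_spam(message: str, ml_score: float, threshold: float = 0.9) -> bool:
    """Block when either the rule fires or the model is confident enough."""
    return rule_based_flag(message) or ml_score >= threshold

# A message that dodges the keyword rule can still be caught if the model
# (for example, the TensorFlow sketch above) scores it highly.
print(is_spam("w1n a fr3e pr1ze", ml_score=0.97))        # True
print(is_spam("lunch tomorrow at noon", ml_score=0.02))  # False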

And TensorFlow’s spam-blocking ML might well keep improving over time; Google also mentioned in the blog post that the platform is intended to help Gmail tailor its spam protections to each individual user’s needs.

Anita George
Anita has been a technology reporter since 2013 and currently writes for the Computing section at Digital Trends. She began…