
57% of the internet may already be AI sludge


It’s not just you — search results really are getting worse. Amazon Web Services (AWS) researchers have conducted a study that suggests 57% of content on the internet today is either AI-generated or translated using an AI algorithm.

The study, titled “A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism,” argues that low-cost machine translation (MT), which takes a given piece of content and regurgitates it in multiple languages, is the primary culprit. “Machine generated, multi-way parallel translations not only dominate the total amount of translated content on the web in lower resource languages where MT is available; it also constitutes a large fraction of the total web content in those languages,” the researchers wrote in the study.


They also found evidence of selection bias in which content gets machine translated into multiple languages compared to content published in a single language. “This content is shorter, more predictable, and has a different topic distribution compared to content translated into a single language,” the researchers wrote.

What’s more, the growing share of AI-generated content on the internet, combined with an increasing reliance on AI tools to edit and manipulate that content, could lead to a phenomenon known as model collapse, and it is already reducing the quality of search results across the web. Given that frontier AI models like ChatGPT, Gemini, and Claude rely on massive amounts of training data that can only be acquired by scraping the public web (whether that violates copyright or not), having the public web stuffed full of AI-generated, and often inaccurate, content could severely degrade their performance.

“It is surprising how fast model collapse kicks in and how elusive it can be,” Dr. Ilia Shumailov from the University of Oxford told Windows Central. “At first, it affects minority data—data that is badly represented. It then affects diversity of the outputs and the variance reduces. Sometimes, you observe small improvement for the majority data, that hides away the degradation in performance on minority data. Model collapse can have serious consequences.”

The researchers demonstrated those consequences by having professional linguists classify 10,000 randomly selected English sentences into one of 20 categories. The researchers observed “a dramatic shift in the distribution of topics when comparing 2-way to 8+ way parallel data (i.e. the number of language translations), with ‘conversation and opinion’ topics increasing from 22.5% to 40.1%” of those published.

This points to a selection bias in the type of data that is translated into multiple languages, which is “substantially more likely” to be from the “conversation and opinion” topic.

Additionally, the researchers found that “highly multi-way parallel translations are significantly lower quality (6.2 Comet Quality Estimation points worse) than 2-way parallel translations.” When the researchers audited 100 of the highly multi-way parallel sentences (those translated into more than eight languages), they found that “a vast majority” came from content farms with articles “that we characterized as low quality, requiring little or no expertise, or advance effort to create.”

That certainly helps explain why OpenAI CEO Sam Altman keeps insisting that it’s “impossible” to make tools like ChatGPT without free access to copyrighted works.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…