If you sent someone a message, would you want them to think that you wrote it or that an AI chatbot did?
Sleuthy online users are calling out any trace of AI they find, and many people now worry that much of what they see online is artificially generated. People, companies, and even entire platforms are imposing restrictions on AI-generated content. Some online communities are even banning real humans merely for being accused of using AI.
According to Google’s latest spam policy, websites that use generative AI to produce pages or content may be considered spam and can incur a penalty that lowers their search ranking. YouTube is also restricting certain types of AI content. Even Anthropic, the generative AI company behind the Claude chatbot, says applicants may NOT use AI when submitting a job application to the company.
The current sentiment is loud and clear: If you’re going to post anything online, even your own content, it better not smell like AI.
How are people spotting AI-generated content?
People are turning to online detection tools. There is also a growing trend of people associating certain words, phrases, and formatting styles with chatbot writing.
Last year, computer scientist Paul Graham tweeted about a cold email he received from someone pitching a new project. There was just one problem: Graham spotted the word “delve” in the email, which he says is a telltale sign of chatbot writing.
Graham isn’t alone; many people online feel the same way. One Reddit post titled “what are the most common words chatgpt says?” drew numerous responses naming ‘delve,’ ‘tapestry,’ and others as the chatbot’s favorites. The latest claim is that the em dash (—) proves someone used AI. Some are now calling it the “ChatGPT dash.”
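To see what this folk heuristic actually amounts to, here is a minimal Python sketch. The word list is illustrative, pulled from the terms people cite online; it is emphatically not a reliable detector of anything.

```python
import re

# Words and punctuation that online commenters often cite as "AI tells."
# Illustrative only: none of these reliably indicate AI authorship.
ALLEGED_TELLS = ["delve", "tapestry"]

def count_alleged_tells(text: str) -> dict[str, int]:
    """Count occurrences of folk 'AI tell' words and em dashes."""
    lowered = text.lower()
    counts = {w: len(re.findall(rf"\b{w}\b", lowered)) for w in ALLEGED_TELLS}
    counts["em dash"] = text.count("\u2014")  # the so-called "ChatGPT dash"
    return counts

print(count_alleged_tells("Let's delve into the rich tapestry of prose \u2014 carefully."))
# {'delve': 1, 'tapestry': 1, 'em dash': 1}
```

The sketch makes the crudeness obvious: it counts vocabulary, not authorship, which is exactly the bias at work.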
So what’s happening exactly? People are developing beliefs, consciously or not, that certain words or phrases (ChatGPT’s perceived lexicon) are strong indicators of AI authorship. That bias leads them to overlook or underweight other possibilities, such as a human writer simply choosing those words organically. But word-spotting isn’t the only method people use to identify AI-written content.
Do AI detectors actually work?
If you’re plugged into the AI space, you’ve probably heard about AI detectors: tools that analyze text for statistical patterns thought to be typical of machine-generated writing.
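Commercial detectors rely on trained models and are far more involved, but a toy sketch can illustrate the kind of surface statistics often discussed, such as “burstiness” (variation in sentence length) and vocabulary diversity. This is a simplified illustration under those assumptions, not any vendor’s actual method.

```python
import re
import statistics

def simple_signals(text: str) -> dict[str, float]:
    """Two toy statistics often mentioned around AI detection:
    sentence-length variation ('burstiness') and vocabulary diversity.
    These numbers prove nothing on their own."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        "burstiness": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

print(simple_signals("Short one. Then a much longer, winding sentence follows here. Short again."))
```

Low variation and a narrow vocabulary are said to hint at machine writing, but plenty of careful human prose shows the same profile, which is where the trouble starts.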
AI detection is widely used in academia, where essay-submission services scan over 70 million student papers annually. The problem is that text detection isn’t perfect yet. Since the first detectors appeared in 2023, students have come forward saying they were falsely accused of using AI to write. Ironically, AI chatbots are trained on the work of millions of real writers, so a human who happens to write like ChatGPT can be accused of having used it.
People are getting paid to make AI content sound human
A BBC story investigated writers being paid to rewrite AI content to make it sound more human. It should be clear by now that the AI scare extends well beyond teachers stopping students from using AI on their essays. Attempts to call out and quash artificial content are rippling across creative and professional sectors alike.
Negative public sentiment is growing toward anything perceived as ‘AI-generated.’
How people are dodging AI allegations
The current climate of suspicion, with authentic work being flagged and genuine creators penalized, inevitably forces a response.
Faced with the risk of false accusations, damaged reputations, or lost opportunities, people are actively seeking ways to protect their content and ensure it passes muster, regardless of how it was originally written.
For some, this means painstaking manual editing. Writers find themselves consciously avoiding words or phrasing patterns they fear might trigger biased human readers or flawed detection algorithms. Terms like ‘delve’ and ‘tapestry,’ and even the humble em dash, have been declared literary pariahs: once innocent, now suspect wherever they appear.
Content that can’t be detected as AI
Given the limitations and anxieties of manual editing, many are turning to technical solutions, specifically “AI humanizers.” Humanizers are similar to paraphrasers: they rewrite a given piece of text. Unlike paraphrasers, though, they use algorithms designed to find the words and patterns detectors flag as AI-generated and rewrite them to read as human-written.
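As a rough illustration of the concept, and emphatically not how Undetectable AI or any real service works, a naive “humanizer” might simply substitute terms believed to trigger detectors. The word map below is hypothetical.

```python
# A deliberately naive "humanizer" sketch. Real services rewrite at the level
# of phrasing and sentence structure; this hypothetical word map only shows
# the basic idea of swapping out terms believed to trigger detectors.
SUSPECT_SWAPS = {
    "delve into": "dig into",
    "tapestry": "mix",
    " \u2014 ": ", ",  # swap spaced em dashes for commas
}

def naive_humanize(text: str) -> str:
    """Replace allegedly 'AI-flavored' terms with plainer alternatives."""
    for suspect, plain in SUSPECT_SWAPS.items():
        text = text.replace(suspect, plain)
    return text

print(naive_humanize("Let's delve into the tapestry of ideas \u2014 together."))
# Let's dig into the mix of ideas, together.
```

Real tools presumably operate at the sentence level rather than by single-word swaps, which is exactly why the folk word-list theory of detection is so fragile.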
Platforms like Undetectable AI have emerged as prominent players in the humanizer category. On one hand, the existence and effectiveness of these tools pose a significant challenge to the already struggling AI detection industry. On the other, some see them as insurance against unfair AI accusations or penalties.
According to a spokesperson from Undetectable AI, “A big part of our mission is providing access to something that stops people from getting unfairly penalized, whether they used [AI] or not.” While services like Undetectable AI can help honest people protect the integrity of their work, they can also be used to generate swarms of AI-generated content that appears human-made.
But honest users see what they’re doing as restoring the intended authenticity of their communication while navigating systems that might otherwise block it.
Authenticity on the line
The adoption of these tools is also a clear symptom of the underlying problem: the pressure, and potential harm, created by a pervasive climate of suspicion around online content. As AI advances, authenticity has become a prized commodity, and writers are taking extra steps to ensure their work reads as genuinely human.
These days, a single word choice, or a single em dash, can make or break how your prose is received. And beyond the public, the eyes of the algorithms are watching.