ChatGPT could threaten 300 million jobs around the world

The meteoric rise of artificial intelligence (AI) tools like ChatGPT has fueled a wide range of fears, from an increase in undetectable propaganda to the spread of racist and discriminatory speech. Experts have also raised the alarm over possible job losses, and a new report lays out precisely how disastrous AI tools could be for employment.

According to Goldman Sachs, up to 300 million full-time jobs could be lost around the world as a result of the automation that ChatGPT and other AI tools could usher in. That’s as much as 18% of the global workforce.

The impact will be felt more keenly in advanced economies than in developing nations, largely because white-collar workers face greater risk of automation than manual laborers do. The professions most exposed include lawyers and administrative workers, while physically demanding work such as construction should fare better.

The situation appears especially worrying in the United States and Europe, where the report estimates that roughly two-thirds of all jobs will be exposed to some degree of automation, and that up to a quarter of current work could be performed entirely by AI.

A risk or an opportunity?

It isn’t all bleak, though. The report notes that because many jobs will be only partly affected by AI, that work is more likely to be complemented by automation than wholly replaced by it. Over the long term, the disruption caused by AI could create new jobs and raise productivity, much as earlier technologies like the electric motor and the personal computer did.

That said, the report arrives as more than 1,000 scientists and business leaders have signed an open letter calling for a pause of at least six months on the development of AI models more advanced than GPT-4. That pause, the signatories argue, would give the world time to put safeguards in place to ensure AI tools are used “for the clear benefit of all.” Otherwise, they contend, artificial intelligence could “pose profound risks to society and humanity.”

What seems certain is that artificial intelligence will put huge numbers of jobs at risk. The open question is whether that disruption ultimately helps workers, by replacing tedious, repetitive tasks and opening up new opportunities, or leaves everyone worse off. As the recent open letter warned, the frontiers of AI remain largely uncharted, with no guide to navigating their many potential perils.

Alex Blake