Medical health experts the latest to sound alarm over AI development

A digital brain on a computer interface. Pixabay/CC0

An international group of doctors and medical health experts is the latest to call for artificial intelligence (AI) to be regulated, saying that it “poses a number of threats to human health and well-being,” and claiming that the “window of opportunity to avoid serious and potentially existential harms is closing.”

The analysis follows other recent warnings from prominent tech figures, including Geoffrey Hinton, the so-called "godfather of AI," and a group of experts who were among 1,000 signatories of an open letter calling for a pause on AI development until a set of rules can be established to ensure its safe use.

The latest warning comes in an article written by health professionals from the U.S., U.K., Australia, Costa Rica, and Malaysia, which was published online by BMJ Global Health this week.

The team highlighted three ways in which it believes AI poses a threat to human health, citing the “control and manipulation of people, use of lethal autonomous weapons, and the effects on work and employment.”

The article goes on to consider how a more advanced version of AI technology "could threaten humanity itself."

The article begins by noting that AI-driven systems are increasingly used in society to organize and analyze huge amounts of data, but warns that the technology can be a potent tool for political candidates to "manipulate their way into power," citing cases of AI-driven subversion of elections, including the 2016 U.S. election.

“When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” the article said.

It also noted that AI is increasingly being used in military and defense systems, with the "dehumanization of human warfare" carrying myriad consequences for human health as weapons become more sophisticated and easier to deploy.

The article also discusses how AI may one day replace countless jobs, noting that unemployment is known to be strongly associated with adverse health outcomes.

Finally, it touches on the nightmare scenario where AI becomes so advanced that it could pose a threat to humanity.

“We are now seeking to create machines that are vastly more intelligent and powerful than ourselves,” the article said. “The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered.”

While the article acknowledges that AI has many potential beneficial uses, it said there is also much to be concerned about as the technology rapidly advances.

The team concluded that effective regulation of the development and use of AI is needed “to avoid harm,” adding: “Until such effective regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted.”

Trevor Mogg
Contributing Editor