
Medical health experts the latest to sound alarm over AI development

A digital brain on a computer interface.
Pixabay/CC0 Content

An international group of doctors and medical health experts is the latest to call for artificial intelligence (AI) to be regulated, saying that it “poses a number of threats to human health and well-being,” and claiming that the “window of opportunity to avoid serious and potentially existential harms is closing.”


The analysis follows other recent warnings from prominent tech figures, including Geoffrey Hinton, the so-called “godfather of AI,” and a group of experts who were among the 1,000 signatories of a letter calling for a suspension of AI development until a set of rules can be established to ensure its safe use.

The latest warning comes in an article written by health professionals from the U.S., U.K., Australia, Costa Rica, and Malaysia, which was published online by BMJ Global Health this week.

The team highlighted three ways in which it believes AI poses a threat to human health, citing the “control and manipulation of people, use of lethal autonomous weapons, and the effects on work and employment.”

It goes on to consider how a more advanced version of AI technology “could threaten humanity itself.”

The article begins by noting how AI-driven systems are increasingly used in society to organize and analyze huge amounts of data, but warns that they can be a potent tool for political candidates to “manipulate their way into power,” citing cases of AI-driven subversion of elections, including the 2016 U.S. election.

“When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” the article said.

It also notes how AI is increasingly being used in military and defense systems, with the “dehumanization of human warfare” having myriad consequences for human health as weapons become more sophisticated and easier to deploy.

The article also discusses how AI may one day replace countless jobs, noting that unemployment is known to be strongly associated with adverse health outcomes.

Finally, it touches on the nightmare scenario where AI becomes so advanced that it could pose a threat to humanity.

“We are now seeking to create machines that are vastly more intelligent and powerful than ourselves,” the article said. “The potential for such machines to apply this intelligence and power – whether deliberately or not – in ways that could harm or subjugate humans is real and has to be considered.”

While the article acknowledges that AI has many potential beneficial uses, it said there is also much to be concerned about as the technology rapidly advances.

The team concluded that effective regulation of the development and use of AI is needed “to avoid harm,” adding: “Until such effective regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted.”

Trevor Mogg
Contributing Editor