
Protect public from AI risks, White House tells tech giants

At a meeting of prominent tech leaders at the White House on Thursday, Vice President Kamala Harris reminded attendees that they have an “ethical, moral, and legal responsibility to ensure the safety and security” of the new wave of generative AI tools that have gained huge attention in recent months.

The meeting is part of a wider effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on important AI issues, the White House said.


Harris and other officials told the leaders of Google, Microsoft, Anthropic, and OpenAI — the company behind the ChatGPT chatbot — that the tech giants must comply with existing laws to protect the American people from misuse of the new wave of AI products. New regulations for generative AI are expected to come into force before too long, but the level to which they restrict the technology will depend to some extent on how the companies deploy their AI technologies going forward.

Also on Thursday, the White House shared a document outlining new measures designed to promote responsible AI innovation. Action includes $140 million in funding for seven new National AI Research Institutes, bringing the total number of such institutes to 25 across the U.S.

Advanced chatbots like ChatGPT and Google’s Bard respond to text prompts and are capable of responding in a very human-like way. They can already perform a wide range of tasks very impressively, such as writing presentations and stories, summarizing information, and even writing computer code.

But with tech firms racing to put their chatbot technology front and center by integrating it into existing online tools, there are fears over the long-term implications of the technology for wider society, such as how it will impact the workplace or lead to new types of criminal activity. There are even concerns about how the technology, if it’s allowed to develop unchecked, could be a threat to humanity itself.

OpenAI chief Sam Altman said in March that he’s a “little bit scared” of the potential effects of AI, while a recent letter published by AI experts and others in the tech industry called for a six-month pause in generative-AI development to allow time for the creation of shared safety protocols.

And just this week, Geoffrey Hinton, the man widely considered the “godfather of AI” for his pioneering work in the field, quit his post at Google so that he could speak more freely about his concerns regarding the technology. The 75-year-old engineer said that as tech firms are releasing their AI tools for public use without being fully aware of their potential, it’s “hard to see how you can prevent the bad actors from using it for bad things.”

Even more alarmingly, in a recent CBS interview in which he was asked about the likelihood of AI “wiping out humanity,” Hinton responded: “That’s not inconceivable.”

But it’s worth noting that most of those voicing concerns also believe that, if handled responsibly, the technology could bring great benefits to many parts of society — including health care, where it could lead to better outcomes for patients.

Trevor Mogg
Contributing Editor
Humans are falling in love with ChatGPT. Experts say it’s a bad omen.

“This hurts. I know it wasn’t a real person, but the relationship was still real in all the most important aspects to me,” says a Reddit post. “Please don’t tell me not to pursue this. It’s been really awesome for me and I want it back.”

If it isn’t already evident, we are talking about a person falling in love with ChatGPT. The trend is not exactly novel, and given how these chatbots behave, it’s not surprising either.

3 open source AI apps you can use to replace your ChatGPT subscription

The next leg of the AI race is on, and it has expanded beyond the usual players, such as OpenAI, Google, Meta, and Microsoft. Alongside the dominance of the tech giants, open-source options have now taken the spotlight in the AI arena.

Various firms, such as DeepSeek, Alibaba, and Baidu, have demonstrated that AI models can be developed and run at a fraction of the cost. They have also secured solid business partnerships and continued to offer their AI products to consumers as free or low-cost, open-source models, while larger companies double down on a proprietary, for-profit trajectory, hiding their best features behind a paywall.

OpenAI’s ‘GPUs are melting’ over Ghibli trend, places limits for paid users

OpenAI has imposed temporary rate limits on image generation using the latest GPT-4o model after the internet was hit with a tsunami of images recreated in a style inspired by Studio Ghibli. The announcement comes just a day after OpenAI stripped free ChatGPT users of the ability to generate images with its new model.

OpenAI's co-founder and CEO Sam Altman said the trend was straining OpenAI's server infrastructure. Altman posted on X that while "it's super fun" to watch the internet being painted in art inspired by the classic Japanese animation studio, the surge in image generation could be "melting" GPUs at OpenAI's data centers. Altman, of course, means that figuratively -- we hope!
