
Protect public from AI risks, White House tells tech giants

At a meeting of prominent tech leaders at the White House on Thursday, Vice President Kamala Harris reminded attendees that they have an “ethical, moral, and legal responsibility to ensure the safety and security” of the new wave of generative AI tools that have gained huge attention in recent months.

The meeting is part of a wider effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on important AI issues, the White House said.

Harris and other officials told the leaders of Google, Microsoft, Anthropic, and OpenAI — the company behind the ChatGPT chatbot — that the tech giants must comply with existing laws to protect the American people from misuse of the new wave of AI products. New regulations for generative AI are expected before too long, but how far they go in restricting the technology will depend partly on how the companies deploy their AI going forward.

Also on Thursday, the White House shared a document outlining new measures designed to promote responsible AI innovation. The measures include $140 million in funding for seven new National AI Research Institutes, bringing the total number of such institutes across the U.S. to 25.

Advanced chatbots like ChatGPT and Google’s Bard take text prompts and reply in a strikingly human-like way. They can already perform a wide range of tasks impressively well, such as writing presentations and stories, summarizing information, and even writing computer code.

But with tech firms racing to put their chatbot technology front and center by integrating it into existing online tools, there are fears over the long-term implications of the technology for wider society, such as how it will impact the workplace or lead to new types of criminal activity. There are even concerns about how the technology, if it’s allowed to develop unchecked, could be a threat to humanity itself.

OpenAI chief Sam Altman said in March that he’s a “little bit scared” of the potential effects of AI, while a recent open letter signed by AI experts and others in the tech industry called for a six-month pause in generative-AI development to allow time for the creation of shared safety protocols.

And just this week, Geoffrey Hinton, the man widely considered the “godfather of AI” for his pioneering work in the field, quit his post at Google so that he could speak more freely about his concerns regarding the technology. The 75-year-old engineer said that with tech firms releasing their AI tools for public use without fully understanding their potential, it’s “hard to see how you can prevent the bad actors from using it for bad things.”

Even more alarmingly, in a recent CBS interview in which he was asked about the likelihood of AI “wiping out humanity,” Hinton responded: “That’s not inconceivable.”

It should be noted, though, that most of those voicing concerns also believe that, if handled responsibly, the technology could bring great benefits to many parts of society, including health care, where it could lead to better outcomes for patients.
