
Most people distrust AI and want regulation, says new survey

Most American adults do not trust artificial intelligence (AI) tools like ChatGPT and worry about their potential misuse, a new survey has found. It suggests that the frequent scandals surrounding AI-created malware and disinformation are taking their toll, and that the public may be increasingly receptive to the idea of AI regulation.

The survey from the MITRE Corporation and the Harris Poll claims that just 39% of the 2,063 U.S. adults polled believe that today’s AI tech is “safe and secure,” a drop of 9 percentage points from when the two organizations conducted their last survey in November 2022.


When it came to specific concerns, 82% of people were worried about deepfakes and “other artificially engineered content,” while 80% feared how this technology might be used in malware attacks. A majority of respondents worried about AI’s use in identity theft, harvesting personal data, replacing humans in the workplace, and more.


In fact, the survey indicates that wariness of AI’s impact spans demographic groups: 90% of boomers are worried about the impact of deepfakes, as are 72% of Gen Z respondents.

Although younger people are less suspicious of AI — and are more likely to use it in their everyday lives — concerns remain high in a number of areas, including whether the industry should do more to protect the public and whether AI should be regulated.

Strong support for regulation


The declining support for AI tools has likely been prompted by months of negative stories in the news concerning generative AI tools and the controversies facing ChatGPT, Bing Chat, and other products. As tales of misinformation, data breaches, and malware mount, it seems that the public is becoming less amenable to the looming AI future.

When asked in the MITRE-Harris poll whether the government should step in to regulate AI, 85% of respondents were in favor of the idea, up 3 percentage points from the previous survey. The same 85% agreed with the statement that “Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia,” while 72% felt that “The federal government should focus more time and funding on AI security research and development.”

The widespread anxiety over AI being used to improve malware attacks is interesting. We recently spoke to a group of cybersecurity experts on this very topic, and the consensus seemed to be that while AI could be used in malware, it is not a particularly strong tool at the moment. Some experts felt that its ability to write effective malware code was poor, while others explained that hackers were likely to find better exploits in public repositories than by asking AI for help.

Still, the increasing skepticism toward all things AI could end up shaping the industry’s efforts and might prompt companies like OpenAI to invest more money in safeguarding the public from the products they release. And with such overwhelming support, don’t be surprised if governments start enacting AI regulation sooner rather than later.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…
ChatGPT’s resource demands are getting out of control

It's no secret that the growth of generative AI has demanded ever-increasing amounts of water and electricity, but a new study from The Washington Post and researchers at the University of California, Riverside shows just how many resources OpenAI's chatbot needs to perform even its most basic functions.

In terms of water usage, the amount needed for ChatGPT to write a 100-word email depends on the state and the user's proximity to OpenAI's nearest data center. The scarcer water is in a given region, and the cheaper its electricity, the more likely the data center is to rely on electrically powered air conditioning units instead of water-based cooling. In Texas, for example, the chatbot consumes an estimated 235 milliliters to generate one 100-word email. That same email drafted in Washington, on the other hand, would require 1,408 milliliters, nearly a liter and a half.
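To put those figures in perspective, here is a back-of-envelope sketch in Python. The per-email milliliter values are the study's estimates quoted above; the ten-emails-a-week sending rate is a hypothetical assumption used only to illustrate the scaling.

```python
# Back-of-envelope scaling of the study's per-email water estimates.
# The 235 mL (Texas) and 1,408 mL (Washington) figures come from the
# reporting above; the emails-per-week rate below is hypothetical.

ML_PER_EMAIL = {"Texas": 235, "Washington": 1408}

def annual_liters(state: str, emails_per_week: int) -> float:
    """Estimated liters of water per year for one user at this rate."""
    return ML_PER_EMAIL[state] * emails_per_week * 52 / 1000

for state in ML_PER_EMAIL:
    print(f"{state}: {annual_liters(state, emails_per_week=10):,.1f} L/year")
```

At that assumed rate, one user works out to roughly 122 liters of water a year in Texas versus about 732 liters in Washington.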

How you can try OpenAI’s new o1-preview model for yourself

Despite months of rumored development, OpenAI's release of Project Strawberry last week came as something of a surprise, with many analysts believing the model wouldn't be ready for at least several more weeks, if not until later in the fall.

The new o1-preview model and its o1-mini counterpart are already available for use and evaluation. Here's how to get access for yourself.
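For developers with API access, a minimal sketch using the official OpenAI Python SDK might look like the following; it assumes the openai package is installed and an OPENAI_API_KEY is set, and whether your account can call the model depends on OpenAI's rollout tiers.

```python
# Minimal sketch: querying o1-preview through the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# model availability on your account is an assumption, not a given.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # swap in "o1-mini" for the smaller counterpart
    messages=[
        {"role": "user", "content": "How many r's are in the word 'strawberry'?"}
    ],
)
print(response.choices[0].message.content)
```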

OpenAI Project Strawberry: Here’s everything we know so far

Even as it is reportedly set to spend $7 billion on training and inference costs (with an overall $5 billion shortfall), OpenAI is steadfastly seeking to build the world's first Artificial General Intelligence (AGI).

Project Strawberry is the company's next step toward that goal, and as of mid-September, it has officially been announced.

What is Project Strawberry?
Project Strawberry is OpenAI's latest (and potentially greatest) large language model, one that is expected to broadly surpass the capabilities of current state-of-the-art systems with its "human-like reasoning skills" when it rolls out. It just might power the next generation of ChatGPT.

What can Strawberry do?
Project Strawberry will reportedly be a reasoning powerhouse. Using a combination of reinforcement learning and “chain of thought” reasoning, the new model will reportedly be able to solve math problems it has never seen before and act as a high-level agent, creating marketing strategies and autonomously solving complex word puzzles like the NYT's Connections. It can even "navigate the internet autonomously" to perform "deep research," according to internal documents viewed by Reuters in July.
