Even Microsoft thinks ChatGPT needs to be regulated — here’s why

Artificial intelligence (AI) chatbots have been taking the world by storm, with the capabilities of OpenAI's ChatGPT — a technology Microsoft has invested billions in and built into its own products — causing wonderment and fear in almost equal measure. But in an intriguing twist, even Microsoft is now calling on governments to take action and regulate AI before things spin dangerously out of control.

The appeal was made by BSA, a trade group representing numerous business software companies, including Microsoft, Adobe, Dropbox, IBM, and Zoom. According to CNBC, the group is advocating for the US government to integrate rules governing the use of AI into national privacy legislation.

A MacBook Pro on a desk with ChatGPT's website showing on its display.
Hatice Baran / Unsplash

More specifically, BSA's argument rests on four main points. The first two are that Congress should clearly set out when companies need to assess the potential impact of AI, and that those requirements should take effect when the use of AI leads to "consequential decisions" — a term Congress should also define.

BSA also argues that Congress should enforce compliance through an existing federal agency, and that any company dealing with high-risk AI should be required to maintain a risk-management program.

According to Craig Albright, vice president of U.S. government relations at BSA, “We’re an industry group that wants Congress to pass this legislation, so we’re trying to bring more attention to this opportunity. We feel it just hasn’t gotten as much attention as it could or should.”

BSA believes the American Data Privacy and Protection Act, a bipartisan bill that is yet to become law, is the right legislation to codify its ideas on AI regulation. The trade group has already been in touch with the House Energy and Commerce Committee — the body that first introduced the bill — about its views.

Legislation is surely coming

A laptop opened to the ChatGPT website.
Shutterstock

The breakneck speed at which AI tools have developed in recent months has caused alarm in many corners about the potential consequences for society and culture, and those fears have been heightened by the scandals and controversies that have dogged the field.

Indeed, BSA is not the first body to have advocated for tougher guardrails against AI abuse. In March 2023, a group of prominent tech leaders called on AI firms to pause research on anything more advanced than GPT-4. The group stated this was necessary because “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and that society at large needed to catch up and understand what AI development could mean for the future of civilization.

It is clear that the rapid pace of AI development has caused considerable consternation among both industry leaders and the general public. And when even Microsoft is suggesting its own AI products should be regulated, it seems increasingly likely that some form of AI legislation will become law sooner rather than later.

Alex Blake