Microsoft has a new way to keep ChatGPT ethical, but will it work?

Microsoft caught a lot of flak when it shut down its artificial intelligence (AI) Ethics & Society team in March 2023. The timing looked bad given the scandals then engulfing AI, but the company has now laid out how it intends to keep its future efforts responsible and in check.

In a post on Microsoft’s On the Issues blog, Natasha Crampton — the Redmond firm’s Chief Responsible AI Officer — explained that the ethics team was disbanded because “A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.”

Instead, Microsoft adopted the approach it has taken with its privacy, security, and accessibility teams, and "embedded responsible AI across the company." In practice, this means Microsoft has senior staff "tasked with spearheading responsible AI within each core business group," as well as "a large network of responsible AI 'champions' with a range of skills and roles for more regular, direct engagement."

Beyond that, Crampton said Microsoft has “nearly 350 people working on responsible AI, with just over a third of those (129 to be precise) dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs.”

Crampton noted that when Microsoft shuttered its Ethics & Society team, some of its members were embedded into teams across the company. However, seven members of the group were let go as part of the extensive job cuts that saw Microsoft lay off 10,000 workers at the start of 2023.

Navigating the scandals

AI has hardly been free of scandals in recent months, and it's those worries that fueled the backlash against Microsoft's disbanding of its AI ethics team. If Microsoft lacked a dedicated team to help guide its AI products in responsible directions, the thinking went, it would struggle to curtail the kinds of abuses and questionable behavior its Bing chatbot has become notorious for.

The company's latest blog post is clearly aimed at alleviating those concerns. Rather than abandoning AI ethics work entirely, Microsoft is seeking to ensure that teams across the company have regular contact with experts in responsible AI.

Still, there's no doubt that shutting down its AI Ethics & Society team didn't go over well, and chances are Microsoft still has some way to go to ease the public's collective mind on this topic. Indeed, even Microsoft itself thinks ChatGPT — whose developer, OpenAI, counts Microsoft as its largest investor — should be regulated.

Just yesterday, Geoffrey Hinton, the "godfather of AI," quit Google and told The New York Times he had serious misgivings about the pace and direction of AI expansion, while a group of leading tech experts recently signed an open letter calling for a pause on AI development so that its risks can be better understood.

Microsoft might not be disregarding worries about ethical AI development, but whether or not its new approach is the right one remains to be seen. After the controversial start Bing Chat has endured, Natasha Crampton and her colleagues will be hoping things are going to change for the better.

Alex Blake
Alex Blake has been working with Digital Trends since 2019, where he spends most of his time writing about Mac computers…