
The Windows Copilot puts Bing Chat in every Windows 11 computer

Copilot in Windows being used in the side panel.

Announced at Microsoft Build 2023, Windows will now have its own dedicated AI “copilot,” docked in a side panel that stays persistent while you use other applications and other parts of the operating system.

Microsoft has invested heavily in AI in recent months, and it was only a matter of time before that investment came to Windows. The time is now — and it’s coming in a big way.

This “copilot” approach is the same one being built into specific Microsoft apps, such as Edge, Word, and the rest of the Office 365 suite. In Windows, the copilot will be able to provide personalized answers, help you take actions within Windows, and, most importantly, interact with open apps contextually.

The AI being used here is, of course, Microsoft’s own Bing Chat, which is based on OpenAI’s GPT-4 large language model. More than that, the Copilot also has access to the plugins available for Bing Chat, which Microsoft says can be used to improve productivity, help bring ideas to life, collaborate, and “complete complex projects.”

Microsoft also calls Windows the “first PC platform to announce centralized AI assistance for customers,” drawing a contrast with macOS and ChromeOS. Right now, you have to install the latest version of the Edge browser to access Bing Chat; the Windows Copilot, by contrast, effectively integrates generative AI into every Windows 11 computer.

Microsoft says the Windows Copilot will start to become available sometime in June as a preview for Windows 11.

An AI-generated review summary shown in the Microsoft Store.

Microsoft is also creating a permanent spot for AI-driven apps in the Microsoft Store called “AI Hub.” Coming to the Microsoft Store soon, this will be a one-stop shop highlighting apps and experiences in the world of AI, built both by Microsoft and by third-party developers. There will even be AI-generated review summaries that condense an application’s reviews into a single summary.

Editors' Recommendations

Luke Larsen
Senior Editor, Computing
Luke Larsen is the Computing Editor at Digital Trends and manages all content covering laptops, monitors, PC hardware, and…
Here are 11 things that ChatGPT will refuse to do
ChatGPT refusing to do something.

ChatGPT is an amazing tool, a modern marvel of natural language artificial intelligence that can do incredible things. But with great power comes great responsibility, so ChatGPT developer OpenAI put some safeguards in place to prevent it from doing things it shouldn't. It also has some limitations based on its design, the data it was trained on, and the inherent constraints of a text-based AI.

There are, of course, differences between what GPT-3.5 can do compared to GPT-4, which is only available through ChatGPT Plus. Some of those things are just on hold while it develops further, but there are some things ChatGPT may never be able to do. Here's a list of 11 things that ChatGPT can't or won't do -- for now.
It can't write about anything after 2021

Read more
Microsoft has a new way to keep ChatGPT ethical, but will it work?
Bing Chat shown on a laptop.

Microsoft caught a lot of flak when it shut down its artificial intelligence (AI) Ethics & Society team in March 2023. It wasn’t a good look given the near-simultaneous scandals engulfing AI, but the company has just laid out how it intends to keep its future efforts responsible and in check going forward.

In a post on Microsoft’s On the Issues blog, Natasha Crampton -- the Redmond firm’s Chief Responsible AI Officer -- explained that the ethics team was disbanded because “A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.”

Read more
Stop using generative-AI tools such as ChatGPT, Samsung orders staff
Samsung logo

Samsung has told staff to stop using generative AI tools such as ChatGPT and Bard over concerns that they pose a security risk, Bloomberg reported on Monday.

The move follows a string of embarrassing slip-ups last month when Samsung employees reportedly fed sensitive semiconductor-related data into ChatGPT on three occasions.

Read more