
Gemini’s new feature might make it your new favorite group project partner

Image: Gemini's new Canvas feature (18 March 2025). Google

Google has released a new feature for its Gemini assistant called Canvas, a split-screen interface that lets you chat with Gemini on the left and see your changes appear in real time on the right.

The idea is to make editing and iteration a smoother experience — instead of scrolling up and down the chat to copy sections of output you’re not happy with, you can just highlight the text in question on the right and tell Gemini what to change. The assistant will then edit the specified section and update the document, rather than generating a whole new version or spitting out additional paragraphs you need to splice together yourself.

Image: Gemini Canvas document editing. Google

Asking an LLM like Gemini to make revisions to its responses can be a bit of a chore, so this will hopefully make the process less painful.


Canvas also works with programming projects: you can view code on the right while chatting with Gemini on the left to explain, revise, and debug it. Canvas can also render your HTML or React code as a live visual preview of your software, letting you see, for example, what your email subscription form might look like. When you request changes, the preview updates, so you can try out different ideas quickly and efficiently.
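To picture the kind of thing Canvas can preview, here is a minimal email subscription form in plain HTML. This is an illustrative sketch, not Google's own example; the form fields and action URL are placeholders you would replace with your own.

```html
<!-- Hypothetical email sign-up form; "/subscribe" and field names are placeholders -->
<form action="/subscribe" method="post">
  <label for="email">Email address</label>
  <input type="email" id="email" name="email" placeholder="you@example.com" required>
  <button type="submit">Subscribe</button>
</form>
```

Paste markup like this into a Canvas session and the right-hand pane should render it as a working preview; asking Gemini to, say, add a name field would then update both the code and the preview in place.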

Image: Gemini Canvas coding feature. Google

To try out these new features, sign in to Gemini or Gemini Advanced and click the Canvas button in the prompt bar. Google is marketing these updates as “collaboration” features, but to be clear, that doesn’t mean collaboration with other people. The features are designed for you to collaborate with Gemini and, according to Google, “if you want to collaborate with others on the content you just made, you can export it to Google Docs with a click.”

The update also includes Audio Overview, a feature from NotebookLM that essentially transforms documents into podcasts. Its purpose is similar to that of any summary and analysis tool, but it presents the information in audio form, with two AI hosts holding a podcast-style discussion.

The feature has been popular with NotebookLM users who want to multitask while consuming information. To use it, upload your documents to Gemini and click the suggestion chip that appears.

Willow Roberts
Willow Roberts has been a Computing Writer at Digital Trends for a year and has been writing for about a decade. She has a…