
ChatGPT’s new upgrade finally breaks the text barrier

OpenAI is rolling out new features for ChatGPT that will let users build prompts from images and voice commands in addition to text.

The AI brand announced on Monday that it will be making these new features available over the next two weeks to ChatGPT Plus and Enterprise users. The voice feature is available on iOS and Android on an opt-in basis, while the image feature is available on all ChatGPT platforms. OpenAI notes it plans to expand the availability of the images and voice features beyond paid users after the staggered rollout.

Image: An OpenAI image prompt (via Twitter/X)

The voice chat works as a spoken conversation between the user and ChatGPT. You press the button and say your question; after processing it, the chatbot answers aloud rather than in text. The experience is similar to using virtual assistants such as Alexa or Google Assistant, and it could be the precursor to a complete revamp of virtual assistants as a whole. OpenAI’s announcement comes just days after Amazon revealed a similar feature coming to Alexa.

To implement voice and audio communication with ChatGPT, OpenAI uses a new text-to-speech model that is able to generate “human-like audio from just text and a few seconds of sample speech.” Additionally, its Whisper model can “transcribe your spoken words into text.”
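Conceptually, that voice flow is a three-stage pipeline: speech-to-text (Whisper’s role), text-to-text (the chat model’s role), and text-to-speech (the new TTS model’s role). The sketch below illustrates the shape of that loop only; the function names and stub stages are hypothetical, and OpenAI’s actual production integration is not public.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoiceTurn:
    """One round trip of a voice conversation: audio in, audio out."""
    user_text: str      # what the speech-to-text stage heard
    reply_text: str     # what the chat model answered
    reply_audio: bytes  # what the text-to-speech stage produced

def run_voice_turn(
    audio: bytes,
    transcribe: Callable[[bytes], str],   # speech-to-text (Whisper's role)
    chat: Callable[[str], str],           # text-to-text (GPT's role)
    synthesize: Callable[[str], bytes],   # text-to-speech (TTS model's role)
) -> VoiceTurn:
    """Chain the three stages: transcribe the question, answer it, voice the answer."""
    user_text = transcribe(audio)
    reply_text = chat(user_text)
    return VoiceTurn(user_text, reply_text, synthesize(reply_text))

if __name__ == "__main__":
    # Stub stages so the sketch runs without any API access.
    turn = run_voice_turn(
        b"<recorded question>",
        transcribe=lambda audio: "What is the tallest mountain?",
        chat=lambda text: "Mount Everest, at 8,849 meters.",
        synthesize=lambda text: text.encode("utf-8"),  # stand-in for real audio bytes
    )
    print(turn.reply_text)
```

In a real client, each lambda would be replaced by a call to the corresponding model; passing the stages in as callables keeps the loop itself trivial to test offline.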

OpenAI says it’s aware of the issues that could arise due to the power behind this feature, including, “the potential for malicious actors to impersonate public figures or commit fraud.”

This is one of the main reasons the company plans to limit the use of its new features to “specific use cases and partnerships.” Even when the features are more widely available, they will be accessible mainly to more privileged users, such as developers.

ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms). https://t.co/uNZjgbR5Bm pic.twitter.com/paG0hMshXb

— OpenAI (@OpenAI) September 25, 2023

The image feature allows you to capture an image and feed it into ChatGPT along with your question or prompt. You can use the app’s drawing tool to clarify your question, then have a back-and-forth conversation with the chatbot until your issue is resolved. This is similar to Microsoft’s new Copilot feature in Windows, which is built on OpenAI’s model.

OpenAI has also acknowledged ChatGPT’s ongoing challenges, such as its hallucination issue. With that in mind, the company decided to limit certain image-related functionalities, such as the chatbot’s “ability to analyze and make direct statements about people.”

ChatGPT was first introduced as a text-based tool late last year; however, OpenAI has quickly expanded its prowess. The original chatbot, built on the GPT-3.5 language model, has since been updated to GPT-4, which is the model receiving the new features.

When GPT-4 first launched in March, OpenAI announced various enterprise collaborations, such as with Duolingo, which used the AI model to improve the accuracy of listening and speech-based lessons in its language-learning app. OpenAI has also collaborated with Spotify to translate podcasts into other languages while preserving the sound of the podcaster’s voice, and the company spoke of its work with the mobile app Be My Eyes, which aids blind and low-vision people. Many of these apps and services were available ahead of the images and voice update.

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…