
DALL-E 3 could take AI image generation to the next level

DALL-E 2 image on OpenAI.
OpenAI

OpenAI might be preparing the next version of its DALL-E AI text-to-image generator through a series of alpha tests whose details have now leaked to the public, according to The Decoder.

An anonymous leaker on Discord shared details about his experience with access to an upcoming OpenAI image model, referred to as DALL-E 3. He first appeared in May, telling an AI-focused Discord channel that he was part of an OpenAI alpha test trying out a new AI image model, and he shared the images he generated at the time.

We've NEVER seen Image Generation This Good! | SNEAK PEAK

The May alpha version was able to generate images in multiple aspect ratios. YouTuber MattVidPro AI showcased several of the images generated in a 16:9 aspect ratio. This version also demonstrated the model's prowess at high-quality text rendering, which remains a pain point for rival models, including top generators such as Stable Diffusion and Midjourney.

Examples included text melded into a brick wall, a neon sign of words, a billboard in a city, a cake decoration, and a name etched into a mountain. The model also maintains DALL-E's strength at generating people; one image showed a woman eating spaghetti at a party from a fisheye point of view.

The leaker returned to the Discord channel in mid-July with more details and new images. He claimed to be part of a "closed alpha" test that included roughly 400 participants. He added that he was invited to the trial via email and had also been included in the testing of the original DALL-E and DALL-E 2, which is what led to the conclusion that this alpha test might be for DALL-E 3, though that has not been confirmed.

The model was updated considerably between May and July, which the leaker showcased by sharing images generated from the same prompt, demonstrating how much more capable DALL-E 3 has become over time. The prompt reads: "a painting of a pink jester giving a high five to a panda while in a cycling competition. The bikes are made of cheese and the ground is very muddy. They are driving in a foggy forest. The panda is angry."

The May alpha produces the general scene and hits most points of the prompt, but there's some distortion where the hands connect, and the bikes' wheels are simply yellow rather than made of cheese. The July alpha is far more detailed, with the pink jester and the panda clearly high-fiving and the bicycle wheels made of cheese in several generations.

Meanwhile, in Midjourney's attempt, the jester is missing from the scene, the pandas are on motorcycles instead of bicycles, the ground is road rather than mud, and the pandas are happy instead of angry.

There are a host of DALL-E 3 July alpha image examples that show the model's potential. However, because the alpha test is uncensored, the leaker noted that it also has the potential to generate scenes of "violence and nudity or copyrighted material such as company logos."

Examples include a gory anime girl, a Game of Thrones character, a Grand Theft Auto V cover, a zombie Jesus eating a Subway sandwich (also mildly gory), and Shrek being dug up at an archaeological dig, among others.

MattVidPro AI noted that the image model renders each image as though it were deliberately composed in a specific style.

DALL-E 2 launched in April 2022 but was initially gated behind a waitlist due to its popularity and to ethics and safety concerns. The AI image generator became accessible to the public in September 2022.

I tested the future of AI image generation. It’s astoundingly fast.
Imagery generated by HART.

One of the core problems with AI is its notoriously high power and computing demand, especially for tasks such as media generation. On mobile phones, only a handful of pricey devices with powerful silicon can run such features natively, and even when implemented at scale in the cloud, it's a pricey affair.
Nvidia may have quietly addressed that challenge in partnership with researchers at the Massachusetts Institute of Technology and Tsinghua University. The team created a hybrid AI image generation tool called HART (hybrid autoregressive transformer) that essentially combines two of the most widely used AI image creation techniques. The result is a blazing-fast tool with dramatically lower compute requirements.
To give you an idea of just how fast it is, I asked it to create an image of a parrot playing a bass guitar. It returned the picture in about a second; I could barely even follow the progress bar. When I put the same prompt to Google's Imagen 3 model in Gemini, it took roughly 9 to 10 seconds on a 200Mbps internet connection.

A massive breakthrough
When AI images first started making waves, the diffusion technique was behind it all, powering products such as OpenAI's DALL-E image generator, Google's Imagen, and Stable Diffusion. The method can produce images with an extremely high level of detail, but it builds each image over many denoising steps, which makes it slow and computationally expensive.
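To make that cost concrete, here is a minimal, illustrative sketch of the diffusion loop described above. It is not any vendor's actual code; `denoise_step` is a hypothetical stand-in for a full forward pass through a large neural network, which is what makes each of the 25-plus iterations expensive.

```python
import numpy as np

# Minimal sketch of a diffusion loop (illustrative only).
# `denoise_step` stands in for a full forward pass through a large
# neural network, so every one of the 25+ iterations is expensive.

def denoise_step(noisy_image, step, total_steps):
    """Placeholder for one network pass that removes a little noise."""
    blend = 1.0 / (total_steps - step)      # trust the estimate more each step
    estimate = np.zeros_like(noisy_image)   # a real model would predict this
    return noisy_image * (1 - blend) + estimate * blend

def diffusion_generate(shape=(64, 64, 3), total_steps=25):
    image = np.random.randn(*shape)         # start from pure noise
    for step in range(total_steps):         # 25+ heavy network passes per image
        image = denoise_step(image, step, total_steps)
    return image
```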
The second approach, which has recently gained popularity, is autoregressive models. These essentially work in the same fashion as chatbots, generating an image with a next-pixel (or next-token) prediction technique. It is faster, but also a more error-prone method of creating images with AI.
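For contrast, here is an equally rough sketch of the autoregressive style, under the same caveat: `predict_next_token` is a hypothetical placeholder for a transformer forward pass, and the codebook size is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of autoregressive generation (illustrative only).
# `predict_next_token` stands in for a transformer forward pass; the
# codebook size is an assumption, not a real model's vocabulary.

rng = np.random.default_rng(0)
VOCAB_SIZE = 1024  # assumed size of the discrete image-token codebook

def predict_next_token(tokens):
    """Placeholder: a real model conditions on `tokens` to choose the next one."""
    return int(rng.integers(VOCAB_SIZE))

def autoregressive_generate(num_tokens=256):
    tokens = []
    for _ in range(num_tokens):                    # one cheap pass per token,
        tokens.append(predict_next_token(tokens))  # no long denoising loop
    return tokens                                  # later decoded into pixels
```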
On-device demo for HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
The team at MIT fused both methods into a single package called HART. It relies on an autoregressive model to predict compressed image features as discrete tokens, while a small diffusion model handles the rest, compensating for the quality lost in compression. The overall approach reduces the number of generation steps from over two dozen to eight.
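Putting the two sketches together gives a rough picture of the hybrid flow as described here. Again, every name is a hypothetical stand-in rather than HART's real API: the autoregressive pass lays down coarse structure cheaply, and a small diffusion model runs only about eight refinement steps instead of 25-plus full-image passes.

```python
import numpy as np

# Rough sketch of the hybrid flow (illustrative only; these are
# hypothetical stand-ins, not HART's real API). `ar_generate` plays the
# role of the large autoregressive model and `refine_step` the small
# diffusion model that restores fine detail in just ~8 steps.

rng = np.random.default_rng(0)

def ar_generate(num_tokens=256, vocab=1024):
    """Placeholder: coarse image structure as discrete tokens, one pass each."""
    return rng.integers(vocab, size=num_tokens)

def decode_tokens(tokens, shape=(64, 64, 3)):
    """Placeholder: map discrete tokens back to a rough, slightly lossy image."""
    return rng.standard_normal(shape) * 0.1

def refine_step(image, step):
    """Placeholder: one pass of a small diffusion model removing residual noise."""
    return image * 0.9

def hart_style_generate(diffusion_steps=8):
    image = decode_tokens(ar_generate())    # fast coarse pass
    for step in range(diffusion_steps):     # 8 light refinement steps,
        image = refine_step(image, step)    # vs 25+ heavy diffusion passes
    return image
```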
The experts behind HART claim that it can "generate images that match or exceed the quality of state-of-the-art diffusion models, but do so about nine times faster." HART pairs a roughly 700-million-parameter autoregressive model with a small diffusion model of about 37 million parameters.

ChatGPT app could soon generate AI videos with Sora
Depiction of OpenAI Sora video generator on a phone.

OpenAI released its Sora text-to-video generation tool in late 2024 and expanded it to the European market at the end of February this year. It seems the next avenue for Sora is the ChatGPT app.

According to a TechCrunch report citing internal conversations, OpenAI is planning to bring the video-creation AI tool to ChatGPT. So far, the video generator has been available only via a web client and has remained exclusive to paid users.

Microsoft nixes its Dall-E upgrade after image quality complaints
Robot holding a video camera, generated by Bing.

Microsoft has had to roll back its latest update to the Bing image generation system, which had installed the newest iteration of OpenAI's DALL-E model, known as PR16, after Bing users vociferously complained about a decline in image quality.

https://x.com/JordiRib1/status/1869425938976665880
