
Click and drag AI image editing could change everything

The latest development in artificial intelligence is a tool that allows you to edit an already-generated image to your specifications.

Say you want to “change the dimensions of a car or manipulate a smile into a frown with a simple click and drag”: a model called DragGAN lets you do exactly that.


Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

paper page: https://t.co/Gjcm1smqfl pic.twitter.com/XHQIiMdYOA

— AK (@_akhaliq) May 19, 2023

DragGAN, a generative adversarial network (GAN), currently exists only as a research paper. Even so, it has drawn so much attention from people eager to see its demos that the research team’s homepage has crashed under the heavy traffic.

The Verge compared DragGAN to the Warp tool in Photoshop, adding that it is much more powerful: rather than “smush pixels around,” it “re-generates the underlying object,” and can even rotate subjects as if they were 3D.

The potential of such a tool lies in the fact that text-to-image generative AI doesn’t always output what you want. With something like DragGAN, you can go back afterward and edit the existing image instead of having to generate an entirely new one.

Demos included in the research paper show adding height to a mountain, repositioning a human model and editing the length and shape of her clothes, opening or closing a lion’s mouth, and turning a person’s neutral expression into a smile. With many AI tools currently available, users would instead have to regenerate the image with a more specific prompt to get a more desirable result.

The research team noted in its paper that the regeneration can even invent new details where an edit calls for them: “Our approach can hallucinate occluded content, like the teeth inside a lion’s mouth, and can deform following the object’s rigidity, like the bending of a horse leg.”
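The paper frames this as an iterative loop: the user’s “handle” point is pulled toward a target point by repeatedly nudging the generator’s latent code, regenerating the image, and re-locating the handle. Below is a loose, toy sketch of that control flow in Python; every function and value here is a hypothetical stand-in (the real method optimizes the latent through the GAN’s feature space), not the authors’ code.

```python
import numpy as np

# Toy sketch of a DragGAN-style drag loop, hypothetical throughout: the real
# method backpropagates a feature-matching loss through the GAN generator
# ("motion supervision") and re-locates the handle point by searching the
# generator's feature maps ("point tracking"). Here the "generator" is an
# identity map so the loop's structure is visible end to end.

rng = np.random.default_rng(0)

def toy_generator(latent):
    # Stand-in for a GAN: maps a 2-D latent straight to a "feature point".
    return latent.copy()

def drag(latent, handle, target, steps=100, lr=0.1):
    for _ in range(steps):
        direction = target - handle
        dist = np.linalg.norm(direction)
        if dist < 0.05:                       # handle reached the target
            break
        # Motion supervision: nudge the latent so the handle's feature
        # moves a small step toward the target.
        latent = latent + lr * direction / dist
        # Point tracking: find where the handle landed after the image
        # is regenerated from the updated latent.
        handle = toy_generator(latent)
    return latent, handle

latent = rng.standard_normal(2)
handle = toy_generator(latent)
target = np.array([3.0, -1.0])
latent, handle = drag(latent, handle, target)
print("handle after drag:", handle, "target:", target)
```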

Many brands are attempting to offer editing options for generative AI content. Most, however, stop short of letting you edit the image itself, offering only editing around it. Microsoft’s Designer app, for example, lets you generate AI images from a text prompt and pick your favorite of three results. You can then take it into the design studio and build creativity- and productivity-based projects around it, such as social media posts, invitations, digital postcards, or graphics with the image as the focal point. What you cannot do is edit the AI-generated image itself.

With the DragGAN tool still only a demo for now, there is no telling what quality a readily available version of the technology would achieve, or whether one is even feasible, especially since the demos exist only as low-resolution videos. However, it is an interesting example of how quickly AI continues to develop.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…
OpenAI’s latest model creates lifelike images and readable text, try it free
ChatGPT and OpenAI logos.

OpenAI has introduced its 4o model into ChatGPT to enable native image generation within the chatbot itself. This upgrade means you no longer have to use OpenAI’s Dall-E image generation model as a separate entity, though Dall-E remains available for those who prefer it. The AI brand has also enabled its Sora AI video generator within ChatGPT.

The new features are currently available for ChatGPT free users, as well as for ChatGPT Plus, Team, and Pro users. Availability will be coming to enterprise and education users next week.

Nvidia G-Assist uses AI to configure game settings so you don’t have to
An MSI gaming monitor at CES 2025.

Nvidia's new G-Assist AI assistant is now available on the Nvidia app, ready for GeForce RTX desktop users to try out. The concept first appeared in 2017 as an April Fools’ joke, became a real tech demo last year, and is now a real product.

The assistant is designed to take care of the ever-growing pile of settings PC users have to deal with, tuning system settings and game settings as well as charting performance statistics.

I tested the future of AI image generation. It’s astoundingly fast.
Imagery generated by HART.

One of the core problems with AI is its notoriously high power and computing demand, especially for tasks such as media generation. On mobile phones, only a handful of pricey devices with powerful silicon can run such features natively. Even when implemented at scale in the cloud, it’s a pricey affair.
Nvidia may have quietly addressed that challenge in partnership with the folks over at the Massachusetts Institute of Technology and Tsinghua University. The team created a hybrid AI image generation tool called HART (hybrid autoregressive transformer) that essentially combines two of the most widely used AI image creation techniques. The result is a blazing-fast tool with dramatically lower compute requirements.
Just to give you an idea of how fast it is, I asked it to create an image of a parrot playing a bass guitar. It returned the picture in just about a second; I could barely even follow the progress bar. When I put the same prompt to Google’s Imagen 3 model in Gemini, it took roughly 9-10 seconds on a 200 Mbps internet connection.

A massive breakthrough
When AI images first started making waves, the diffusion technique was behind it all, powering products such as OpenAI’s Dall-E image generator, Google’s Imagen, and Stable Diffusion. This method can produce images with an extremely high level of detail, but it builds each image over many sequential steps, which makes it slow and computationally expensive.
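To make that cost concrete, here is a minimal toy sketch of the sampling loop a diffusion model runs at generation time. The denoiser below is a hypothetical stand-in for what, in a real system, is a very large neural network, which is precisely why dozens of sequential steps get expensive.

```python
import numpy as np

# Toy sketch of diffusion sampling. The "denoiser" below is a stand-in
# function; in a real system it is a large neural network trained to
# predict the noise left in the image at a given noise level.
def toy_denoiser(noisy_image, noise_level):
    # Stand-in for an expensive network call: nudge pixel values toward a
    # flat gray image, scaled by how noisy the image is believed to be.
    return (noisy_image - 0.5) * noise_level

def diffusion_sample(steps=50, size=(64, 64, 3), seed=0):
    rng = np.random.default_rng(seed)
    image = rng.standard_normal(size)          # start from pure noise
    for t in range(steps, 0, -1):              # every step = one full model call
        noise_level = t / steps
        predicted_noise = toy_denoiser(image, noise_level)
        image = image - predicted_noise / steps   # strip a little noise per step
    return image

# Dozens of sequential network evaluations are exactly why diffusion is slow.
img = diffusion_sample(steps=50)
print(img.shape)
```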
The second approach, which has recently gained popularity, is autoregressive models. These work in much the same fashion as chatbots, generating an image piece by piece by predicting each next chunk of pixels from everything produced so far. It is a faster, but also more error-prone, way of creating images with AI.
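By way of contrast, here is a toy sketch of the autoregressive approach, under the same caveat that the model is a hypothetical stand-in: the image emerges as a sequence of discrete tokens, one prediction per call, and since earlier tokens are never revised, early mistakes persist, which is where the errors creep in.

```python
import numpy as np

VOCAB = 256           # hypothetical codebook of possible image tokens
NUM_TOKENS = 16 * 16  # a 16x16 grid of tokens makes one image

def toy_next_token_model(tokens_so_far, rng):
    # Stand-in for a transformer: a real model would condition on every
    # token generated so far and return a probability distribution over
    # the codebook. Here we simply sample uniformly.
    return int(rng.integers(0, VOCAB))

def autoregressive_sample(seed=0):
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(NUM_TOKENS):                # one model call per token
        tokens.append(toy_next_token_model(tokens, rng))
        # Earlier tokens are never revisited, so an early mistake
        # propagates through the rest of the image.
    return np.array(tokens).reshape(16, 16)

print(autoregressive_sample())
```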
On-device demo for HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
The team at MIT fused both methods into a single package called HART. It relies on an autoregressive model to predict the compressed image as discrete tokens, while a small diffusion model handles the rest, compensating for the quality lost in compression. The overall approach cuts the number of generation steps from more than two dozen down to eight.
The experts behind HART claim that it can “generate images that match or exceed the quality of state-of-the-art diffusion models, but do so about nine times faster.” HART pairs an autoregressive model of roughly 700 million parameters with a small diffusion model of about 37 million parameters.
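Based purely on that description, here is a hedged sketch of how such a hybrid pipeline might be wired together: a large autoregressive model drafts the whole image as discrete tokens, then a small diffusion model spends just eight cheap refinement steps restoring the detail lost to compression. Every name, size, and function below is a hypothetical stand-in, not HART’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 1024                    # hypothetical codebook size

# Stage 1 (hypothetical stand-in): a large ~700M-parameter autoregressive
# model would draft the image as a grid of discrete codebook tokens.
def draft_tokens(num_tokens=16 * 16):
    return rng.integers(0, VOCAB, size=num_tokens)

# Hypothetical decoder: map discrete tokens to a coarse image. A real
# system would use a learned decoder network here.
def decode_tokens(tokens, size=(64, 64, 3)):
    coarse = tokens / (VOCAB - 1)             # normalize token ids to [0, 1]
    return np.resize(coarse, size)

# Stage 2 (hypothetical stand-in): a small ~37M-parameter diffusion model
# refines the coarse draft in a handful of steps instead of dozens.
def refine(image, steps=8):
    for t in range(steps, 0, -1):
        correction = (image - 0.5) * (t / steps)  # stand-in for a model call
        image = image - correction / steps
    return image

coarse = decode_tokens(draft_tokens())
final = refine(coarse, steps=8)   # 8 cheap steps vs. 25-50 expensive ones
print(final.shape)                # (64, 64, 3)
```

The division of labor is the point of the design: the expensive sequential drafting happens over compact tokens, and the diffusion model only has to polish, not generate from scratch.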
