New ‘poisoning’ tool spells trouble for AI text-to-image tech

Professional artists and photographers frustrated by generative AI firms using their work to train their models may soon have an effective way to respond that doesn't involve going to court.

Generative AI burst onto the scene with the launch of OpenAI's ChatGPT chatbot almost a year ago. The tool is remarkably adept at conversing in a natural, human-like way, but to gain that ability it had to be trained on masses of data scraped from the web.

Other generative AI tools can produce images from text prompts and, like ChatGPT, are trained on vast numbers of images scraped from the web.

It means artists and photographers are having their work used — without consent or compensation — by tech firms to build out their generative AI tools.

To fight this, a team of researchers has developed a tool called Nightshade that can corrupt a model's training data, causing it to spit out erroneous images in response to prompts.

Outlined recently in an article by MIT Technology Review, Nightshade "poisons" the training data by making subtle, invisible changes to an image's pixels before it's uploaded to the web.

“Using it to ‘poison’ this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless — dogs become cats, cars become cows, and so forth,” the MIT Technology Review report said, adding that the research behind Nightshade has been submitted for peer review.
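Nightshade's actual algorithm hasn't been released, but the broad idea resembles a targeted adversarial perturbation: nudge an image's features, as a vision model sees them, toward an unrelated concept while keeping every pixel change too small for a human to notice. The sketch below is a hypothetical illustration of that general technique using a CLIP image encoder, not Nightshade itself; the file name, the "cat" decoy concept, and the perturbation budget are all assumptions.

```python
# Hypothetical sketch of the general image-poisoning idea, NOT Nightshade's
# actual (unreleased) algorithm. We nudge an artwork's features, as seen by
# a CLIP image encoder, toward an unrelated concept ("a photo of a cat")
# while keeping pixel changes within a small, imperceptible budget. A model
# trained on many such images could learn the wrong concept associations.

import torch
import open_clip  # pip install open_clip_torch
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

# The artwork to protect (hypothetical file) and the decoy concept.
image = preprocess(Image.open("dog_artwork.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    target = model.encode_text(tokenizer(["a photo of a cat"]))
    target = target / target.norm(dim=-1, keepdim=True)

delta = torch.zeros_like(image, requires_grad=True)
epsilon, step_size = 0.05, 0.005  # assumed budget in normalized input space

for _ in range(100):
    features = model.encode_image(image + delta)
    features = features / features.norm(dim=-1, keepdim=True)
    # Maximize cosine similarity between the image and the decoy concept.
    loss = -(features * target).sum()
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()  # signed gradient step (PGD-style)
        delta.clamp_(-epsilon, epsilon)         # keep the change imperceptible
        delta.grad.zero_()

# `image + delta` now looks unchanged to a human but resembles "cat" in the
# encoder's feature space; it would be saved and uploaded in place of the
# original.
```

A real tool would also need perturbations that survive resizing, cropping, and compression, and that transfer across the different encoders used by different image generators, which is where most of the engineering effort lies.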

While the image-generating tools are already impressive and continue to improve, the way they're trained has proved controversial. Many of the tools' creators are currently facing lawsuits from artists who claim their work was used without permission or payment.

University of Chicago professor Ben Zhao, who led the research team behind Nightshade, said that such a tool could help shift the balance of power back to artists, firing a warning shot at tech firms that ignore copyright and intellectual property.

“The data sets for large AI models can consist of billions of images, so the more poisoned images can be scraped into the model, the more damage the technique will cause,” MIT Technology Review said in its report.

The team plans to make Nightshade open source when it's released so that others can refine it and make it more effective.

Aware of its potential to disrupt, the team behind Nightshade said it should be used as “a last defense for content creators against web scrapers” that disrespect their rights.

In a bid to address the issue, DALL-E creator OpenAI recently began allowing artists to remove their work from its training data, but the process has been described as extremely onerous: an artist must submit a separate application for each image they want removed, including a copy of the image and a description of it.

Making the removal process considerably easier might go some way toward discouraging artists from turning to a tool like Nightshade, which could cause many more problems for OpenAI and others in the long run.
