
Adobe gets called out for violating its own AI ethics

Ansel Adams' panorama of Grand Teton National Park with the peak in the background and a meandering river in the forest.
Ansel Adams / National Archives

Last Friday, the estate of famed 20th-century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”

While the Adobe Stock platform, where the images were made available, does allow AI-generated images, The Verge notes that the site’s contributor terms prohibit images “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.”


Adobe has since removed the offending images, conceding in the Threads conversation that “this goes against our Generative AI content policy.”

A screenshot of Ansel Adams images put in Adobe Stock.
Adobe

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”

The ability to create high-resolution images of virtually any subject and in any visual style by simply describing the idea with a written prompt has helped launch generative AI into the mainstream. Image generators like Midjourney, Stable Diffusion, and DALL-E have all proven immensely popular with users, though decidedly less so with the copyright holders and artists whose styles those programs imitate and whose existing works those AI engines are trained on.

Adobe’s own Firefly generative AI platform was, the company claimed, trained on its extensive, licensed Stock image library. As such, Firefly was initially marketed as a “commercially safe” alternative to image generators like Midjourney or DALL-E, which trained on datasets scraped from the public internet.

However, an April report from Bloomberg found that some 57 million images in the Stock database, roughly 14% of the total, were AI-generated, some of which were created by Adobe’s data-scraping AI competitors.

“Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson told Bloomberg at the time.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
OpenAI uses its own models to fight election interference
ChatGPT on a phone on an encyclopedia

OpenAI, the company behind the popular ChatGPT generative AI assistant, released a report saying it has blocked more than 20 deceptive operations and networks worldwide so far in 2024. The operations differed in objective, scale, and focus, and were used to create malware and to write fake social media accounts, fake bios, and website articles.

OpenAI says it analyzed the activities it stopped and shared key insights from that analysis. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the report says.

Zoom debuts its new customizable AI Companion 2.0
overhead shot of a person taking a zoom meeting at their desk

Zoom unveiled its AI Companion 2.0 during the company's Zoomtopia 2024 event on Wednesday. The AI assistant is incorporated throughout the Zoom Workplace app suite, which Zoom promises will "deliver an AI-first work platform for human connection."

While Zoom got its start as a videoconferencing app, the company has expanded its product ecosystem to become an "open collaboration platform" that includes a variety of communication, productivity, and business services, both online and in physical office spaces. The company's AI Companion, which debuted last September, is incorporated deeply throughout Zoom Workplace and, like Google's Gemini or Microsoft's Copilot, is designed to automate repetitive tasks like transcribing notes and summarizing reports that can take up as much as 62% of a person's workday.

Adobe is giving creators a way to prove their art isn’t AI slop
Zoom blur background in Photoshop on a MacBook.

With AI slop taking over the web, being able to confirm a piece of content's provenance is more important than ever. Adobe announced on Tuesday that it will begin rolling out a beta of its Content Authenticity web app in the first quarter of 2025, enabling creators to digitally certify their works as human-made. In the meantime, the company is immediately launching a Content Authenticity browser extension for Chrome to help protect content creators until the web app arrives.

Adobe's system relies on a combination of digital fingerprinting, invisible watermarking, and cryptographic metadata to certify the authenticity of images, video, and audio files. Unlike traditional metadata, which is easily circumvented with screenshots, Adobe's system can still identify the creator of a registered file even when the credentials have been scrubbed. This enables the company to “truly say that wherever an image, or a video, or an audio file goes, on anywhere on the web or on a mobile device, the content credential will always be attached to it,” Adobe Senior Director of Content Authenticity Andy Parsons told TechCrunch.
