
The U.S. finally put its foot down on AI image copyright

Théâtre D'opéra Spatial, a Midjourney image that won first prize in a digital art competition. Image credit: Midjourney

AI-generated works of art may be eligible to win awards at state fairs, but they are not protected under American copyright law, according to new guidance issued by the U.S. Copyright Office (USCO) on Wednesday.

The report details the ways in which AI-generated video, images, and text may, and may not, be protected by copyright. It finds that while generative AI is a new technology, its outputs largely fall under existing copyright rules, meaning no new laws need to be enacted to address the issue. Unfortunately for AI content creators, the protections that are available are thin.


The courts have already ruled that AI systems themselves cannot hold copyright. The Supreme Court specified in the 1989 case Community for Creative Non-Violence v. Reid (“CCNV”) that “the author [of a copyrighted work] is . . . the person [emphasis added] who translates an idea into a fixed, tangible expression entitled to copyright protection.”

Pointing to the inherent unpredictability of an AI’s output for a given prompt, the USCO’s guidance argues that prompts don’t offer the user a sufficient degree of control over the generative process to “make users of an AI system the authors of the output.” That’s regardless of how complex and expansive the prompt is.

“No matter how many times a prompt is revised and resubmitted, the final output reflects the user’s acceptance of the AI system’s interpretation, rather than authorship of the expression it contains,” the report reads. In short, “the issue is the degree of human control, rather than the predictability of the outcome.”

That denial of protection does have its limits, however. For example, the 2024 Robert Zemeckis film “Here,” which featured digitally de-aged Tom Hanks and Robin Wright, has been granted copyright protection despite its use of generative technology for the de-aging. This is because the AI was wielded as a tool rather than treated as a producer. Similarly, the USCO argues that “a film that includes AI-generated special effects or background artwork is copyrightable, even if the AI effects and artwork separately are not.”

Artists are also covered, to a degree, if they use an AI system to further modify their existing human-made creative works. The AI-generated elements in the resulting content wouldn’t be copyrightable (since they were generated by the AI), but the overall artistic piece, and its “perceptible human expression,” would be.

This issue is not a new one. As far back as 1965, with the advent of computers, the USCO was wrestling with the question of whether content produced on digital platforms is the work of human authors or simply “written” by the machines.

“The crucial question appears to be whether the ‘work’ is basically one of human authorship, with the computer merely being an assisting instrument,” then-Register of Copyrights Abraham Kaminstein noted at the time, “or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”

The USCO notes that its guidance on the issue could evolve in the coming years as the technology further matures. “In theory, AI systems could someday allow users to exert so much control over how their expression is reflected in an output that the system’s contribution would become rote or mechanical,” the report reads. However, the USCO has found that modern AI prompts simply do not yet rise to that level.

Andrew Tarantola
Former Computing Writer
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Samsung might put AI smart glasses on the shelves this year
Google’s AR smart glasses translation feature demonstrated.

Samsung’s Project Moohan XR headset has grabbed the spotlight in recent months, and rightfully so. It serves as the flagship launch vehicle for a reinvigorated Android XR platform, with plenty of hype from Google’s own quarters.
But it seems Samsung has even more ambitious plans in place and is reportedly experimenting with form factors that go beyond the headset. According to Korea-based ET News, the company is working on a pair of smart glasses and aims to launch them by the end of this year.
Currently in development under the codename “HAEAN” (a machine-translated name), the smart glasses are reportedly in the final stages of having their internal hardware and feature set locked down. The wearable is also said to come equipped with camera sensors.

What to expect from Samsung’s smart glasses?
The Even G1 smart glasses have optional clip-on gradient shades. Photo by Tracey Truly / Digital Trends
The latest leak doesn’t dig into specifics about the internal hardware, but another report from Samsung’s home market sheds some light on the possibilities. Per the Maeil Business Newspaper, the Samsung smart glasses will feature a 12-megapixel camera built on Sony’s IMX681 CMOS image sensor.
The glasses are said to use a dual-chip architecture similar to Apple’s Vision Pro headset. The main processor is touted to be Qualcomm’s Snapdragon AR1 platform, while a secondary processing hub is a chip supplied by NXP.
The onboard camera will open the door to vision-based capabilities such as scanning QR codes, gesture recognition, and facial identification. The smart glasses will reportedly tip the scales at 150 grams, while battery capacity is claimed to be 155 mAh.

I tested the future of AI image generation. It’s astoundingly fast.
Imagery generated by HART.

One of the core problems with AI is its notoriously high power and compute demand, especially for tasks such as media generation. On mobile phones, only a handful of pricey devices with powerful silicon can run such features natively, and even when implemented at scale in the cloud, it’s an expensive affair.
Nvidia may have quietly addressed that challenge in partnership with researchers at the Massachusetts Institute of Technology and Tsinghua University. The team created a hybrid AI image generation tool called HART (hybrid autoregressive transformer) that essentially combines two of the most widely used AI image creation techniques. The result is a blazing-fast tool with dramatically lower compute requirements.
To give you an idea of just how fast it is, I asked it to create an image of a parrot playing a bass guitar. It returned the picture in about a second; I could barely even follow the progress bar. When I put the same prompt to Google’s Imagen 3 model in Gemini, it took roughly 9 to 10 seconds on a 200 Mbps internet connection.

A massive breakthrough
When AI images first started making waves, the diffusion technique was behind it all, powering products such as OpenAI’s DALL-E image generator, Google’s Imagen, and Stable Diffusion. This method can produce images with an extremely high level of detail, but it builds an image over many steps, which makes it slow and computationally expensive.
The second approach, which has recently gained popularity, is autoregressive models, which essentially work in the same fashion as chatbots, generating images using a sequential pixel-prediction technique. It is faster, but also a more error-prone way of creating images with AI.
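To make the contrast concrete, here is a minimal, runnable sketch of the two control flows, with toy numpy stubs standing in for the real models. Every function below (denoise_step, predict_next_token, and so on) is a hypothetical placeholder for illustration, not any real library’s API.

```python
import numpy as np

# Toy stand-ins for real models: the point is only to contrast the
# control flow of the two families of image generators.

def denoise_step(image, step, num_steps):
    # Placeholder for a diffusion model's forward pass: nudge the
    # whole image toward a target a little at each step.
    target = np.full_like(image, 0.5)
    return image + (target - image) / (num_steps - step)

def predict_next_token(tokens):
    # Placeholder for an autoregressive model's next-token guess.
    return (len(tokens) * 31 + 7) % 256

def diffusion_generate(num_steps=30, size=(8, 8)):
    # Diffusion: start from pure noise and refine the WHOLE image at
    # every step. High detail, but each step is a full forward pass,
    # so the many steps make it slow and compute-hungry.
    image = np.random.rand(*size)
    for step in range(num_steps):
        image = denoise_step(image, step, num_steps)
    return image

def autoregressive_generate(num_tokens=64):
    # Autoregressive: emit the image one discrete token at a time,
    # the way a chatbot predicts the next word. Cheaper per step,
    # but early mistakes are locked in and tend to compound.
    tokens = []
    for _ in range(num_tokens):
        tokens.append(predict_next_token(tokens))
    return np.array(tokens, dtype=np.uint8).reshape(8, 8)
```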
Video: On-device demo for HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
The team at MIT fused both methods into a single package called HART. An autoregressive model predicts the compressed image as discrete tokens, while a small diffusion model handles the rest, compensating for the quality lost to compression. The overall approach cuts the number of generation steps from over two dozen to eight.
The experts behind HART claim that it can “generate images that match or exceed the quality of state-of-the-art diffusion models, but do so about nine times faster.” HART pairs a roughly 700-million-parameter autoregressive model with a small, 37-million-parameter diffusion model.
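Based only on that description, the hybrid pipeline can be sketched roughly as follows. This is an illustrative toy, assuming the two-stage split described above; the function names, stub logic, and shapes are invented for the example and are not HART’s actual code.

```python
import numpy as np

def predict_tokens(prompt, num_tokens=64):
    # Placeholder for the ~700M-parameter autoregressive transformer,
    # which would map the prompt to compressed discrete image tokens.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.integers(0, 256, size=num_tokens)

def detokenize(tokens):
    # Decode the compressed tokens into a coarse, low-detail image.
    return tokens.reshape(8, 8) / 255.0

def denoise_residual(residual, coarse, step, num_steps):
    # Placeholder for the ~37M-parameter diffusion model; a real one
    # would condition on the coarse image to restore lost detail.
    return residual * (1.0 - (step + 1) / num_steps)

def hart_style_generate(prompt, num_steps=8):
    # Stage 1: the autoregressive model drafts the whole image as
    # discrete tokens in a single cheap pass.
    coarse = detokenize(predict_tokens(prompt))
    # Stage 2: a small diffusion model refines the residual detail in
    # about eight steps, versus 25+ for a full diffusion pipeline.
    residual = np.random.randn(*coarse.shape) * 0.1
    for step in range(num_steps):
        residual = denoise_residual(residual, coarse, step, num_steps)
    return np.clip(coarse + residual, 0.0, 1.0)

image = hart_style_generate("a parrot playing a bass guitar")
```

Because the expensive diffusion loop runs for only a handful of steps on a much smaller model, most of the work shifts to the single autoregressive pass, which is where the claimed speedup comes from.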

Apple’s hardware can dominate in AI — so why is Siri struggling so much?
Apple's Craig Federighi presents the Image Playground app running on macOS Sequoia at the company's Worldwide Developers Conference (WWDC) in June 2024.

Over the past year or so, a strange contradiction has emerged in the world of Apple: the company makes some of the best computers in the world, whether you need a simple consumer laptop or a high-powered workstation. Yet Apple’s artificial intelligence (AI) efforts are struggling so much that it’s almost laughable.

Take Siri, for example. Many readers will have heard that Apple has taken the highly unusual (and highly embarrassing) step of publicly admitting the new, AI-backed Siri needs more time in the oven. The new Siri infused with Apple Intelligence just isn’t living up to Apple’s promises.
