In the hilarious Project Gucciberg, a deepfaked Gucci Mane reads classic novels

“Gucci Mane crazy, I might pull up on a zebra/ Land on top a eagle, smoke a joint of reefa.”

That’s a Gucci Mane lyric from his 2010 track “It’s Gucci Time” from the album The Appeal: Georgia’s Most Wanted.

“It is a truth universally acknowledged/ that a single man in possession of a good fortune, must be in want of a wife.” That’s also, now, a Gucci bar, albeit one originally written by Jane Austen in her 1813 novel of manners, Pride and Prejudice, although Gucci imbues it with a level of trap rap swagger that doesn’t quite come across in other readings of the classic English text. (By comparison, the top Audible entry for the same novel is read by the decidedly non-trap rap superstar Rosamund Pike.)

Gucci, as it turns out, has been busy — busier even than he was during the 2010-2015 period when he was issuing mixtapes at a dizzying rate of roughly one per month. Today, the 41-year-old rapper debuted recordings of himself reading an assortment of classic texts under the somewhat brilliant title “Project Gucciberg.” The selection includes Alice’s Adventures in Wonderland, Little Women, A Modest Proposal, Dracula, and The Importance of Being Earnest.

Only he didn’t. Well, not exactly.

It’s more deepfake audio wizardry, this time courtesy of the folks at New York-based digital arts collective MSCHF. Fresh off their last project — in which they attached a paintball gun to one of Boston Dynamics’ Spot robots and let users control it remotely over the internet — the team has lent its button-pushing, tech-savvy brand of prankster irreverence to a project in which the rapper born Radric Delantic Davis is himself remote-controlled (or at least his words are) to narrate a slew of vintage novels.

Evil geniuses

MSCHF’s Daniel Greenberg told Digital Trends: “Gucci Mane is one of the most impactful musicians in the history of rap. Project Gutenberg is one of the last bastions of public domain texts on the internet. By combining the two, using the power of A.I. technology, we have created the most impactful rapper-read public domain audiobooks in the history of the internet.”

To create their (totally unauthorized) literature-loving A.I. rapper, the team crafted a training dataset of around six hours of Gucci’s speech, pulled from interviews, podcasts, and whatever other publicly accessible audio they could scavenge from YouTube. This source material was then edited, trimmed into 10-second segments, EQ’d, transcribed, and labeled.
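MSCHF hasn’t published its tooling, but the trimming step it describes — chopping long recordings into fixed 10-second training clips — can be sketched with nothing more than Python’s standard-library `wave` module. The function name and file-naming scheme here are invented for illustration; only the 10-second window comes from the article.

```python
import wave

def split_wav(path, segment_seconds=10):
    """Split one WAV file into fixed-length clips for a TTS training set.

    Returns the list of paths to the clips it wrote. The final clip may be
    shorter than segment_seconds if the source doesn't divide evenly.
    """
    with wave.open(path, "rb") as src:
        params = src.getparams()  # channels, sample width, frame rate, etc.
        frames_per_segment = params.framerate * segment_seconds
        chunks = []
        while True:
            frames = src.readframes(frames_per_segment)
            if not frames:
                break
            chunks.append(frames)

    out_paths = []
    stem = path.rsplit(".", 1)[0]
    for i, frames in enumerate(chunks):
        out_path = f"{stem}_{i:04d}.wav"
        with wave.open(out_path, "wb") as dst:
            dst.setparams(params)  # header frame count is fixed up on close
            dst.writeframes(frames)
        out_paths.append(out_path)
    return out_paths
```

In a real pipeline each clip would then be paired with its transcript and pronunciation labels, but the splitting itself is this mundane.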

“Additionally, our team built out a Gucci pronunciation key/dictionary to better capture the idiosyncrasies of Gucci Mane’s particular argot,” Greenberg said. He added, “Seriously, this thing is the equivalent of a linguistics thesis.”
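Greenberg doesn’t say what form that pronunciation key takes, but such a thing is typically a lookup table of respellings applied to the text before it reaches the synthesizer. A minimal sketch might look like the following — every dictionary entry here is invented for illustration, not taken from MSCHF’s actual lexicon.

```python
import re

# Hypothetical respellings -- MSCHF hasn't published its real dictionary.
GUCCI_PRONUNCIATIONS = {
    "burr": "brrr",
    "wife": "wahf",
    "for real": "fa rill",
}

def apply_pronunciations(text, lexicon=GUCCI_PRONUNCIATIONS):
    """Rewrite words to phonetic respellings before synthesis."""
    for word, respelling in lexicon.items():
        # \b word boundaries keep "wife" from matching inside "midwife".
        pattern = rf"\b{re.escape(word)}\b"
        text = re.sub(pattern, respelling, text, flags=re.IGNORECASE)
    return text
```

A production system would more likely map words to phoneme sequences for the model rather than respelled text, but the idea — a per-speaker override table consulted before synthesis — is the same.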

The dataset was then used to train an A.I. model, which was repeatedly fine-tuned to improve its output and then augmented with human touches, such as pregnant pauses inserted into the text where required.
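The article doesn’t say how those pauses were encoded. One common approach is to splice explicit break markers into the text at sentence boundaries before synthesis, which can be sketched as below — the `<pause=600ms>` token is an invented placeholder, not a real tag from MSCHF’s pipeline.

```python
import re

def add_pauses(text, pause_token="<pause=600ms>"):
    """Insert a pause marker after sentence-ending punctuation so the
    synthesizer holds a beat before starting the next sentence."""
    return re.sub(r"([.!?])\s+", rf"\1 {pause_token} ", text)
```

The human-in-the-loop version MSCHF describes would place these by ear rather than by rule, but a punctuation-driven pass like this is the usual automated starting point.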

“It may sound like Gucci is speaking into a broken microphone at times, or on a bad audio stream — because he was in a lot of our source material,” Greenberg admitted. “However, barring these environmental factors, we feel the actual voice emulation is extremely successful. It is both amazing and scary how good this technology is to make anyone say whatever you want.”

The real Gucci Mane did not respond to a request for comment. However, this is, as Greenberg acknowledged, something of a “gray area” when it comes to copyright. “The copyright implications of deepfakes have not yet been legislated,” he said. “All of the audio samples we trained our model on were publicly available through interviews. At the end of the day, we have a voice that is not ours, reading public domain text that we didn’t write, but we are creating our ‘own’ audiobooks.”

Deepfake-A-Thon

Last year, Jay-Z’s Roc Nation LLC entertainment agency took issue with an audio deepfaker who used the rapper’s voice to spout gibberish like the Navy Seal Copypasta on YouTube. It was, as I noted at the time, a brain-teasing conundrum for a rapper who once rapped the line “I sampled your voice, you was usin’ it wrong” during his early 2000s beef with Nas. But Roc Nation wasn’t getting into the ironic complexity of the case. They were just annoyed about someone “unlawfully [using] an A.I. to impersonate our client’s voice.”

It’s not difficult to see why an artist might be perturbed by such a thing. Like the visual deepfakes that place actors in movies in which they never appeared (or, as is doing the rounds recently, Tom Cruise in a series of hyperactive TikTok videos), an audio deepfake of an artist takes their most valuable asset — their voice, in this case — and uses it to create a performance they never consented to give. There are both ethical and financial issues at stake.

“The history of rap is the history of self-reference,” Greenberg maintained. “Throughout the entire canon of the tradition, throughout the body of a given performer’s work. When you peek under the hood of an A.I. learning model, there’s an uncannily similar process occurring — a kind of hyper-self-reference. Oblique as it may seem, this all dovetails quite nicely.”

Should we be worried about the risk of audio deepfakes in a world where real and fake can be blurred to a startling degree?

“Absolutely, but alarm won’t stop deepfakes from becoming more and more mainstream,” he said. “This technology is here to stay — we should be so lucky if it’s only ever used for fun. Maybe doing fun things with it will help keep us in that realm. We have reached an inflection point where truth and fiction are becoming impossible to discern on the internet. Thus, we realized it was crucial that we soothe our ears with Gucci Mane’s gentle A.I.-generated reading voice.”

As siren songs to usher us onto the rocks of Skynet go, maybe Gucci isn’t so bad, as it happens. Especially if it could be 2009-era Gucci, circa The State vs. Radric Davis.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…