‘Wall-E With a Gun’: Midjourney Generates Videos of Disney Characters Amid Massive Copyright Lawsuit


Midjourney’s new AI-generated video tool will produce animated clips featuring copyrighted characters from Disney and Universal, WIRED has found—including video of the beloved Pixar character Wall-E holding a gun.

It’s been a busy month for Midjourney. This week, the generative AI startup released its sophisticated new video tool, V1, which lets users make short animated clips from images they generate or upload. The current version of Midjourney’s AI video tool requires an image as a starting point; generating videos using text-only prompts is not supported.

The release of V1 comes on the heels of a very different kind of announcement earlier in June: Hollywood behemoths Disney and Universal filed a blockbuster lawsuit against Midjourney, alleging that it violates copyright law by generating images with the studios’ intellectual property.

Midjourney did not immediately respond to requests for comment. Disney and Universal reiterated statements their executives have made about the lawsuit, including Disney legal chief Horacio Gutierrez's allegation that Midjourney's output amounts to "piracy."

It appears that Midjourney may have attempted to put up some video-specific guardrails for V1. In our testing, it blocked animations from prompts based on Frozen's Elsa, Boss Baby, Goofy, and Mickey Mouse, although it would still generate images of these characters. When WIRED asked V1 to animate images of Elsa, an "AI moderator" blocked the prompt from generating videos. "AI Moderation is cautious with realistic videos, especially of people," read the pop-up message.

These limitations, which appear to be guardrails, are incomplete. WIRED testing shows that V1 will generate animated clips of a wide variety of Universal and Disney characters, including Homer Simpson, Shrek, Minions, Deadpool, and Star Wars’ C-3PO and Darth Vader. For example, when asked for an image of Minions eating a banana, Midjourney generated four outputs with recognizable versions of the cute, yellow characters. Then, when WIRED clicked the “Animate” button on one of the outputs, Midjourney generated a follow-up video with the characters eating a banana—peel and all.

Although Midjourney seems to have blocked some Disney- and Universal-related prompts for videos, WIRED could sometimes circumvent the potential guardrails during tests by using spelling variations or repeating the prompt. Midjourney also lets users provide a prompt to inform the animation; using that feature, WIRED was able to generate clips of copyrighted characters behaving in adult ways, like Wall-E brandishing a firearm and Yoda smoking a joint.

The Disney and Universal lawsuit poses a major threat to Midjourney, which also faces legal challenges from visual artists alleging copyright infringement. Although the complaint focuses largely on examples from Midjourney's image-generation tools, it alleges that video would "only enhance Midjourney['s] ability to distribute infringing copies, reproductions, and derivatives of Plaintiffs' Copyrighted Works."

The complaint includes dozens of alleged Midjourney images showing Universal and Disney characters. The set was initially produced as part of a report on Midjourney’s so-called “visual plagiarism problem” from AI critic and cognitive scientist Gary Marcus and visual artist Reid Southen.

“Reid and I pointed out this problem 18 months ago, and there’s been very little progress and very little change,” says Marcus. “We still have the same situation of unlicensed materials being used, and guardrails that work a little bit but not very well. For all the talk about exponential progress in AI, what we’re getting is better graphics, not a fundamental-principle solution to this problem.”

Researchers suggest OpenAI trained AI models on paywalled O’Reilly books


OpenAI has been accused by many parties of training its AI on copyrighted content sans permission. Now a new paper by an AI watchdog organization makes the serious accusation that the company increasingly relied on nonpublic books it didn’t license to train more sophisticated AI models.

AI models are essentially complex prediction engines. Trained on a lot of data — books, movies, TV shows, and so on — they learn patterns and novel ways to extrapolate from a simple prompt. When a model “writes” an essay on a Greek tragedy or “draws” Ghibli-style images, it’s simply pulling from its vast knowledge to approximate. It isn’t arriving at anything new.

While a number of AI labs, including OpenAI, have begun embracing AI-generated data to train AI as they exhaust real-world sources (mainly the public web), few have eschewed real-world data entirely. That’s likely because training on purely synthetic data comes with risks, like worsening a model’s performance.

The new paper, out of the AI Disclosures Project, a nonprofit co-founded in 2024 by media mogul Tim O’Reilly and economist Ilan Strauss, draws the conclusion that OpenAI likely trained its GPT-4o model on paywalled books from O’Reilly Media. (O’Reilly is the CEO of O’Reilly Media.)

In ChatGPT, GPT-4o is the default model. O’Reilly doesn’t have a licensing agreement with OpenAI, the paper says.

“GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content … compared to OpenAI’s earlier model GPT-3.5 Turbo,” wrote the co-authors of the paper. “In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples.”

The paper used a method called DE-COP, first introduced in an academic paper in 2024, designed to detect copyrighted content in language models’ training data. Also known as a “membership inference attack,” the method tests whether a model can reliably distinguish human-authored texts from paraphrased, AI-generated versions of the same text. If it can, it suggests that the model might have prior knowledge of the text from its training data.

The co-authors of the paper — O’Reilly, Strauss, and AI researcher Sruly Rosenblat — say that they probed GPT-4o, GPT-3.5 Turbo, and other OpenAI models’ knowledge of O’Reilly Media books published before and after their training cutoff dates. They used 13,962 paragraph excerpts from 34 O’Reilly books to estimate the probability that a particular excerpt had been included in a model’s training dataset.
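The multiple-choice test at the heart of DE-COP can be illustrated with a small sketch. This is not the paper's code: the function names, the toy scoring model, and the example passages below are all hypothetical, and a real probe would query an actual language model's likelihoods rather than a stub. The idea is simply that if a model consistently picks the verbatim passage out of a lineup of paraphrases at a rate far above chance, the passage may have been in its training data.

```python
import random

def decop_quiz(model_score, original, paraphrases, trials=100, seed=0):
    """DE-COP-style membership test (a simplified sketch, not the paper's protocol).

    model_score(text) -> float is the model's likelihood score for a passage.
    Each trial shuffles the verbatim original in with its paraphrases and
    checks whether the model's top-scoring option is the original. Returns
    the fraction of trials the original "wins"; a rate far above chance
    (1 / (1 + len(paraphrases))) hints the passage appeared in training data.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        options = [original] + list(paraphrases)
        rng.shuffle(options)
        if max(options, key=model_score) == original:
            wins += 1
    return wins / trials

# Toy stand-in for a language model: passages it has "memorized" score higher.
memorized = {"The quick brown fox jumps over the lazy dog."}

def toy_score(text):
    return 1.0 if text in memorized else 0.5

original = "The quick brown fox jumps over the lazy dog."
paraphrases = [
    "A fast brown fox leaps above a sleepy dog.",
    "The speedy fox hops over the idle hound.",
    "A brown fox quickly jumps past a lazy dog.",
]

rate = decop_quiz(toy_score, original, paraphrases)
print(rate)  # 1.0 for this toy model; chance level here would be 0.25
```

With a model that assigns every option the same score, the win rate falls back toward the 25 percent chance level, which is the baseline the real study compares against.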

According to the results of the paper, GPT-4o “recognized” far more paywalled O’Reilly book content than OpenAI’s older models, including GPT-3.5 Turbo. That’s even after accounting for potential confounding factors, the authors said, like improvements in newer models’ ability to figure out whether text was human-authored.

“GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O’Reilly books published prior to its training cutoff date,” wrote the co-authors.

It isn’t a smoking gun, the co-authors are careful to note. They acknowledge that their experimental method isn’t foolproof and that OpenAI might’ve collected the paywalled book excerpts from users copying and pasting them into ChatGPT.

Muddying the waters further, the co-authors didn’t evaluate OpenAI’s most recent collection of models, which includes GPT-4.5 and “reasoning” models such as o3-mini and o1. It’s possible that these models weren’t trained on paywalled O’Reilly book data or were trained on a lesser amount than GPT-4o.

That being said, it’s no secret that OpenAI, which has advocated for looser restrictions around developing models using copyrighted data, has been seeking higher-quality training data for some time. The company has gone so far as to hire journalists to help fine-tune its models’ outputs. That’s a trend across the broader industry: AI companies recruiting experts in domains like science and physics to effectively have these experts feed their knowledge into AI systems.

It should be noted that OpenAI pays for at least some of its training data. The company has licensing deals in place with news publishers, social networks, stock media libraries, and others. OpenAI also offers opt-out mechanisms — albeit imperfect ones — that allow copyright owners to flag content they’d prefer the company not use for training purposes.

Still, as OpenAI battles several suits over its training data practices and treatment of copyright law in U.S. courts, the O’Reilly paper isn’t the most flattering look.

OpenAI didn’t respond to a request for comment.

Key ex-OpenAI researcher subpoenaed in AI copyright case


Alec Radford, a researcher who helped develop many of OpenAI’s key AI technologies, has been subpoenaed in a copyright case against the AI startup, according to a court filing Tuesday.

The filing, submitted by an attorney for the plaintiffs to the U.S. District Court in the Northern District of California, indicated that Radford was served a subpoena on February 25.

Radford, who left OpenAI late last year to pursue independent research, was the lead author of OpenAI’s seminal research paper on generative pre-trained transformers (GPTs). GPTs underpin OpenAI’s most popular products, including the company’s AI-powered chatbot platform, ChatGPT.

Radford joined OpenAI in 2016, a year after the firm’s founding. He worked on several models in the company’s GPT series, as well as a speech recognition model, Whisper, and DALL-E, the company’s image-generating model.

The copyright case, “re OpenAI ChatGPT Litigation,” was brought by book authors including Paul Tremblay, Sarah Silverman, and Michael Chabon, who alleged that OpenAI infringed their copyrights by using their work to train its AI models. The plaintiffs also argued that ChatGPT infringed their copyrights by liberally quoting from their works sans attribution.

Last year, the Court dismissed two of the plaintiffs’ claims against OpenAI, but allowed the claim for direct infringement to move forward. OpenAI maintains its use of copyrighted data for training is protected under fair use.

Radford isn’t the only high-profile figure whom attorneys for the authors are attempting to wrangle. Plaintiffs’ lawyers have also moved to compel the depositions of Dario Amodei and Benjamin Mann, both ex-OpenAI employees who left the company to start Anthropic. Amodei and Mann have fought the motions, claiming they’re overly burdensome.

A U.S. magistrate judge ruled this week that Amodei must sit for hours of questioning about the work he did for OpenAI in two copyright cases, including a case filed by the Authors Guild.

1,000 artists release ‘silent’ album to protest UK copyright sell-out to AI


The U.K. government is pushing forward with plans to attract more AI companies to the region through changes to copyright law that would allow developers to train AI models on artists’ content on the internet — without permission or payment — unless creators proactively “opt out.” Not everyone is marching to the same beat, though.

On Monday, a group of 1,000 musicians released a “silent album,” protesting the planned changes. The album — titled “Is This What We Want?” — features tracks from Kate Bush, Imogen Heap, and contemporary classical composers Max Richter and Thomas Hewitt Jones, among others. It also features co-writing credits from hundreds more, including big names like Annie Lennox, Damon Albarn, Billy Ocean, The Clash, Mystery Jets, Yusuf / Cat Stevens, Riz Ahmed, Tori Amos, and Hans Zimmer. 

But this is not Band Aid part 2. And it’s not a collection of music. Instead, the artists have put together recordings of empty studios and performance spaces — a symbolic representation of what they believe will be the impact of the planned copyright law changes. 

“You can hear my cats moving around,” is how Hewitt Jones described his contribution to the album. “I have two cats in my studio who bother me all day when I’m working.”

To put an even more blunt point on it, the titles of the 12 tracks that make up the album spell out a message: “The British government must not legalize music theft to benefit AI companies.”

The album is just the latest move in the U.K. to bring attention to the issue of how copyright is being handled in AI training. Similar protests are underway in other markets, like the U.S., highlighting a global concern among artists.

Ed Newton-Rex, who organized the project, has simultaneously been leading a bigger campaign against AI training without licensing. A petition he started has now been signed by more than 47,000 writers, visual artists, actors, and others in the creative industries, with nearly 10,000 of them signing in the five weeks since the U.K. government announced its big AI strategy.

Newton-Rex said he has also been “running a nonprofit in AI for the last year where we’ve been certifying companies that basically don’t scrape and train on great work without permission.” 

Newton-Rex arrived at advocating for artists after having batted for both sides. Classically trained as a composer, he later built an AI-based music composition platform called Jukedeck that let people bypass using copyrighted works by creating their own. Its catchy pitch, where he rapped and riffed on the virtues of using AI to write music, won the TechCrunch Startup Battlefield competition in 2015. Jukedeck was eventually acquired by TikTok, where he worked for some time on music services. 

After several years at other tech companies like Snap and Stability, Newton-Rex is back to considering how to build the future without burning the past. He’s contemplating that idea from a pretty interesting vantage point: He now lives in the Bay Area with wife Alice Newton-Rex, VP of product at WhatsApp. 

The album release comes just ahead of the planned changes to copyright law in the U.K., which would force artists who do not want their work used for AI training purposes to proactively “opt out.”

Newton-Rex thinks this effectively creates a lose-lose situation for artists since there is no opt-out method in place, or any clear way of being able to track what specific material has been fed into any AI system. 

“We know that opt-out schemes are just not taken up,” he said. “This is just going to give 90% [to] 95% of people’s work to AI companies. That’s without a doubt.”

The solution, say the artists, is to produce work in other markets where there might be better protections for it. Hewitt Jones — who threw a working keyboard into a harbor in Kent at an in-person protest not long ago (he fished it out, broken, afterwards) — said he’s considering markets like Switzerland for distributing his music in the future. 

But the rock and hard place of a harbor in Kent are nothing compared to the Wild West of the internet. 

“We’ve been told for decades to share our work online because it’s good for exposure. But now AI companies and, incredibly, governments are turning around and saying, ‘Well, you put that online for free …’” Newton-Rex said. “So now artists are just stopping making and sharing their work. A number of artists have contacted me to say this is what they’re doing.”

The album will be posted widely on music platforms sometime Tuesday, the organizers said, and any donations or proceeds from playing it will go to the charity Help Musicians.