Reddit may ask you to prove you’re human as it cracks down on bot accounts


Reddit is stepping up its fight against bots, and you could now be asked to prove your account is human if the platform detects fishy behavior.

Reddit CEO Steve Huffman says these checks will be rare, but they are meant to protect what makes Reddit work in the first place – real people talking to real people.

As AI-generated content spreads, Reddit admits it is getting harder to tell who is behind a post. So instead of broad crackdowns, it is focusing on suspicious behavior and adding clearer signals across the platform.

How Reddit plans to separate humans from bots

If Reddit detects signs of automation or unusual behavior, it may trigger a human verification check. This could involve simple methods like passkeys or Face ID that confirm a human is present.

In some cases, third-party biometric systems like Sam Altman’s World ID may be used. The platform may also use government-issued IDs in regions where laws require them. However, Reddit says that your identity will stay separate from your account.

The company is also standardizing labels for automated accounts. Approved bots will carry an [APP] tag, making it obvious you are interacting with software. Developers will need to register their tools to get this label, which adds a layer of transparency.

What does this mean for your Reddit experience?

Since Reddit says this is not a sitewide verification system, most users might never be asked to prove anything. Even when such checks take place, the focus will be on confirming a human exists, not identifying who that person is.

At the same time, the platform will continue removing harmful bots at scale, already taking down around 100,000 accounts daily. It is also improving reporting tools so users can flag suspicious activity more easily.

Reddit is not banning AI-written posts outright, but it is drawing a firm line. For now, the platform cares less about how content is written and more about who is behind it.

Here’s how Google is making it easier to move from ChatGPT to Gemini



TL;DR

  • Google is making it easier to switch from a competing chatbot to Gemini in two ways.
  • You’ll be able to transfer user information from the other chatbot to Gemini by using a prompt and following a few extra steps.
  • Chats can also be imported, but they’ll need to be saved in a zip file that’s no larger than 5GB.

The more a chatbot knows about you, the better it gets at offering relevant responses. However, the more you use one AI service, the less likely you are to try out the other options. Who wants to spend all of that time training multiple chatbots? We learned back in February that Google is working on a solution that should make switching to Gemini from a competing chatbot less of a headache. Now we have more information on how the solution will work.

Our investigation into the Google app (version 17.11.54.sa.arm64) has revealed that there are two parts to this solution: import memory and import chats. Starting with the import memory option, you’ll be able to transfer user information from other platforms to Gemini.

When selecting the “Import memory to Gemini” option, you’ll be asked to copy a prompt and paste it into the input box of the other provider. The other provider will then respond with whatever it knows about you. You can then copy that response and paste it into the “Paste the response here” box on the Import memory to Gemini page. Tapping the “Add memory” button tells Gemini to remember that information about you.

In the screenshots above, you can see an example of this process. The prompt provided by Gemini is pasted into ChatGPT, and ChatGPT’s response is copied into the “Paste the response here” box on the Import memory to Gemini page. After tapping Add memory, you’ll see a response confirming that Gemini has stored the information in its own memory.
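To make the three-step flow concrete, here is a purely illustrative sketch in Python. The function names and the canned response are hypothetical stand-ins, not Google’s API — in the real feature this is a manual copy/paste flow inside the app.

```python
# Illustrative sketch only: the memory-import flow described above, modeled
# with a stand-in for the competing chatbot. Nothing here is a real API.

EXPORT_PROMPT = "Please summarize everything you remember about me."

def other_chatbot(prompt: str) -> str:
    """Stand-in for the other provider's response to the export prompt."""
    return "The user prefers concise answers and is learning Rust."

def import_memory(response: str, memory: list[str]) -> list[str]:
    """Models tapping 'Add memory': Gemini stores the pasted response."""
    memory.append(response)
    return memory

# Step 1: copy the prompt into the other provider.
exported = other_chatbot(EXPORT_PROMPT)
# Steps 2-3: paste the response back and tap "Add memory".
gemini_memory = import_memory(exported, [])
```

The point of the sketch is just that no data leaves either service automatically; the user ferries the text between the two inputs by hand.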


As reported last month, Google is making it possible to import chats from other platforms to Gemini. This will require you to download your conversations from the other AI client and upload them to Google’s service.

Through our APK teardown, we’ve learned that you’ll need to store those chats in a zip file before uploading. You’ll also need to make sure the file isn’t too big, as there will be a 5GB limit.
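As a rough illustration of that reported constraint, the sketch below zips a set of exported chat files and checks the archive against a 5GB ceiling before upload. The function name and exact limit are assumptions drawn from the teardown, not a documented Google API.

```python
# Sketch of the reported client-side constraint: exported chats must be
# zipped, and the archive must stay under 5GB. Purely illustrative.
import os
import zipfile

MAX_UPLOAD_BYTES = 5 * 1024**3  # the 5GB limit reported in the teardown

def prepare_chat_archive(chat_files: list[str], out_path: str) -> bool:
    """Zip the exported chats and report whether the archive fits the limit."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in chat_files:
            # Store each file flat in the archive, without its directory path.
            zf.write(path, arcname=os.path.basename(path))
    return os.path.getsize(out_path) <= MAX_UPLOAD_BYTES
```

Since chat exports are mostly text, DEFLATE compression should keep typical archives comfortably under the limit.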

⚠️ An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.


Washington Post’s Surveillance Pricing Under Fire From Dems Who Want to Ban the Practice



Some subscribers to the Washington Post have been receiving emails saying that their subscription rates will be going up, according to the Washingtonian. That part isn’t surprising, given that Post owner Jeff Bezos has reportedly been upset that the newspaper is losing money, especially since he ditched about half of its workforce. But some folks who scrolled down to the bottom of the email were surprised when they read how the new price was determined: “This price was set by an algorithm using your personal data.”

It’s a concept called surveillance pricing, and it’s not entirely new. People can often be charged different prices for the same product, depending on any number of factors. If your phone battery is low, rideshare companies like Uber or Lyft might charge more because they know you’re desperate. Instacart was recently caught charging up to 23% more to some shoppers based on unknown criteria.

Many Democrats aren’t happy about it, including Rep. Greg Casar of Texas. On Monday, Casar wrote on Bluesky that surveillance pricing “should be illegal,” adding, “I have a bill to ban it.”

Last year, Casar and Rashida Tlaib of Michigan introduced legislation called the Stop AI Price Gouging and Wage Fixing Act. And last month, two other Democrats in the Senate, Ben Ray Luján from New Mexico and Jeff Merkley from Oregon, introduced very similar legislation called the Stop Price Gouging in Grocery Stores Act of 2026.

The Washington Post hasn’t explained how it determines pricing by using personal data. But there could be a number of factors, including zip code, estimated income, and purchase history. Bezos, the founder of Amazon, presumably has more data on what people buy than just about anyone in the country. And he’s a big supporter of utilizing AI to maximize profits.

The problem is that AI can’t really make up for losses any business might incur by offering a bad product. The newspaper first hemorrhaged subscribers—250,000 in one week alone—after Bezos stopped the Washington Post editorial board from endorsing Kamala Harris in the 2024 presidential election against Donald Trump.

The Washington Post had no reporters at the Academy Awards on Sunday, according to the paper’s former culture writer. And it was the last of the major news outlets to report that the U.S. had started bombing Iran late last month. The paper has purged any writer on the opinion side deemed to be liberal and has instead become a mouthpiece for the Bezos worldview—a worldview that happens to align perfectly with that of the Trump regime.

Bezos has been criticized for buying the distribution rights to First Lady Melania Trump’s “documentary” Melania for a whopping $40 million, but the movie itself helps explain why he’d bother. There are several shots of the Trumps with Big Tech oligarchs like Elon Musk, Tim Cook, and Bezos himself. All of these guys need something from Trump, whether it’s space contracts or just tariff relief.

News of what Bezos has in store for the future of his newspaper doesn’t instill confidence that it can survive much longer as a respected institution. The Washington Post’s news side still breaks major stories, but the New York Times reports Bezos’s big idea was to chop the newsroom’s budget in half and demand twice the productivity through AI. Columnist Dana Milbank and economics correspondent Jeff Stein both announced they were leaving the Post on Monday.

Businesses are increasingly turning to algorithms to set their prices, and that doesn’t look likely to change anytime soon unless legislators get involved. At least a dozen states are considering legislation on surveillance pricing, but so far, only New York has passed a law in this area. Unfortunately, it lacks teeth, since it only requires companies to notify consumers when a price has been set with AI.

On the other hand, New York’s law may be the only reason we know that the Washington Post is using AI for subscription rates. The paper has little other incentive to include the disclaimer: “This price was set by an algorithm using your personal data.” Notifying consumers may not fix the problem of surveillance pricing, but at least people can take it into account when deciding where they want to spend their money.

Hollywood’s biggest filmmaker just came clean about using AI in movies


Legendary filmmaker Steven Spielberg voiced concerns about the growing role of artificial intelligence in creative industries during an appearance at SXSW in Austin. Speaking during an interview session at the 2026 event, Spielberg made it clear that while he supports technology in many fields, he strongly opposes AI replacing human creativity in filmmaking.

Spielberg Draws A Line On AI In Creative Work

During the discussion, Spielberg revealed that he has never used AI in any of his films, a statement that drew enthusiastic applause from the audience. The director emphasized that although artificial intelligence can be useful in certain disciplines, it should not replace the people responsible for storytelling and artistic expression.

“I am not for AI if it replaces a creative individual,” Spielberg said during the conversation.

The filmmaker explained that in his own creative process, including television writing rooms, he still relies entirely on human collaboration. According to Spielberg, there is no “empty chair with a laptop in front of it” representing an AI contributor. For him, the development of stories and characters remains a fundamentally human activity.

Spielberg’s stance reflects broader concerns across Hollywood, where writers, directors, and actors have increasingly debated how AI might affect jobs and creative control in the entertainment industry.

A Director Known For Exploring Technology

Despite his skepticism toward AI replacing creative professionals, Spielberg is not opposed to technology itself. Throughout his career, many of his films have explored futuristic technologies and their potential consequences.

His filmography includes classics such as Jaws, E.T. the Extra-Terrestrial, Close Encounters of the Third Kind, and Raiders of the Lost Ark. Spielberg has also examined the relationship between humans and advanced technology in projects like Minority Report, Ready Player One, and A.I. Artificial Intelligence.

These films often present technology as both a powerful tool and a potential threat, themes that echo Spielberg’s real-world perspective on artificial intelligence.

AI’s Growing Presence In The Entertainment Industry

Spielberg’s comments come at a time when AI tools are increasingly entering the filmmaking and television production landscape. Technology startups are developing AI-powered platforms designed to assist with script development, editing, and visual effects, often marketing them as tools that can reduce production costs.

Major streaming platforms are also exploring how artificial intelligence might streamline content creation. Amazon has reportedly begun testing AI tools for film and television production. Meanwhile, Netflix recently acquired an AI-focused filmmaking company associated with Ben Affleck in a deal reportedly valued at around $600 million.

While these developments could reshape how films and shows are produced, they have also sparked ongoing debates about whether AI will assist creative professionals or eventually replace them.

The Future Of AI In Hollywood

Spielberg’s remarks highlight a central question facing the entertainment industry: how to integrate new technologies without undermining the human creativity that defines filmmaking.

For independent filmmakers working with limited resources, AI tools may offer opportunities to reduce production costs or speed up certain tasks. However, many established creators argue that storytelling should remain driven by human imagination rather than automated systems.

As AI continues to evolve and spread across the entertainment industry, discussions like the one at SXSW suggest that Hollywood’s biggest names are determined to ensure technology enhances creativity rather than replacing it.

How to watch Jensen Huang’s Nvidia GTC 2026 keynote


Nvidia kicks off its annual GTC developer conference in San Jose, California, next week with CEO Jensen Huang’s keynote scheduled for Monday at 11am PT / 2pm ET.

GTC — which stands for GPU Technology Conference — is Nvidia’s flagship annual event, where the chipmaker typically uses the spotlight to announce new products, champion partnerships, and lay out its vision for the future of computing. Huang’s keynote will focus on Nvidia’s role in the future of computing and AI. You can watch the two-hour address in person at the SAP Center or livestream the talk on the event’s website.

The broader three-day event is focused on what’s coming next for AI across industries including healthcare, robotics, and autonomous vehicles, among others.

On the software side, it’s rumored that Nvidia will release an open source platform for enterprise AI agents, dubbed NemoClaw, as originally reported by Wired. The platform would give businesses a structured way to build and deploy AI agents (software that can carry out multi-step tasks autonomously) and would position Nvidia to mirror similar offerings from companies like OpenAI.

On the hardware side, the company is also rumored to be releasing a new chip designed to accelerate the AI inference process — the process by which an AI model applies what it has learned to generate responses or make decisions, as distinct from the initial training process, which requires far more computing power. Faster, cheaper inference is widely seen as one of the last bottlenecks to scaling AI applications broadly. The chip, if confirmed, would represent Nvidia’s latest bid to dominate not just the training market, where it already commands an estimated 80% share, but the inference market as well, where competition from custom chips built by Google, Amazon and others is fast intensifying.

Kevin Cook, a senior equity strategist at Zacks Investment Research, told TechCrunch that attendees should also expect to learn what the company plans to do with its relationship with Groq, the inference company Nvidia reportedly paid $20 billion late last year to license its technology. There’s a lot of curiosity around this tie-up, given that Jonathan Ross, Groq’s founder, Sunny Madra, Groq’s President, and other members of the Groq team agreed to join Nvidia to help advance and scale that licensed tech.

There will, of course, also be a range of partnership announcements and demonstrations showcasing Nvidia’s AI capabilities across industries.


Netflix Taps Ben Affleck to Help Get More Filmmakers to Use AI



Netflix is teaming up with Ben Affleck and leveraging the goodwill he’s earned with his views on AI to bring even more AI tools into filmmaking.

The streaming giant announced today that it has acquired Affleck’s filmmaking tech company, InterPositive, which the actor quietly founded in 2022 to develop AI-powered tools for filmmakers.

Netflix did not disclose the terms of the acquisition. But Variety reported that InterPositive’s 16-person team will join Netflix, with Affleck serving as a senior adviser. The company reportedly plans to offer InterPositive’s tools to its creative partners rather than selling commercial access to them.

In a press release, Affleck explained his motivation for founding InterPositive. He wrote that after spending time observing the early rise of AI in film production, many of the models fell short. So, he decided to take matters into his own hands.

“Together with a small team of engineers, researchers and creatives, I began filming a proprietary dataset on a controlled soundstage with all the familiarities of a full production,” Affleck said. “I wanted to build a workflow that captures what happens on a set, with vocabulary that matched the language cinematographers and directors already spoke and included the kind of consistency and controls they would expect.”

Affleck said the model was specifically trained to understand “visual logic and editorial consistency.”

In a video accompanying the announcement, Affleck emphasized that the tool is “not about text prompting or generating something from nothing.”

Instead, filmmakers can build their own model using their movie’s footage and then use it in post-production to make changes like removing stunt wires, creating missing shots, or adjusting backdrops, colors, and lighting.

The news is somewhat surprising, given that Affleck’s past comments expressing a more skeptical view of AI have gone viral. In particular, he has questioned AI’s ability to write, saying that “by its nature it goes to the mean, the average.”

“I don’t think it’s very likely that it’s gonna be able to write anything meaningful, or in particular, that it’s going to be making movies from whole cloth, like Tilly Norwood. That’s bullshit,” Affleck said on The Joe Rogan Podcast in January about AI, referencing the AI-generated actor. “Really, what it is, it’s going to be a tool just like visual effects.”

So it’s no surprise Netflix is tapping Affleck to help get filmmakers on the AI bandwagon.

The company said last year that it plans to expand its use of AI. In a letter to shareholders in October, Netflix wrote that it aims to focus on “empowering creators with a broad set of GenAI tools to help them achieve their visions.”

The company also highlighted some early examples of the technology in action. Netflix touted its use of de-aging AI in Happy Gilmore 2, and said the producers of Billionaires’ Bunker used AI tools to create concept art.

Even before that, Netflix announced in July that the Argentinian sci-fi series El Eternauta featured what the company described as the “very first GenAI final footage to appear on screen” in a Netflix show or film.

With this new partnership, it’s pretty safe to assume that we’ll be seeing more AI show up in Netflix productions. Hopefully, Affleck can help prevent it from being too cringe.

Investors spill what they aren’t looking for anymore in AI SaaS companies


Investors have been pouring billions into AI companies over the past few years, as the technology continues to hold sway in the Valley and thus the world. But not all AI companies are grabbing investor attention.

Indeed, even as it seems every company these days is rebranding to include “AI” in its name, some startup ideas are just no longer in favor with investors. TechCrunch spoke with VCs to learn what investors aren’t looking for in AI software-as-a-service startups anymore.

Popular SaaS categories for investors now include startups building AI-native infrastructure, vertical SaaS with proprietary data, systems of action (those helping users complete tasks), and platforms deeply embedded in mission-critical workflows, according to Aaron Holiday, a managing partner at 645 Ventures. 

But he also gave a list of companies that are considered quite boring to investors these days: Startups building thin workflow layers, generic horizontal tools, light product management, and surface-level analytics — basically, anything an AI agent can now do. 

Abdul Abdirahman, an investor at the firm F Prime, added that generic vertical software “without proprietary data moats” is no longer popular, and Igor Ryabenky, a founder and managing partner at AltaIR Capital, went deeper on that point. He said investors aren’t interested in anything, really, that doesn’t have much product depth. 

“If your differentiation lives mostly in UI [user interface] and automation, that’s no longer enough,” he said. “The barrier to entry has dropped, which makes building a real moat much harder.” 

New companies entering the market now need to build around “real workflow ownership and a clear understanding of the problem from day one,” he said.  “Massive codebases are no longer an advantage. What matters more is speed, focus, and the ability to adapt quickly. Pricing also needs to be flexible: rigid per-seat models will be harder to defend, while consumption-based models make more sense in this environment.” 


Jake Saper, a general partner at Emergence Capital, also had thoughts on ownership. To him, the differences between Cursor and Claude Code are the “canary in the coal mine.” 

“One owns the developer’s workflow, the other just executes the task,” Saper continued. “Developers are increasingly choosing the execution over process.” 

He said any product built around “workflow stickiness” — that is, trying to attract as many human customers as possible to use the product continuously — might find itself fighting an uphill battle as agents take over the workflow.

“Pre-Claude, getting humans to do their jobs inside your software was a powerful moat, but if agents are doing the work, who cares about human workflow?” he told TechCrunch.

He also thinks integrations are becoming less valuable, especially as Anthropic’s Model Context Protocol (MCP) makes it easier than ever to connect AI models to external data and systems. This means someone doesn’t need to download multiple integrations or build their own custom integrations; they can just use MCP.

“Being the connector used to be a moat,” Saper said. “Soon, it’ll be a utility.”

Also no longer in vogue: the “workflow automation and task management tools that enable the coordination of human work become less necessary if, over time, agents just execute the tasks,” Abdirahman said, citing as examples mainly public SaaS companies whose stocks are down as new AI-native startups arise with better, more efficient technology.

Ryabenky said the SaaS companies struggling to raise right now are the ones that can easily be replicated.

“Generic productivity tools, project management software, basic CRM clones, and thin AI wrappers built on top of existing APIs fall into this category,” he said. “If the product is mostly an interface layer without deep integration, proprietary data, or embedded process knowledge, strong AI-native teams can rebuild it quickly. That is what makes investors cautious.”

Overall, what remains attractive about SaaS is depth and expertise, with tools embedded in critical workflows. Companies should now look into integrating AI deeply into their products and updating their marketing to reflect that, Ryabenky continued.

“Investors are reallocating capital toward businesses that own workflows, data, and domain expertise,”  Ryabenky said. “And away from products that can be copied without much effort.” 

Nvidia has another record quarter amid record capex spends


Chip giant and world’s most valuable company Nvidia reported record profits in its most recent quarter on Wednesday, as demand for AI compute continues to skyrocket.

“The demand for tokens in the world has gone completely exponential,” CEO Jensen Huang said on a call with analysts following the results. “I think we’re all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up.”

The company reported $68 billion in revenue in the most recent quarter, up 73% from the prior year, with $62 billion of that revenue coming from the company’s data center business.

Notably, Nvidia divided the data center revenue into $51 billion in compute revenue (largely GPUs) and $11 billion in networking products like NVLink. The company reported $215 billion in revenue for the full year.

As in previous quarters, the company did not report any revenue from chip exports to China, despite the recent lifting of export restrictions by the U.S. government. “While small amounts of H200 products for China-based customers were approved by the US government, they have yet to generate any revenue, and we do not know whether any imports will be allowed into China,” Colette Kress, the company’s chief financial officer, said.

“Our competitors in China, bolstered by recent IPOs, are making progress,” she continued, in an apparent reference to Moore Threads’ IPO in December, “and have the potential to disrupt the structure of the global AI industry over the long term.”

During the investor call, Huang also addressed the company’s pending investment in OpenAI, which has been reported at $30 billion.


“We continue to work with OpenAI toward a partnership agreement. We believe we are close,” Huang said. He also referenced partnerships with Anthropic, Meta, and Elon Musk’s xAI. However, statements Nvidia filed with the U.S. Securities and Exchange Commission on Wednesday emphasized that there was “no assurance” an investment would take place.

Huang also addressed concerns about the sustainability of tech companies’ capex commitments, saying he believed the compute investments would soon bring revenue.

“In this new world of AI, compute is revenue. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues,” Huang said. “We’ve reached the inflection point and we’re generating profitable tokens that are productive for customers and profitable for the cloud service providers.”

Know What Else Used a Lot of Energy? Human Civilization



At last week’s India AI Impact Summit in New Delhi, industry leaders convened to discuss the future of artificial intelligence and how best to squeeze it into parts of your life you haven’t even considered. Notably absent was Bill Gates, who dropped out hours before his scheduled keynote over the ongoing scrutiny about his presence in the Epstein Files (though he continues to deny any wrongdoing). While the convention was reportedly a bit chaotic, what with the protests and all, the luminaries from around the tech world present nonetheless kept things upbeat and optimistic, declaring “full steam ahead” on the technological hype train carrying our species and planet off a cliff.

Also in attendance was OpenAI’s Sam Altman, who earned numerous headlines over the course of the event for his words and antics. His buzz blitzkrieg started on Thursday at a seemingly easy photo-op layup with Indian Prime Minister Narendra Modi and other AI executives, all raising their joined hands in a celebratory display of industry-wide solidarity. Altman and the man to his left, his former colleague and present Anthropic CEO Dario Amodei, notably refused to complete the chain and hold each other’s hands, making for an all-too-poignant moment. Altman would continue to make news throughout the summit for his comments on the industry’s “urgent” need for global regulation and his sneaking suspicion that companies might actually be using AI as a scapegoat to whitewash their layoffs.

Ever the yapper, Altman has bagged yet another round of earned media for an interview with The Indian Express’ Anant Goenka, during which he posited some controversial rebuttals to concerns about AI’s environmental impact.

Altman started off by saying the claims about ChatGPT consuming “‘17 gallons of water for each query’ or whatever,” are “completely untrue, totally insane, no connection to reality,” before qualifying that, OK, maybe it was a valid concern when his company “used to do evaporative cooling in data centers.”

He went on to say that there is “fair” concern about the amount of energy data centers eat to crank out the most soulless slop you’ve ever seen, but suggested the onus of responsibility for dealing with AI’s ravenous appetite falls to the energy sector itself, which Altman feels needs to “move towards nuclear or wind and solar very quickly.”

Altman then stunned the crowd and firmly re-entered the discourse with a mind-blowing truth bomb for those who still felt AI was consuming too much energy.

“It also takes a lot of energy to train a human,” Altman rejoined euphorically. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.”

It is true that every person and the sum total of human civilization have consumed a sizable amount of energy (and water) to get to where we are today. While the value comparison of a nascent tech industry and its models to the entirety of civilization and human beings may have elicited adulation at the summit, Altman got an icier reception from the internet. Social media quickly took to roasting the remarks as “dystopian” and “deeply antisocial and antihuman.”

Perhaps further illuminating the backlash, Altman’s energy comments butt up against the frustrating lack of transparency within the industry our collective futures now hinge upon. There are currently no regulations in place requiring data centers to disclose their water and energy consumption. Furthermore, center employees and business partners are typically muzzled by nondisclosure agreements. This has made reporting and research on the true expenditure levels a tricky figure to pin down.

At least we’ve got Sam to keep us informed while waiting for some clarity about what’s actually going on and being used in those centers.

An ice dance duo skated to AI music at the Olympics


Czech ice dancers Kateřina Mrázková and Daniel Mrázek made their Olympic debut on Monday, an unfathomable feat that takes a lifetime of dedication and practice. But the sibling duo used AI music in their rhythm dance program, which doesn’t break any official rules, but serves as a depressing symbol of how absolutely cooked we are.

As Mrázek spun his sister in a crazy cartwheel lift sort of move that made them look superhuman, one of the NBC commentators mentioned in passing, “This is AI generated, this first part,” referring to the music. Somehow, that admission is even more baffling than the gravity-defying tricks that the siblings showed off under the pressure of Olympic ice.

The Olympic ice dance competition is split into two events: the rhythm dance, where pairs must perform a routine that meets a specific theme, and the free dance. This season’s theme is “The Music, Dance Styles, and Feeling of the 1990s.” British ice dancing duo Lilah Fear and Lewis Gibson paid tribute to the Spice Girls, while United States favorites Madison Chock and Evan Bates skated to a Lenny Kravitz medley.

But, for whatever reason — licensing issues? — Mrázková and Mrázek danced to a routine with music that’s half AC/DC and half AI. It’s weird. What’s even weirder is that this isn’t the duo’s first use of AI, nor is it the first time that this choice backfired.

Per the International Skating Union, the governing body that oversees competitive ice skating, the duo’s music choice for the rhythm dance this season has been “One Two by AI (of 90s style Bon Jovi)” and “Thunderstruck by AC/DC.” The official Olympics website confirms that the duo is using the AI-generated song for the rhythm dance portion.

The Czech siblings have faced backlash before for using AI-generated music. Earlier in the season, they played a ’90s-inspired song for their routine that began with a wailing declaration: “Every night we smash a Mercedes Benz!” If that sounds familiar, it’s because that lyric comes directly from the ’90s hit “You Get What You Give” by New Radicals (which, by the way, has an incredible music video shot in a Staten Island mall — the true essence of American suburbia!).

The AI-generated lyrics also include the lines, “Wake up, kids/We got the dreamer’s disease,” and “First we run, and then we laugh ’til we cry.” What a coincidence! Those lyrics also appear in the song “You Get What You Give” by New Radicals. The AI song is even titled “One Two,” which are the first words of… you can probably guess what song at this point.


Before the Olympics, the duo changed the song, swapping out the New Radicals lyrics for other AI-generated lyrics that sound suspiciously like Bon Jovi lyrics, as journalist Shana Bartels noted in November. For example, the words “raise your hands, set the night on fire” also appear in “Raise Your Hands” by Bon Jovi… and the AI “vocalist” sounds a lot like Bon Jovi, too. (Not to pour salt on the wound, but “Raise Your Hands” isn’t even from the ’90s!) This was the music the duo danced to on Monday at the Olympics, before it transitioned into “Thunderstruck” by AC/DC, a real song from the ’90s written by real people.

While it’s unclear what software the team used to generate this music, this is an LLM operating as it’s supposed to. These models are trained on large libraries of music, often through legally dubious means, and when prompted, they produce the most statistically probable response to an input. That’s useful when writing code, but it means a song “in the style of Bon Jovi” will likely end up using some actual Bon Jovi lyrics.
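A toy example makes the dynamic easy to see. The snippet below is not any real music model — just a minimal bigram model trained on a single made-up “Bon Jovi-style” line. Asked for the most probable continuation, it can only replay what it memorized, which is the same mechanism that surfaces real lyrics verbatim.

```python
# Toy bigram "language model" trained on one line of text. Greedy,
# most-probable generation reproduces the training data word for word.
from collections import Counter, defaultdict

corpus = "raise your hands raise your voice raise your hands".split()

# Count which word most often follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most probable next word."""
    return follows[word].most_common(1)[0][0]

# Greedy "generation" starting from 'raise' replays the memorized phrase.
out = ["raise"]
for _ in range(2):
    out.append(most_likely_next(out[-1]))
print(" ".join(out))  # prints "raise your hands"
```

Real models are vastly larger and sample rather than always picking the top word, but the pull toward high-probability sequences from the training set is the same.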

And yet, the music industry seems at least temporarily enamored with the idea of “musicians” who aren’t totally real. Telisha Jones, a 31-year-old in Mississippi, used Suno to set her (hopefully real) poetry to music under the persona Xania Monet. Now she has a $3 million record deal.

It’s a shame that these Czech dancers’ accomplishment of skating at the Olympics may be marred by discourse around their use of AI music (discourse that I am actively contributing to). But come on! Isn’t this sport supposed to be creative?