Why Cohere’s ex-AI research lead is betting against the scaling race


AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in “scaling” — the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.

But a growing chorus of AI researchers says the scaling of large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.

That’s the bet Sara Hooker, Cohere’s former VP of AI Research and a Google Brain alumna, is taking with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it’s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month to start recruiting more broadly.

In an interview with TechCrunch, Hooker says Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach or whether the company relies on LLMs or another architecture.

“There is a turning point now where it’s very clear that the formula of just scaling these models — scaling-pilled approaches, which are attractive but extremely boring — hasn’t produced intelligence that is able to navigate or interact with the world,” said Hooker.

Adapting is the “heart of learning,” according to Hooker. For example, stub your toe when you walk past your dining room table, and you’ll learn to step more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods don’t help AI models in production — meaning systems already being used by customers — to learn from their mistakes in real time. They just keep stubbing their toe.

Some AI labs offer consulting services to help enterprises fine-tune AI models to their custom needs, but that comes at a price. OpenAI reportedly requires customers to spend upwards of $10 million with the company before it will offer consulting services on fine-tuning.

“We have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they’re very expensive to adapt,” said Hooker. “And actually, I think that doesn’t need to be true anymore, and AI systems can very efficiently learn from an environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.”

Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent paper from MIT researchers found that the world’s largest AI models may soon show diminishing returns. The vibes in San Francisco seem to be shifting, too. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with famous AI researchers.

Richard Sutton, a Turing Award winner regarded as “the father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he had reservations about the long-term potential of RL to improve AI models.

These types of fears aren’t unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining — in which AI models learn patterns from massive datasets — was hitting diminishing returns. Until then, pretraining had been the secret sauce for OpenAI and Google to improve their models.

Those pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.

AI labs seem convinced that scaling up RL and AI reasoning models are the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they thought it would scale up well. Meta and Periodic Labs researchers recently released a paper exploring how RL could scale performance further — a study that reportedly cost more than $4 million, underscoring how expensive current approaches remain.

Adaption Labs, by contrast, aims to find the next breakthrough, and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.

“We’re set up to be very ambitious,” said Hooker, when asked about her investors.

Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks — a trend Hooker wants to keep pushing.

She also built a reputation for broadening access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.

If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. Billions have already been invested in scaling LLMs, on the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning could prove not only more powerful, but far more efficient.

Marina Temkin contributed reporting.



Rivian elects Cohere’s CEO to its board in latest signal the EV maker is bullish on AI


Aidan Gomez, the co-founder and CEO of generative AI startup Cohere, has joined the board of EV maker Rivian, according to a regulatory filing. The appointment is the latest sign that Rivian sees promise in applying AI to its own venture while positioning itself as a software leader — and even provider — within the automotive industry.

Rivian increased the size of the board and elected Gomez, whose term will expire in 2026, according to the filing.

Gomez has had a long career as a data scientist and AI expert. He launched Cohere in 2019 with co-founders Nick Frosst and Ivan Zhang with a focus on training AI foundation models for enterprises. The generative AI startup sells its services to companies such as Oracle and Notion.

Prior to starting Cohere, Gomez was a researcher at Google Brain, the deep learning division at Google led by Nobel Prize winner Geoffrey Hinton. Gomez is also known for “Attention Is All You Need,” a 2017 technical paper he co-authored that laid the foundation for many of the most capable generative AI models today.

Gomez’s skill set could be particularly useful for Rivian as the EV maker navigates a new $5.8 billion joint venture with Volkswagen Group to develop software. Under the joint venture, Rivian will share its electrical architecture expertise with Volkswagen Group — including its many brands — and is expected to license existing intellectual property rights to the joint venture.

It’s possible the joint venture will sell its tech to other companies in the future.

Rivian has also been working on an AI assistant for its EVs since 2023, Rivian’s chief software officer, Wassym Bensaid, told TechCrunch during an interview in March. The AI work, which is specifically on the orchestration layer or framework for an AI assistant, sits outside the joint venture with VW, Bensaid mentioned at the time.

Gomez’s expertise in AI and as a data scientist is clearly attractive to Rivian founder and CEO RJ Scaringe, who noted in a statement that Gomez’s “thinking and expertise will support Rivian as we integrate new, cutting-edge technologies into our products, services, and manufacturing.”

Cohere co-founder Nick Frosst’s indie band, Good Kid, is almost as successful as his AI company


Nick Frosst, the co-founder of $5.5 billion Canadian AI startup Cohere, has been a musician his whole life. He told TechCrunch that once he started singing, he never shut up. That’s still true today. In addition to his full-time job at Cohere, Frosst is also the front man of Good Kid, an indie rock band composed entirely of programmers.

Good Kid isn’t just a group of friends jamming on the weekends in someone’s garage. The band has 2.3 million monthly Spotify listeners and recently played at Lollapalooza. It was nominated for the Canadian Academy of Recording Arts and Sciences’ breakthrough group of the year award at the Juno Awards this year, and it opened for Portugal. The Man’s Canadian tour last fall.

Good Kid was formed at the University of Toronto in 2015 as a hobby, Frosst told TechCrunch. All of the members were in the computer science program except one, guitar player David Wood — but the others convinced him to switch. Good Kid released its first single, Nomu, at the end of 2015. Nomu sounds like a nod to indie pop-rock group Two Door Cinema Club, with Frosst’s vocals ringing out in a style reminiscent of Bloc Party front man Kele Okereke. Both Bloc Party and Two Door Cinema Club are inspirations for the group.

“We didn’t really have high hopes for it,” Frosst admits about releasing that first single. “We just wanted to create something that we liked, instead of recording a bunch of songs. It did much better than we thought it would.”

Good Kid dropped a handful more singles before releasing its first self-titled EP in 2018. The band has gone on to release four more albums, the latest of which came out earlier this year.

About a year after the band’s debut album came out in 2018, Frosst launched Cohere with Aidan Gomez and Ivan Zhang. Cohere has since grown into one of the industry’s most closely watched startups offering AI models for enterprises. The company has raised more than $970 million in venture capital from backers like Salesforce, Nvidia, Cisco, and Oracle, and is currently valued at $5.5 billion. Although Good Kid’s profile has continued to grow, Frosst said he feels privileged to be a musician at that level — but Cohere and working in AI is his real career.

“Cohere is my life’s work,” Frosst said. “I spend the vast majority of my time [on] Cohere and music is a thing I get to do and unwind and relax.”

Frosst said finding balance between the two hasn’t been too difficult. The band meets twice a week for two-hour practices. When Good Kid goes on tour, the band bangs out a full day of remote work — everyone works as a programmer — from the bus before taking the stage at night. Frosst said he may actually focus better on his Cohere work while on tour, because touring keeps him out of too many meetings.

“I think they are additive,” Frosst said. “I really think being able to play music helps me with my job at Cohere. It clears my mind and gives me a dedicated time to focus and makes me a smarter person.”

But even when the members of the band are focused on making music, they are still thinking about AI. The band’s first single, Nomu, produced years before Cohere was founded, includes the line “languages lost, tokens unknown” — a reference to the technology on which Frosst’s company would one day be founded.

When the band got to play on the last day of Chicago’s Lollapalooza festival in August, Frosst said it was an incredible experience. He admitted that, before then, he had never even attended a music festival, let alone played at one. Good Kid went on at 1:45 p.m. and opened the set with No Time to Explain, playing just hours before one of their inspirations, Two Door Cinema Club, took the stage.

Frosst says he feels grateful to be having such a successful musical career without the fear that it won’t work out — a security few working musicians enjoy.

“Getting to come to music for fun, getting to come from creativity and not for career aspirations, I’m very lucky to have found myself in this situation,” he said.