As Cohere and Writer mine the ‘Live AI’ arena, Pathway joins the pack with a $10M round


As large enterprises grapple with how to incorporate AI into their platforms and processes, they have encountered a problem: generative AI needs memory, and its training data must be constantly updated for it to be of any practical use. This area is now called ‘Live AI,’ and a number of startups are working in the space, including Cohere and Writer. Another, Pathway, has just raised a $10 million seed round to build live AI systems that, the company claims, think and learn in real time as humans do.

The round was led by TQ Ventures, with participation from Kadmos, Innovo, Market One Capital, Id4 and angel investors. Another investor in Pathway is Lukasz Kaiser, a co-author of the Transformer paper and a key researcher behind OpenAI’s o1 model.

Pathway’s offering includes what it calls ‘infrastructure components’ that power live AI systems. These feed on both structured and unstructured data, so that enterprise AI platforms can make decisions based on up-to-date knowledge. Customers so far include NATO and La Poste, the French post office.

Zuzanna Stamirowska, Co-Founder and CEO of Pathway, told TechCrunch over a call: “The way deep learning and LLM assistants are working is that you take the training data and then you train models. But the question is, how to deal with knowledge, how to deal with memory? Right now an LLM acts a bit like a very smart intern on the first day of his job, being offered a book to read. But they can’t really memorize it. Plus, it’s not live, it’s static.”

To remedy this, she said Pathway “enables developers to build a pipeline where they can feed live data into the AI systems. Right now we do it during the prompting stage, when you build LLM applications or Gen AI applications.”
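In practice, feeding live data in at the prompting stage resembles retrieval-augmented generation over a continuously updated feed: rather than retraining the model, the freshest documents are retrieved and spliced into the prompt on every request. Here is a minimal sketch of that idea in plain Python; the feed structure and function names are illustrative assumptions, not Pathway’s actual API.

```python
from datetime import datetime, timezone

def build_live_prompt(question, live_feed, top_k=3):
    """Select the most recent documents from a live feed and splice them
    into the prompt, so the LLM answers from up-to-date knowledge
    instead of stale training data."""
    # Sort by ISO-8601 timestamp, newest first, and keep the top_k freshest.
    freshest = sorted(live_feed, key=lambda d: d["updated_at"], reverse=True)[:top_k]
    context = "\n".join(f"- {d['text']}" for d in freshest)
    return (
        f"Context (live, as of {datetime.now(timezone.utc):%Y-%m-%d}):\n"
        f"{context}\n\nQuestion: {question}"
    )

# Hypothetical live feed: the same fact updated over time.
feed = [
    {"text": "Warehouse stock: 120 units", "updated_at": "2024-11-15T09:00:00"},
    {"text": "Warehouse stock: 80 units",  "updated_at": "2024-11-15T14:30:00"},
    {"text": "Warehouse stock: 200 units", "updated_at": "2024-11-01T08:00:00"},
]
prompt = build_live_prompt("How many units are in stock?", feed, top_k=1)
# With top_k=1, only the newest reading (80 units) reaches the model.
```

The point of the design is that freshness is handled at query time, so the underlying model never needs retraining as the feed changes.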

Stamirowska — who is moving to Menlo Park, California — has assembled an impressive, highly technical team to achieve the startup’s goals. Her co-founders are CSO Adrian Kosowski and CTO Jan Chorowski, who previously worked with recent physics Nobel Prize winner and “Godfather of AI” Geoff Hinton. Stamirowska herself is the author of a state-of-the-art forecasting model for the complex network of maritime trade, published by the U.S. National Academy of Sciences.

“The company started with an idea that popped up in my head on one sunny morning in Chicago,” she said. “I was there accompanying a friend to a scientific conference in theoretical computer science… We had a small disagreement, and I said I have to start my own thing. So, I took out my laptop and started writing to people in my network about how to move this forward. I still remember the taste of the coffee at that moment.”

I asked her where she sees Pathway relative to other startups in the space. “For use cases in GenAI engineering and knowledge management, Cohere and Writer appear beside us in the latest Gartner Quadrants,” she said. “Whereas in enterprise deals, we often encounter Palantir for AI transformation tenders, although they are less product-oriented than we are.”

Commenting in a statement, Schuster Tanger, Co-Managing Partner and Co-founder at TQ Ventures, said: “Zuzanna and the team at Pathway possess bleeding-edge insights and expertise in one of the most exciting fields in modern business… Last and hardly least, the response from the developer community has been powerful.”

Musk’s amended lawsuit against OpenAI names Microsoft as defendant


Elon Musk’s lawsuit against OpenAI accusing the company of abandoning its non-profit mission was withdrawn in July, only to be revived in August. Now, in an amended complaint, the suit names new defendants including Microsoft, LinkedIn co-founder Reid Hoffman, and former OpenAI board member and Microsoft VP Dee Templeton.

The amended filing also adds new plaintiffs: Neuralink exec and ex-OpenAI board member Shivon Zilis and Musk’s AI company, xAI.

Musk was one of the original founders of OpenAI, which was originally established as a nonprofit meant to research and develop AI for the benefit of humanity. He left the company in 2018 after disagreements about its direction.

In the complaint, lawyers for Musk argue that OpenAI is now “actively trying to eliminate competitors” such as xAI by “extracting promises from investors not to fund them.” It’s also allegedly unfairly benefitting from Microsoft’s infrastructure and expertise in what Musk’s counsel describes in the filing as a “de facto merger.”

“xAI has been harmed by, without limitation … an inability to obtain compute from Microsoft on terms anywhere near as favorable as OpenAI receives … and the exclusive exchange between OpenAI and Microsoft of competitively sensitive information,” reads the complaint, filed late Thursday in federal court in Oakland, California.

Hoffman’s position on the boards of both Microsoft and OpenAI while also a partner at Greylock, the investment firm, gave Hoffman a privileged — and illicit — view into the companies’ dealings, the complaint alleges. (Hoffman stepped down from OpenAI’s board in 2023.) Greylock invested in Inflection, Musk’s counsel notes, the AI startup that Microsoft acqui-hired earlier this year — and which could reasonably be considered an OpenAI competitor, according to the complaint.

As for Templeton, whom Microsoft briefly appointed as a non-voting board observer at OpenAI, the amended filing alleges that she was in a position to facilitate agreements between Microsoft and OpenAI that would violate antitrust rules.

“The purpose of the prohibition on interlocking directorates is to prevent sharing of competitively sensitive information in violation of antitrust laws and/or providing a forum for the coordination of other anticompetitive activity,” the complaint reads. “Allowing Templeton and Hoffman to serve as members of OpenAI’s … board undermined this purpose.”

Alongside Microsoft, Hoffman, and Templeton, California attorney general Rob Bonta is named as a defendant in Musk’s complaint. Bloomberg reported this month that OpenAI is in talks with Bonta’s office over the process to change its corporate structure.

Per the amended complaint, Zilis, who stepped down from OpenAI’s board in 2023 after serving as a member for roughly four years, has standing as an “injured employee” under the California Corporations Code. Zilis repeatedly raised concerns internally over OpenAI’s dealmaking, but they fell on deaf ears — concerns substantially similar to Musk’s, according to the complaint.

Zilis has close ties to Musk, having worked as a project director at Tesla from 2017 to 2019 in addition to directing Neuralink research. (Neuralink is Musk’s brain-computer interface venture.) She’s also the mother of three of Musk’s children, Techno Mechanicus and twins Strider and Azure.

The 107-page amended complaint includes the unusual detail that OpenAI CEO Sam Altman proposed that OpenAI sell its own cryptocurrency in January 2018, before it ultimately decided to transition to a capped-profit structure.

“Heads up, spoke to some of the safety team and there were a lot of concerns about the ICO and possible unintended effects in the future,” Altman wrote in an email to Musk dated January 21, 2018, an exhibit filed with the amended complaint shows. An ICO, or initial coin offering, is an unregulated means by which funds are raised for cryptocurrency businesses. “Going to emphasize the need to keep this confidential, but I think it’s really important we get buy-in and give people the chance to weigh in early.”

(Image: the Altman ICO email filed as an exhibit with the complaint. Image Credits: Toberoff & Associates)

Musk supposedly shot down the crypto sale idea. “I have considered the ICO approach and will not support it,” he wrote in an email reply to Altman and OpenAI co-founders Greg Brockman (now OpenAI’s president) and Ilya Sutskever (OpenAI’s ex-chief scientist), according to another exhibit. “In my opinion, that would simply result in a massive loss of credibility for OpenAI and everyone associated with the ICO.”

The thrust of the lawsuit remains the same on the plaintiffs’ side: that OpenAI profited from Musk’s early involvement in the company yet reneged on its nonprofit pledge to make the fruits of its AI research available to all. “No amount of clever drafting nor surfeit of creative dealmaking can obscure what is happening here,” reads the complaint. “OpenAI, Inc., co-founded by Musk as an independent charity committed to safety and transparency … [is] fast becoming a full for-profit subsidiary of Microsoft.”

OpenAI has sought to dismiss Musk’s lawsuit, calling it “blusterous” and baseless.

45 Years After Alien, Ridley Scott Is Still Wary of AI


Ridley Scott has Gladiator II in theaters soon, but the decorated director is probably better-known for his futuristic sci-fi tales than he is for his historical dramas, with Blade Runner and the Alien franchise leading the charge. In a new interview where he talks mostly about creating the follow-up to his 2000 Best Picture winner, Scott (who turns 87 in a few weeks) was asked for his thoughts about a burning issue in the cinematic world: the use of AI.

The question, posed by Deadline, was specifically framed with regard to the director’s “cynical” point of view on AI in 1979’s Alien, wondering if Scott has changed his opinion on its use over the past four-plus decades. “AI is a tool, remember that,” he told Deadline. “But AI can be also a terrible abuser of normal stuff, even good stuff. There’s one or two people out there … who may be able to think a little bit beyond [and use] AI for the best they can come up with, the big idea. That would include [Aliens director James] Cameron. And therefore, we always hope the very best will evolve and use AI as a tool.”

However, he continued, it’s not all bright horizons. “But see, probably one of the best ideas that is the trigger for all the best science fiction that followed, in [Stanley Kubrick’s 2001: A Space Odyssey]. You start off with a dawn of man, you see apes fighting over sustenance in a waterhole … [then] one morning, the power, not God, the power of the universe has delivered a monolith because it’s seen that the apes are now getting close enough to be thinking entities. And need that boost and help forward. The ape touches the monolith and has the first massive idea in history: he picks up a thigh bone of a beast and kills an ape with it. That’s a weapon, that is a million year quantum leap forward. It’s a grand superlative idea.”

Scott kept the 2001 example going to finish his thought. “Idea two, you’re on a spaceship now, going to search for the power that is and was, and what was the moment? Is it what we call God? … Or is [it] simply a power way beyond our comprehension, and therefore has examined us for years? … [Then you journey] to the far reaches of where they’ve never been before, and they’re relying on one crew member, called Hal. Hal is a fucking computer. And from that, an AI which won’t reveal it to them, but they’re smart enough to suspect Hal is betraying them. Because Hal knows that the expedition is more important than these human beings, and that’s Hal’s error. Hopefully, AI will always make an error. Hopefully. That’s a massive idea.”

Yep, he’s still taking a cautious stance toward AI. However, when Deadline asked if he’d rule out “AI that can help Ridley Scott make bigger and better movies,” the director didn’t rule it out completely, replying “never say never.”

Gladiator II hits theaters November 22.


Amazon reportedly bumped back its AI-powered Alexa to next year


If you’re wondering what happened to Amazon’s new and improved version of its Alexa voice assistant, you’re not alone. A new report says the new Alexa is still stuck in its developmental phase, and Amazon has cut off access to the beta, including the new “Let’s Chat” feature. As a result, a planned late 2024 launch has been pushed back to next year.

The problem seems to be with its large language models (LLMs). The new Alexa is designed to learn from users, but it’s also more likely to fail at some of the most basic things the old version could do quite easily, like creating a timer or operating smart lights, according to a follow-up report.

Amazon originally planned to unveil its new version of Alexa in October, but the timeline has now been extended into next year. (As you might have noticed, October has come and gone.) The original plan was to premiere the next evolutionary step in Alexa’s advancement on October 17, but Amazon decided to pivot and used the date to show off its new line of Kindle ereaders. Then in August, news surfaced that the new Alexa would come with a monthly subscription fee.

As ChatGPT began to rise in popularity in the summer of 2023, Amazon CEO Andy Jassy wanted to see if Alexa could compete if it had an AI upgrade. Jassy reportedly started peppering Alexa with sports questions “like an ESPN reporter at a playoff press conference” and its answers were “nowhere near perfect.” It even made up a recent game score for Jassy.

Despite this, Alexa passed the good enough stage and Jassy and his fellow executives felt their engineers could build a beta version by the early part of 2024. Unfortunately, Amazon wasn’t able to meet its deadline.

Even with the new deadline, the new Alexa still has a long way to go to fix its problems. Some employees told Bloomberg that the problem, beyond Alexa’s inner workings, is Amazon’s overstuffed management and the lack of “a compelling vision for an AI-powered Alexa.”

Apple’s own research sheds light on Siri’s AI laggardness


With the introduction of the new iPad Mini, Apple made it clear that a software experience brimming with AI is the way forward. And if that meant making the same kind of internal upgrades to a tablet that costs nearly half as much as its flagship phone, the company would still march forward.

However, its ambitions with Apple Intelligence lack competitive vigor, and even by Apple’s own standards, the experience hasn’t managed to wow users. On top of that, the staggered rollout of the most ambitious AI features — many of which have yet to arrive — has left enthusiasts with a bad taste.

Now, it appears that the reason behind the delays has something to do with quality and performance, as per Apple’s own testing. “The research found that OpenAI’s ChatGPT was 25% more accurate than Apple’s Siri, and able to answer 30% more questions,” says a Bloomberg report.

(Image: the updated Siri activation interface. Credit: Apple)

To recall, Apple’s position with Siri is unique. For example, Siri is getting enhanced natural language understanding and deeper integration with apps as well as local files. However, there are tasks it can’t quite accomplish, and in those situations queries will be seamlessly offloaded to ChatGPT.

That’s part of a deal Apple inked with OpenAI. Now, it would make sense that Siri can’t quite pull the same kind of internet-connected tasks as ChatGPT, primarily because Siri and ChatGPT are two entirely different products. However, Apple is deploying OpenAI’s tech stack in more places than just Siri.

According to OpenAI, ChatGPT will also lend a hand to users with “image and document understanding.” The Writing Tools — which have already arrived in apps like Notes and Safari — also tap into ChatGPT. Moreover, image generation will be handled by OpenAI’s tech as well.

With such deep reliance on ChatGPT, one might think Apple isn’t quite there on the leaderboard with its own AI tech stack, which would need to rival the likes of Google’s Gemini or Meta. That assumption wouldn’t be entirely implausible, and even Apple’s own team seems to agree with the status quo.

“In fact, some at Apple believe that its generative AI technology — at least, so far — is more than two years behind the industry leaders,” adds the Bloomberg report. Yet, it’s not merely about advancements, but also the pace of rollout.

(Image: Siri will offload queries to ChatGPT for chores it can’t handle. Credit: Apple)

Take a look at Galaxy AI, Samsung’s take on an AI ecosystem that has already appeared on a wide array of its phones and computing machines, with some help from Google’s Gemini stack. Chinese smartphone makers have already been offering generative AI features like image generation and a next-gen assistant for a while now.

At this point in time, it seems almost certain that Apple’s strategy with Apple Intelligence was hurried, apparently in a bid to quell investor concerns that the company was lagging in the AI race. So far, whatever little we have seen from Apple’s “AI revolution” has been far from revolutionary.

The best implementation of Apple Intelligence so far has been notification summaries and prioritization, but those are more utilitarian features than something that would reimagine the software experience for users. It would be interesting to see how Apple injects fresh energy into its AI approach next year.

But so far, the company hasn’t made any such announcements, and even the promises it made at its developers conference earlier this year are yet to materialize.

Anthropic CEO goes full techno-optimist in 15,000-word paean to AI


Anthropic CEO Dario Amodei wants you to know he’s not an AI “doomer.”

At least, that’s my read of the “mic drop” of a roughly 15,000-word essay Amodei published to his blog late Friday. (I tried asking Anthropic’s Claude chatbot whether it concurred, but alas, the post exceeded the free plan’s length limit.)

In broad strokes, Amodei paints a picture of a world in which all AI risks are mitigated, and the tech delivers heretofore unrealized prosperity, social uplift, and abundance. He asserts this isn’t to minimize AI’s downsides — at the start, Amodei takes aim, without naming names, at AI companies overselling and generally propagandizing their tech’s capabilities. But one might argue that the essay leans too far in the techno-utopianist direction, making claims simply unsupported by fact.

Amodei believes that “powerful AI” will arrive as soon as 2026. By powerful AI, he means AI that’s “smarter than a Nobel Prize winner” in fields like biology and engineering, and that can perform tasks like proving unsolved mathematical theorems and writing “extremely good novels.” This AI, Amodei says, will be able to control any software or hardware imaginable, including industrial machinery, and essentially do most jobs humans do today — but better.

“[This AI] can engage in any actions, communications, or remote operations … including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei writes. “It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.”

Lots would have to happen to reach that point.

Even the best AI today can’t “think” in the way we understand it. Models don’t so much reason as replicate patterns they’ve observed in their training data.

Assuming, for the purpose of Amodei’s argument, that the AI industry does soon “solve” human-like thought, would robotics catch up to allow future AI to perform lab experiments, manufacture its own tools, and so on? The brittleness of today’s robots implies it’s a long shot.

Yet Amodei is optimistic — very optimistic.

He believes AI could, in the next 7-12 years, help treat nearly all infectious diseases, eliminate most cancers, cure genetic disorders, and halt Alzheimer’s at the earliest stages. In the next 5-10 years, Amodei thinks that conditions like PTSD, depression, schizophrenia, and addiction will be cured with AI-concocted drugs, or genetically prevented via embryo screening (a controversial opinion) — and that AI-developed drugs will also exist that “tune cognitive function and emotional state” to “get [our brains] to behave a bit better and have a more fulfilling day-to-day experience.”

Should this come to pass, Amodei expects the average human lifespan to double to 150.

“My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years,” he writes. “I’ll refer to this as the ‘compressed 21st century’: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.”

These seem like stretches, too, considering that AI hasn’t radically transformed medicine yet — and may not for quite some time, or ever. Even if AI does reduce the labor and cost involved in getting a drug into pre-clinical testing, it may fail at a later stage, just like human-designed drugs. Consider that the AI deployed in healthcare today has been shown to be biased and risky in a number of ways, or otherwise incredibly difficult to implement in existing clinical and lab settings. Suggesting all these issues and more will be solved roughly within the decade seems, well, aspirational.

But Amodei doesn’t stop there.

AI could solve world hunger, he claims. It could turn the tide on climate change. And it could transform the economies in most developing countries; Amodei believes AI can bring the per-capita GDP of sub-Saharan Africa ($1,701 as of 2022) to the per-capita GDP of China ($12,720 in 2022) in 5-10 years.

These are bold pronouncements, although likely familiar to anyone who’s listened to disciples of the “Singularity” movement, which expects similar results. To Amodei’s credit, he acknowledges that such developments would require “a huge effort in global health, philanthropy, [and] political advocacy,” which he posits will occur because it’s in the world’s best economic interest.

That would be a dramatic change in human behavior if so, given people have shown time and again that their primary interest is in what benefits them in the shorter term. (Deforestation is but one example among thousands.) It’s also worth noting that many of the workers responsible for labeling the datasets used to train AI are paid far below minimum wage while their employers reap tens of millions — or hundreds of millions — in capital from the results.

Amodei touches, briefly, on the dangers of AI to civil society, proposing that a coalition of democracies secure AI’s supply chain and block adversaries who intend to use AI toward harmful ends from the means of powerful AI production (semiconductors, etc.). In the same breath, he suggests that AI, in the right hands, could be used to “undermine repressive governments” and even reduce bias in the legal system. (AI has historically exacerbated biases in the legal system.)

“A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone,” Amodei writes.

So, if AI takes over every conceivable job and does it better and faster, won’t that leave humans in a lurch economically speaking? Amodei admits that, yes, it would, and that at that point, society would have to have conversations about “how the economy should be organized.”

But he offers no solution.

“People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies,” he writes. “The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.”

Amodei advances the notion, in wrapping up, that AI is simply a technological accelerator — that humans naturally trend toward “rule of law, democracy, and Enlightenment values.” But in doing so, he ignores AI’s many costs. AI is projected to have — is already having — an enormous environmental impact. And it’s creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have noted the labor disruptions caused by AI could further concentrate wealth in the hands of companies and leave workers more powerless than ever.

These companies include Anthropic, as loath as Amodei is to admit it. Anthropic is a business, after all — one reportedly worth close to $40 billion. And those benefiting from its AI tech are, by and large, corporations whose only responsibility is to boost returns to shareholders, not better humanity.

A cynic might question the essay’s timing, in fact, given that Anthropic is said to be in the process of raising billions of dollars in venture funds. OpenAI CEO Sam Altman published a similarly techno-optimist manifesto shortly before OpenAI closed a $6.5 billion funding round. Perhaps it’s a coincidence.

Then again, Amodei isn’t a philanthropist. Like any CEO, he has a product to pitch. It just so happens that his product is going to “save the world” — and those who think otherwise risk being left behind. Or so he’d have you believe.

AI coding startup Poolside raises $500M from eBay, Nvidia and others


Poolside, the AI-powered software dev platform, has raised half a billion dollars in new capital.

The cash came in the form of a Series B led by Bain Capital Ventures, which also had participation from a who’s who of big tech firms including eBay (via eBay Ventures) and Nvidia. It brings Poolside’s total raised to $626 million; Bloomberg reports that the startup’s valuation now sits at $3 billion.

TechCrunch revealed this summer that Poolside was in the midst of raising substantial funding.

“We believe software development will be the first broad capability where AI will reach and surpass human-level intelligence,” Poolside CEO Jason Warner said in a press release. “Through our team, our applied research, and a powerful revenue engine, poolside will bring AI for software development so that anyone in the world can build.”

U.S.- and Europe-based Poolside was founded last year by Warner and Eiso Kant, both software engineers. Warner is the former CTO of GitHub, having also headed engineering orgs at Canonical and Heroku. Kant previously co-founded several dev-focused startups, including engineering analytics firm Athenian.

Warner, who incubated GitHub’s AI-powered Copilot tool, met Kant in 2017. Over the next six years, the pair plotted an AI-driven assistive tool suite for devs, which became Poolside.

Poolside develops its own AI models to help with tasks like autocompleting code and suggesting code possibly relevant to a particular context or codebase — much like rival AI assistive coding tools. The company’s customers are primarily Global 2000 companies and public-sector agencies; few have been publicly disclosed.

The Series B funding allowed Poolside to bring 10,000 Nvidia GPUs online to train future models, Warner said, and will bolster the company’s go-to-market and R&D efforts.

Despite the security, copyright and reliability concerns around AI-powered assistive coding tools, developers have shown enthusiasm for them, with the vast majority of respondents in GitHub’s latest poll saying that they’ve adopted AI tools in some form. GitHub reported in April that Copilot had over 1.8 million paying users and more than 50,000 business customers.

Encouraged by the adoption trend, VCs are pouring massive sums of cash into AI coding startups. Generative AI coding firm Magic landed $320 million in August — the same day GitHub Copilot competitor Codeium closed a $150 million fundraising round. Earlier in August, Cognition, best known for its viral coding assistant Devin, secured $175 million at a $2 billion valuation.

Polaris Research projects that the AI coding tools market could be worth $27 billion by 2032. At this rate, that doesn’t seem terribly far-fetched.

LG Technology Ventures, Felicis Ventures, Redpoint Ventures, Citi Ventures, Capital One Ventures, HSBC Ventures, DST Global, StepStone Group, Schroders Capital, Premji Invest, Dorsal Capital, BAM Elevate, Adams Street, and Fin Capital also invested in Poolside’s Series B.

Raspberry Pi built an AI camera with Sony


AI enthusiasts who like the Raspberry Pi range of products can rejoice, as the company has announced its new Raspberry Pi AI Camera. The product is the result of the company’s collaboration with Sony Semiconductor Solutions (SSS), which began in 2023. The AI Camera is compatible with all of Raspberry Pi’s single-board computers.

The 12.3-megapixel AI Camera is intended for vision-based AI projects, and it’s based on SSS’ IMX500 image sensor. The integrated RP2040 microcontroller manages the neural network firmware, allowing the camera to perform onboard AI image processing and freeing up the Raspberry Pi for other processes. Thus, users who want to integrate AI into their Raspberry Pi projects are no longer limited to the Raspberry Pi AI Kit.

The AI Camera isn’t a total replacement for Raspberry Pi’s AI Kit, which is still available. For those interested in the new AI Camera, it’s available right now from Raspberry Pi’s approved resellers for $70.

OpenAI reportedly plans to increase ChatGPT’s price to $44 within five years


OpenAI is reportedly telling investors that it plans on charging $22 a month to use ChatGPT by the end of the year. The company also plans to aggressively increase the monthly price over the next five years up to $44.

The documents, obtained by The New York Times, show that OpenAI took in $300 million in revenue this August and expects to make $3.7 billion in sales by the end of the year. Various expenses such as salaries, rent and operational costs will cause the company to lose $5 billion this year.

OpenAI is reportedly circulating the documents the NYT reported on as part of a drive to find new investors to prevent or lessen its financial shortfall. Fortunately, OpenAI is raising money at a $150 billion valuation, and a new round of investments could bring in as much as $7 billion.

OpenAI is also reportedly in the midst of changing its corporate structure. The new business model would allow the removal of any caps on investor returns, giving the company more room to negotiate with new investors at possibly higher rates.