DeepSeek vs. ChatGPT: Hands On With DeepSeek’s R1 Chatbot


The DeepSeek AI chatbot, released by a Chinese startup, has temporarily dethroned OpenAI’s ChatGPT from the top spot on Apple’s US App Store.

The app is completely free to use, and DeepSeek’s R1 model is powerful enough to be comparable to OpenAI’s o1 “reasoning” model, except DeepSeek’s chatbot is not sequestered behind a $20-a-month paywall like OpenAI’s is. Also, the DeepSeek model was trained efficiently using less powerful AI chips, making it a remarkable feat of engineering.

I’ve tested many new generative AI tools over the past couple of years, so I was curious to see how DeepSeek compares to the ChatGPT app already on my smartphone. After a few hours of using it, my initial impressions are that DeepSeek’s R1 model will be a major disruptor for US-based AI companies, but it still suffers from the weaknesses common to other generative AI tools, like rampant hallucinations, invasive moderation, and questionably scraped material.

How to Access the DeepSeek Chatbot

Users interested in trying out DeepSeek can access the R1 model through the Chinese startup’s smartphone apps (Android, Apple), as well as on the company’s desktop website. You can also use the model through third-party services like Perplexity Pro. In the app or on the website, click on the DeepThink (R1) button to use the best model. Developers who want to experiment with the API can check out that platform online. It’s also possible to download a DeepSeek model to run locally on your computer.

In order to use all the consumer features, you will need to create a user account that tracks your chats. “We store the information we collect in secure servers located in the People’s Republic of China,” reads the company’s privacy policy. Check out this article from WIRED’s Security desk for a more detailed breakdown about what DeepSeek does with the data it collects. It’s worth keeping in mind that, just like ChatGPT and other American chatbots, you should always avoid sharing highly personal details or sensitive information during your interactions with a generative AI tool.

Is This Basically FreeGPT?

Yes and no! If you’re looking for a free chatbot to use, ChatGPT already includes plenty of free features, as do Anthropic’s Claude, Google’s Gemini, and Meta’s AI tool. So, why is it notable that DeepSeek is free? It’s about the raw power of the model that’s generating these free-for-now answers. As previously mentioned, DeepSeek’s R1 rivals OpenAI’s latest o1 model, without the $20-a-month subscription fee for the basic version or the $200-a-month fee for the most capable model. This comes as a major blow to OpenAI’s attempt to monetize ChatGPT through subscriptions.

Another feature that’s similar to ChatGPT is the option to send the chatbot out into the web to gather links that inform its answers. DeepSeek does not have deals with publishers to use their content in answers; OpenAI does, including with WIRED’s parent company, Condé Nast. But the web search outputs were decent, and the links gathered by the bot were generally helpful.

Still, the current DeepSeek app does not have all the tools longtime ChatGPT users may be accustomed to, like the memory feature that recalls details from past conversations so you’re not always repeating yourself. DeepSeek also doesn’t have anything close to ChatGPT’s Advanced Voice Mode, which lets you have voice conversations with the chatbot, though the startup is working on more multimodal capabilities.

A Research Breakthrough, but Still Inaccurate

Though it may almost seem unfair to knock the DeepSeek chatbot for issues common across AI startups, it’s worth dwelling on how a breakthrough in model training efficiency does not even come close to solving the roadblock of hallucinations, where a chatbot just makes things up in its responses to prompts. Many of the outputs I generated included blatant falsehoods, confidently spewed out. For example, when I asked R1 what the model already knew about me without searching the web, the bot was convinced I’m a longtime tech reporter at The Verge. No shade, but not true!

Reece Rogers

I Stared Into the AI Void With the SocialAI App


The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement of the app this week read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the newest version of the app, rolled out today, these AIs also talk to each other.

When you first sign up, you’re prompted to choose from these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.

Screenshot of the instructions for setting up the SocialAI app.

Is SocialAI appalling, an echo chamber taken to its logical extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans we do the feeding: freely supplying social apps fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.

AI Chatbots Are Running for Office Now


Victor Miller [Archival audio clip]: She’s asking what policies are most important to you, VIC?

VIC [Archival audio clip]: The most important policies to me focus on transparency, economic development, and innovation.

Leah Feiger: That is so bizarre. I’ve got to ask, could VIC be exposed to sources of information other than these public records? Say, an email from a conspiracy theorist who wants VIC to do something not so good with elections that would not represent its constituents.

Vittoria Elliott: Great question. I asked Miller, “Hey, you’ve built this bot on top of ChatGPT. We know that sometimes there’s problems or biases in the data that go into training these models. Are you concerned that VIC could imbibe some of those biases or there could be problems?” He said, “No, I trust OpenAI. I believe in their product.” You’re right. He decided, because of what’s important to him as someone who cares a lot about Cheyenne’s governance, to feed this bot hundreds, and hundreds, and hundreds of pages of what are called supporting documents. The kind of documents that people will submit in a city council meeting. Whether that’s a complaint, or an email, or a zoning issue, or whatever. He fed that to VIC. But you’re right, these chatbots can be trained on other material. He said that he actually asked VIC, “What if someone tries to spam you? What if someone tries to trick you? Send you emails and stuff.” VIC apparently responded to him saying, “I’m pretty confident I could differentiate what’s an actual constituent concern and what’s spam, or what’s not real.”

Leah Feiger: I guess I would just say to that, one-third of Americans right now don’t believe that President Joe Biden legitimately won the 2020 election, but I’m so glad this robot is very, very confident in its ability to decipher dis- and misinformation here.

Vittoria Elliott: Totally.

Leah Feiger: That was VIC in Wyoming. Tell us a little more about AI Steve in the UK. How is it different from VIC?

Vittoria Elliott: For one thing, AI Steve is actually the candidate.

Leah Feiger: What do you mean actually the candidate?

Vittoria Elliott: He’s on the ballot.

Leah Feiger: Oh, OK. There’s no meat puppet?

Vittoria Elliott: There is a meat puppet, and that’s Steve Endicott. He’s a Brighton-based businessman. He describes himself as the person who will attend Parliament and do the human things.

Leah Feiger: Sure.

Vittoria Elliott: But people, when they go to vote next month in the UK, they actually have the ability not to vote for Steve Endicott, but to vote for AI Steve.

Leah Feiger: That’s incredible. Oh my God. How does that work?

Vittoria Elliott: The way Steve Endicott and Jeremy Smith, who is the developer of AI Steve, described this to me is as a big catchment for community feedback. On the backend, what happens is people can talk to or call into AI Steve, which can apparently handle 10,000 simultaneous conversations at any given point. They can say, “I want to know when trash collection is going to be different.” Or, “I’m upset about fiscal policy,” or whatever. Those conversations get transcribed by the AI and distilled into the policy positions that constituents care about. But to make sure that people aren’t spamming it and trying to trick it, what they’re going to do is have what they call validators. Brighton is about an hour outside of London, and a lot of people commute between the two cities. They’ve said, “What we want to do is have people who are on their commute sign up to these emails to be validators.” They’ll go through and say, “These are the policies that people say are important to AI Steve. Do you, a regular person who’s actually commuting, find that to actually be valuable to you?” Anything that gets more than 50% interest, or approval, or whatever, that’s the stuff that real Steve, who will be in Parliament, will be voting on. They have this second level of checks to make sure that whatever people are saying as feedback to the AI is checked by real humans. They’re trying to make it a little harder for people to game the system.