Microsoft Delivers Stark Warning About AI-Enabled Online Scams


Microsoft’s latest Cyber Signals report shows how cybercriminals are using AI to make scams more convincing and harder to spot. Between April 2024 and April 2025, the company says it thwarted $4 billion in fraud attempts, blocked roughly 1.6 million bot signup attempts every hour, and rejected 49,000 fraudulent partnership enrollments. The figures underline how widespread online fraud has become, and how much easier AI is making it for scammers to deceive people.

AI now helps scammers create fake websites, job postings, and even customer-service chatbots that look legitimate. Fraudulent online stores, for example, can be set up in minutes, complete with AI-written product descriptions and reviews. Some scammers use AI to conduct fake job interviews or to send emails that appear to come from real companies.

Tech support scams are also on the rise. Criminals pretend to be from companies like Microsoft, convincing people to give them remote access to their computers. Microsoft has added new warnings and safety steps in its Quick Assist tool, which helps people share their screens for support. The company now blocks thousands of suspicious connection attempts every day.

To fight back, Microsoft is using machine learning and AI to detect fraud patterns and warn users about likely threats. Microsoft Edge now includes typo protection and website impersonation detection, while Microsoft Defender helps protect against phishing and unsafe downloads. The company also works with law enforcement and international organizations to take down scam networks.
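To illustrate the general idea behind a browser "typo protection" feature, the sketch below compares a typed hostname against a short list of well-known domains and flags near-misses. This is a simplified, hypothetical example, not Microsoft's actual implementation; the domain list, similarity measure, and threshold are all assumptions made for illustration.

```python
# Illustrative sketch of a lookalike-domain ("typosquatting") check.
# NOT Microsoft's implementation: KNOWN_DOMAINS, the SequenceMatcher-based
# similarity score, and the 0.85 threshold are assumptions for this example.
from difflib import SequenceMatcher

KNOWN_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com", "apple.com"]


def looks_like_typosquat(hostname: str, threshold: float = 0.85) -> str | None:
    """Return the legitimate domain that `hostname` resembles, or None."""
    host = hostname.lower().removeprefix("www.")
    for legit in KNOWN_DOMAINS:
        if host == legit:
            return None  # exact match: the real site, nothing to warn about
        similarity = SequenceMatcher(None, host, legit).ratio()
        if similarity >= threshold:
            return legit  # close but not identical: likely a typo or lookalike
    return None


if __name__ == "__main__":
    for site in ["micros0ft.com", "paypai.com", "example.org"]:
        match = looks_like_typosquat(site)
        if match:
            print(f"Warning: {site} looks like an imitation of {match}")
        else:
            print(f"{site}: no close match to a known brand domain")
```

Real-world protections are far more sophisticated (deep-learning models, reputation services, telemetry), but the core idea is the same: a near-match to a trusted name is treated as a warning sign rather than a coincidence.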

Microsoft advises everyone to stay cautious online: double-check websites before buying, be wary of job offers that seem too good to be true, and never give personal or financial information to unverified contacts.
