
Cybersecurity

October 23, 2025

 

Welcome to the AI arms race in enterprise cybersecurity: Who’s winning?

October, Cybersecurity Awareness Month, is a good time to think about cyber threats. Are corporations’ defenses up to the challenge?


Bree Fowler

Contributor

Cybersecurity Awareness Month may not hold the same appeal as other hallmarks of October — gazing upon burnished foliage or bingeing on Halloween candy — but it can prompt corporate leaders and consumers alike to engage in an important ritual: reevaluating their approach to cybersecurity.

And there’s a lot to reflect on from this past year. Companies especially should be asking themselves this month (and every month): How has the online threat landscape changed? And are your business’s security practices doing enough to keep pace with those changes?

Artificial intelligence has undeniably been this year’s biggest game changer. AI tools, including large language models (LLMs) such as ChatGPT, have supercharged the abilities of attackers. Meanwhile, a growing number of companies are incorporating AI into their day-to-day business, pushing cybersecurity professionals to guard new frontiers without much precedent on how to do that. Consumers, meanwhile, are caught in the middle of this rapidly changing AI environment.

These exact dynamics were on the minds of many of the 20,000 attendees at this year’s Black Hat Conference in Las Vegas. From the splashy booths on the business hall floor to the cutting-edge research presented in talks to the quiet chats in between, AI dominated the conversation.

At the event’s opening keynote, longtime cybersecurity researcher Mikko Hyppönen called AI the “biggest technical revolution” he had seen in his life, noting that researchers using LLMs have already discovered a couple dozen zero-days — an industry term for previously undetected weaknesses in software or code.

“When researchers find security vulnerabilities with AI, that’s great because we can fix them,” Hyppönen said. “When attackers do the same, that's awful. That’s going to happen as well.”

 

A rising threat to consumers

Online scammers are already using LLMs to write more convincing and targeted phishing emails, and to do so at a far bigger scale than they ever could without AI tools.

Gone are the days of poorly written generic emails that even the least tech-savvy person could spot as scams. Non-native English-speaking scammers are now pros at polishing their messages with LLMs.

And even those scammers posing as lonely soldiers stationed overseas, pushing a business opportunity that looks too good to be true (because it actually is), are sprinkling in personal details mined by AI to make those messages much more believable.

They’ve also moved beyond email, delivering their now well-crafted messages via social media, text messages and even phone calls.

At the same time, there’s been a rise in scam messages containing audio or video deepfakes. Last year, a worker in Hong Kong was duped into paying out $25 million to fraudsters after signing on to what he thought was a video call with officials at his multinational company, including the chief financial officer. But it turned out that the others on the call were live-video deepfake recreations of those people.

Consumers have also been targeted in audio deepfake scams, in which cybercriminals use AI to clone the voice of a person, typically a younger family member, then call relatives claiming that person has been kidnapped or jailed, to extort money from them.

All of that has companies and consumers understandably scared. A recent study conducted for Mastercard polled about 13,000 consumers around the world, including about 1,000 in the U.S., and found AI-generated fake content is the No. 1 scam-related future concern for consumers. But only 13% of those polled said they are very confident in their ability to identify AI-generated threats or scams if they are targeted by them.

The vast majority of those polled specifically cited concerns about more sophisticated attacks from AI systems being hacked and turned malicious, automated large-scale cyberattacks, and more convincing phishing emails created by AI.

And worried consumers can mean big problems for the companies that serve them. If consumers can’t trust that they’re dealing with a legitimate company or that their personal information is safe, they could choose to take their business elsewhere.  

 

Democratizing security

For now, at least, it appears that defenders have the advantage. In addition to securing the new AI elements in their business customers’ systems, cybersecurity companies are incorporating AI into their own products, with the goal of stopping online threats faster and more efficiently.

Meanwhile, scammers and other cybercriminals don’t yet have enough incentive to innovate the same way. While they’re using AI tools to work faster and go bigger, experts say they’re largely sticking to improved versions of the same old scams.

Nicole Perlroth, a former cybersecurity journalist who now serves as a venture partner at Ballistic Ventures and leads her own cyber mission fund, Silver Buckshot, said in her Black Hat keynote address that she’s encouraged by the new and emerging technologies she’s seeing in cybersecurity.

She pointed to new AI-powered deepfake detection technologies that are coming to market and noted that AI has helped "democratize" cybersecurity products and services, making them more accessible to smaller companies that couldn’t afford them in the past.

Hyppönen noted that deepfake scams are currently very rare, but added that he expects all kinds of online scams, along with ransomware, to only get worse as AI technologies get better and cheaper.

The good news is that defenders have a head start when it comes to AI.

“Attackers are using AI as well, but they're only beginning,” he said. “We’ve only seen fairly simple attacks with AI so far. It will change, but right now I would say we are prepared.”
