
Microsoft says hackers are using AI to make cyberattacks even worse.

Microsoft’s new report warns businesses that hackers have begun using artificial intelligence (AI) to accelerate their tradecraft, evade security measures, and amplify the damage they cause.

According to Microsoft’s security blog, the cyber underworld is mirroring legitimate businesses that use AI to make their operations faster, bigger, and more resilient — adopting, for example, corporate-style organizational structures, as-a-service models, and specialization.

Microsoft’s threat intelligence unit has found that most malicious AI use today involves language models generating text, code, or media. Threat actors use generative AI to write phishing emails, translate text, summarize stolen data, create or debug malware, and build scripts or infrastructure.

“AI is quickly becoming a part of the entire cyberattack lifecycle, but not always in the ways people think it will,” said Ensar Seker, CISO of SOCRadar, a threat intelligence business in Newark, Del.

“In a lot of cases, threat actors aren’t making their own advanced AI models,” he told TechNewsWorld. “Instead, they are putting existing generative AI tools to use to speed up the work of traditional attackers.”

“The biggest effect of AI on cyber operations is greater efficiency, not completely new attack methods,” he added.

“But,” he continued, “AI doesn’t replace traditional attacker tradecraft or make human knowledge unnecessary. Advanced campaigns, especially those run by nation-state groups, still depend heavily on manual reconnaissance, proprietary tools, and strict operational security.”

“AI is more of a force multiplier than a replacement for established tactics,” he remarked. “Threat actors still require a way in, resources, and a defined goal. AI only helps them get things done faster after everything is set up.”

AI Speeds Up Attack Planning

“Think about where threat actors used to spend their time,” said Stu Bradley, senior vice president for risk, fraud, and compliance solutions at SAS, an analytics and AI software company in Cary, North Carolina. “They would find and research their victims, write convincing phishing messages, and build a relationship over weeks or months to get that romance scam payout.”

“With the help of easily accessible AI tools, those once time-consuming tasks are now being streamlined and automated,” he told TechNewsWorld. “GenAI lets scammers make polished, targeted content in seconds, content that used to take hours to write.” As a result, the window between identifying victims and attacking them keeps shrinking.

“And criminals also use these AI tools to automate their attacks, so they can go after far more victims at once with far less work,” he said. “The effect is huge when you consider that most fraud schemes are just a numbers game. You won’t hit every time, but the more times you try, the greater your chances.”

Eric Schwake, director of cybersecurity strategy at Salt Security, an API security company in Palo Alto, California, said that threat actors use AI to fully automate the manual, time-consuming stages of the cyber kill chain.

“They use generative AI to quickly do reconnaissance, write and debug malicious code, and craft highly targeted phishing schemes in seconds,” he told TechNewsWorld. “By making it easier and faster to create and deploy an exploit, attackers can move faster than traditional security measures or human analysts can react.”
Force Multiplier

Microsoft said that its threat intelligence unit has seen threat actors use AI in daily operations that aren’t directly harmful but support their larger goals. In these cases, AI is used to make operations more efficient, larger in scale, and more sustainable, not to carry out attacks directly.

Jacob Krell, senior director for safe AI solutions and cybersecurity at Suzu Labs in Las Vegas, which offers AI-powered cybersecurity services, said, “AI lets threat actors run more of the attack lifecycle at the same time.”

“Reconnaissance, persona development, phishing lure generation, infrastructure setup, and post-compromise data triage can all be done faster and across more targets at once,” he told TechNewsWorld. “What used to take a team of specialists can now be done in a single, repeatable workflow.”

Bradley of SAS said that AI eases these organized crime groups’ staffing problems. “You don’t need a big crew to run a big operation anymore,” he said. “The automation of AI content generation means they can flood email, SMS, voice, and social channels at the same time with personalized messages that are far more convincing than past scams.”

AI is a tremendous force multiplier that lets one attacker coordinate thousands of attacks at once, according to Salt Security’s Schwake. “Attackers are using AI to automatically find targets, spray passwords, and quickly stand up attack infrastructure like look-alike domains,” he said. “This capability lets relatively inexperienced actors run very complex campaigns across a global attack surface without needing more people.”
AI Helps Attackers Build Faster

Microsoft also said that attackers are using AI to build resources. Threat actors can use AI models to plan, provision, and troubleshoot their covert infrastructure, it said. This approach lowers the barrier to entry for less skilled actors and speeds the deployment of resilient infrastructure while reducing the risk of detection.

Vincenzo Iozzo, CEO and co-founder of SlashID, an identity threat detection and response company in Chicago, said, “AI makes tools more flexible and able to avoid detection.”

“AI-generated malware can be polymorphic, changing its own code to avoid signature-based detection,” he told TechNewsWorld. “AI also lets bad actors quickly retool campaigns that get blocked by changing payloads, rotating infrastructure, and swapping social engineering lures faster than defenders can respond.”

Suzu Labs’ Krell said that AI makes operations more resilient by shrinking the time threat actors need to rebuild. “When a payload, lure, or infrastructure component is discovered, threat actors can use AI to quickly modify code, update phishing content, fix deployment problems, and reimplement functionality using different libraries or languages,” he said.

“Command and control callback locations are also changing more often than manual operations would allow, which gives defenders less time to act on flagged indicators,” he said.
The Rise of Agentic AI in Cybercrime

Microsoft said that its threat researchers are starting to see early signs of a shift toward more agentic applications of AI, even though generative AI still accounts for most threat actor activity involving AI.

This development could be significant for threat actors, since it would let them run semi-autonomous workflows that continuously refine phishing attacks, test and upgrade infrastructure, maintain persistence, or monitor open-source intelligence for new opportunities.

Microsoft has not yet seen widespread use of agentic AI by threat actors, mainly because of ongoing reliability and operational issues, it said. However, real-world examples and proof-of-concept research show that these systems might be used for automated surveillance, infrastructure management, malware generation, and decision-making after a breach.

“What we are seeing today is early but meaningful experimentation,” Krell remarked. “Agentic AI is being used to support workflows that involve planning, evaluating tool use, and adapting over time, rather than just responding to one-off prompts.”

“Microsoft has reported that [North Korean threat actor] Coral Sleet is using agentic AI tools in an end-to-end workflow for lure development, infrastructure provisioning, and rapid payload testing and deployment,” he continued. “Reliability and operational risk still limit large-scale use, but the direction is clear.”

“AI is not taking the place of threat actors,” he added. “It is making them more effective.”

“The most immediate effect is not fully autonomous intrusions, but faster research, faster adaptation, more scalable social engineering, and more sustainable misuse of legitimate access,” he remarked. “AI acceleration does not move in a straight line. Capabilities are compounding, and businesses that still treat this as a phishing email problem are already missing the larger operational transformation underway.”
