AI-enhanced malicious attacks are a top concern for 80% of executives, and for good reason: mounting evidence shows that bad actors are exploiting the technology.
For the third consecutive quarter, Gartner has found that cyber attacks staged using artificial intelligence are the biggest risk for enterprises.
The consulting firm surveyed 286 senior risk and assurance executives from July through September, and 80% cited AI-enhanced malicious attacks as the top threat they were concerned about. This isn’t surprising, as evidence suggests AI-assisted attacks are on the rise.
Other commonly cited emerging risks outlined in the report include AI-assisted misinformation, escalating political polarization, and misaligned organizational talent profiles.
Attackers are using AI to write malware, craft phishing emails, and more
In June, HP intercepted an email campaign spreading malware in the wild via a script that "was highly likely to have been written with the help of GenAI." The VBScript was neatly structured, and every command carried a comment, an effort a human malware author would be unlikely to make.
The researchers then used GenAI to produce a script and found similar output, suggesting that the original malware was at least partially AI-generated.
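The "tell" the researchers describe, near-total comment coverage, can be expressed as a simple metric. The sketch below is purely illustrative and is not HP's actual methodology; the function name and threshold are assumptions for the example.

```python
# Illustrative heuristic (NOT HP's actual tooling): measure how densely a
# script is commented. A comment on virtually every command was one signal
# the researchers associated with GenAI-written VBScript.
def comment_density(script: str, comment_marker: str = "'") -> float:
    """Return the fraction of non-blank lines containing a comment marker.

    VBScript uses a single quote (') to begin comments, so that is the
    default marker here.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    commented = sum(1 for ln in lines if comment_marker in ln)
    return commented / len(lines)


# A benign VBScript-like snippet mirroring the structure described:
# every command carries its own trailing comment.
sample = (
    'Set shell = CreateObject("WScript.Shell") \' Create a shell object\n'
    'msg = "hello" \' Define a message variable\n'
)
print(comment_density(sample))  # every non-blank line is commented -> 1.0
```

A real classifier would weigh many more signals (naming style, structure, string handling), but a density near 1.0 on a short script is the kind of anomaly the researchers flagged.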
The number of business email compromise attacks detected by security firm VIPRE in the second quarter was 20% higher than in the same period of 2023, and two-fifths of them were generated by AI. The top targets were CEOs, followed by HR and IT personnel.
Usman Choudhary, VIPRE’s chief product and technology officer, said in the press release: “Malefactors are now leveraging sophisticated AI algorithms to craft compelling phishing emails, mimicking the tone and style of legitimate communications.”
Retail sites alone experienced an average of 569,884 AI-driven attacks each day from April to September, according to Imperva Threat Research. Researchers said that tools such as ChatGPT, Claude, and Gemini, as well as special bots that scrape websites for LLM training data, are being used to conduct distributed denial-of-service attacks and business logic abuse, for example.
More ethical hackers are admitting to using GenAI, too, with the proportion increasing from 64% to 77% in the last year, according to a report from Bugcrowd. These researchers say it assists with side-channel attacks, fault-injection attacks, and automating parallelized attacks to breach multiple devices simultaneously. But if the 'good guys' are finding AI valuable, so will the bad actors.
Full article:
https://www.techrepublic.com/article/ai-cyber-attacks-gartner/