Artificial intelligence (AI) is set to significantly increase cyber threats, particularly ransomware attacks, over the next two years, according to a report by the United Kingdom’s National Cyber Security Centre (NCSC).
In its report, the NCSC outlines how AI’s effect on cyber threats can be offset by using AI to strengthen cybersecurity through improved detection and system design. However, it recommends further study to gauge how AI advancements in cybersecurity will mitigate the impact of these threats.
A ransomware attack is a cyberattack in which malicious software is deployed to encrypt a victim’s files or entire system. The attackers then demand a ransom, typically in cryptocurrency, to provide the victim with the decryption key or tools to restore access to their data.
The influence of AI on cyber threats is expected to vary, favoring advanced state actors with more access to sophisticated AI-driven cyber operations. The report highlights social engineering as a critical area where AI will significantly enhance capabilities, making phishing attacks more convincing and more challenging to spot.
Table showing the extent of capability uplift caused by AI over the next two years. Source: National Cyber Security Centre
The NCSC report states that AI will mainly enhance threat actors’ abilities in social engineering. Generative AI (GenAI) can already create convincing interactions, such as lure documents free of the translation and grammatical errors that often give phishing away, and this capability is expected to grow over the next two years as models evolve and gain popularity.
Supporting this statement, James Babbage, Director General for Threats at the National Crime Agency, said:
“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods. Fraud and child sexual abuse are also particularly likely to be affected.”
The NCSC assessment also points to challenges for cyber resilience posed by GenAI and large language models (LLMs), which make it harder to verify the legitimacy of emails and password reset requests. Meanwhile, the shrinking window between the release of security updates and their exploitation by threat actors makes it difficult for network managers to patch vulnerabilities in time.
Harnessing advanced AI in cyber operations requires expertise, resources, and access to quality data, so highly capable state actors are best positioned to leverage AI’s potential. Other state actors and commercial companies will see moderate capability gains in the next 18 months, according to the report.
Although the NCSC recognizes the importance of skills, tools, time, and money for using advanced AI in cyber operations, the report states that these factors will become less crucial as AI models proliferate. Capable groups are expected to monetize AI-enabled cyber tools, making improved capability available to anyone willing to pay.
To identify when threat actors have been able to harness AI effectively, the report notes that there will be increases in the volume, complexity, and impact of cyber operations. Lindy Cameron, CEO of the NCSC, speaking on the need for the government to harness AI’s potential while managing its risks, said:
“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”
To tackle this enhanced threat, the U.K. government invested £2.6 billion under its 2022 Cyber Security Strategy to improve the country’s resilience.