The near-term impact of AI on the cyber threat

  • AI will primarily offer threat actors capability uplift in social engineering. Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases.

  • AI’s ability to summarise data at pace will also highly likely enable threat actors to identify high-value assets for examination and exfiltration, enhancing the value and impact of cyber attacks over the next two years.

  • Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding. This trend will almost certainly continue to 2025 and beyond. Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network accesses that cyber criminals need to carry out ransomware attacks or other cyber crime. It is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term.

  • AI is likely to assist with malware and exploit development, vulnerability research and lateral movement by making existing techniques more efficient. However, in the near term, these areas will continue to rely on human expertise, meaning that any limited uplift will highly likely be restricted to existing threat actors that are already capable. AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data. There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose.

  • Cyber resilience challenges will become more acute as the technology develops. To 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts. The window between the release of security updates fixing newly identified vulnerabilities and threat actors exploiting unpatched software is already shrinking. This has exacerbated the challenge for network managers to patch known vulnerabilities before they can be exploited. AI is highly likely to accelerate this challenge, as reconnaissance to identify vulnerable devices becomes quicker and more precise.

  • Expertise, equipment, time and financial resourcing are currently crucial to harness more advanced uses of AI in cyber operations. Only those who invest in AI, have the resources and expertise, and have access to quality data will benefit from its use in sophisticated cyber attacks to 2025. Highly capable state actors are almost certainly best placed amongst cyber threat actors to harness the potential of AI in advanced cyber operations. Other state actors and most commercial companies that offer capability to states worldwide will gain moderate capability uplift over the next eighteen months in social engineering, reconnaissance and exfiltration. Capable and established criminal groups are also likely to have enough training data and resource to gain some uplift.

  • However, it is a realistic possibility that these factors may become less important over time, as more sophisticated AI models proliferate and uptake increases. Publicly available AI models already largely remove the need for actors to create their own replica technologies, especially in low-sophistication operations such as spear-phishing. Less-skilled cyber actors will almost certainly benefit from significant capability uplifts in this type of operation to 2025. Commoditisation of cyber crime capability, for example ‘as-a-service’ business models, makes it almost certain that capable groups will monetise AI-enabled cyber tools, making improved capability available to anyone willing to pay.

  • To 2025, training AI on quality data will remain crucial for its effective use in cyber operations. The scaling barriers for automated reconnaissance of targets, social engineering and malware are all primarily related to data. But to 2025 and beyond, as successful exfiltrations occur, the data feeding AI will almost certainly improve, enabling faster, more precise cyber operations.

  • An increase in the volume, complexity and impact of cyber operations will indicate that threat actors have been able to harness AI effectively. This will highly likely intensify cyber resilience challenges for the UK government and the private sector in the near term.
