GCHQ’s National Cyber Security Centre (NCSC) has warned that U.K. critical systems are facing growing risks due to a widening ‘digital divide’—the gap between organizations that can adapt to AI (artificial intelligence)-enabled threats and those that cannot. In a report released on the opening day of the CYBERUK conference, the NCSC warned that developments in AI are likely to accelerate the time between the discovery of software vulnerabilities and their exploitation by malicious actors, highlighting the growing cyber threat expected between now and 2027. The agency urges organizations to adopt its guidance on securely implementing AI tools while maintaining strong cybersecurity measures across systems.

The assessment builds on the NCSC’s previous report, the near-term impact of AI on the cyber threat assessment, published last January, and looks to highlight the most significant impacts on cyber threats to the UK from AI developments over the coming years.

“We know AI is transforming the cyber threat landscape, expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities,” Paul Chichester, NCSC director of operations, said in a Wednesday media statement. “While these risks are real, AI also presents a powerful opportunity to enhance the UK’s resilience and drive growth—making it essential for organizations to act.”

Chichester added that “Organizations should implement strong cybersecurity practices across AI systems and their dependencies and ensure up-to-date defences are in place.”

The NCSC assessment report noted that by 2027, AI-enabled tools will almost certainly enhance threat actors’ capability to exploit known vulnerabilities, increasing the volume of attacks against systems that have not been updated with security fixes. System owners already face a race in identifying and mitigating disclosed vulnerabilities before threat actors can exploit them. The time between disclosure and exploitation has shrunk to days, and AI will almost certainly reduce this further. This will highly likely contribute to an increased threat to critical national infrastructure (CNI) or CNI supply chains, particularly any OT (operational technology) with lower levels of security.

The assessment report recognized that AI will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats. Cyber threat actors are almost certainly already using AI to enhance existing tactics, techniques and procedures (TTPs) in victim reconnaissance, vulnerability research and exploit development, access to systems through social engineering, basic malware generation, and processing exfiltrated data. By 2027, this will likely increase the volume and impact of cyber intrusions through the evolution and enhancement of existing TTPs, rather than creating novel threat vectors. 

In the near term, only highly capable state actors with access to requisite investment, quality training data, and expertise will be able to harness the full potential of AI in advanced cyber operations. 

Also, the majority of other cyber threat groups are almost certain to focus on the use or repurposing of commercially available and open-source AI models to uplift their capability. The release of capable open-source models likely lowers the barrier to building similar models and narrow AI-enabled tools to enhance capability across both cyber defence and threat. 

AI cyber capability is likely to make cybersecurity at scale increasingly important by 2027 and beyond, with the most significant AI cyber development highly likely to come from AI-assisted vulnerability research and exploit development (VRED) that enables access to systems through the discovery and exploitation of flaws in the underlying code or configuration.   

The report also called for keeping pace with frontier AI cyber developments will almost certainly be critical to cyber resilience for the decade to come. For skilled cyber hackers who can fine-tune AI models or build sovereign AI systems dedicated to vulnerability exploitation, AI will likely enhance zero-day discovery and exploitation techniques by 2027. Zero-days are unpatched, and likely unknown, vulnerabilities in systems that threat actors can exploit in the knowledge that their targets will likely be vulnerable. 

Assuming a lag, or no change to cybersecurity mitigations, there is a realistic possibility of critical systems becoming more vulnerable to advanced threat actors by 2027.

By 2027, skilled cyber actors will likely be using AI-enabled automation to aid evasion and scalability. The development of fully automated, end-to-end advanced cyber attacks is unlikely to 2027. Skilled cyber actors will need to remain in the loop. However, skilled cyber hackers will almost certainly continue to experiment with automation of elements of the attack chain, such as identification and exploitation of vulnerabilities, rapid changes to malware and supporting infrastructure to evade detection. This human-machine teaming will likely make the identification, tracking, and mitigation of threat activity more challenging without the development of effective AI assistance for defence.

The NCSC assessment report detailed that the proliferation of AI-enabled cyber tools will likely increase access to AI-enabled intrusion capability to an expanded range of state and non-state actors. The commercial cyber intrusion sector will almost certainly incorporate AI into products on offer. Criminal use of AI will likely increase by 2027, as AI becomes more widely adopted in society. 

Skilled cyber criminals will likely focus on getting around safeguards on available AI models and AI-enabled commercial penetration testing tools to make AI-enabled cyber tools available ‘as a service.’ This will uplift (from a low base) novice cyber criminals, hackers for hire, and hacktivists in conducting opportunistic information gathering and disruptive operations.

Additionally, the growing incorporation of AI models and systems across the UK’s technology base, and particularly within CNI, almost certainly presents an increased attack surface for adversaries to exploit.  AI systems include data, methods for teaching and evaluating AI, and the necessary technology to use them. AI technology is increasingly connected to company systems, data, and operational technology for tasks. 

Threat actors will almost certainly exploit this additional threat vector. Techniques such as direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attack are already capable of enabling exploitation of AI systems to facilitate access to wider systems. 

The NCSC report highlighted that insufficient cybersecurity will almost certainly increase opportunity for capable state-linked actors and cyber criminals to misuse AI systems for cyber threats. 

In the rush to provide a ‘market-leading’ AI model (or applications that are more advanced than competitors), there is a risk that developers will prioritise an accelerated release schedule over security considerations, increasing the cyber threat from compromised or insecure systems. 

The threat will also be enabled by insecure data handling processes and configuration, which includes transmitting data with weak encryption making it vulnerable to interception and manipulation; poor identity management and storage increasing the risk of credential theft, particularly those with privileged access or reused across multiple systems; and collecting extensive user data, increasing the risk of de-anonymising users and enabling targeted attack. 

Earlier this year, the UK government announced the new AI Cyber Security Code of Practice, produced by the NCSC and the Department for Science, Innovation and Technology (DSIT), which will help organizations develop and deploy AI systems securely. The Code of Practice will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute (ETSI).

Facebook Twitter Pinterest LinkedIn Tumblr Email
Leave A Reply