2026-04-04 AI Security and Credentials

The AI security checklist, part 1: what 'uk security level' now means for you

If you’re an IT leader, a security manager, or even just ‘the AI person’ in your organisation, you’ve probably spent years diligently building up your ‘UK security level’. Cyber Essentials, ISO 27001, perhaps even some NCSC guidelines. You’ve got policies, procedures, and the certifications to prove it. Good on you. But I’m here to tell you, bluntly, that your comfortable world has just been upended. Because AI, in its current chaotic adoption phase, has fundamentally changed what ‘UK security level’ actually means for your business. And if you’re not seeing it, you’re already behind the curve, leaving your organisation exposed.

The grand illusion: your ‘UK security level’ isn’t what it used to be

Let’s be frank: those security frameworks you’ve worked so hard for—Cyber Essentials, ISO 27001, your meticulously crafted internal policies—they were never designed for the AI era. They’re excellent for traditional IT assets, for network perimeters, for known software vulnerabilities, and for the predictable flow of data within your established infrastructure. But AI isn’t just another piece of software; it’s a fundamentally different beast, bringing with it entirely new classes of assets, new perimeters, and a whole host of novel vulnerabilities that your current certifications simply don’t cover.

I’ve sat in enough post-mortem meetings to know that the assumption that ‘we’re ISO 27001 certified, so we’re fine’ is a dangerous one. Your framework doesn’t have a clause for ‘what happens when an employee uploads confidential client data to a public large language model because they thought it would help them summarise a report’. It doesn’t account for the subtle ways an AI model can be tricked into revealing its training data, or how a third-party AI service might silently hoover up your intellectual property. The spirit of good security practice remains, of course, but the mechanisms and scope required to achieve a genuine ‘UK security level’ have shifted dramatically. You’re not just protecting servers and endpoints anymore; you’re protecting models, prompts, training data, and the very outputs of AI systems, all of which behave in ways conventional security wasn’t built to anticipate.

The wild west of shadow AI: data leaks and credential chaos

If you think you know every piece of software your employees are using, I’ve got some bad news. The rise of AI has supercharged the phenomenon of shadow IT into shadow AI. Your teams, often with the best intentions, are experimenting. They’re signing up for free trials of generative AI tools, uploading documents, refining prompts, and integrating these services into their workflows. And they’re doing it outside of your procurement, security, and IT oversight.

Each new SaaS AI tool represents a new login, a new set of credentials, and a new unmanaged gateway for your data. This isn’t just about ‘credential sprawl’ in the traditional sense; it’s about your proprietary information, client data, and potentially even sensitive personal identifiable information (PII) being fed into third-party models. These models, in turn, might use that data for further training, effectively making your confidential information part of their public knowledge base or, at the very least, storing it in a jurisdiction you have no control over.

I’ve seen organisations crippled by this. A practitioner, trying to be more efficient, uploads a draft proposal with sensitive financial figures. A developer pastes code snippets into an AI assistant, inadvertently sharing intellectual property. The security posture becomes fragmented, visibility evaporates, and audit trails vanish. You’re left with a gaping hole where your data governance used to be, all because someone was ‘just trying to be productive’.

New beasts in the security jungle: AI’s attack surface

Traditional security is about patching known vulnerabilities and defending against common attack vectors. AI introduces an entirely new menagerie of threats that demand a different kind of vigilance. These aren’t just theoretical; they are actively being exploited:

  • Prompt Injection: This is more than just ‘jailbreaking’ a chatbot. It’s about an attacker crafting malicious instructions, either directly in a prompt or subtly embedded within data fed to the AI, to make the model perform unintended actions. Imagine an AI customer service agent revealing internal policies or confidential customer details because a cunning prompt bypassed its safety filters.
  • Model Poisoning: Adversaries can inject deliberately bad or misleading data into an AI model’s training set. This can degrade the model’s performance, introduce biases, or even create ‘backdoors’ that an attacker can later exploit. Detecting this can be incredibly difficult, as the model might still appear to function normally until a specific trigger is activated.
  • Data Exfiltration via AI Outputs: An AI model, even if not directly compromised, can inadvertently reveal sensitive information it was trained on. Attackers can craft specific prompts to ‘trick’ the model into regurgitating confidential data, or the model’s outputs might simply contain traces of sensitive information that should never have been there.
  • AI Supply Chain Vulnerabilities: Modern AI systems are rarely built from scratch. They rely on open-source components, pre-trained models, third-party APIs, and data sets. Each of these components introduces a potential vulnerability. A flaw in an upstream library or a compromised pre-trained model could propagate risks throughout your entire AI ecosystem. Think of it like a Log4j scenario, but for your machine learning pipeline.

These are complex, insidious threats that require a deep understanding of AI’s inner workings. Your existing firewalls and antivirus software simply aren’t equipped to deal with them.

Data, data, everywhere, nor any drop to drink (compliantly): GDPR and residency nightmares

For any organisation operating in the UK, UK GDPR is a constant, looming presence. And AI, with its voracious appetite for data, has turned what was already a complex compliance landscape into a minefield. The fundamental questions—where is my data going? Who owns it? Is it being used for further training?—become incredibly difficult to answer when you’re dealing with opaque, third-party AI services.

Many of the most popular and powerful AI tools are developed and hosted outside the UK, often in the US. This immediately raises significant data residency concerns. If your data, particularly personal data, is being processed by models in another jurisdiction, you need to be absolutely certain that adequate safeguards are in place and that transfers are lawful under UK GDPR. For public sector bodies or organisations in highly regulated industries, this isn’t just a best practice; it’s a non-negotiable legal requirement. The idea that ‘we’re just using it for internal analysis’ doesn’t magically make the data transfer compliant.

Furthermore, how are you classifying the data your employees are feeding into these models? What might seem like an innocuous ‘draft’ report or ‘internal’ analysis can still contain PII, commercially sensitive information, or intellectual property. The ‘black box’ nature of many AI models means it’s often incredibly difficult to audit how the AI processed that data, what it learned from it, or whether it’s been retained. This lack of transparency directly conflicts with GDPR principles like data minimisation, purpose limitation, and accountability.

From theory to practice: crafting AI-specific security policies

It’s clear then that simply adding a quick clause about ‘not uploading sensitive data to the internet’ to your existing acceptable use policy isn’t going to cut it. You need to move beyond generic security principles and start crafting AI-specific policies and controls that address these new realities head-on. This isn’t about reinventing the wheel, but about adding new spokes tailored for the AI engine.

Your policies need to focus on:

  • Data Minimisation and Purpose Limitation for AI: Clearly define what types of data can be used with specific AI tools and for what explicit purposes. No more ‘just for a bit of help’.
  • AI Tool Procurement and Vetting: Establish a rigorous process for evaluating and approving AI services, focusing heavily on their security posture, data handling practices, and compliance with UK GDPR.
  • Prompt Engineering Best Practices: Educate users on how to construct prompts that minimise risk, avoid revealing sensitive information, and prevent prompt injection attacks.
  • Model Governance: For any AI models you develop or deploy internally, establish clear ownership, version control, and regular security audits.
  • Transparency and Accountability: As much as possible, understand how your chosen AI tools process data and ensure there are clear lines of accountability for their use and outputs.

This is a journey, not a destination. It requires an iterative approach, constant vigilance, and a willingness to learn as the technology evolves.

Immediate action: where do we even start?

Feeling a bit overwhelmed? I get it. The scale of the challenge can seem daunting. But you don’t need to solve everything at once. Here are 3-5 immediate, practical steps you can take right now to start shoring up your ‘UK security level’ in the face of AI:

  1. Conduct a Shadow AI Discovery Audit: You cannot secure what you don’t know about. Use network logs, proxy data, and carefully worded employee surveys to uncover which AI tools are currently in use across your organisation. Be empathetic; remember, people are trying to be productive. The goal is discovery, not immediate punishment.
  2. Implement a Basic Data Classification for AI Policy: Start simple. A clear, unambiguous rule: no sensitive, confidential, or personal data (PII) is to be uploaded to public, third-party AI models without explicit, documented approval. This is a quick win that immediately reduces your highest risk exposure.
  3. Launch Targeted User Education: Don’t just send out a dry policy document. Explain why these rules are in place. Use real-world (anonymised!) examples of data leaks or security incidents. Empower your employees to be part of the solution, not just the problem.
  4. Establish an AI Tool Vetting Process: Stop the bleeding. Put a temporary hold on new AI tool adoption until a formal vetting process is in place. This process should evaluate data residency, security certifications, data usage policies, and contractual terms. Even a simple checklist is better than nothing.
  5. Start Small with Controlled AI Environments: If your teams are eager to experiment with AI, provide a sandboxed, controlled environment. This could be an internal, self-hosted LLM, a heavily restricted cloud instance, or even just a set of guidelines for using anonymised or synthetic data for testing. This allows innovation without exposing your core assets.

This isn’t about fear-mongering; it’s about facing reality. Your organisation’s ‘UK security level’ has new demands placed upon it by the rapid, often chaotic, adoption of AI. It’s not about throwing out everything you’ve built, but about recognising the new landscape and building on your existing foundations with AI-specific vigilance.

Don’t get caught out. This isn’t just theory; it’s what’s happening right now in organisations across the UK. Understand the immediate shifts AI brings to your security landscape, and crucially, prepare for Parts 2 and 3 of our AI Security Checklist, where we’ll delve deeper into specific controls and governance. In the meantime, I’d genuinely like to hear from you: What’s your biggest AI security headache right now? Or, what does ‘UK security level’ mean to you in this brave new AI world?