Man and robot walking a tightrope

A CISO’s Balancing Act: Artificial Intelligence in Cyber Security

🚀 Leveraging AI For Business Growth

Artificial Intelligence (AI) is changing the way organisations operate, automating processes, personalising customer interactions and streamlining internal operations at a scale never seen before. Beyond this, it is now shaping the strategic direction of firms that are prioritising expansion and bottom-line growth.

In May 2025, technology pioneer Microsoft laid off approximately 3% of its global workforce, around 6,000 roles, to offset its massive $80 billion AI investment (The Economic Times, 2025). The primary focus of these layoffs was senior engineering and management positions, due to their cost implications. This focus on more senior roles is paired with the automation of entry-level junior positions to reduce costs and boost efficiency. AI is no longer seen as a side-of-desk tool to drive operational efficiency; it is considered to be at the epicentre of an organisation’s success.

💡 Key Takeaway

AI must be adopted strategically and securely. Too much control, and business processes such as software engineering, marketing and research are hampered. Too little, and risks around software licensing and data breaches escalate very quickly.

⚠️ Risks in AI Adoption

Man in a suit treading over rocks, illustrating the risks of adopting AI without cyber awareness training

While AI drives cost savings and enables efficiency, poor implementation presents a multitude of risks, spanning intellectual property (IP), compliance, data security and adversarial threats. A case in point: in 2023, Samsung suffered a significant data breach when a staff member shared confidential semiconductor source code with ChatGPT while trying to resolve technical issues. This exposed sensitive intellectual property and led Samsung to ban public AI models within the organisation, opting instead to develop a secure, internal AI platform (Forbes, 2023).

Some other key risks organisations should consider include:

  • AI models may memorise and leak sensitive data, such as personally identifiable information (PII) and confidential data. Take Samsung for example.
  • Using third-party models or APIs exposes organisations to vulnerabilities from vendors, leading to supply chain attacks.
  • Generative models can be manipulated by malicious actors to produce harmful or unintended outputs. Just as organisations leverage AI to support operations, so do threat actors.
  • Global AI laws are evolving, and compliance requirements may shift quickly, leading to non-compliance, especially around consent and data minimisation.
  • Employees interacting with AI tools must receive comprehensive cyber awareness training to ensure safe usage practices.
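The data-leakage risk in particular can be reduced with simple technical guardrails. As a minimal sketch only (the regex patterns and function name below are illustrative assumptions, not a production data-loss-prevention solution, which would use a vetted DLP product), a prompt can be screened for obvious PII before it ever leaves the organisation’s boundary:

```python
import re

# Illustrative patterns only -- real deployments should rely on a vetted
# DLP library or service rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before a prompt
    is sent to an external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
```

A check like this can sit in an internal gateway in front of any approved AI tool, so that even well-intentioned misuse, as in the Samsung incident, leaks placeholders rather than secrets.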

🧠 The Importance of Cyber Awareness Training

To ensure safe adoption of AI, organisations must start with their people, through cyber awareness training. It may seem cliché, but security is, first and foremost, a human problem. Human error contributed to 95% of data breaches in 2024, driven by insider threats, credential misuse and user-driven errors (Mimecast, 2024).

Case Study

Recognising this in the context of AI use, the UK Government, under the National Cyber Strategy, developed a Code of Practice intended to form the basis of a global standard, highlighting the importance of employee cyber awareness training on AI and its risks. Despite this, 47% of organisations that use AI still do not have any specific AI cyber security practices or processes in place (GOV.UK, 2024).

Evidently, it is essential that organisations embed cyber awareness training into their culture, as employees may engage in insecure practices such as sharing confidential information with unapproved AI-driven platforms or falling victim to phishing attacks disguised as AI-generated prompts. Employees should be trained to understand how AI works, basic security principles, and how AI can be used to carry out cyber-attacks.

An effective cyber awareness training programme cannot just offer education modules; it should incorporate the following:

  • Live simulations that demonstrate to employees how AI can lead to data breaches if managed incorrectly.
  • Guidance on distinguishing between secure internal AI systems and insecure public AI tools.
  • Clear direction on where and when to escalate problems to information security personnel.
  • Real-life scenarios where AI has been used to drive cyber-attacks, such as deepfake voice cloning, which has already been used to carry out CEO fraud.
Case Study

A real-life example occurred in 2024, when UK design and engineering firm Arup was targeted by an AI-generated deepfake scam that cost it £20 million. The fraudsters tricked a Hong Kong employee into attending a video call with people he believed were the Chief Financial Officer (CFO) and other staff members, all of whom were deepfake recreations. During the call, they persuaded the employee to make 15 bank transfers to five Hong Kong bank accounts. The scam was only discovered when the employee followed up with the group’s headquarters. Grounding cyber awareness training in tangible examples ensures staff realise this is no longer an abstract, emerging risk – it is here and now.

🗺️ Navigating AI Regulations and Ensuring Compliance

Beyond cyber awareness training and what an organisation ‘should’ do, businesses must now prepare for legislation such as the EU AI Act and global data protection regulations, including the General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA), which call for accountability, transparency and risk management when adopting AI.

Case Study – Clearview AI

Non-compliance may not only entail large fines (running to millions of pounds) but also cause reputational damage among customers. A notable recent example is Clearview AI, an American facial recognition company providing software primarily to law enforcement and other government agencies. Following regulatory scrutiny, European regulators levied fines totalling $51.75m against Clearview for scraping biometric data without consent.

Current Regulations You Should Know

In the UK, AI regulation follows a sector-specific model grounded in five principles: safety, transparency, fairness, accountability, and contestability. These principles were outlined in the AI Regulation White Paper (2023).

The UK Data Protection Act closely mirrors the European Union’s (EU) GDPR and governs data management practices. Key parts of this regulation relevant to AI include Articles 5, 32 and 22.

  • Article 5 sets out key principles including purpose limitation, data minimisation, and accuracy.
  • Article 32 requires appropriate technical and organisational measures to ensure data security, such as encryption and access controls.
  • Article 22 is particularly relevant to AI, granting individuals the right not to be subject to decisions based solely on automated processing, ensuring human oversight and the ability to contest such decisions.
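Article 22’s human-oversight requirement can be made concrete in system design. The sketch below is illustrative only (the threshold value, field names and `requires_human_review` function are assumptions, not legal guidance): it routes solely automated adverse or low-confidence decisions to a human reviewer before they take effect.

```python
from dataclasses import dataclass

# Illustrative threshold: low-confidence model outputs also get a second
# pair of human eyes, not just adverse outcomes.
REVIEW_THRESHOLD = 0.90

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approved" or "declined"
    model_score: float  # model confidence, 0.0 to 1.0

def requires_human_review(decision: Decision) -> bool:
    """Flag any solely automated decision that is adverse to the
    individual, or low-confidence, for human review before it
    takes legal effect (Article 22-style oversight)."""
    adverse = decision.outcome == "declined"
    low_confidence = decision.model_score < REVIEW_THRESHOLD
    return adverse or low_confidence

# An automated credit refusal is always escalated to a person,
# however confident the model is.
print(requires_human_review(Decision("applicant-1", "declined", 0.97)))
```

Logging which decisions were escalated, and why, also gives the organisation the audit trail needed to demonstrate the contestability that the regulation anticipates.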

Emerging Legislation

Plant emerging from the ground

Due to the rapid growth of AI globally, laws are constantly evolving. While the UK, through the Data Protection Act, has so far avoided adopting the EU’s more rigid framework, some additional emerging regulations organisations should be aware of include:

  • The 2024 EU AI Act imposes strict obligations on high-risk applications such as biometric surveillance and credit scoring, including mandatory risk assessments, bias testing and explainability requirements, complementing GDPR.
  • In the U.S., regulatory efforts are advancing through policy action under the 2023 Executive Order on AI, which emphasises model accountability, watermarking of synthetic content and the development of safety standards through NIST.
  • Lastly, the UK AI framework covers how AI should be developed and used safely, fairly and transparently. Similar frameworks have emerged in the US to drive secure and ethical adoption of AI. These are not a single law, but rather a set of principles applied by different regulators depending on the sector.
💡 Key Takeaway

The technological and regulatory landscape around AI is evolving at a rapid pace. Organisations need to remain abreast of both to ensure they do not fall victim to cyber and compliance risks when adopting AI.

📊 Achieving AI Maturity Through Phased Adoption

True AI maturity does not happen overnight, and each organisation should approach AI adoption in line with its own context and needs. For example, a university will have a different expectation of optimal maturity than a large global bank.

Therefore, a gradual, phased strategy is recommended to allow organisations to build resilience at each stage of their own journey:

AI maturity and adoption split into phases.

Ranging from left (level 1) to right (level 5), the figure outlines the different levels of maturity and their corresponding security controls. In the ‘Initial’ phase, an immature approach is to block AI usage completely, which leads employees to find workarounds and engage in insecure practices and risky tooling. Shifting to the ‘Managed’ stage, a key element is that staff cyber awareness training is introduced early in the adoption journey. Throughout these phases, cyber awareness training must evolve in step with the different tools, controls and associated risks – for example, ongoing training on emerging risks in phase four, ‘Measured’.

Conclusion

By now, you will have realised that it is essential for an organisation’s AI strategy to adopt a security-first approach. Whether you are far along your AI adoption journey or just starting out, the following key points will enable you to engage in secure AI practices:

  • Phased adoption of AI: Implement a phased adoption of AI, starting with narrow, low-risk applications before expanding to key operations. This should be paired with relevant controls, such as role-based access or whitelisting approved software.
  • Secure infrastructure: Where possible, use private cloud environments, encrypted AI systems and internally managed AI tenants rather than relying wholly on third-party public platforms. This helps you manage the controls applied and drive down your risk.
  • Establishing AI usage policies and ongoing cyber awareness training: It is important that a clear message is shared with the organisation around what data can and cannot be sent to AI models. This should be embedded into security guidelines and policies, which should be reiterated in cyber awareness training for all staff, with a focus on new joiners and high-risk functions (such as HR and Finance).
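The whitelisting control can be as simple as an egress allow-list for AI endpoints. A minimal sketch follows (the domain names are hypothetical, and in practice this policy would live in a proxy or firewall configuration rather than application code):

```python
from urllib.parse import urlparse

# Hypothetical allow-list of AI endpoints the security team has approved.
APPROVED_AI_DOMAINS = {
    "internal-ai.example.com",
    "approved-vendor.example.com",
}

def is_request_allowed(url: str) -> bool:
    """Permit outbound traffic only to approved AI endpoints."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

print(is_request_allowed("https://internal-ai.example.com/v1/chat"))
print(is_request_allowed("https://random-chatbot.example.net/api"))
```

Pairing a control like this with the usage policy above means the policy is enforced technically, not just communicated in training.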
💡 Key Takeaway

Securing AI is not an isolated event – it is an ongoing process. Cyber awareness training is essential: investment in regular, realistic cyber drills and AI-specific training will ensure you are able to leverage AI to expand operations while bolstering your organisation with cyber resilience.

Cyber as a Service solutions have the potential to drive this cyber awareness training, providing businesses with flexible, expert-managed security solutions tailored to their evolving AI journey.


Published
May 30 - 2025