Cybersecurity in an AI-Enabled World: A New Threat Landscape
- Alternit One

- Feb 6
Cybersecurity has entered a new phase. Artificial intelligence (AI) is no longer an emerging feature on the horizon; it is now embedded on both sides of the threat landscape. Attackers are using AI to scale social engineering, generate convincing deepfakes and automate reconnaissance. At the same time, defenders are increasingly relying on AI-driven detection, response and automation to keep pace.
Against this backdrop, regulatory expectations are shifting. The UK’s Online Safety Act reframes cyber risk not simply as a technical or perimeter issue but as a matter of user protection. Firms are now expected to proactively assess, mitigate and manage the risk of harm arising from how their digital services are used internally - not just how they are attacked from the outside.
This represents a meaningful change in how cybersecurity should be governed.
From systems protection to user safety
Online safety duties extend beyond content moderation or consumer platforms. For regulated firms, they bring renewed focus to internal environments: how staff access systems, how tools are used, and how misuse - whether accidental or malicious - could lead to harm.
AI-enabled threats are particularly relevant in this context because they are both foreseeable and scalable. Deepfake audio or video can be used to impersonate senior leaders. AI-assisted phishing can target individuals with precision. Automated tooling can exploit human behaviour at speed. Viewed through an online safety lens, these are not edge cases; they are risks that must be understood, assessed and addressed.
Crucially, the emphasis is on proportionality. Firms are not expected to eliminate risk entirely, but they are expected to demonstrate that safeguards reflect the likelihood and impact of harm, and that governance frameworks are clear, active and evolving.
How AI changes the cyber risk equation
AI fundamentally alters the balance of risk. It increases the credibility of attacks, reduces the cost of execution and compresses the time between compromise and impact. This affects both sides of the risk assessment equation: likelihood rises and potential impact escalates.
Many of the most effective attacks no longer rely on technical vulnerabilities alone. They exploit trust, urgency and access - particularly where users hold elevated permissions or operate under pressure. This is why internal threats, social engineering and synthetic media have become central concerns, rather than niche topics.
AI-enabled defence still requires human oversight
Security teams are responding in kind. AI is now widely used to detect anomalies, correlate signals and automate responses that would be impossible to manage manually at scale. Managed services and continuous monitoring are increasingly relied upon to close skills gaps and maintain coverage.
However, automation alone is not a safeguard. Under online safety expectations, firms must be able to explain how decisions are made, how incidents are escalated and where accountability sits. Human-in-the-loop controls, clear ownership and documented governance remain essential, particularly when AI systems are acting on behalf of the organisation.
What proportionate safeguards look like
Strong online safety alignment is characterised by clarity rather than complexity. This includes:
● regular, documented risk assessments that explicitly consider AI-enabled misuse
● controls that align to real-world behaviours rather than theoretical threats
● monitoring that balances visibility with operational practicality.
Above all, it requires governance that works in practice. Who owns cyber risk? How are decisions escalated? How are controls reviewed as threats evolve? These questions sit at the heart of both cybersecurity resilience and user protection.
A trust obligation, not just a security function
In an AI-enabled world, cybersecurity is no longer only about defending systems. It is about protecting people - employees, clients and counterparties - from harm that arises through digital services.
Online safety duties make this explicit. Firms that respond by embedding proportionate controls, clear governance and human-led oversight will not only strengthen their security posture but reinforce trust where it matters most.
How Alternit One can help
A1 works with regulated financial firms to strengthen cybersecurity and governance in line with evolving regulatory expectations. Through a combination of human-led expertise, AI-enabled security capabilities and clear risk frameworks, we help firms assess exposure, implement proportionate controls and maintain oversight as threats evolve.
If you are reviewing your cybersecurity or compliance posture in light of online safety obligations, our team can support you with clarity, confidence and practical guidance.


