AI and Cybersecurity: Friend, Foe, and the Future of Defence

Best practices and data management are changing rapidly, and most organizations can't keep up.


AI and cybersecurity strategy - iONAI is changing cybersecurity best practices and data management faster than most organizations can adapt. Intelligent tools are transforming how companies manage data, automate workflows, and make decisions while giving bad actors new ways to move faster, hide better, and scale attacks.

That’s why your AI and cybersecurity strategy can’t be reactive or siloed; it needs to account for new risk, new compliance pressure, and new attack paths.

The AI cybersecurity landscape

When I was in university, the internet was just starting to make waves.

  • My professors were constantly on the lookout for students using new tech to cheat on papers. 
  • Big companies were wondering how to leverage this new technology. 
  • No one knew how much to trust anything they found online.

Sound familiar? The internet may have changed a lot since then, but our reactions to new technologies haven’t

In our latest struggle with a potentially culture-changing technology:

  • Teachers are worried their students are cheating with AI. 
  • Companies are trying to jump on the trend, but don’t know how to implement it. 
  • And, of course, no one knows who or what to trust.

The last time that happened, organizations that didn’t get on board got left behind. That’s why we see companies today jumping on the bandwagon with no plan. All they know is that AI is the next best thing, and they need to leap or get pushed.

Which, in some cases, is leading to risky AI adoption.

The challenges of early implementation 

When everyone is talking about AI, it’s tempting to throw caution to the wind and start implementing it throughout your system just to stay in the conversation. However, that reactive approach is dangerous at the best of times. With a tool as powerful as AI, rushing your adoption without a plan in place could create ongoing problems down the road.

A cautious, well-thought-out implementation plan, starting with strong guardrails and zero-trust infrastructure, is the best way forward.
The problem is that while any legitimate business has to proceed cautiously, malicious actors don’t. As we’ve seen, like in the Claude AI example above, they’re moving fast, making mistakes, learning from them and failing forward.

That leaves most organizations reacting to new attacks, rather than proactively approaching risks.

What are common AI cybersecurity threats?

As attackers have more time to explore the abilities of AI, the risks will continue to grow and branch in often surprising ways. The most common AI cybersecurity threats include:

  • AI-powered phishing and social engineering
  • Deepfakes and voice cloning
  • Prompt injection attacks
  • Data leakage through AI tools
  • Malicious or poisoned training data
  • Model theft and IP exposure
  • Automated vulnerability discovery
  • Shadow AI usage (unapproved tools creating security gaps)

The risks of AI-powered hackers and bad actors

In September of 2025, Anthropic detected suspicious activity by users of their Claude AI model. Investigation revealed an attack group (possibly foreign-funded) had used Claude to almost entirely automate cyberattacks. By leveraging Claude’s agentic capabilities, the attack group was able to chain together tasks that implemented up to 90% of an attack without human input.  

This increased the team’s ability to scale their attack to, at times, inhuman levels. At one point, the AI made thousands of requests, up to multiple per second. This would have been impossible with a purely human team.

This marks a new class of attacks that will define the AI-era of cybersecurity.

The Anthropic team’s discovery of attackers running nearly fully automated attacks vividly demonstrates how AI can give a single attacker the power to orchestrate attacks that weren’t previously possible. 

That alone will change how we do cybersecurity. But it’s just the beginning.
Social engineering attacks (like those run by Scattered Spider) are already a major problem in organizations. As it becomes easier for attackers to spoof voices and even video on calls, the opportunities to twist trust into unauthorized access grow.

The risks of AI-enhanced tools

One of the greatest ironies of the new AI age is that the tools you use to empower your employees to build your organization are the same ones attackers use to tear it down.

For example:

  • You implement an AI tool that makes it easier for managers to comb your databases to find all the information they need to do their jobs.
  • Attackers use that same tool to quickly move through your databases and find information they can use to attack you.

Because the benefits and risks of AI overlap, adopting AI has to be done slowly and thoughtfully, even when the rollout is fast and chaotic.

How do you secure AI to avoid cybersecurity risks?

Marketing makes all those AI-powered security tools look like the answer to your cybersecurity issues. But no matter what miracles security companies promise, those tools won’t work as advertised if you don’t have a plan.

The first step for building that plan is to ask: ‘Why does my organization need AI?’

1. Start with why

When you start by asking why, the answer defines your implementation—both where you will use it and where you won’t.

Your why will reveal the benefits of AI that will have the biggest impact on your organization. That’s where you can start implementing. 

Even more importantly, it reveals where the risks of AI will outweigh the benefits. That’s where you start putting guardrails in place.

2. Put guardrails in place

AI can be all-pervasive if you let it. 

Once it’s in your network, it quickly catalogues everything, laying your data bare and making it easy to access. 

Or at least it does if you don’t have guardrails in place. If you’ve started with knowing why you need AI, you’ve already taken the first step to restrict any future AI implementation.

So, you can build natural guardrails that: 

  • Ensure you know what data AI has access to
  • Keep AI from having excessive permissions in your network

This keeps your AI from inadvertently taking advantage of any flaws in your access.

For example, early on in an organization, you may have a file with core processes and data that everyone has access to. That works for a small team.

But as the team grows, and the founder moves on from the day-to-day operations, that document and its open permissions often get forgotten about in some sparsely trafficked corner of the network.

Once AI is introduced, what was once a forgotten file buried in subfolders no one has opened in years is now as easy to find as yesterday’s meeting notes.

If attackers get past your firewall, they will find it and use it against you. Maybe more importantly, if a disgruntled employee finds it, they may be tempted to do the same.

3. Ensure zero trust 

If implicit trust is dangerous within traditional systems, it’s even more so with AI. It can take a human user months to crawl your system before they find the cracks of inherent trust. That’s why these small mistakes often go unnoticed.

However, AI moves through your system at a much higher speed, without stopping to eat or sleep. That means any mistakes in access will be found. This allows AI into areas of your system where the risk far outweighs the reward.

4. Implement human oversight

At its heart, AI is a black box that reacts differently every time it’s queried. Most of the time, those reactions are in line with reality, but not every time. 

So even if you follow every step above, it’s still dangerous to allow AI to operate without human oversight. Maintaining human oversight (with a strong security culture) gives your organization the ability to verify the AI’s actions and responses with real-world data.

So your security remains deterministic, explicit and auditable.

Strengthen AI and cybersecurity with iON managed services and monitoring

AI will, and is, changing our world. Ignoring it is no longer an option, but neither is heedless adoption. As an organization, you need to:

  1. Build a solid foundation for AI adoption by defining why you need it and putting in place guardrails that ensure it meets that need.
  2. Embrace tools that increase the efficacy of your defence so you don’t fall behind.
  3. At least for now, keep people at the centre.

With iON managed services and monitoring, we can help you find and dial in the tools you need to maintain your security and be there at the centre to ensure your new AI tools are working for you, instead of against you.

If you’re adopting AI, talk to one of our experts to make sure you’re taking the right steps to secure your AI tools.

From the desk of Stephen Mathezer, VP of Service Delivery & Innovation

Stephen is a seasoned security expert with over 20 years of experience in operating system and network security. He specializes in architecting, implementing, and managing security solutions, prioritizing the optimization of existing tools before adopting new technologies. With a background in both operational and architectural security, he has secured industrial control networks in the oil and gas sector and conducted extensive security assessments and penetration tests. His expertise helps organizations enhance visibility, detect threats, and reduce risk. Stephen holds multiple cybersecurity certifications and is a SANS Certified Instructor.

Similar posts