The numbers that set the scene
Start here: the global cost of cybercrime is projected to reach $10.5 trillion annually in 2025, according to Cybersecurity Ventures. That figure is larger than the GDP of every country except the US and China.
At the individual organisation level, IBM's 2025 Cost of a Data Breach Report puts the average breach cost at $4.44 million globally. In the US, that number hits a record $10.22 million. For breaches specifically linked to AI-driven attacks, the average rises to $5.72 million, roughly 29% above the global average.
And cybersecurity spending is responding: Gartner forecasts global information security spending will reach $213 billion in 2025 and $240 billion in 2026, a 12.5% year-on-year increase. The primary growth driver in both years: AI, used both offensively and defensively.
The Sparagus read: These are global figures, and they can feel abstract. But the mechanisms behind them are not abstract at all: phishing at scale, automated exploitation, insider data leakage through AI tools. They're happening in organisations of every size, including yours.
Belgium specifically: the threat is not theoretical
The Centre for Cybersecurity Belgium (CCB) reported a 165% increase in cyberattacks in Belgium in 2025, reaching an average of 275 attacks per day. The CCB recorded 635 incident notifications in 2025, nearly 70% more than the year before, with cyber-related incidents up 58% year-on-year.
This acceleration is not a blip. The CCB itself attributes it to increased automation and AI adoption by attackers. The average time between a vulnerability being disclosed and being actively exploited has dropped to five days, and nearly a third of vulnerabilities are exploited within 24 hours of discovery.
For Belgian companies navigating NIS2 compliance, the regulatory pressure is also intensifying. Essential and important entities were required to submit their CyberFundamentals self-assessment or ISO 27001 documentation to the CCB by April 2026. According to KPMG Belgium's Cyber Study 2025, many mid-market Belgian companies are approaching this compliance requirement without adequate internal security architecture.
The Sparagus read: The 275 attacks per day figure is not about nation-state actors targeting government infrastructure. The majority of those attacks target ordinary organisations, including professional services firms, mid-size manufacturers, and consulting businesses. The question isn't whether your organisation will face an attempt. It's whether your defences are proportionate to the current threat level.
The three attack types rising fastest
1. AI-generated phishing: volume and precision combined
82.6% of phishing emails are now created using AI, up 53.5% from the previous year. The second half of 2024 saw a 202% surge in phishing emails overall, with credential phishing attacks rising 703%.
What makes AI-generated phishing qualitatively different is personalisation at scale. Where traditional phishing relied on mass generic messages, AI tools can now generate contextually accurate, individually tailored messages in seconds, referencing real colleagues, real projects, and real company language scraped from public sources.
2. Deepfakes and voice cloning: the identity problem
Deepfake files grew from 500,000 in 2023 to 8 million in 2025, according to Deepstrike. In Q1 2025 alone, more deepfake incidents were recorded than in the entirety of 2024.
Voice cloning attacks, primarily used for business email compromise and fraudulent payment authorisations, increased 81% in 2025. Between January and September 2025 alone, AI-driven deepfakes caused over $3 billion in losses in the US.
The risk model has changed. Verification used to rest on recognising a voice or a communication style. That assumption is no longer safe.
3. Automated exploitation: speed as the new weapon
AI-powered cyberattacks increased 72% year-on-year globally, with automated scanning activity reaching 36,000 scans per second. The consequence: the window between a system being vulnerable and being exploited has collapsed from weeks to hours.
87% of organisations report having been targeted by AI-powered attacks in the last 12 months. This is no longer a specialised threat. It's the baseline.
The threat inside the building: shadow AI
Here is the finding from IBM's 2025 Cost of a Data Breach Report that deserves far more attention than it receives:
20% of organisations experienced data breaches directly linked to shadow AI, meaning AI tools employees are using without IT knowledge or approval. Those breaches cost an average of $4.63 million, $670,000 more than breaches at organisations with little or no shadow AI.
The scale of the underlying behaviour is just as striking.
The Sparagus read: The biggest cybersecurity gap in most Belgian organisations right now is not a missing firewall or an unpatched vulnerability. It's the absence of a clear, practical AI usage policy. Every employee who pastes a client brief, a financial projection, or an HR document into an unsanctioned AI tool is potentially exfiltrating sensitive data to third-party infrastructure your IT team has never evaluated. This is happening today, at scale, in organisations that consider themselves security-conscious.
Why SMEs remain disproportionately exposed
75% of cyberattacks target SMEs and micro-businesses. Of SMEs that experience a serious attack, 60% close within six months. These figures are getting harder to explain away as the tools available to attackers become cheaper and more powerful.
According to Jimber's State of Mid-Market Cybersecurity in Belgium 2026:
- Most mid-market Belgian companies have no formal AI usage policy
- Security budgets are increasing (SMEs now allocating 10 to 15% of IT spend to cybersecurity) but training investment remains disproportionately low
- NIS2 compliance is creating structural pressure to formalise security posture, but many are approaching it as a documentation exercise rather than an operational one
The irony of 2026 is this: AI tools make certain security operations more accessible than ever, yet the shadow AI risk those same tools introduce creates new attack surfaces that most SMEs are not managing.
What a credible response looks like in 2026
Organisations with extensive AI and automation in their security stack pay $3.62 million per breach versus $5.52 million without, a 34% reduction worth $1.9 million per incident on average, according to IBM. The ROI case for AI in defence is clear. The prerequisite is governance.
The Sparagus read: You cannot talk about AI in your organisation without talking about cybersecurity, and you cannot talk about cybersecurity in 2026 without accounting for AI. They are the same conversation now.
What that looks like in practice:
- An AI usage policy that defines which tools are approved, with what data, and by whom. Not a 40-page legal document, but a practical, readable guide that employees can actually follow (a minimal sketch covering this point and the next follows this list).
- Data classification that tells employees which information can and cannot be shared with external AI systems. Client data, financial projections, HR records: these categories need explicit rules.
- Manager-level accountability for AI tool adoption in their teams. Security cannot live only in IT. Manager behaviour shapes team behaviour.
- Regular training that is specific and scenario-based, not generic annual compliance sessions. Employees need to recognise AI-generated phishing, understand why voice verification is no longer sufficient, and know what to do when they're unsure.
- Visibility tools that give IT a real picture of which AI applications are in use across the organisation. You cannot govern what you cannot see (a sketch of one lightweight discovery approach also follows this list).
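To make the policy and data classification points concrete, here is a minimal sketch of what an AI usage policy can look like once it is expressed as something checkable rather than a static document. Everything in it is illustrative: the tool names, the data classes, and the check_usage helper are assumptions for the sake of the example, not references to any real product.

```python
# Illustrative only: tool names, data classes, and ceilings are hypothetical.
from enum import Enum


class DataClass(Enum):
    PUBLIC = 1        # published material, marketing copy
    INTERNAL = 2      # internal notes, non-sensitive drafts
    CONFIDENTIAL = 3  # client data, financial projections
    RESTRICTED = 4    # HR records, credentials, legal matters


# Approved tools mapped to the most sensitive data class each may receive.
# A tool absent from this table is unapproved by default.
APPROVED_TOOLS: dict[str, DataClass] = {
    "enterprise-chat": DataClass.CONFIDENTIAL,  # vendor contract and DPA in place
    "public-chatbot": DataClass.PUBLIC,         # no data processing agreement
    "code-assistant": DataClass.INTERNAL,       # internal, non-client code only
}


def check_usage(tool: str, data: DataClass) -> tuple[bool, str]:
    """Return (allowed, reason) for sending `data` to `tool`."""
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return False, f"'{tool}' is not an approved AI tool"
    if data.value > ceiling.value:
        return False, f"'{tool}' is approved only up to {ceiling.name} data"
    return True, "allowed under current policy"


if __name__ == "__main__":
    cases = [
        ("enterprise-chat", DataClass.CONFIDENTIAL),
        ("public-chatbot", DataClass.CONFIDENTIAL),
        ("shadow-ai-app", DataClass.INTERNAL),
    ]
    for tool, data in cases:
        allowed, reason = check_usage(tool, data)
        print(f"{tool} + {data.name}: {'OK' if allowed else 'BLOCKED'} ({reason})")
```

The value is not in the code itself but in the discipline it forces: every approved tool gets an explicit data ceiling, and anything not listed is denied by default.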
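For the visibility point, the core idea behind most shadow-AI discovery is simple: match outbound traffic against a list of known AI service domains and see who is talking to what. The sketch below assumes a CSV proxy log with timestamp, user, and destination_host columns and a hypothetical domain list; in practice this data would come from a secure web gateway, CASB, or DNS logs.

```python
# Illustrative shadow-AI discovery: tally requests to known AI service
# domains per user. The log format and domain list are hypothetical.
import csv
from collections import Counter

# Hypothetical domains associated with public AI tools.
AI_DOMAINS = {
    "chat.example-ai.com",
    "api.example-llm.io",
    "assistant.example-tool.net",
}


def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI domains, keyed by (user, domain)."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        # Expects columns: timestamp, user, destination_host.
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits


if __name__ == "__main__":
    for (user, host), count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

The output is only an inventory, but an inventory is exactly what the governance conversation needs to start from.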
In short
AI has fundamentally changed the economics and mechanics of cyberattacks. The cost of launching a sophisticated, personalised, high-volume attack has collapsed. The speed of exploitation has accelerated. And a significant portion of the risk is now originating inside organisations, from employees using AI tools without oversight.
Belgium is facing 275 attacks per day and rising. The regulatory environment is tightening under NIS2. And the data is unambiguous: organisations without AI governance frameworks are paying a measurable premium when things go wrong.
The answer is not to ban AI internally. It's to govern it. The companies that do this well will be both more secure and more productive than those that treat cybersecurity and AI adoption as separate problems. They are not separate. They never were.