The AI Shame Problem: Why Your Employees Are Using AI in Secret

More than half your employees are using AI at work and not telling you about it. Here's what's driving it, what it's costing your organisation, and what to do about it.

April 20, 2026
9 min read

30-second post summary

A majority of employees across the world are using AI tools at work without telling their managers. This phenomenon, increasingly called "AI shame", is not primarily driven by embarrassment. It's driven by a training gap so large that most employees lack the confidence to openly use tools they've figured out on their own. Meanwhile, companies are investing in AI infrastructure while underinvesting in the people who are supposed to use it. This article breaks down what the data shows, why it matters more than most leadership teams realise, and what organisations need to do differently.

The number that should alarm every HR director

Here it is: according to a study covering 47 countries, 57% of employees admit to concealing how they use AI at work.

Other studies put it slightly lower. A Cornerstone OnDemand report found 48.8% of employees hiding their AI usage. A Security Magazine survey put it at 32%. Across every methodology, every country, the finding is consistent: a significant proportion of your workforce is using AI, and not telling you.

The twist that makes this genuinely interesting: 53.4% of C-suite leaders also hide their own AI use, despite being the most frequent users of all. The people setting the tone are hiding the same thing they expect their teams to be transparent about.

This is not a fringe problem. It's a structural one.

The Sparagus read: When employees hide how they work, organisations lose visibility into what's actually happening on the ground. You can't measure what you can't see. You can't improve what you don't know is there. And you certainly can't build a coherent AI strategy on top of a culture where people feel they need to work in secret.

Why employees are hiding their use (it's not what you think)

The instinctive explanation for AI shame is fear: employees are afraid of being seen as lazy, or of being replaced by the tools they're using. That narrative is clean and intuitive. It's also largely wrong.

Research consistently points to a different root cause: lack of training and unclear policies.

According to HR Dive's analysis, employees are staying silent primarily because they don't feel equipped. They've taught themselves how to use tools like ChatGPT or Claude through trial and error, without organisational endorsement, without guidelines, and without knowing whether what they're doing is approved. In that context, staying quiet is the rational choice.

62.6% of Gen Z workers have completed work using AI and presented it as entirely their own. That's not confidence in their AI skills. That's anxiety about being judged for using them.

The EY Agentic AI survey found that a majority of workers are actually enthusiastic about AI and want to use it more. What's blocking them is not resistance. It's a lack of communication from leadership and a lack of practical guidance on how to integrate it into their actual work.

The Sparagus read: AI adoption in most organisations isn't failing because employees are resistant. It's failing because employers have bought the tools and skipped the rollout. You cannot expect people to adopt something confidently if you haven't told them what it's for, how to use it properly, or whether it's even sanctioned in their role. The silence from employees is feedback. It's just not being heard.

The training gap is worse than most companies realise

If there's one number in this entire debate that should prompt immediate action, it's this one:

Only 7.5% of employees have received extensive training in AI tools. That figure is nearly identical to the year before, when it was 7.0%. One year of widespread AI adoption, and training penetration moved by half a percentage point.

It gets worse. Of the employees who have received some AI training, 85% cannot apply it to their actual job. The training that does exist is too generic, too theoretical, and too disconnected from the specific tasks people perform every day.

Only 44% of US employees have received any AI training at all. In a period where AI tools are being rolled out across every sector, that means the majority of the workforce is navigating this shift entirely on their own.

The result is a hidden adoption layer: employees who are using AI informally, without framework, without quality control, and without any organisational learning from what's working. Every day that passes without structured enablement is a day of compounding missed opportunity.

The Sparagus read: Companies are spending on licences and infrastructure. They are not spending proportionately on people. That imbalance is not a budget question. It's a prioritisation question. An organisation that deploys AI tools without training is building on sand. The productivity gains, the quality improvements, the competitive advantage: all of it depends on people who know how to use the tools well. That doesn't happen by osmosis.

The manager variable: one thing that changes everything

Gallup's research identifies the single strongest predictor of AI adoption within a team: whether the manager actively endorses and models AI use.

When managers support AI, team adoption reaches 79%. Without that support: 34%. That's a 45-percentage-point swing driven by one variable.

This finding has a direct implication that most AI strategies ignore: middle management is the critical lever, not the C-suite announcement, not the all-hands presentation, not the tool selection process. If managers aren't using AI themselves, demonstrating it openly, and creating space for their teams to experiment, adoption stalls regardless of what's decided at the top.

BCG's AI at Work 2025 report adds to this picture: 42% of executives say generative AI adoption is creating tension and division in their organisation. Most of that tension sits in the middle layers, where the transformation is most disruptive and where investment in change management is most often absent.

The Sparagus read: If you want to know why your AI rollout isn't gaining traction, look at your managers. Not at whether they approved the initiative, but at whether they're using the tools themselves, whether they're talking about it openly with their teams, and whether they're creating psychological safety for experimentation and failure. AI adoption is a leadership behaviour problem before it's a technology problem.

The data privacy dimension: a real constraint, not an excuse

There's a third factor driving hidden AI use that deserves to be taken seriously rather than dismissed: genuine data privacy concerns.

Many employees are working with sensitive information: client data, internal financial figures, personal records. They genuinely don't know whether it's safe to input that data into an AI tool. In the absence of a clear company policy, the cautious choice is to either not use AI, or to use it quietly with anonymised or altered data.

This is not irrational behaviour. Under the EU AI Act, with obligations for Belgian companies before August 2026, and under GDPR, there are legitimate questions about where data goes when it enters a third-party AI system. The legal and regulatory landscape around AI in employment is evolving rapidly, and employees working without guidance are right to be cautious.

The problem is that most organisations haven't answered these questions for their people. There's no approved tool list. No data classification guidance. No clear boundary between "use this freely" and "never put client data here."

The Sparagus read: A clear, practical AI usage policy is not a compliance formality. It's an enablement document. It tells employees what they can use, how, and with what data. Without it, the cautious ones hold back and the less cautious ones take risks that could create real liability. Both outcomes are bad. Writing that policy is one of the highest-leverage things an HR or legal team can do right now.

What good actually looks like

The organisations getting this right share a few consistent characteristics.

  • They have a named AI policy that specifies approved tools, acceptable use cases, and data handling rules. It's accessible, readable, and updated as the landscape changes.
  • Their managers model adoption visibly. AI use isn't something that happens behind closed doors. Senior people talk openly about how they're using tools, what works, what doesn't, and what they're still figuring out.
  • Training is role-specific, not generic. Instead of a one-hour overview of what large language models are, they invest in showing a marketing team how to use AI for content briefs, or a finance team how to automate reporting workflows. Practical, applied, immediately useful.
  • They measure what's happening. They know which teams are using which tools, how often, and to what effect. They treat AI adoption as a business metric, not an assumption.

These aren't complex or expensive interventions. They're primarily about intentionality and communication.

In short

The AI shame problem is a symptom of a much simpler underlying issue: most organisations launched AI tools without launching the cultural and training infrastructure needed to support them. The result is a workforce that's ahead of where leadership thinks it is, using tools it hasn't been taught, in ways it hasn't been guided, and staying quiet about it.

The cost isn't just unrealised productivity. It's the compounding disadvantage of an organisation that doesn't know what its people are actually doing, can't learn from it, and can't improve it.

The fix is not sophisticated. It requires a clear policy, visible leadership, role-specific training, and managers who are trusted to carry the change. The companies that invest in those four things in the next 12 months will be measurably ahead of those that don't.
