Why Your Systems Can Make It Worse - And What to Do About It

AI Anxiety Series - Part Five

In the first four parts of this series, I explored how AI anxiety spreads through teams, how that anxious energy can be reframed as creative fuel, and how leaders can regulate their own nervous systems while guiding others through uncertainty. Each of those pieces focused on people - individuals, teams, and leaders navigating what has become a profoundly disorienting moment.

There’s also a pattern I’ve noticed working with organisations across sectors and geographies: even when team members are willing, even when leaders are self-aware, even when teams have the right mindset - the organisation itself can still generate anxiety faster than anyone can metabolise it.

This final instalment shifts the lens from people to systems. Because AI anxiety is not just a human problem - it is an organisational design problem. And until we treat it as such, even the best individual and team-level interventions will struggle to gain traction against systemic headwinds.

The Trust Architecture
Let’s start with the elephant in every boardroom: people’s response to AI at work is shaped far less by the technology itself and far more by how much they trust the organisation introducing it.

Research and experience tell us that employee sentiment around AI reflects broader confidence in leadership. Workers are not just evaluating whether a tool is reliable - they are asking whether their leaders will use it to empower or to replace them (De Cremer, 2025).

Deloitte’s TrustID Index, which tracks employee sentiment daily, found that trust in company-provided generative AI fell 31% between May and July 2025. Trust in agentic AI systems - those that can act autonomously - dropped 89% in the same period (Reichheld, Brodzik & Youra, 2025). These are not marginal declines. They represent a crisis of confidence that no amount of training or upskilling will resolve if the underlying trust architecture remains fractured.

A particularly illuminating study published in the Journal of Management Studies by Vuori, Burkhard and Pitkäranta (2025) tracked the introduction of an AI tool in a Scandinavian software company and identified four distinct trust configurations among employees: full trust (high cognitive, high emotional), full distrust (low cognitive, low emotional), uncomfortable trust (high cognitive, low emotional), and blind trust (low cognitive, high emotional). Each configuration produced different behaviours - and crucially, the “uncomfortable trust” group, who intellectually understood the tool’s value but felt uneasy about its implications, began manipulating their digital footprints. This created a vicious cycle: biased data inputs degraded AI performance, which further eroded trust.

The implication is profound. Trust is not a precondition for AI adoption - it is the operating system on which adoption runs. Organisations that treat it as an afterthought, a communications problem to be managed, or a feeling to be overcome are building on sand.

Governance as Anxiety Container
In my work with organisations implementing significant change, I’ve consistently observed that clear, fair, and genuinely protective governance policies function as what clinicians might call “containers” for anxiety. When people trust that there are boundaries, oversight, and recourse, their nervous systems settle enough to engage productively. Ambiguity, by contrast, amplifies threat.

Yet governance remains one of the most underdeveloped dimensions of AI implementation. A 2025 KPMG global study found that half of the US workforce uses AI tools at work without knowing whether doing so is even allowed, and 44% are knowingly using them improperly. More than half rely on AI output without properly evaluating it (KPMG, 2025). This is not a training failure. It is a governance vacuum - and vacuums generate anxiety.

Without clear policies that define what data will be collected, how AI will be used, and what protections exist, employees are left to fill the void with their own worst-case scenarios. As the World Economic Forum emphasised in its 2025 analysis of digital trust, transparency is not just a governance principle - it is an enabler of trust. Workers need reassurance that AI will not be used to monitor, rank, or replace them without their knowledge or input (World Economic Forum, 2025).

The practical takeaway is this: organisations need to develop AI governance frameworks that address both the technical and the emotional dimensions. Policies that only regulate data usage and compliance are necessary but insufficient. Effective governance also communicates clear intent at the systems level:
· Here is what we will and will not do with this technology
· Here is how you are protected
· Here is how you can raise concerns.

This is governance as psychological safety infrastructure.

The Middle Management Squeeze
If there is one layer of the organisation that bears a disproportionate AI anxiety burden, it is middle management. These are the people responsible for implementation without having shaped strategy. They supervise people whose roles may be most affected while their own positions face significant disruption. And increasingly, they are being told that AI might make their function redundant altogether.

Rather than eliminating middle management roles, AI is fundamentally reshaping them. We’re not seeing evidence of fewer manager positions, but a marked shift in the competencies required - away from supervisory oversight and toward collaborative, people-centric skills (Sadun & Tamayo, 2025). Middle managers are increasingly needed as coaches and translators, helping employees navigate transitions, acquire new capabilities, and interpret AI-driven outputs within context that algorithms cannot grasp.

Yet many organisations are not investing in this transition. Those that do not specifically address the middle management layer often find their change efforts stalling - not because of resistance at the top or disengagement at the front line, but because the crucial connective tissue between them has been neglected, stressed, or dismantled.

The Pace Problem: When Change Models Break
Traditional change management was built for a world where organisations could control adoption timing. Plan the change, communicate the change, implement the change, stabilise the organisation. The classic Lewin model of unfreeze-change-refreeze assumed that at some point, the ice would set again. With AI, there is no refreeze. The landscape is shifting underfoot continuously, and models designed for sequential, bounded change simply do not apply.

This matters enormously for anxiety. One of the deepest human needs is for predictability - the brain is fundamentally a prediction machine. When the environment changes faster than those predictive models can update, the result is chronic uncertainty, which the nervous system reads as chronic threat.

As collectives of people, organisations need adaptive capacity: the ability to continuously absorb, process, and integrate change without requiring periods of stability in between. This is not about moving faster. It is about building systems that can remain coherent while in constant motion. The metaphor I often use is the difference between a rigid structure that can withstand a specific earthquake magnitude and a structure that flexes with any tremor. The former eventually fails. The latter endures, and could even be described as anti-fragile.

Organisational History as Amplifier or Buffer
One of the most underappreciated factors in AI anxiety is institutional memory. Not the formal kind, but the lived, embodied, whispered-in-corridors kind. Every organisation has a history with technology, and that history shapes present attitudes.

Companies with histories of using technology to intensify work, increase surveillance, or reduce headcount will face far greater anxiety than those with strong employee value propositions. The AI moment is revealing underlying organisational health with unusual clarity.

For leaders, this means the AI conversation must be preceded by - or at minimum, run in parallel with - a trust conversation. Organisations cannot bolt AI adoption onto a fractured relational foundation and expect smooth integration.

Skills as Moving Targets
The conventional response to AI disruption is reskilling. Identify the capabilities that will be needed, design programmes, upskill the workforce. There is nothing wrong with this instinct - except that it assumes we know what skills will be needed. With AI, the skill landscape is shifting as the technology develops, sometimes faster than training programmes can be designed, let alone delivered.

BCG’s 2025 AI at Work survey of more than 10,600 workers across 11 countries found that while 72% of leaders and managers use generative AI regularly, frontline adoption has stalled at 51% - a “silicon ceiling” that persists despite years of investment. The organisations extracting real value are not those with the best tools but those that have moved beyond deployment to redesign workflows end-to-end. Critically, when leaders demonstrate strong support for AI, the share of employees who feel positive about the technology rises from 15% to 55% - yet only a quarter of frontline employees currently receive that level of support (BCG, 2025).

This points to something important. The meta-skills matter more than the specific skills. Visible leadership engagement. Learning agility. Experimental mindset. Comfort with ambiguity. The ability to work alongside AI as a collaborator rather than compete with it as a rival. These are not competencies that can be taught in a two-hour workshop. They are cultural capacities that must be cultivated through how work is structured, how performance is evaluated, and how failure is treated.

Leaders, and the systems that support them, must provide clarity, guardrails, training, and support. Expecting teams to simply freestyle their way through experimentation is the organisational equivalent of throwing someone into deep water and calling it swimming lessons. The anxiety this generates is not irrational. It is a proportionate response to being set up to fail.

A Systems-Level Response: Four Organisational Interventions
Drawing together the evidence and my own experience working with organisations through AI transitions, I want to propose four systems-level interventions that address AI anxiety at its structural source rather than its symptomatic surface.

  • Build Trust Before You Build Capability
    Audit your organisation’s trust infrastructure - ideally before, but realistically while - rolling out AI initiatives. What is the existing level of psychological safety? Do people trust that leadership has their interests at heart? Have previous technology introductions been handled well or badly? The answers to these questions will predict your AI adoption trajectory more accurately than any technology readiness assessment. If trust is low, invest there first. AI implementation on a low-trust foundation is not just slower - it actively generates the resistance and anxiety that make future trust-building harder.
  • Treat Governance as an Organisational Nervous System
    Move beyond compliance-oriented AI governance toward policies that explicitly communicate intent, boundaries, and recourse. People need to know not just what the rules are, but that the rules were designed with them in mind. This means involving employees in governance design, not just informing them of its outcomes. It means making AI policies legible, accessible, and genuinely protective - not buried in legal language that nobody reads. And it means treating transparency as an ongoing conversation, not a one-time announcement, because the technology itself is continuously evolving.
  • Invest in Middle Management as Change Infrastructure
    Stop treating middle managers as implementation mechanisms and start treating them as your most critical change asset. Provide them with the skills, support, and psychological space to navigate their own anxiety while facilitating others’ transitions. This means dedicated development in coaching, sense-making, and adaptive leadership - not just AI tool training. It also means acknowledging their unique vulnerability honestly: they are being asked to champion a transformation that may fundamentally alter their own roles. Organisations that pretend this tension does not exist will find their middle management layer becoming a bottleneck rather than a bridge.
  • Replace Change Management with Adaptive Capacity
    Shift from project-based change management to building organisational adaptive capacity. This means designing structures, rhythms, and cultures that assume continuous change as the baseline rather than stability as the norm. Practically, this looks like shorter iteration cycles, more distributed decision-making, standing experimentation spaces where teams can test and learn without requiring full business cases, and feedback loops that operate in weeks rather than quarters. The goal is not to eliminate anxiety - some degree of adaptive arousal is healthy and, as Part Three of this series explored, potentially generative. The goal is to prevent chronic threat states from becoming embedded in how the organisation operates.

The Organisational Mirror
There is a phrase I return to with my clients: AI is not creating new organisational problems. It is revealing existing ones with uncomfortable clarity.

If your organisation has a trust deficit, AI will expose it. If your governance is vague, AI will test it. If your middle managers are unsupported, AI will overwhelm them. If your culture punishes failure, AI will freeze people in place.

But the reverse is also true. Organisations that have invested in relational infrastructure - psychological safety, transparent governance, genuine development pathways, and cultures of learning - are finding that AI adoption is not the existential crisis that headlines suggest. For these organisations, it is an accelerant rather than a threat, amplifying capabilities they have already built.

Throughout this series, I have posited that AI anxiety is real, legitimate, and neurologically grounded. It spreads through teams. It can be reframed as creative energy. It requires leaders to regulate their own nervous systems. And at the systems level, it demands that organisations look honestly at the structures, policies, and cultures that are either containing it or amplifying it.

The organisations that will thrive through this transition are not necessarily those with the best technology. They are those with the best trust.

References
Boston Consulting Group. (2025). AI at work 2025: Momentum builds, but gaps remain. BCG Global Survey. Retrieved from https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain

De Cremer, D. (2025). Employees won’t trust AI if they don’t trust their leaders. Harvard Business Review. Retrieved from https://hbr.org/2025/03/employees-wont-trust-ai-if-they-dont-trust-their-leaders

Edelman. (2025). Trust Barometer Flash Poll: Trust and Artificial Intelligence at a Crossroads. Cited in Fortune. Retrieved from https://fortune.com/2025/11/18/trust-missing-ingredient-in-ai-boom-edelman-barometer-poll/

KPMG. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. KPMG LLP. Retrieved from https://kpmg.com/us/en/articles/2025/trust-attitudes-and-use-of-artificial-intelligence.html

Reichheld, A., Brodzik, C., & Youra, R. (2025). Workers don’t trust AI. Here’s how companies can change that. Harvard Business Review. Retrieved from https://hbr.org/2025/11/workers-dont-trust-ai-heres-how-companies-can-change-that

Sadun, R., & Tamayo, J. (2025). What’s the future of middle management? Harvard Business Review. Retrieved from https://hbr.org/2025/04/whats-the-future-of-middle-management

Slalom. (2025). The speed of AI: Is your organization adapting or stalled at the starting line? Slalom Consulting. Retrieved from https://www.slalom.com/us/en/insights/speed-of-ai-adapting-or-stalled

Vuori, N., Burkhard, B., & Pitkäranta, L. (2025). It’s amazing - but terrifying!: Unveiling the combined effect of emotional and cognitive trust on organizational members’ behaviours, AI performance, and adoption. Journal of Management Studies. https://doi.org/10.1111/joms.13177. Retrieved from https://onlinelibrary.wiley.com/doi/10.1111/joms.13177

World Economic Forum. (2025). Re-building trust for the Age of AI. World Economic Forum Stories. Retrieved from https://www.weforum.org/stories/2025/01/rebuilding-trust-ai-intelligent-age/
