AI Anxiety is contagious. Here's how to inoculate your organisation
Part Two of a Five-Part Series on AI Anxiety in the Workplace
The senior manager notices it first. During a routine team meeting about implementing a new AI-powered analytics tool, she watches a senior analyst’s shoulders tighten almost imperceptibly. His questions become sharper, more probing. By the meeting’s end, the energy in the room has shifted, and half the team is quietly questioning whether their roles have a future.
This pattern is playing out in organisations worldwide. But what makes AI anxiety particularly potent isn’t just its prevalence - it’s its social nature. Unlike individual concerns about job security or technological competence, AI anxiety in teams operates through distinctive social mechanisms that amplify threat and accelerate spread. Understanding these dynamics is the first step toward transforming them.
The Contagion Mechanism
Emotional contagion - the phenomenon by which one person’s emotional state spreads to others - is among the best-documented dynamics in organisational psychology. Research by Sigal Barsade demonstrated that a single team member can influence the emotional tone of an entire group, affecting cooperation, conflict, and performance outcomes. When we observe a colleague’s furrowed brow or hear tension in their voice, many of us unconsciously mirror these signals, and our own emotional state begins to shift accordingly.
AI anxiety spreads through these same channels, but with distinctive features that intensify the effect. The uncertainty is genuinely novel - we have limited collective experience navigating technology that can mimic human cognitive work. The threat isn’t merely social embarrassment but professional survival. And the rate of change means the anxiety isn’t about a single implementation but an ongoing transformation with no clear endpoint.
Recent studies examining AI adoption have found concerning patterns. Research published in Humanities and Social Sciences Communications demonstrated that AI adoption can significantly reduce psychological safety - the shared belief that a team is safe for interpersonal risk-taking - which in turn increases depression risk among employees. When psychological safety erodes, people become less willing to voice concerns, share mistakes, or experiment openly, creating conditions where anxiety festers rather than being processed and addressed.
Amplification Pathways
AI anxiety is amplified through three primary social pathways in teams.
Status Threat Cascades
Professional identity is fundamentally social. We understand ourselves through our recognised expertise, our position in the team’s informal hierarchy, and our track record of contribution. Research on AI identity threat has identified three central predictors of anxiety: changes to work tasks, anticipated loss of status position, and challenges to professional identity itself.
When one team member perceives AI as threatening their expertise, that perception becomes visible to others through subtle behavioural shifts: increased criticism of AI outputs, performative displays of “human judgment,” withdrawal from collaborative discussions about AI implementation. These signals communicate that threat is present, which activates similar concerns in colleagues who might otherwise have felt secure.
A recent analysis in Trends in Cognitive Sciences argued that, unlike previous technologies, generative AI can demonstrate cognitive, creative, and interpersonal capabilities that challenge traditional human-machine boundaries. This isn’t just about efficiency - it’s about the very characteristics that defined knowledge work as distinctly human. The threat therefore feels existential in ways that previous automation waves did not.
Transactive Memory Disruption
Teams function effectively partly because members develop shared knowledge about “who knows what” - what researchers call transactive memory systems. This collective meta-knowledge allows teams to coordinate efficiently, matching problems with the people most likely to solve them and reducing cognitive load by distributing expertise.
AI disrupts these established patterns of recognised competence. When a junior team member can suddenly produce analysis that rivals a senior colleague’s output, the team’s mental map of expertise becomes unreliable. People become uncertain not just about their own skills but about how to evaluate each other. A study of ICU physicians and nurses working with AI found that accessing information from AI agents differs significantly from accessing it from human colleagues. Integrating AI into a team’s transactive memory therefore requires fundamentally different coordination mechanisms from the ones teams have developed for human knowledge-sharing.
This cognitive disorientation creates fertile ground for anxiety. If I no longer know who the expert is, or whether expertise itself has shifted meaning, my own place in the team becomes uncertain, and that uncertainty spreads as team members observe each other’s confusion.
The Transparency Paradox
Psychological safety enables people to voice concerns openly without fear of negative consequences. In many contexts, this openness is adaptive. It surfaces problems early and allows teams to address them collectively. But with AI, transparency can paradoxically amplify anxiety rather than resolving it.
When one team member articulates a fear - “What if this tool makes half our team redundant?” - that fear becomes available as a cognitive frame for others who hadn’t yet articulated it themselves. The very act of naming the concern can crystallise vague unease into specific anxiety. Meanwhile, teams without psychological safety suppress the conversation entirely, which doesn’t eliminate the anxiety but drives it underground, where it manifests as passive resistance, cynicism, or quiet disengagement from AI initiatives.
Neither extreme serves teams well. What’s needed is not unrestricted anxiety-sharing or enforced silence but a structured approach to processing collective uncertainty - something I’ll address in detail in Part Three of this series.
The Pace Differential Problem
Beyond these three primary pathways, teams face an additional amplification dynamic: adoption pace differentials. Team members inevitably engage with AI at different speeds based on their roles, prior technological comfort, and individual risk tolerance. This creates social fractures that intensify anxiety in both directions.
The enthusiastic early adopter, visibly excited about new capabilities, may inadvertently signal to slower-moving colleagues that they’re falling behind. Their very proficiency becomes a status marker that creates pressure. Meanwhile, the person who raises questions or concerns about AI implementation risks being labelled as “resistant” - which discourages the very critical thinking that effective AI integration requires.
Research on AI resistance in the workplace has conceptualised it as a three-dimensional phenomenon embodying employees’ fears, inefficacies, and antipathies toward AI. When teams lack explicit frameworks for managing adoption pace, these dimensions fragment the group precisely when collective adaptation is most needed.
Techno-Identity Threat
Organisational psychologists have studied technology-related stress for decades under the rubric of “technostress.” The classic model identifies five stressors: techno-overload (increased workload), techno-invasion (blurred work-life boundaries), techno-complexity (effort required to learn new systems), techno-insecurity (threat to job security), and techno-uncertainty (constant change).
AI triggers all five simultaneously while adding a sixth dimension the original model didn’t anticipate: techno-identity threat. When a technology can perform cognitive tasks that were previously considered uniquely human, it doesn’t just change what we do - it challenges who we understand ourselves to be. The accountant’s identity isn’t merely “I work with numbers” but “I see patterns others miss”; the lawyer’s identity isn’t merely “I produce accurate legal advice” but “I find the right words.”
Research into workers’ coping strategies has identified five responses to AI-induced psychological threats: direct resolution (acquiring AI skills), symbolic self-completion (emphasising irreplaceable human qualities), dissociation (mentally separating oneself from AI users), escapism (avoiding AI contexts), and fluid compensation (shifting identity to other domains). Teams where members pursue conflicting strategies - one colleague doubling down on skill development while another dismisses AI as “just a tool” - experience additional friction that compounds existing anxiety.
The Neuroscience of Social Threat
Understanding why AI anxiety spreads so readily requires looking at the brain systems involved. The neural circuitry that processes social threat evolved in environments where group exclusion was genuinely dangerous - social rejection could mean death. These systems don’t distinguish between physical danger and professional threat; the brain processes threats to status and social standing similarly to how it processes physical injury.
When someone perceives AI as a status threat, their sympathetic nervous system activates, releasing cortisol and adrenaline. These physiological changes are visible to colleagues through micro-expressions, body language, and vocal tone - the same signals that drive emotional contagion. The observer’s brain, detecting these threat cues, begins producing its own stress response. This happens largely below conscious awareness; we catch anxiety before we understand why.
Critically, the brain’s threat response also narrows cognitive processing. Under threat, we become more vigilant for danger signals and less able to process complex information or consider alternative perspectives. This creates a vicious cycle: anxiety spreads, cognitive flexibility decreases, and teams become less capable of the adaptive thinking that successful AI integration requires.
The Counter-Contagion Question
If anxiety can spread through teams, can positive states spread equally effectively? The emotional contagion research suggests yes, but with important caveats.
Studies have found that positive emotional climates can enhance creativity, decision quality, and collaborative behaviour. Leaders who consistently display positive emotions can shift team affect toward more constructive states. However, the research also suggests that negative emotions may be more contagious than positive ones - an asymmetry that has implications for how organisations approach AI implementation.
Forced positivity, or requiring people to “get excited about AI”, tends to backfire, increasing cynicism and resistance. What works instead is creating conditions where authentic positive engagement can emerge: psychological safety for experimentation, structured opportunities for skill development, and honest acknowledgment of genuine uncertainties. The goal isn’t to eliminate anxiety but to create environments where anxiety can be processed productively rather than amplified destructively.
Implications for Organisations - What Leaders Can Do
The social nature of AI anxiety has practical implications for how organisations approach implementation. Treating AI adoption as primarily a technology initiative - focused on tools, training, and processes - misses the emotional and relational dynamics that determine whether people can actually engage with new capabilities.
Several principles emerge from the research:
1. Psychological safety assessment should precede AI rollouts. Teams with low psychological safety will struggle to voice concerns, experiment openly, or learn from mistakes - all essential for effective AI integration. Measuring and addressing psychological safety first creates better conditions for technology adoption.
2. Adoption pace needs explicit management. Rather than allowing pace differentials to emerge organically and create social friction, organisations can structure exploration phases, create learning cohorts, and establish norms that legitimise different adoption speeds for different roles and contexts.
3. Identity conversations must happen alongside skills training. Teaching people to use AI tools without addressing what AI means for their professional identity leaves the deeper anxiety unprocessed. Organisations that create space for explicit discussion of changing roles, evolving expertise, and ongoing human contribution build more resilient adaptation.
4. Leaders’ emotional states matter disproportionately. Given their visibility and influence, leaders’ anxiety about AI will spread rapidly through their teams. Before asking leaders to guide others through AI transitions, organisations need to address leaders’ own uncertainties, concerns, and skill gaps - a theme I’ll return to later in this series.
Where This Leads
Understanding that organisational AI anxiety is fundamentally a social phenomenon - spreading through teams via emotional contagion, status signalling, and identity threat - reframes the challenge. The question isn’t simply how to reduce individual anxiety but how to create team conditions where anxiety can be acknowledged, processed, and transformed.
In the next part of this series, I’ll explore how teams can harness the physiological arousal that anxiety creates - redirecting it from threat response to creative energy. Research on stress reappraisal suggests that the bodily sensations of anxiety and excitement are nearly identical; the difference lies in interpretation and context. Teams that learn to channel their collective nervous energy toward experimentation and growth may find that AI anxiety becomes a resource rather than an obstacle.
For now, the starting point is recognition: the anxiety rippling through your team isn’t a collection of individual problems to be solved with training programmes and reassurance. It’s a social phenomenon with its own dynamics, its own amplification pathways, and ultimately, its own potential for transformation.
References
Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47(4), 644–675. https://doi.org/10.2307/3094912
Barsade, S. G., Coutifaris, C. G. V., & Pillemer, J. (2018). Emotional contagion in organizational life. Research in Organizational Behavior, 38, 137–151. https://doi.org/10.1016/j.riob.2018.11.005
Brooks, A. W. (2014). Get excited: Reappraising pre-performance anxiety as excitement. Journal of Experimental Psychology: General, 143(3), 1144–1158. https://doi.org/10.1037/a0035325
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Golgeci, I., Ritala, P., Arslan, A., McKenna, B., & Ali, I. (2025). Confronting and alleviating AI resistance in the workplace: An integrative review and a process framework. Human Resource Management Review, 35(2), Article 101075. https://doi.org/10.1016/j.hrmr.2024.101075
Kim, B.-J., Kim, M.-J., & Lee, J. (2025). The dark side of artificial intelligence adoption: Linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership. Humanities and Social Sciences Communications, 12, Article 704. https://doi.org/10.1057/s41599-025-05040-2
Hermann, E., Puntoni, S., & Morewedge, C. K. (2025). GenAI and the psychology of work. Trends in Cognitive Sciences, 29(9), 802–813. https://doi.org/10.1016/j.tics.2025.04.009
Lewis, K., & Herndon, B. (2011). Transactive memory systems: Current issues and future research directions. Organization Science, 22(5), 1254–1265. https://doi.org/10.1287/orsc.1110.0647
Mirbabaie, M., Brünker, F., Möllmann Frick, N. R. J., & Stieglitz, S. (2022). The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electronic Markets, 32(1), 73–99. https://doi.org/10.1007/s12525-021-00496-x
Schmutz, J. B., Meier, L. L., & Manser, T. (2023). Human-AI teaming: Leveraging transactive memory and speaking up for enhanced team effectiveness. Frontiers in Psychology, 14, Article 1208019. https://doi.org/10.3389/fpsyg.2023.1208019
Tarafdar, M., Tu, Q., Ragu-Nathan, B. S., & Ragu-Nathan, T. S. (2007). The impact of technostress on role stress and productivity. Journal of Management Information Systems, 24(1), 301–328. https://doi.org/10.2753/MIS0742-1222240109