
By Enya Miller (24-25)
Artificial intelligence (AI) is no longer just a concept for the future; it’s already integrated into many workplaces. From chatbots responding to customer inquiries to software that monitors employee productivity, AI is transforming organisational operations. For employers, these tools offer the promise of a more productive workforce. However, for employees, AI monitoring can often feel quite different: intrusive, unfair, and for some, threatening.
How does AI monitoring affect employees’ trust in their employers? Trust is what holds workplace relationships together. When it is lacking, employees can become demotivated, disengaged, or even resentful towards their organisation.
Why trust matters at work:
When individuals have trust in their employer, they are more inclined to share ideas, collaborate, and remain loyal to the organisation.
However, trust is delicate. If employees feel they are under constant surveillance, especially by a machine they don’t fully understand, they may read this as a signal that their employer doubts their competence. Studies have shown that when trust drops, employees may resist, disengage, or even leave the organisation.
Therefore, the stakes are significant: implementing AI monitoring could enhance oversight, but it also threatens to undermine the very trust that organisations need to succeed.
What I did in this study:
I conducted an online study involving over 100 working adults from a range of sectors. Rather than monitoring participants directly, I presented them with short scenarios (known as vignettes), each depicting a workplace situation.
Some scenarios featured AI monitoring (for instance, software tracking engagement, health and safety, or well-being).
Others illustrated human monitoring (where a human manager was responsible).
Participants also completed a baseline trust measure before reading any of the monitoring scenarios.
Following each scenario, participants evaluated how much they trusted their employer.
The study also asked: did they feel the AI was acting in their best interest?
What I found:
The results showed that trust was significantly lower when employees were monitored by AI than when they were monitored by a human manager or not monitored at all.
Interestingly, my study did not uncover strong evidence that the monitoring domain (engagement, well-being, or health and safety) significantly impacted trust levels. Even when AI was monitoring for positive reasons, such as ensuring employee safety, the decline in trust persisted.
However, there was one encouraging finding. Employees who felt that AI was being utilised fairly and in their best interest reported higher levels of trust. This highlights that the way organisations introduce, explain, and apply AI can significantly influence outcomes.
Why does this matter for organisations?
For managers, leaders, and HR professionals, these findings suggest:
AI monitoring can undermine autonomy. Self-Determination Theory suggests that employees need autonomy, competence, and relatedness to stay motivated. AI monitoring threatens that sense of autonomy, making individuals feel controlled rather than empowered.
Psychological safety is at risk. A workplace should be an environment where employees feel safe to take risks, make mistakes, and be themselves without fear of judgment. AI monitoring can create a sense of constant observation, resulting in pressure and anxiety.
Fairness is crucial. Procedural Justice Theory indicates that individuals are more likely to accept decisions, even unfavourable ones, if they believe the process is fair. If employees perceive AI as being implemented ethically, transparently, and with their well-being in mind, trust is far more likely to endure.
What should organisations do?
Be transparent. Employees should be informed about what is being monitored, the reasons behind it, and how the data will be utilised.
Emphasise support, not control. AI should be positioned as a tool for development, rather than solely for evaluation.
Build trust before technology. Organisations that prioritise strong relationships with their employees are better equipped to adopt new technologies without provoking resistance.
Check for fairness. Regular reviews should be conducted to ensure that AI monitoring is applied consistently, without bias, and in alignment with employees’ expectations.
Consider leadership training. Equipping leaders to empower their employees can help mitigate any negative effects of introducing AI monitoring.
What this research adds
This study contributes significantly to the ongoing discussion regarding AI in the workplace. It reveals that:
The presence of AI monitoring is enough to diminish trust, regardless of its stated purpose.
Perceived fairness and intention serve as critical buffers, helping to sustain trust even in the presence of AI.
Organisations must carefully consider not just what AI monitors, but also how it is introduced and communicated to employees.
Limitations and next steps
Like all research, this study has its limitations. The scenarios were hypothetical, meaning participants didn’t undergo real monitoring. Future research should examine AI in actual workplace environments to determine if similar effects occur. My sample size was also limited, and group imbalance might have affected statistical power.
Nevertheless, the findings offer a solid foundation for understanding employee responses to AI monitoring and guiding practical solutions.
Final thoughts
My research indicates that when AI monitoring is applied ethically, transparently, and with genuine concern for employees, it can become a supportive tool rather than a means of control.


