As we've explored in previous research, generative AI adoption is about far more than the technology itself. More importantly, it's about how leaders shape the employee experience around that technology. Now, as GenAI becomes further embedded in everyday work, a deeper divide is emerging: between employees who feel prepared to work in an AI-augmented workplace and those who don't. And all of it turns on trust, especially trust in the organization and its leadership.
In a recent panel study of more than 6,000 full-time employees, we found a strong link between AI confidence and employee engagement. When employees lack confidence in their ability to work effectively in an AI-augmented environment, it becomes harder for them to see a path to success, undermining their motivation and involvement at work.
Consider the striking difference: 70% of those who feel confident using AI are highly engaged, compared to just 35% of those who are not confident. In short, employees who feel prepared for an AI-enabled future are twice as likely to be engaged on the job.
Confidence in AI adoption reflects how well the surrounding environment enables people to adapt and grow alongside these new tools. To unlock the full value of both the technology and the workforce, we need to understand what builds that confidence, and what stands in its way.
Over the past year, trust in leadership has become a defining issue within organizations. As emerging technologies reshape industries (and leaders face growing pressure to drive efficiency and do more with less), employees are paying close attention. When they lack confidence in leadership's ability to adapt, or sense that changes are being made without care for people, engagement declines. Voluntary attrition rises, especially among shared services and non-frontline roles.
It's often high-performing employees who opt out first. These are the individuals leaders rely on most, often assigning them critical or complex work. But when that reliance isn't matched by clear recognition, ongoing support, or visible affirmation of their value to the organization, trust erodes and top talent begins to disengage.
Trust, as we've long known in behavioral science, rests on three pillars: competence, integrity, and care.
These are not abstract ideas. They directly influence how employees interpret change. Two people can face the same new technology, the same workload shift, or the same leadership message, yet walk away with very different experiences. For one, it may feel like an opportunity to grow, experiment, and expand their role. For another, it may feel like a threat to their stability or a signal that they're less valued.
Mindset becomes the filter. When trust, psychological safety, and recognition are present, employees are far more likely to see disruption as a chance to contribute and adapt. Without those conditions, the same disruption breeds fear, disengagement, and resistance.
Confidence doesn't come from access alone. It grows in environments where people feel safe asking questions, admitting what they don't know, and challenging decisions without fear of retribution. In organizations where employees report psychological safety, almost 70% feel confident using AI effectively. In low-safety environments, fewer than half say the same.
This underscores that adoption is not just a matter of rollout plans or training programs. It's about whether people believe they can safely learn, adapt, and contribute. Leadership behaviors play a decisive role here: modeling vulnerability, admitting mistakes, inviting dialogue, and creating space for dissent all signal to employees that their input matters.
The Vroom-Yetton decision tree — a classic framework for determining when to involve employees in decisions — offers valuable guidance for this moment. The model shows that when implementation success depends on employee commitment (as AI adoption absolutely does), leaders must involve people in the decision-making process. With AI and other disruptive technologies, organizations can't afford a top-down, "done-to-you" approach. When employees are given a voice in decisions that directly affect their work, they gain not just clarity on the tactical side of change, but also a sense of agency.
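To make that logic concrete, here is a simplified sketch of the commitment branch of the Vroom-Yetton model. The question wording, the condensed three-question flow, and the style labels below are illustrative simplifications on our part; the full model works through additional diagnostic questions.

```python
# A simplified, illustrative sketch of the Vroom-Yetton commitment
# branch. The full model asks more diagnostic questions; this condenses
# it to the logic most relevant to AI adoption.

def decision_style(
    commitment_critical: bool,   # Does success depend on employee commitment?
    accept_if_told: bool,        # Would a top-down decision still be accepted?
    goals_aligned: bool,         # Do employees share the organization's goals?
) -> str:
    if not commitment_critical:
        # Commitment isn't at stake, so the leader can decide alone.
        return "Autocratic (AI): decide with the information at hand"
    if accept_if_told:
        # People would accept a unilateral call; consultation is optional.
        return "Consultative (CI): hear individuals, then decide"
    if goals_aligned:
        # Commitment is essential and goals align: share the decision.
        return "Group (GII): decide together with the team"
    # Commitment is essential but goals may diverge: consult the group
    # so concerns surface before the leader decides.
    return "Consultative (CII): discuss as a group, then decide"

# AI adoption, as argued above, hinges on commitment, and a
# "done-to-you" rollout is unlikely to be accepted:
print(decision_style(
    commitment_critical=True,
    accept_if_told=False,
    goals_aligned=True,
))  # -> Group (GII): decide together with the team
```

Walked through this way, the model's conclusion matches our point above: when commitment is critical and a unilateral decision won't be accepted, the tree routes leaders toward shared or group decision-making rather than a top-down rollout.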
Matching conversations to what employees need is just as critical. Yes, they need information about new tools, processes, and expectations. But they also need to feel that they belong, that their concerns are heard, and that the technology will support, not diminish, their value. Adoption accelerates when both the tactical and emotional dimensions of change are addressed.
Research shows that employees who feel heard are 4.6x more likely to feel empowered to perform their best work. In the context of AI, this means creating forums for honest dialogue about fears, opportunities, and the changing nature of work itself.
Too often, AI strategy is treated as a systems initiative, while culture and behavior are left as side conversations. Yet the evidence is clear: culture sets the tone for what is valued, and associated behaviors show up in the daily actions that determine how work actually gets done. Without attention to both, even the best tools fail to take hold.
Many organizations still fall into the trap of thinking about AI through an "if you build it, they will come" mindset. With the flood of tools entering the market, we know that simply introducing technology is not enough. Employees need to see what's in it for them and how AI will support their work, not threaten it. And leaders must model the behaviors that create psychological safety and instill confidence that AI will benefit both people and the organization.
The most successful AI adoptions we've seen follow a behavioral science approach. They start with understanding employee concerns through multi-channel listening. They use behavioral nudges to help managers have the right conversations at the right times. They measure not just usage metrics but psychological indicators like trust, safety, and confidence.
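To make that last point concrete, here is a minimal sketch of what measuring both dimensions together might look like. The survey item names, the five-point scale, and the at-risk threshold below are purely illustrative assumptions, not a published instrument.

```python
# A minimal, hypothetical sketch of pairing usage metrics with
# psychological indicators. The items, scale, and threshold are
# illustrative assumptions, not a published instrument.

from statistics import mean

def readiness_snapshot(weekly_ai_sessions: int, survey: dict[str, list[int]]) -> dict:
    """Summarize adoption readiness from usage plus 1-5 Likert survey items."""
    # Average each psychological indicator's items (1 = low, 5 = high).
    indicators = {name: mean(items) for name, items in survey.items()}
    return {
        "usage": weekly_ai_sessions,
        **indicators,
        # Flag groups where tools are heavily used but trust, safety, or
        # confidence lags: usage alone would mask exactly this risk.
        "at_risk": weekly_ai_sessions >= 5 and min(indicators.values()) < 3.0,
    }

print(readiness_snapshot(
    weekly_ai_sessions=8,
    survey={
        "trust": [4, 3, 4],
        "psychological_safety": [2, 3, 2],  # low despite heavy usage
        "ai_confidence": [4, 4, 3],
    },
))
```

The design point is the last field: a team can look fully adopted on usage dashboards while the psychological indicators reveal that adoption is fragile.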
This is why at Perceptyx, we put behavioral science at the center of transformation. Our AI agents are designed to help employees make subtle, personalized behavior shifts that align individual goals with organizational priorities. Far from merely deploying technology, we empower people to adopt it in ways that transform how work gets done.
AI adoption without trust is doomed to fail. Organizations that focus solely on the technical aspects of AI implementation while ignoring the human dimensions will find themselves with expensive tools that nobody wants to use.
If your organization is struggling with AI adoption, stop looking at your training programs and start examining your trust metrics. Ask yourself: Do employees believe we'll use AI ethically? Do they trust their managers to understand this technology? Most critically, do they believe we care more about their growth than their replacement?
Ready to measure what really matters for AI adoption? Partner with us to assess and strengthen the cultural and behavioral foundations that make AI transformation sustainable. Schedule a demo to see how our own AI-powered platform measures trust, psychological safety, and AI confidence across your workforce. For more insights on the human side of technological change, subscribe to our blog and check out our FAQ to learn more.