Perceptyx Blog

Key Takeaways for HR and Employee Experience Leaders from SIOP 2026

Written by Multiple Contributors | May 15, 2026

At the Society for Industrial and Organizational Psychology annual conference this year, AI was the assumed backdrop in nearly every session rather than the new topic, which freed up space for deeper discussions of specific issues. Buyers and CHROs wrestled openly with who governs AI rollouts, how organizations evaluate whether their agents are actually doing useful work, who gets access to which tools, and what happens to the well-being of employees whose efficiency gains are being absorbed by added expectations rather than reinvested in growth. Alongside that, the knowing-doing gap got named more clearly than it has in years.

Multiple sessions on listening, action planning, and people analytics arrived at the same uncomfortable conclusion: organizations have richer data than they have ever had and most cannot translate it into changed behavior at scale. The active-versus-passive listening debate also moved in a direction that worried our team, with some vendors pitching algorithmic engagement scoring as the future of employee voice. And underneath all of it, the I/O psychology field kept being told to simplify its message and get into the room earlier when AI tools are being designed.

What surprised our team most at SIOP 2026?

Bradley Wilson, Ph.D., Global Head, Workforce Insights & Innovation: “What surprised me in a positive way was how much ground simplicity has gained. I/O psychologists tend to love complex methodologies and 40-slide presentations, and one of the sessions made the point that we cannot overwhelm leaders into action. A quote that stuck with me was that bad science with a good story is more persuasive than good science that fails to tell a compelling story. The science still has to be sound. The ability to translate it into a story leaders can use is what determines whether it changes anything. I was also on a panel about humor at work, where we presented data from 3,900 respondents showing that the core principles of improv, including active listening and ensuring every voice gets heard, were strongly associated with engagement, belonging, and confidence in change.”

Brittany Head, Ph.D., Lead Behavioral Transformation Scientist: “The session I keep thinking about reframed the CHRO role as a Chief Workforce Officer. The argument was that ‘workforce’ is no longer exclusively human. It includes people, internal flex talent, synthetic labor managed by IT, and robotic labor. The Chief Workforce Officer has to work across finance, IT, and HR with one operating system instead of parallel ones. That reframe should also change how we think about employee experience and customer experience. They stop being parallel programs and become one feedback loop.”

Megan Steckler, M.A., Director, Behavioral Science Strategy: “Skills were everywhere at SIOP this year, and the Kirkpatrick model kept coming up as the foundational framework for training evaluation. What stuck with me was a speaker who gave the audience permission not to discount Level 1 metrics. We focus so much on moving up to higher evaluation levels that we forget Level 1 is signaling something important about whether the conditions for learning even existed. Training does not have to be a bad experience. The learning experience is part of the employee experience and deserves attention on its own terms.”

Jonathan Elbaz, M.B.A., Director, Leader & Workforce Transformation: “The session I went into not knowing what to expect was AbbVie’s More Better Now framework, presented by their CHRO. The premise is that high performers will drive a disproportionate share of organizational impact, so investment should be disproportionate too. More means hiring well rather than quickly and refusing to settle. Better means upgrading existing talent through deliberate development. Now means identifying critical roles and building pipelines around them. When I asked how this applies after a major acquisition like Allergan, the CHRO said they tell acquired employees from day one that they will be held to the same performance and behavior standards within 6 to 12 months. That level of intentionality from a Fortune 100 executive was the most concrete talent strategy framework I encountered all week.”

Why is AI showing up in every employee experience conversation, and what is the better question to ask?

The AI conversation has matured past whether AI matters. Instead, there are now ongoing debates over governance, design involvement, and the durable human capabilities that determine whether the technology produces value.

Stephanie Schloemer, Ph.D., Senior Workforce Transformation Consultant: “What surprised me was how grounded the CHRO conversations felt. The hype is about AI replacing workforces overnight, and the CHRO panel brought the discussion back to operational reality. The consistent message was that AI governance is everything. The organizations making progress are being deliberate about defining approved tools, setting frameworks, and creating enterprise-wide methodologies for redesigning work. It was much more disciplined than I expected.”

Megan Steckler, M.A.: “The theme that kept coming up was where HR sits in the AI development process. The traditional model is that HR receives the technology, manages the rollout, and supports adoption. Change management is critical to AI success, and we are experts in the human side of technology rollouts, but we need to be involved earlier. If HR and I/O practitioners are not in the room when tools are being designed, we are reacting to decisions instead of shaping them. There is a new table we need to earn a seat at.”

Brittany Head, Ph.D.: “I decided to be optimistic about AI because pessimism is not useful. The question is what is uniquely human in this environment, and the answer is not the skills. The answer is judgment and critical thinking. If I am making a hire ten years from now, when more Gen Z and Alpha employees are in the workforce and have never done work without AI assistance, I want to know they can write good prompts, manage agents, and apply critical judgment to the output. Experiences that demonstrate judgment will tell me more than a resume of legacy skills.”

Why is more data not producing more action?

Across sessions on employee listening, people analytics, and engagement, the same problem surfaced. Organizations are awash in data and starved for action.

Bradley Wilson, Ph.D.: “There is a very clear awareness of the knowing-doing gap. We have more data than ever, and we are not consistently able to translate that into insight, persuade non-technical audiences, or drive behavior change at scale. No one at SIOP had a clear answer for that last part. From a product and technology standpoint, I wish there had been more conversation about nudge theory and managing the environment so it becomes easier for people to do the right thing.”

Ellen Lovell, Ph.D., Principal Consultant, Leader & Workforce Transformation: “In the session Stephanie [Schloemer] and I ran, we cited research showing only 20% of managers create an action plan after receiving survey results, 14% ever return to it, and 92% of organizations fail to act on feedback and see results stagnate or decline. The conversation in the room was practical. Leaders from Hanger, Cisco, and Onvita Health all talked about how they are investing in nudges and shared accountability to make action a sustainable part of how the organization operates rather than a check-the-box activity.”

Zachary Warman, M.S., Senior Behavioral Scientist: “Action planning came up in nearly every conversation I had. No one is doing it well, no one thinks they are doing it well, and if anyone thinks they are, I did not meet them. The simplicity panel I moderated landed on a related point. If you have to explain a dashboard, or hold a meeting to explain why you are taking action on a specific finding, the dashboard is too complex and nobody will act on it. Lowering that barrier is about helping people take the next step, not just helping them understand the data.”

Bradley Wilson, Ph.D.: “One session that genuinely shifted how I think about measurement was a debate on whether ROI is dead. The argument was that ROI is incomplete. It does not account for time horizons or net present value. A $1,000 investment that returns $500 next month can be better than one that returns $2,000 in five years, yet a pure ROI calculation will always favor the latter. ROI is still useful, just not enough on its own.”
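That time-horizon point can be made concrete with a quick back-of-the-envelope calculation. The sketch below is a hypothetical illustration of the panel's argument, not something presented in the session: simple ROI ignores how long capital is tied up, while an annualized rate does not.

```python
def simple_roi(cost, profit):
    # Simple return on investment: profit relative to cost, ignoring time.
    return profit / cost

def annualized_return(cost, profit, years):
    # Convert a total return into an equivalent compound annual rate.
    return ((cost + profit) / cost) ** (1 / years) - 1

# $1,000 returning $500 profit in one month vs. $2,000 profit in five years.
print(f"Fast:  ROI {simple_roi(1000, 500):.0%}, "
      f"annualized {annualized_return(1000, 500, 1 / 12):.0%}")
print(f"Slow:  ROI {simple_roi(1000, 2000):.0%}, "
      f"annualized {annualized_return(1000, 2000, 5):.0%}")
```

Simple ROI says the five-year investment (200%) beats the one-month investment (50%), but on an annualized basis the fast investment is far superior, which is exactly the distortion the debate was pointing at.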

Where does the human belong in AI-enabled work?

The conversation about active versus passive listening intensified this year, and not all attendees at the conference landed in the same place.

Bradley Wilson, Ph.D.: “Last year I argued in favor of AI-powered listening that measures behavior rather than asking people for input. The consensus at the end of that debate was that there is real opportunity, but the act of asking people for input and giving them a sense of agency should not go away. This year I saw the field take another step toward passive measurement, with some sessions essentially proposing that organizations monitor digital communication and calculate engagement from an algorithm. The risk is that it stops feeling like listening and starts feeling like surveillance. There is a meaningful difference between having a conversation and wiretapping a house.”

Ellen Lovell, Ph.D.: “The Walmart example captured the balance well. They automated customer service calls but kept all of the call center employees. The job changed. Those employees now follow up on one-star reviews and handle the unique challenges that an automated system cannot. Headcount stayed the same, while the work shifted toward what only humans can do well. That is the question for our field. How do we advocate for the right level of human observation of the technology, and where do we redeploy people toward higher-value work?”

Brittany Head, Ph.D.: “The high-performer squeeze is going to get worse. As skill distributions get more uneven, the people who are learning AI, working faster, and keeping up are absorbing the work of two or three colleagues. They are the ones managing AI output and sorting through the slop. The pressure is climbing, and most development programs were built for a different distribution of capability.”

Zachary Warman, M.S.: “One of the themes through nearly every session was that work itself is the unit of analysis now. AI is driving a redesign of tasks and systems, and AI fluency keeps coming up as a capability set practitioners are trying to define. Most organizations are pulling in different directions on how jobs are changing; no one has a full answer yet, and the answer will probably depend on the circumstance.”

What conversations did SIOP 2026 miss?

Megan Steckler, M.A.: “Most of the AI conversation was about tools and adoption. There was very little about agent evaluation. Are the agents really having the intended impact? How well are they performing the target tasks? Who is defining what good looks like in HR contexts? The evaluation work is lagging behind the deployment work. The other gap was AI access inequity. Not everyone on the same team or in the same organization has the same access to AI tools, and the employees who are not being given access are not waiting. They are likely using personal tools that are not sanctioned and do not show up in any adoption metric. Our picture of how AI is actually being used inside organizations is incomplete.”

Stephanie Schloemer, Ph.D.: “There was a lot of discussion about agility, enablement, and reskilling, but less about the human cost of constant adaptation. We talked about capability building without enough discussion of whether organizations are pacing change in a way employees can absorb.”

Ellen Lovell, Ph.D.: “I came in hoping to see more innovation and walked away thinking the field is still talking about a lot of the same things in slightly different language. The pockets of real innovation were narrow, and I had to hunt for them. The bigger question for our field is how I/O psychology advocates for the right things to stay constant while change happens around us, and how we keep the human component where it matters while also bringing trust and confidence in technology where it is warranted. That tightrope did not get walked enough in the sessions I attended.”

Jonathan Elbaz, M.B.A.: “The disconnect that struck me most is that AI is not making people feel better. Time savings from AI are being met with added expectations and leaner staffing. Efficiency is rising, while well-being and work-life balance are not following. Share price might be. That gap deserves more attention as we move further into 2026.”

What will Perceptyx leaders do differently after SIOP 2026?

Heading into the rest of the year, the team converged on several practical moves related to AI.

Stephanie Schloemer, Ph.D.: “Stop treating AI adoption as a technology implementation. The organizations moving fastest are approaching it as a workforce transformation issue. That means being clearer about governance, more intentional about redesigning work, and more proactive about helping employees understand how their roles are evolving.”

Megan Steckler, M.A.: “I am coming home thinking about new ways to give employees agency in the listening process. We talk about closing the feedback loop, but typically that means a manager taking some general action based on results. What if the loop closed faster and the employee had control over how it closed? An onboarding survey flags that an employee has not received access to a tool they need. Instead of that going into a reporting site, the listening agent asks if it should open a ticket. That speeds the feedback loop and builds trust in the process.”

Jonathan Elbaz, M.B.A.: “Get ahead of AI rather than reacting to it. Co-pilot licenses are being distributed and experimentation is being encouraged, but the governance, the use cases, and most critically the question of what employees do with the time AI saves them are usually unaddressed. Is the capacity going toward learning, growth, or higher-value work? Or is the expectation that the employee just gets more done at the same pace? That second outcome is what hollows out the well-being conversation.”

Bradley Wilson, Ph.D.: “Invest more time in having ROI conversations earlier and with broader metrics. ROI is incomplete, but the alternative is not to abandon it. The alternative is a balanced scorecard. We can help the people we work with sharpen the questions leaders actually need answered.”

What should HR and employee experience leaders take from SIOP 2026?

SIOP confirmed that the gap between deploying technology and changing how people work is the central problem facing HR leaders in 2026. AI tools are everywhere, while behavior change at scale, the discipline that turns those tools into business outcomes, is still rare. Active listening, faster feedback loops, more deliberate talent investment, and clearer governance came up repeatedly as the moves that distinguish the organizations making real progress.

If you want to go deeper on how engagement and activation produce measurable business outcomes, our research report on employee engagement lays out the data. For a closer look at the AI adoption question that ran through every session at SIOP, our generative AI research report addresses much of what we are seeing across the workforce. And if you want to talk through any of this with our team, schedule a meeting and we can dig into the specifics for your organization.