At Perceptyx, we pride ourselves on being at the forefront of innovation through the ethical application of artificial intelligence (AI), including machine learning (ML), natural language processing (NLP), and generative AI (GenAI). These technologies power features across our platform while upholding rigorous privacy and security standards.
As organizations increasingly turn to AI-powered solutions to understand and improve their employee experience, transparency becomes crucial. Leaders need confidence that AI technology enhances rather than compromises their employee listening efforts. This means understanding not just what AI can do, but how it does it — and most importantly, how it protects the people behind the data.
Transparency, security, and fairness are central to our AI strategy. The purpose-built AI we deploy undergoes rigorous, ongoing evaluation to ensure consistent, inclusive performance across the diverse use cases it is uniquely suited to address. We apply thorough vetting to our large language models (LLMs) and generative AI features to minimize risks such as hallucinations, bias, and unintended data exposure. Our goal is to empower organizations with advanced technologies that not only perform reliably but also respect and safeguard their data at every step.
Understanding where AI operates within your employee experience platform is essential for building confidence in its application. At Perceptyx, we use several distinct AI methods across our products and features.
AI-generated insights are aggregated across datasets, enabling users to analyze trends without identifying individual responses. These models are designed to surface patterns, not to single out or interpret any one comment or employee. Insights from LLM-based products are similarly intended for high-level analysis, not individual employee decisions. Comment Copilot aggregates and summarizes feedback across multiple responses, ensuring insights remain broad and anonymous.
AI Coach, which provides personalized guidance, has guardrails in place to handle attempted misuse, such as requests for information about specific employees or for advice on employment-related decisions. However, customers are encouraged to train their employees and administrative users on the customer's own standards for the use of AI tools.
This question represents one of the most significant concerns organizations have about AI: whether their sensitive employee data is being used to improve models that could benefit competitors or compromise confidentiality. The answer is definitively no: We do not use customer data to train our models.
For ML and NLP training, we use separate proprietary datasets along with anonymized open-source and workforce-industry datasets. For GenAI/LLMs, our in-house experts refine responses and engineer prompts to align with workforce-industry best practices and perspectives.
We use a third-party sub-processor for our generative AI features, such as Comment Copilot and AI Coach, and these features likewise do not train on our customers' data. This commitment to data sovereignty ensures that your organization's insights remain your own while still benefiting from cutting-edge AI capabilities.
At Perceptyx, security isn't just a feature; it's our foundation. Our program is architected and led by an industry veteran with over two decades of experience in strategic security leadership and hands-on incident response. This expertise, which spans from building defensible architecture to leading in a crisis, is shared across our entire team of certified professionals, who bring a deep understanding of both comprehensive security management and the offensive mindset needed to safeguard data. We strive to embed this vigilance into every stage of our development process and validate our practices through continuous testing and independent audits, including our SOC 2 Type 2 and ISO 27001 certifications.
All of our AI features and models respect the minimum n-size reporting requirement (the minimum number of responses required before summary data is displayed for a group) that is implemented as part of our platform's reporting practices. For our generative AI comment summarization tools, we apply a Named Entity Recognition (NER) mask to identify and remove names as part of data preparation, before any data is sent to third-party APIs.
This process securely transmits the masked and cleaned data through the API for tagging and summarization. While designed with safeguards, it may not completely eliminate all risks. However, because names are masked (and our guidance to respondents is to avoid including names in comments), the residual risk applies to only a small subset of responses, and the overall risk remains minimal.
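For readers who want a concrete sense of what these safeguards can look like in code, the sketch below pairs a minimum n-size check with a spaCy-based name mask. It is a minimal illustration under assumed details: the threshold value, the spaCy model, and the placeholder token are examples, not our production pipeline.

```python
# Illustrative sketch only: the threshold, model, and placeholder are assumptions.
import spacy

MIN_N_SIZE = 5  # example minimum group size before summary data is shown

nlp = spacy.load("en_core_web_sm")  # small English pipeline with an NER component


def suppress_small_groups(group_responses: dict[str, list[str]]) -> dict[str, list[str]]:
    """Drop any group whose response count falls below the minimum n-size."""
    return {group: responses for group, responses in group_responses.items()
            if len(responses) >= MIN_N_SIZE}


def mask_person_names(comment: str) -> str:
    """Replace spans tagged as PERSON entities with a placeholder before the
    comment is sent on for tagging and summarization."""
    doc = nlp(comment)
    masked = comment
    # Work backwards through the entities so character offsets stay valid.
    for ent in reversed(doc.ents):
        if ent.label_ == "PERSON":
            masked = masked[:ent.start_char] + "[NAME]" + masked[ent.end_char:]
    return masked


if __name__ == "__main__":
    print(mask_person_names("Maria Lopez was a great mentor during onboarding."))
    # expected roughly: "[NAME] was a great mentor during onboarding."
```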
Our open chat interfaces, such as AI Coach, use whatever information users choose to include in their messages. That information could include PII, but what to share is entirely up to the user interacting with the tool. We recommend being thoughtful about the information you choose to enter.
Bias prevention is not a one-time effort but an ongoing commitment woven throughout our AI development process. The Perceptyx Data Science Team regularly assesses models to minimize bias and maximize fairness. If issues with bias are identified in input datasets, our ML Engineers implement pre-processing data augmentations (such as pronoun replacement) and in-processing procedures (such as over/under sampling) to address demographic imbalance.
This systematic approach ensures that our AI tools provide equitable insights across diverse employee populations, helping organizations build more inclusive workplaces rather than perpetuating existing biases.
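To make the idea concrete, the sketch below shows what naive versions of the pre-processing steps described above could look like: a pronoun-swap augmentation and a simple over-sampling routine. The pronoun map, the balancing strategy, and the data shapes are illustrative assumptions rather than our production pipeline.

```python
# Illustrative sketch only: the pronoun map and balancing strategy are assumptions.
import random

# A real implementation would disambiguate "her" (object vs. possessive);
# this toy map ignores that for brevity.
PRONOUN_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "her": "him", "hers": "his",
}


def pronoun_swap_augment(text: str) -> str:
    """Create a counterfactual copy of a comment with gendered pronouns swapped,
    so models see both variants of otherwise identical training examples."""
    return " ".join(PRONOUN_SWAPS.get(token.lower(), token) for token in text.split())


def oversample_to_balance(examples: list[dict], group_key: str) -> list[dict]:
    """Naively over-sample under-represented groups until every group matches
    the size of the largest group."""
    by_group: dict[str, list[dict]] = {}
    for example in examples:
        by_group.setdefault(example[group_key], []).append(example)
    target = max(len(group) for group in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced
```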
Despite the sophisticated technology behind our AI capabilities, our platform is designed to be intuitive, and the AI-generated insights are presented in a clear, easy-to-understand format. HR teams don't need data science backgrounds to benefit from AI-powered employee experience insights.
The complexity of ML and NLP happens behind the scenes, while users interact with straightforward dashboards, clear recommendations, and actionable insights that directly support their employee experience goals.
For comment analytics, our AI models can detect and categorize sensitive topics, helping HR teams address workplace concerns. However, results are always aggregated and depersonalized to protect employee confidentiality. AI Coach is also trained to recognize sensitive topics, such as those on our "hot topics" list, and to direct employees to speak to HR or a trusted leader when relevant.
This approach ensures that AI enhances rather than replaces human judgment when it comes to sensitive workplace issues.
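As a rough illustration of this kind of routing, the sketch below checks detected topics against a hot-topics list and returns a redirect message when there is a match. The topic list, the classifier interface, and the wording are illustrative assumptions, not the actual AI Coach logic.

```python
# Illustrative sketch only: topics, classifier, and wording are assumptions.
from typing import Callable, Optional

HOT_TOPICS = {"harassment", "discrimination", "retaliation", "safety"}

REDIRECT_MESSAGE = (
    "This sounds like something best raised with HR or a trusted leader. "
    "I can help you prepare for that conversation."
)


def route_message(message: str,
                  classify_topics: Callable[[str], set]) -> Optional[str]:
    """Return a redirect message when any detected topic is on the hot-topics
    list; otherwise return None so the normal coaching flow continues.
    `classify_topics` stands in for the real topic classifier."""
    if classify_topics(message) & HOT_TOPICS:
        return REDIRECT_MESSAGE
    return None
```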
Enterprise customers want clear administrative control over AI functionality, and we provide exactly that. Our generative AI features are entirely optional and available only through opt-in. Organizations can choose which AI capabilities align with their comfort level and compliance requirements.
This control ensures that each organization can adjust its AI usage to match its needs, policies, and cultural considerations.
AI-generated responses may occasionally include inaccuracies or hallucinations. While this cannot be entirely eliminated, we take several steps to minimize risk and enhance quality:
We leverage state-of-the-art models. Comment Copilot, for example, uses carefully selected and tested language models to deliver high-quality, reliable responses tailored to our use cases. Users can pre-filter data (by themes and demographics) and use pre-set prompts in Comment Copilot to keep AI responses relevant and aligned with their needs.
AI Coach is tuned to draw on our curated body of behavioral science research and nudges when generating its answers, which helps guard against irrelevant or inaccurate responses.
All of our models are equipped with automated monitoring systems, and a human-in-the-loop review is triggered whenever a potential issue is detected. We've also established an internal AI Governance Committee, composed of cross-functional leaders from across the company, charged with ensuring that our AI solutions are developed safely, responsibly, and in line with industry standards.
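A simplified picture of how automated checks can hand off to human review is sketched below: each generated response passes through a set of checks, and anything flagged is queued for a person to look at. The specific checks and the queue structure are illustrative assumptions, not our production monitoring stack.

```python
# Illustrative sketch only: checks and queue are assumptions, not production code.
from typing import Callable, Dict, List


class ReviewQueue:
    """Collects flagged outputs for human-in-the-loop review."""

    def __init__(self) -> None:
        self.items: List[dict] = []

    def escalate(self, output: str, reasons: List[str]) -> None:
        self.items.append({"output": output, "reasons": reasons})


def gate_output(output: str,
                checks: Dict[str, Callable[[str], bool]],
                queue: ReviewQueue) -> bool:
    """Run automated checks on a generated response; escalate to human review
    when any check fails. Returns True only if every check passes."""
    failed = [name for name, check in checks.items() if not check(output)]
    if failed:
        queue.escalate(output, failed)
        return False
    return True


# Example usage with toy checks.
queue = ReviewQueue()
checks = {
    "non_empty": lambda text: bool(text.strip()),
    "length_ok": lambda text: len(text) < 2000,
}
gate_output("Summary of engagement themes...", checks, queue)
```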
All AI features are designed to work seamlessly with your existing data, and no additional setup is required. The only exception is for optional generative AI features, which require explicit opt-in. Once enabled, data flows are fully automated, requiring no further action from your team.
Different products use different types of data, and our platform's flexibility ensures that AI capabilities enhance rather than complicate your existing employee experience workflows.
Perceptyx is dedicated to complying with emerging AI regulations and maintaining responsible AI practices that prioritize data privacy and security. Perceptyx also maintains key certifications, such as its certification under the EU-U.S. Data Privacy Framework, to help our customers meet their own legal obligations under data privacy laws such as GDPR and CCPA. Our commitment to compliance means organizations can confidently leverage AI capabilities while maintaining their regulatory obligations.
As AI continues to evolve at a rapid pace, transparency will remain central to building trust between organizations, employees, and technology. At Perceptyx, we believe that the most powerful AI is AI that employees and leaders understand and trust.
Our approach demonstrates that organizations don't have to choose between innovation and transparency. By maintaining clear standards for privacy, security, bias prevention, and user control, we help organizations unlock the full potential of AI while preserving the human-centered approach that makes employee experience initiatives successful.
Ready to explore how transparent, ethical AI can transform your employee experience strategy? Register for our upcoming webinar, “AI in Employee Experience: How HR Leaders Can Balance Innovation and Risk,” and then schedule a demo with our team to see how our AI-powered insights and action planning capabilities can help you build a more engaged, productive workforce while maintaining the highest standards for data privacy and security.
For more insights on AI innovation and employee experience best practices, subscribe to our blog.