FAQs: Artificial intelligence
What recourse is available for model errors or other AI-related issues that impact customers?
All services and models deployed in Genesys Cloud have service level agreements (SLAs) on relevant metrics, established at the design stage, along with control and rollback mechanisms for models in specific scenarios. For example:
- Genesys predictive routing incorporates failsafe mechanisms that route the call to a designated pool of agents when the model cannot identify a suitable agent within a pre-defined period.
- Genesys Predictive Engagement has a pacing failsafe that limits the number of engagements offered to customers when the model erroneously targets too large an audience.
- Model Life Cycle Orchestration defines SLAs on certain metrics and rolls back newly deployed active models to the previous version when the number of prediction errors exceeds defined thresholds.
These mechanisms are actively monitored following standard processes for cloud software development. This can include alerts to the on-call team when model metrics, such as missing feature rates or prediction errors, exceed pre-defined thresholds.
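The alert-and-rollback pattern described above can be sketched as follows. This is an illustrative example only; the metric names, thresholds, and function names are hypothetical, not actual Genesys values or APIs.

```python
# Hypothetical sketch of threshold-based model monitoring with rollback.
# Metric names and threshold values are illustrative, not Genesys-specific.

THRESHOLDS = {
    "missing_feature_rate": 0.05,   # fraction of requests with missing features
    "prediction_error_rate": 0.10,  # fraction of predictions flagged as errors
}

def check_model_health(metrics: dict) -> list:
    """Return the names of metrics that exceed their pre-defined thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

def monitor(metrics: dict, alert, rollback):
    """Alert the on-call team and roll back to the previous model version
    if any monitored metric breaches its threshold."""
    breached = check_model_health(metrics)
    if breached:
        alert(f"Model metrics out of bounds: {breached}")
        rollback()
    return breached
```

In a real deployment, `alert` and `rollback` would be wired to a paging system and the model deployment pipeline, respectively, and the check would run on a schedule against live metrics.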
What approaches have you used to reduce bias and disparate impact in model selection?
The Genesys approach to managing AI bias and model impact is focused primarily on:
- Assessment and curation of model input data to make certain that no sensitive features are included in the models.
- Model monitoring that tracks various model metrics to help detect data drift and concept drift.
- Model cards and dataset cards that document various characteristics of AI models and training datasets in a standardized format; these artifacts also support bias detection.
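One common way monitoring systems quantify data drift is the population stability index (PSI), which compares the distribution of a feature at training time against live data. The sketch below is a generic illustration of that technique, not the actual Genesys implementation; the `0.2` drift threshold is a widely used rule of thumb, assumed here for illustration.

```python
# Illustrative data-drift check using the population stability index (PSI).
# Generic sketch only, not the Genesys monitoring implementation.
import math

def psi(expected, actual, bins: int = 10) -> float:
    """Population stability index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def has_drifted(expected, actual, threshold: float = 0.2) -> bool:
    """A PSI above ~0.2 is often treated as significant drift."""
    return psi(expected, actual) > threshold
```

When drift is detected, typical responses include retraining the model on fresh data or triggering the rollback mechanisms described earlier.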
What is your enterprise governance process and risk assessment framework for AI and ML technologies?
Genesys governance processes are tightly coupled with the development processes. As part of these processes, Genesys incorporates mechanisms such as Data Privacy Impact Assessments, Security and Compliance reviews, and AI Model Risk-focused reviews. These reviews occur at various stages of the software life cycle to promote responsible AI development and risk mitigation.
Genesys Cloud model training and inference pipeline, including metrics, sampling, and validation, follows the standard review process shared by all Genesys Cloud software. The first round of reviews takes place during the design phase as part of the software design life cycle, where AI architects and data scientists review and approve the design, alongside privacy and security reviews. The second phase of reviews occurs at the development stage, during which all model training and inference code is peer-reviewed before merging and deploying. This stage often includes benchmarks and the addition of model-specific tests.
Also, the cross-functional Genesys AI Ethics Committee, which includes representatives from privacy, security, architecture, product, and AI, meets quarterly to assess our products and processes from an ethics and governance perspective.
This comprehensive governance framework helps to ensure that artificial intelligence and machine learning technologies deployed by Genesys meet the highest standards for security, privacy, and ethical considerations.
What guardrails does Genesys Cloud AI ethics provide to protect customer privacy?
Genesys Cloud AI Ethics enables customer privacy through the following key principles:
- Balance value creation with empathy: Genesys prioritizes understanding and addressing the needs of all stakeholders during the value-creation process, with privacy considerations integral to any decision.
- Incorporate privacy design principles: Privacy is embedded by design at Genesys. The right to privacy is protected from the outset, governed by explicit customer consent through mechanisms like master service agreements (MSA). These principles include opt-in clauses and data-use consent, with a focus on anonymization and regulatory compliance.
- Understand and reduce bias: Genesys actively works to mitigate bias in AI models to support ethical and fair decision-making, considering the broader context when handling data.
- Value transparency: Genesys takes measures to make sure that stakeholders are informed and understand the decision-making processes behind AI models, promoting trust in how data is used and managed.
Can customers bring their own AI models, choose from existing models, or customize models to suit their business needs?
Yes, as a customer, you can bring your own AI models, select from existing models, or customize models to suit your business needs. Genesys supports a “bring your own” (BYO) approach through connectors and the open platform, allowing you to build custom versions of solutions like Genesys Agent Assist or Genesys Agent Copilot. Also, Genesys Cloud supports integration with third-party services, such as speech-to-text and text-to-speech engines, providing further flexibility in tailoring the AI capabilities to your specific requirements.
What are the key steps for deploying Genesys Cloud AI solutions?
Every AI implementation is unique, so it is important to tailor the deployment to your specific business goals. Start by selecting the appropriate AI capabilities, then configure and integrate them with your existing systems, and test thoroughly.
To support you further, Genesys Professional Services experts who specialize in rapid deployment, customization, and multivendor integration are available. Genesys consultants have decades of experience, so you can avoid common pitfalls and be certain your AI solution is optimized for both customer and employee experience goals.
How does Genesys monitor compliance with industry standards and regulations?
Genesys enables customer compliance with regulations through a robust framework that includes 23 accreditations and certifications for adhering to local, regional and global regulations. Additionally, we require that Genesys Cloud AI solutions successfully pass compliance checks each year to meet market expectations and Genesys regulatory requirements. This can give customers confidence in the security, ethics and compliance of their AI solutions.
What is your privacy policy regarding AI services?
Privacy by Design and Privacy by Default are embedded in the processes around the design, setup, and update of our products and services — from development to release and improvement. Our standard baseline for compliance with privacy and data protection is the EU General Data Protection Regulation (GDPR) and related legislation, which are foundational for our privacy program.
In addition, our risk management framework incorporates AI model actions, including privacy as a pillar. Our corporate and product privacy teams are knowledgeable about other AI risk frameworks that incorporate privacy into the assessment of AI products, such as the NIST AI Risk Management Framework, ISO/IEC 23894:2023, and additional resources such as the Assessment List for Trustworthy AI developed by the High-Level Expert Group on AI set up by the European Commission. This provides a robust framework designed to protect fundamental rights from the design phase of AI models.
Here are additional details about the Genesys position on privacy and compliance:
- Building large-scale systems that apply AI to optimize customer experience often requires very large datasets that can contain data on many individuals coming from a variety of sources. Genesys is committed to core principles of privacy by design, limiting the data collected about individuals as the default.
- Beyond enforcing privacy by design principles across systems, Genesys uses rigorous processes to monitor compliance of our AI products with regulations such as GDPR and any other applicable legislation globally.
- While Genesys has technical and administrative controls in place to limit the access to customer data, we have established additional safeguards designed to ensure that all data used for the development of new products is anonymized and governed by a set of processes detailed in the Genesys data anonymization framework.
How does Genesys fine-tune large language models (LLMs)?
Hallucinations are mitigated by fine-tuning models with conversational datasets selected for use cases such as customer care and for industry verticals such as healthcare, financial services, and retail. This process can significantly reduce hallucinations by adapting the model weights to the use case.
Prompting best practices instruct the large language model (LLM) to avoid fabricating answers and to say “I don’t know” when a question is not relevant or answerable. Responses are further constrained with examples of correct outputs and by setting the temperature as low as possible, which makes the output more deterministic.
Retrieval-augmented generation (RAG) constrains responses so that they are derived from a known-good set of data from the business.
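The RAG pattern above can be sketched in two steps: retrieve the most relevant passages from the business's known-good data, then assemble a prompt that instructs the model to answer only from those passages. The keyword-overlap retriever and prompt wording below are illustrative assumptions, not the Genesys implementation; production systems typically use embedding-based retrieval instead.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve, then
# constrain the prompt. Retriever and prompt wording are illustrative only.

def retrieve(question: str, documents: list, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, passages: list) -> str:
    """Constrain the model to the retrieved passages. A low temperature
    (e.g. 0) would be set on the completion call itself."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the passages below. "
        "If the answer is not in them, say \"I don't know\".\n"
        f"Passages:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )
```

The resulting prompt is then sent to the LLM, so its answer is grounded in the retrieved business data rather than in whatever the model memorized during pre-training.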
In what cases is customer data used to train your AI models?
Customers can consent to participate in service improvements through a rigorously controlled process. Data is sampled and fully anonymized in the production environment before it can be used for AI model training purposes. By default, the Genesys Master Service Agreement (MSA) opts customers out of any data donation.