AI is a helper, not a decision-maker
AllVoices AI is designed to support - not replace - your people, your decisions, and your processes. AllVoices AI helps streamline workflows, surface relevant policies, and summarize factual information documented by humans.
Our AI never takes action or creates documentation without human review and approval. And our AI never makes decisions or draws conclusions from sensitive data. At the same time, by ensuring timely resolutions, standardizing documentation, and giving employees a fair process, AllVoices helps reduce the risks legal teams care about most. Below is a breakdown of the most common questions we hear from legal teams, along with how we intentionally address each one.
1. Does using AllVoices reduce legal risk?
Yes.
AllVoices and our AI implementation help your team reduce legal risk, not increase it. AllVoices:
Ensures a consistent, compliant process across all employee relations cases
Helps teams resolve cases up to 70% faster, often de-escalating issues before they reach litigation
Ensures case timelines, evidence, and communication logs are organized and tamper-proof—critical if a case ever moves to litigation
In the event of legal action, everything you need is clearly documented and accessible to the appropriate parties - no digging through spreadsheets, Slack messages, or email chains.
2. Does AllVoices AI document recommendations?
No.
AI-generated recommendations are not logged or discoverable unless a human accepts them. Examples include:
Drafted employee responses
Suggested tasks or resolutions based on company policies
Surface-level recommendations like policy lookups
In each case, the system requires explicit human review and confirmation. Unaccepted or ignored suggestions disappear and are never stored in the case file.
3. Does AllVoices recognize Attorney Client Privilege?
Yes.
AllVoices has an Attorney Client Privilege option on all cases. This allows admins to identify any case that contains communication between legal (either internal or external) and other members of the company.
These conversations (including all notes, messages, etc.) are considered privileged and not subject to discovery.
The Attorney Client Privilege feature lets your team denote which cases contain attorney communications that must be excluded if the case goes to litigation.
4. Does AllVoices AI make decisions?
No.
Our AI cannot take any action—such as documenting a recommendation, creating a task, or sending a message—without a human reviewing and approving it. AllVoices AI supports human decision-making. It does not replace it.
5. Does AllVoices AI write reports?
No.
Our AI drafts Case Summaries based on details provided by employees submitting reports or by admins creating cases.
Our AI summarizes the information provided and does not inject any additional information or draw any conclusions in the brief Case Summary.
Our AI also drafts Investigation Summary Reports, exclusively based on structured information, evidence, and interviews provided by the Investigator. Before a case can be closed, the Investigator must confirm they’ve reviewed, edited, and approved the report.
Recommendations, Resolutions, and Outcomes are determined by humans and kept completely distinct from the Investigation Summary Report, which is a factual summary of the information provided, initially drafted by AI and edited and approved by a human Investigator.
Once a Case or Investigation is closed by the admin or Investigator, summaries and reports are locked and cannot be edited, ensuring integrity and preventing post-close tampering.
6. Does AllVoices help ensure consistency across cases?
Yes.
Vera references your company’s uploaded policies and previous case precedent to surface relevant information that supports consistent decision-making over time. This helps HR professionals apply standards more evenly and reduces the risk of:
Disparate outcomes
Inconsistent application of company policy
Claims of unfair or biased treatment during investigations
7. Does AllVoices actively prevent AI bias?
Yes.
Fairness and consistency are at the core of AllVoices’ mission. We know that bias - whether conscious or unconscious - is one of the most critical risks for HR and legal teams to manage, especially when handling sensitive employee relations issues. That’s why our AI implementation is carefully designed to reduce opportunities for bias:
Vera is explicitly instructed in multiple ways to avoid bias and to remain neutral in its responses, and information is provided in contexts that encourage neutral, fact-based analysis when compiling information
The AI is limited to summarizing information manually entered by humans—it does not draw conclusions or speculate
By surfacing relevant policies and past precedent, Vera supports fair, consistent application of your standards across similar cases—reducing the influence of subjective decision-making
8. Does AllVoices actively prevent AI hallucinations?
Yes.
AllVoices AI does not access the internet and is restricted to:
Your uploaded company policies/handbook
Manually entered case details
General knowledge that the LLM is trained on
By working only with these verified inputs, we significantly reduce the risk of the hallucinated or fabricated content seen in consumer-facing LLMs, which have fewer constraints and open access to the internet.
Furthermore, AllVoices employs several safeguards against hallucinations: automated verification that compares each response against the provided data to confirm it is accurate and contains no hallucinations, carefully monitored automated testing whenever prompts or logic change to ensure responses meet strict requirements, and frequent manual testing to catch hallucinations or their early symptoms.
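For illustration, here is a simplified sketch of what a grounding check of this kind can look like. The function names, threshold, and example text below are illustrative assumptions only; they are not the actual AllVoices verification pipeline.

def _tokens(text: str) -> set[str]:
    # Lowercase content words with surrounding punctuation stripped.
    return {w.strip('.,;:()"').lower() for w in text.split()}

def grounding_score(sentence: str, source: str) -> float:
    # Fraction of a sentence's words that also appear in the source material.
    sent, src = _tokens(sentence), _tokens(source)
    return len(sent & src) / max(len(sent), 1)

def flag_unsupported_sentences(summary: str, source_notes: str, threshold: float = 0.6) -> list[str]:
    # Return draft sentences whose wording is poorly supported by the human-entered notes,
    # so a human can review them before anything is accepted.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    return [s for s in sentences if grounding_score(s, source_notes) < threshold]

if __name__ == "__main__":
    notes = "Employee reported repeated late-night messages from their manager after June 3."
    draft = "The employee reported repeated late-night messages from their manager. The manager was terminated."
    for sentence in flag_unsupported_sentences(draft, notes):
        print("Needs human review:", sentence)

In this toy example, the unsupported claim about termination is flagged for review because it does not appear in the provided notes.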
9. Does AllVoices keep my company policies private and secure?
Yes.
AllVoices takes data privacy and security seriously. All data you upload to our platform—including your company’s policies—is stored securely in AWS infrastructure and protected using best-in-class security protocols.
We are SOC 2 Type II certified and GDPR compliant, meaning we meet rigorous standards for protecting data both at rest and in transit. Your policies are never shared outside your organization. They are used exclusively within your instance to support case handling, recommendations, and insights.
No other customer's AI instance will have access to or benefit from your policies, and your policies are never used to train or inform the AI models in a shared or general way. Additionally, access to your data is tightly controlled, monitored, and restricted based on role-based permissions. We do not use your policy data for any purpose other than enabling functionality within your secure environment.
10. If I delete a policy, will AllVoices AI retain the information?
No.
When you delete a policy from your AllVoices instance, it is permanently removed from our system. Our AI does not retain, remember, or continue to reference any information from deleted policies in future tasks or responses.
The AI operates based on the current set of uploaded policies in your instance at the time a task is completed. It does not “remember” prior versions, nor does it store deleted content for future use. This ensures that your AI assistant is always referencing the most up-to-date policy information you’ve provided, and nothing else.
This behavior is intentional and important for compliance. It allows your team to maintain full control over what information is available to the AI and ensures that outdated or deprecated policies do not influence future case work or decision-making.
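To illustrate the principle, here is a simplified sketch of a lookup scoped only to the policies currently uploaded at the time a task runs. The PolicyStore class and its methods are hypothetical, for illustration only; they are not the actual AllVoices implementation.

from dataclasses import dataclass, field

@dataclass
class PolicyStore:
    # In-memory stand-in for an instance's currently uploaded policies.
    policies: dict[str, str] = field(default_factory=dict)

    def upload(self, name: str, text: str) -> None:
        self.policies[name] = text

    def delete(self, name: str) -> None:
        # Deletion removes the policy entirely; nothing is archived for the AI.
        self.policies.pop(name, None)

    def context_for_task(self) -> str:
        # Only the policies present right now are included in the context for a task.
        return "\n\n".join(f"{name}:\n{text}" for name, text in self.policies.items())

store = PolicyStore()
store.upload("Remote Work Policy", "Employees may work remotely up to 3 days per week.")
store.upload("Old Travel Policy", "Deprecated guidance.")
store.delete("Old Travel Policy")

# The deleted policy never appears in the context built for the next task.
print(store.context_for_task())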
11. Does AllVoices allow customers control over AI features?
Yes.
AllVoices allows customers to enable or disable AI features throughout the flow.
For example, if you do not want AI to recommend tasks, you can disable that feature. If you do not want AI to identify policy gaps (even though these are not documented or discoverable), you can disable that feature as well.
12. Is my data used to train Large Language Models (LLMs)?
No.
Your data is never used to train or fine-tune large language models (LLMs). Per OpenAI’s API and Enterprise Terms, data passed through their API is not retained or used to improve the model.
As an Enterprise customer of OpenAI, we have a Zero Data Retention policy. With ZDR enabled, OpenAI does not retain customer content from API interactions, ensuring that data is processed only transiently during the request. We also have a signed DPA between AllVoices and OpenAI that ensures strict data security and privacy standards.
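For illustration, here is a minimal sketch of the kind of API call involved, using the official openai Python SDK. The model choice and prompt text are placeholder assumptions, not the actual AllVoices integration; under Zero Data Retention, non-retention is governed by the Enterprise account configuration and the DPA rather than by anything in the request itself.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# With ZDR enabled on the account, OpenAI processes this request transiently
# and does not retain the prompt or completion.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize the case details factually. Do not speculate."},
        {"role": "user", "content": "Case details entered by the reporter go here."},
    ],
)

print(response.choices[0].message.content)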
Visit our Trust Center for more information on our security practices and certifications.
Got more questions? Email us at support@allvoices.co and we'll respond ASAP.
Security
AllVoices employs multi-factor authentication and strict access controls based on the principle of least privilege (PoLP) to protect against unauthorized access.
Yes, AllVoices conducts regular security audits, continuous monitoring, and maintains SOC 2 compliance. AllVoices also performs annual penetration testing with a reputable third-party auditor.
AllVoices follows a Secure Development Policy with formal change control, version control, and security testing.
Security policies are reviewed annually and updated as needed.
While AllVoices has never experienced a data breach, we have prepared a detailed incident response plan, including notification, containment, and remediation steps.
No, AllVoices has never experienced any data breach of any kind.
Yes, third-party vendors are assessed for security as part of the vendor management process.
Anti-malware software is used on all employee devices, and regular scans are conducted via Vanta and Bitwarden.
AllVoices uses TLS for securing data in transit and AES-256 for data at rest.
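For illustration, here is a minimal sketch of AES-256 encryption using the Python cryptography package. It demonstrates only the algorithm named above; it is not AllVoices' actual storage code, and production systems manage keys through a key management service rather than generating them inline.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key -> AES-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # unique nonce per encryption

plaintext = b"Example policy document contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext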
Yes, AllVoices complies with GDPR and other relevant data protection regulations.
Yes, AllVoices maintains SOC 2 Type II compliance.
Data Privacy & Retention
AllVoices collects minimal personal data (PII) necessary for providing its services and ensures transparency during agreement signing.
Yes, AllVoices complies with CCPA and other relevant data protection regulations.
Yes, AllVoices is transparent about data collection practices during agreement signing and in its privacy policy.
Users can request data deletion by contacting AllVoices support with their specific request.
Data retention periods vary based on regulatory and business requirements, and data is securely deleted after contract termination or upon user request.
No, AllVoices does not use customer data to train or improve any LLM. As an Enterprise OpenAI customer, we enforce Zero Data Retention—your data is processed only during each request and never stored. We've also signed a Data Processing Agreement (DPA) with OpenAI to ensure the highest privacy and security standards.
Pricing
Our pricing depends on a few factors, such as the features being purchased and the number of employees at your company. For more information, check out our pricing page.
AI Co-Pilot
VERA (Virtual Employee Resource Assistant) is an AI-driven tool designed to enhance efficiency in HR case management, investigations, and data analytics.
VERA leverages the GPT-4o, GPT-3.5 Turbo, and GPT-4 models from OpenAI.
VERA offers case summarization, auto-drafted messages, data analytics (VERA Insights), task suggestions, support chat, and much more.
Yes, VERA can be customized to fit an organization's specific needs, including uploading company policies and handbooks.
Yes, AllVoices allows disabling VERA for any company that prefers not to use it.
No, AllVoices has an Enterprise-level agreement with OpenAI not to use any data of any sort for training or model-improvement purposes. This means OpenAI never uses your data for model training.
VERA adheres to strict data privacy standards and does NOT use customer information or data for any AI training.