Table of Contents
- What does “AI-first” mean for HR leaders today?
- Why HR needs to create safe spaces for AI experimentation
- How HR can clarify performance expectations in an AI era
- Practical AI use cases HR teams can adopt right now
- How to measure the performance of AI inside organizations
- Avoiding “snake oil” in the crowded AI vendor market
Resources
- If you want to connect with Alex Seiler, here is his LinkedIn. He shares great insights and is easy to talk to.
- We also put together an ebook, 5 Practical Use Cases for AI in HR. Download it here.
- AllVoices Responsible AI page
What does “AI-first” mean for HR leaders today?
Every week, another company declares itself “AI-first.” Shopify told managers to “prove AI can’t do it” before requesting headcount. Duolingo paused contracts to focus on automation. Box rolled out a 3,500-word memo announcing AI as a strategic priority. Klarna credited AI with cutting service costs by double digits.
On paper, these memos are about technology. In practice, they’re about people.
When leadership frames AI adoption as a mandate, employees look to HR for translation. Is this about efficiency, innovation, or cuts? Should they be excited or anxious?
Alex Seiler, a longtime Chief People Officer and startup advisor, highlighted the stakes:
“Positioning means a lot. It will go a long way in determining whether people are excited about what’s coming or fearful of what it may bring.”
— Alex Seiler
Why positioning matters for culture
- Employees anchor their reactions to leadership messaging
- An “AI-first” stance without context can trigger job insecurity
- Framing adoption as opportunity instead of threat builds momentum
HR isn’t just implementing tools; it’s shaping the story that determines how employees interpret change.
Why HR needs to create safe spaces for AI experimentation
Adopting AI is messy. Some use cases will click, others will flop. Without psychological safety, employees won’t experiment — and adoption stalls.
Alex argued that HR should lead the charge in creating environments where trying and failing is expected.
“When you fail using AI, it doesn’t mean you’re a failure. It means you’re learning. Teams need sanctuaries where they can test new ways of working without being judged.”
— Alex Seiler
What safe experimentation looks like
- Defined sandboxes: HR designates low-stakes workflows (like drafting emails) where AI can be tested freely
- Permission signals: Leadership explicitly says experimentation is encouraged
- Shared learning: Teams run debriefs on what worked and what didn’t, turning missteps into collective progress
- Psychological guardrails: Employees know failure won’t cost them credibility or career growth
Claire Schmidt added that this framing is what gives adoption staying power:
“If people feel supported as they navigate this change, they’ll remember that HR was the function that kept them grounded.”
— Claire Schmidt
Safe experimentation reframes AI adoption from a compliance exercise into a cultural opportunity.
How HR can clarify performance expectations in an AI era
Performance management has always been tricky. Add AI to the mix, and the ambiguity multiplies.
Alex posed the critical question:
“Are we measuring how well you serve AI, or how well the AI serves you? That’s a huge distinction.”
— Alex Seiler
Without clarity, employees may overuse AI to look efficient or avoid it out of fear. Neither extreme benefits the company.
Rethinking performance standards
- Define which tasks are expected to use AI and which require human ownership
- Emphasize outcomes like creativity, empathy, and accuracy over raw output
- Create metrics that value effective collaboration with AI rather than competition against it
- Revisit standards quarterly as tools evolve and adoption patterns shift
This ensures AI remains a tool in service of people, not the other way around.
Why this matters for HR credibility
- Ambiguity erodes trust in performance reviews
- Clear guidelines reduce employee anxiety about “using AI wrong”
- Transparent standards reinforce fairness across teams
By clarifying expectations, HR leaders protect both productivity and morale.
Practical AI use cases HR teams can adopt right now
The heart of the webinar was about action: where can HR apply AI today in ways that are safe, valuable, and measurable? Both Alex and Claire agreed that the key is to start small and expand gradually.
“The tech should be solving your problem, not the other way around. Start with your pain points and build from there.”
— Alex Seiler
Recruiting and talent acquisition
AI can speed up some of the most repetitive recruiting tasks:
- Resume parsing to filter for minimum qualifications
- Candidate matching against role profiles
- Automated scheduling of interviews across multiple calendars
- Drafting outreach emails with standardized, inclusive language
These tools save hours while reducing bias in first-pass screening. But Alex stressed that final decisions must always remain human.
Onboarding and offboarding
Onboarding often drowns new hires in paperwork. AI can help:
- Generating personalized onboarding checklists from job descriptions
- Automating document distribution and reminders
- Summarizing policy handbooks into digestible guides
- Drafting offboarding surveys and knowledge transfers
Small touches like these make transitions smoother for employees and managers alike.
Performance reviews
Performance management is ripe for AI augmentation:
- Drafting review templates with consistent criteria
- Highlighting rating inconsistencies across managers
- Flagging biased phrasing in written feedback
- Summarizing upward feedback into clear themes
Claire connected this to fairness:
“AI can’t replace a manager’s judgment, but it can flag where inconsistencies might disadvantage employees.”
— Claire Schmidt
Employee relations and case management
Employee relations is one of the most sensitive areas, but it also benefits from AI support:
- Automatically linking related cases to surface precedent
- Drafting first-pass investigation summaries
- Tagging and categorizing issues for easier trend analysis
- Highlighting emerging culture risks from case patterns
AllVoices’ AI assistant Vera was designed specifically for this:
“Vera writes the first draft of an investigation summary report. That used to take up to twenty hours. Now, HR can start from a draft and spend their time refining instead of starting from scratch.”
— Claire Schmidt
How to measure the performance of AI inside organizations
Adoption only sticks if leaders can prove it works. That means measurement. But measuring AI isn’t just about speed — it’s about impact.
Claire explained that clarity of goals drives clarity of metrics:
“It totally depends on what your goals are and what you’re using AI for. For us, things like anti-bias certification or customer satisfaction scores are clear signals that our AI is performing.”
— Claire Schmidt
Alex expanded on the range of metrics leaders should track:
“Process efficiency, customer satisfaction, decision quality, scalability — those are the levers most companies are looking at right now.”
— Alex Seiler
Metrics to track AI effectiveness
- Efficiency: Hours saved, reduced backlog, automation rates
- Experience: Employee and customer satisfaction with AI-assisted outputs
- Decision quality: Accuracy, fairness, and transparency of AI-informed outcomes
- Scalability: Ability to handle higher volumes without proportional headcount
Why measurement matters for adoption
- Proves AI is more than hype
- Builds credibility with executives for future investment
- Reassures employees that performance is being monitored fairly
Without measurement, AI becomes a black box. With it, AI becomes a strategic asset.
Avoiding “snake oil” in the crowded AI vendor market
The final theme of the webinar was one of caution: not every “AI-first” vendor is worth trusting. With hype at an all-time high, HR leaders risk wasting budgets on tools that don’t deliver.
Alex’s advice was direct:
“Make sure you’re doing due diligence. RFPs, comparisons, proof of concept, client testimonials — the same rigor you’d apply to any HR tech purchase applies here, even more so.”
— Alex Seiler
Claire added a warning about brand-new players:
“If someone just popped up five minutes ago, that might be a red flag. You want partners with real experience solving HR problems, not just adding AI to their marketing.”
— Claire Schmidt
How to separate real value from hype
- Proof of outcomes: Ask vendors to show results, not just demos
- Bias testing: Request documentation of fairness audits and data sources
- Client references: Talk to existing customers about wins and challenges
- Problem fit: Map solutions to specific HR pain points before signing contracts
The risk of skipping due diligence
- Wasted spend on tools that go unused
- Compliance exposure if models are biased or opaque
- Employee backlash if adoption feels rushed or careless
The vendor market will only get noisier. HR’s role is to be the filter that distinguishes real impact from empty promises.
Final word: AI in HR is about readiness, not hype
AI in HR isn’t a finish line — it’s a journey. Leaders don’t need to chase every tool or trend. They need to focus on readiness: readiness with guardrails, readiness with pilots, readiness with clear metrics.
Alex left HR leaders with this reminder:
“Start small with clear metrics, and then expand based on what you learn. AI adoption is a journey — don’t let anyone tell you otherwise.”
— Alex Seiler
And Claire closed with a call to keep humans at the center:
“AI should support human judgment. It doesn’t replace it.”
— Claire Schmidt
That’s what makes AI in action different from AI in hype. For HR leaders, the future isn’t about racing to automate everything. It’s about using AI to elevate people — and proving that in practice, every step of the way.
Got more questions? Email us at support@allvoices.co and we'll respond ASAP.