Ask a candidate "how would you handle a difficult team member?" and you get a polished answer about active listening and empathy. Ask them "tell me about a specific time you dealt with a difficult team member, what you did, and how it turned out" and the answer gets concrete or falls apart. Behavioral questions work because they force specificity. Generic theory is easy to produce on command; specific examples require actual experience. The predictive validity advantage shows up clearly both in hiring research and in on-the-job outcomes six months after hire.
How Behavioral Questions Are Structured

The standard format asks the candidate to describe a specific situation, the actions they took, and the results. Common stems include "Tell me about a time when...", "Describe a situation where...", and "Give me an example of...". Good questions target specific competencies relevant to the role: leadership, conflict resolution, influence without authority, judgment under uncertainty, customer empathy.
The STAR framework (Situation, Task, Action, Result) is the standard scaffold for both asking and answering. Interviewers probe for each element if the candidate skips any. "What was the situation?" "What were you specifically trying to accomplish?" "What did you personally do?" "How did it turn out?"
Why Behavioral Questions Predict Performance

Meta-analyses of selection research, including work referenced in Monthly Labor Review and other academic sources, show behavioral interview questions produce predictive validity coefficients in the 0.35-0.45 range. Unstructured interviews typically land at 0.15-0.20. The advantage comes from behavioral consistency: how someone handled a similar situation before is a reasonably reliable predictor of how they'll handle one in the role.
The caveat is that the question has to map to an actual competency required in the role. A behavioral question about leading a 10-person team is irrelevant to a role that doesn't involve team leadership. Selection gains evaporate when interviewers chase behavioral examples that don't tie to job performance.
What's the Difference Between Behavioral and Situational Questions?

Behavioral questions ask about past actions ("tell me about a time"). Situational questions present a hypothetical scenario ("imagine you're in this situation, what would you do"). Both outperform unstructured interviews, but behavioral questions have a slight validity edge because past behavior data is more concrete than predicted future behavior.
Common Behavioral Questions by Competency

For leadership: "Tell me about a time you had to lead a team through a significant change."
For conflict resolution: "Describe a specific conflict with a peer and how you resolved it."
For judgment: "Tell me about a decision you made with incomplete information. What happened?"
For resilience: "Describe a failure and what you learned from it."
For customer focus: "Tell me about a time you pushed back on a customer request."
For roles involving sensitive workplace situations, including manager-level roles: "Tell me about a time an employee raised a grievance or concern to you. How did you handle it?" Questions in this category probe how a candidate has actually handled the types of real situations they'll face in the role.
Scoring Behavioral Responses Consistently

The weakness of behavioral interviewing is rater inconsistency. Two interviewers asking the same question can score the same answer differently. Structured scoring rubrics with behavioral anchors close that gap. Define what a 5 (strong), 3 (acceptable), and 1 (weak) answer looks like for each question. Train interviewers on the rubric with sample responses before they interview live candidates.
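A behaviorally anchored rubric is essentially structured data: each question carries a competency, a prompt, and anchor descriptions at the 1/3/5 levels. A minimal sketch, assuming nothing about any particular applicant-tracking system (the class names, question text, and anchor wording below are illustrative, not from the source):

```python
# Hypothetical sketch of a behaviorally anchored rating rubric.
# Question text, anchors, and competency names are illustrative
# assumptions, not from any specific hiring system.
from dataclasses import dataclass, field


@dataclass
class RubricAnchor:
    score: int        # 1 (weak), 3 (acceptable), 5 (strong)
    description: str  # what an answer at this level looks like


@dataclass
class BehavioralQuestion:
    competency: str
    prompt: str
    anchors: list[RubricAnchor] = field(default_factory=list)

    def anchor_for(self, score: int) -> str:
        """Return the anchor description closest to a given score."""
        closest = min(self.anchors, key=lambda a: abs(a.score - score))
        return closest.description


conflict_q = BehavioralQuestion(
    competency="conflict resolution",
    prompt="Describe a specific conflict with a peer and how you resolved it.",
    anchors=[
        RubricAnchor(1, "Generic theory only; no specific situation or personal action."),
        RubricAnchor(3, "Specific situation and action, but vague or missing result."),
        RubricAnchor(5, "Concrete situation, clear personal action, verifiable result."),
    ],
)

print(conflict_q.anchor_for(5))
```

Writing the anchors down as data rather than leaving them in interviewers' heads is what makes the rubric trainable: the same sample response can be scored against the same text by every panel member.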
Calibration matters too. Panels should discuss sample candidates together before starting real interviews, surface rating disagreements, and agree on the reasoning behind each score. Without calibration, rubrics are decorative; with it, they produce the predictive validity the research promises.
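The calibration step above can be made mechanical: have every panelist score the same sample answers, then flag any question where scores diverge by more than a point for discussion. A minimal sketch, with made-up rater names and scores as illustration data:

```python
# Hypothetical calibration check: flag questions where panel members'
# scores for the same sample answer diverge by more than one point.
# Rater names and scores are made-up illustration data.

sample_scores = {
    "leadership question": {"rater_a": 5, "rater_b": 4, "rater_c": 5},
    "conflict question":   {"rater_a": 2, "rater_b": 5, "rater_c": 3},
}


def needs_discussion(scores_by_rater: dict, max_spread: int = 1) -> bool:
    """True when the score spread exceeds the allowed disagreement."""
    scores = list(scores_by_rater.values())
    return max(scores) - min(scores) > max_spread


for question, scores in sample_scores.items():
    if needs_discussion(scores):
        spread = max(scores.values()) - min(scores.values())
        print(f"{question}: spread of {spread}, discuss before live interviews")
```

A spread threshold of one point is an assumption; the useful part is that disagreements surface as a concrete list to talk through, rather than being discovered mid-hiring-cycle.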