The halo/horn effect is one of the best-documented biases in evaluation research, and also one of the most invisible to the people doing the evaluating. Psychologist Edward Thorndike identified it in 1920 while studying how military officers rated soldiers. His finding: ratings on unrelated traits were so highly correlated that once an officer formed a general impression, every dimension got rated in line with it. A hundred years later, the same pattern shows up in performance review data at almost every employer that looks for it. Strong communicators get rated high on technical skill. One missed deadline drags down ratings on teamwork, judgment, and communication for the same review cycle.
How the Halo Effect Actually Shows Up at Work

The halo is the more common of the two in hiring. A candidate who presents well in the interview, makes eye contact, speaks with confidence, and asks sharp questions often gets rated higher on competencies the interview didn't actually test, like technical depth or operational judgment. The interviewer fills in the gaps with optimism.
In performance reviews, the halo usually attaches to a big recent win. An employee who led a high-profile project in Q4 gets rated high across the board in the annual review, even on competencies where their actual performance was mixed.
What's the Difference Between the Halo Effect and the Recency Effect?

The halo effect is trait-based (one trait colors other traits). The recency effect is time-based (recent events color the whole review period). They often overlap in annual reviews, where a recent big win or big miss creates a halo or horn for the whole year.
How the Horn Effect Damages Evaluation Data

The horn is the negative mirror. An employee who had a visible conflict with a peer, missed a significant deadline, or was late to a few meetings can end up with suppressed ratings across every competency in the review cycle. The manager sees the negative pattern everywhere, even where the underlying performance was actually fine.
Horn effects are especially common in manager-employee relationships where trust has eroded. Once the manager forms a negative general impression, the review tends to confirm it regardless of data. Teams with high horn-effect patterns often show lower employee engagement scores on trust-related items.
How to Structure Reviews to Reduce the Halo/Horn Effect

Two things reduce the bias meaningfully. First, use structured rating scales with specific behavioral anchors for each competency, so the manager has to cite concrete behavior rather than impression. Second, use calibration sessions where multiple managers review each rating together, because independent comparison exposes outlier ratings quickly.
Structured, behavior-anchored ratings plus calibration is the combination that research consistently shows improves evaluation reliability. Separating trait ratings in time (rating different dimensions on different days or in different sessions) also helps, because it interrupts the mental shortcut that produces halo/horn in the first place.
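The calibration step above can be sketched in code. This is a minimal illustration, not a prescribed method: the manager names, the 1-5 rating scale, and the z-score threshold are all assumptions chosen for the example.

```python
from statistics import mean, pstdev

# Hypothetical calibration data: each manager's average rating of the same
# competency for comparable employees (1-5 scale; names are invented).
ratings = {
    "manager_a": 4.2,
    "manager_b": 3.9,
    "manager_c": 4.1,
    "manager_d": 2.1,  # sits well below the group
}

def calibration_outliers(ratings, z_threshold=1.5):
    """Flag ratings that sit far from the calibration group's mean,
    measured in population standard deviations (z-score)."""
    values = list(ratings.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # everyone agrees; nothing to flag
    return [m for m, r in ratings.items() if abs(r - mu) / sigma > z_threshold]

print(calibration_outliers(ratings))  # → ['manager_d']
```

In a real calibration session the flagged rating is a prompt for discussion, not an automatic correction; the outlier manager may simply have better evidence than the group.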
Making Halo/Horn Awareness Part of Your Evaluation Process

The halo/horn effect is not a character flaw in any specific manager. It's how the brain processes incomplete information under time pressure, which is most of what performance reviews actually are. The fix is structural, not behavioral. Build rating scales that require specific evidence per competency. Run calibration sessions that surface outliers. Train managers to notice when their rating pattern is a flat line across dimensions (which is often a halo/horn signature rather than accurate evaluation). Track rating distributions by demographic to catch halo/horn effects that correlate with implicit bias. These disciplines make evaluation data more useful for promotion, pay, and development decisions, and they protect the organization from the downstream fairness questions that flat ratings always invite.
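The flat-line signature described above is easy to screen for automatically. A sketch, under stated assumptions: the employee names, competency set, 1-5 scale, and the spread and midpoint thresholds are all invented for illustration, and a flag is a prompt to look closer, not a verdict.

```python
from statistics import mean, pstdev

# Hypothetical review data: one manager's ratings (1-5 scale) across
# competencies for each direct report. All names are invented.
reviews = {
    "alice": {"technical": 5, "communication": 5, "judgment": 5, "teamwork": 5},
    "bob":   {"technical": 4, "communication": 2, "judgment": 3, "teamwork": 4},
    "carol": {"technical": 1, "communication": 2, "judgment": 1, "teamwork": 1},
}

def flat_line_flags(reviews, spread_threshold=0.5):
    """Flag reviews whose ratings barely vary across competencies --
    a possible halo (uniformly high) or horn (uniformly low) signature."""
    flags = {}
    for employee, scores in reviews.items():
        values = list(scores.values())
        if pstdev(values) <= spread_threshold:
            flags[employee] = "halo?" if mean(values) >= 3 else "horn?"
    return flags

print(flat_line_flags(reviews))  # → {'alice': 'halo?', 'carol': 'horn?'}
```

Bob's mixed ratings pass the screen untouched; only the two flat profiles are flagged for a second look during calibration.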