A 360 survey, also called 360-degree feedback, gathers performance and behavior data about one person from the people who actually work with them: manager, peers, direct reports, and sometimes external stakeholders or clients. The premise is simple. No single vantage point gives a complete picture of how someone shows up at work. Done well, a 360 surfaces blind spots and growth opportunities that a traditional top-down performance review can miss. Done badly, it becomes a popularity contest that tells you nothing useful.
What a 360 Survey Actually Measures
Most 360 surveys evaluate two broad dimensions: competencies (skills and behaviors tied to the role) and impact (how the person's work lands with others). Competencies are usually rated on a scale. Impact shows up in open-ended comments.
Typical question areas include communication style, leadership, collaboration, decision-making, how feedback is given and received, how conflict is handled, and how well the person supports the people around them. The specific questions should match the role level. A 360 for a senior leader asks different things than a 360 for an individual contributor.
What's the Difference Between a 360 Survey and a Performance Review?
A performance review is an evaluation by one person, usually the manager, tied to job duties, goals, and sometimes compensation. A 360 survey is a development tool: multiple data sources feeding into a composite picture, usually used to plan growth rather than drive a pay decision. Some organizations use 360 data as an input into formal reviews, but the two are different instruments with different purposes. Mixing them tends to compromise both.
Who Should Be in the Feedback Loop?
A solid 360 invites feedback from five categories of respondent: the employee's manager, their direct reports (if they manage people), peers at the same level, cross-functional colleagues who rely on their work, and the employee themselves via a self-assessment.
Group sizes matter. A category with fewer than three respondents is often skipped or folded into a broader group, and very small groups can expose individual raters by process of elimination. Five to eight peer respondents is a reasonable target. For small teams or senior roles where peer count is low, external stakeholders (board members, key clients) can fill the gap.
Pitfalls That Kill 360 Survey Data Quality
Three problems derail most 360 programs. Respondent fatigue is the first: if every employee runs a 360 every year and every 360 has 40 questions, raters start speed-running the surveys. Keep question counts tight (15-25 items) and stagger the calendar.
Vague questions are the second. "How effective is this person at collaboration?" produces noise. "When this person disagrees with you, how likely are they to keep working the problem?" produces useful data. Specificity in the question drives specificity in the answer.
No follow-up is the third. If employees get 360 results and nothing happens next, the data becomes theater. Pair every 360 with a one-on-one debrief (often with a coach or HR business partner) and a concrete development action.
Should 360 Feedback Be Anonymous?
Usually yes, with exceptions. Anonymity produces more candid peer and direct-report feedback, which is the main reason to run a 360 at all. The exception is the manager's feedback, which typically isn't anonymous: there's only one manager, so the source is obvious. Very small respondent groups effectively aren't anonymous regardless of what the survey promises, which is why group sizes matter. If employees know exactly who said what, the data quality degrades back to normal-review levels.
Using 360 Survey Results Without Backfiring
The most effective 360 programs treat results as a starting point, not a verdict. Share the aggregated data with the employee. Walk through patterns (not individual comments) together. Identify two or three focus areas for the next six to twelve months. Revisit them in regular one-on-ones. Don't use one round of 360 data to make promotion, compensation, or disciplinary decisions.
Organizations that combine 360 data with employee engagement surveys and one-on-one feedback channels tend to get better signal than any single input alone. The 360 is one lens, and it works best when it's not carrying the whole weight of a talent decision.