Employee performance is tricky to measure, but one strategy used by companies around the world is performance rating. What better way to determine employee abilities than to ask the supervisors, subordinates, and peers who work with them every day?
Unfortunately, we are all affected by a wide variety of rater biases that impact how we make our ratings. These biases might skew employee ratings too high or too low. Ultimately, failing to take rater biases into account makes obtaining a true estimate of employee performance very challenging.
Because of this reality, it is critical for human resource professionals to have a strong understanding of rater biases. Understanding them can prevent decision making errors, which strengthens a company’s ability to use performance rating to its full potential.
Keep these 8 rater biases in mind when reviewing employee rating data.
The human mind is primed to focus on single attributes that stand out. If that attribute is positive, researchers have found that it will actually affect ratings of other attributes. That’s the halo effect in action. The halo effect is the tendency for a single positive rating to cause raters to inflate all other ratings. It’s almost like the rater is thinking, “If she’s good at this, then she’s probably good at that, too.”
Nobody is perfect; HR professionals know that all employees have unique strengths and weaknesses. Positive ratings across the board aren’t particularly helpful when making decisions. That’s why it is important to watch carefully for evidence of the halo effect in employee rating data.
If the halo effect makes you think of coworkers as perfect angels, the horns effect makes you think of them as devils. The horns effect is the tendency for a single negative attribute to cause raters to mark everything on the low end of the scale. One bad attribute seems to spoil the bunch.
Like the halo effect, the horns effect makes decision making challenging. Universal negative scores might lead to unfair sanctions or inappropriate employee dismissal. Those are land mines every HR department wants to avoid. For that reason, keep the horns effect in mind when reviewing employee ratings.
Scores can be high. Scores can be low. And scores can be right in the middle. Some raters apparently think only the last is an option. The central tendency bias causes some raters to score every question near the center of the scale. A rating of “3” on a 5-point scale for every question is a clear example of the central tendency bias at play.
The leniency bias is exactly what it sounds like – it means the rater is lenient and is going “too easy” on the person they are rating. That means all scores will be very high. Like the halo effect, the leniency bias makes it challenging to know an employee’s true pattern of strengths and weaknesses.
The strictness bias is the opposite of the leniency bias. As you’d expect, it means the rater is going “too hard” on the person they are rating, causing all scores to be very low. This creates an unfairly negative representation of the person being rated. Like the horns effect, inaccurate negative scores can have serious implications for employees and for HR decision making. Monitor performance reviews carefully for the strictness bias – if one person rates someone very low on everything while others rate them normally, the strictness bias may be the culprit.
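The cross-rater comparison described above can be sketched in code. Here is a minimal, illustrative Python example – the function name, score scale, and thresholds are assumptions for demonstration, not validated cutoffs – that flags raters whose score patterns hint at leniency, strictness, or central tendency bias:

```python
from statistics import mean, stdev

def flag_rater_bias(ratings, threshold=1.0, scale_mid=3.0):
    """Flag raters whose scoring pattern suggests a possible bias.

    ratings: dict mapping rater name -> list of scores on a 1-5 scale.
    threshold: how far a rater's mean may drift from the overall mean
               before being flagged (illustrative cutoff).
    scale_mid: midpoint of the rating scale, used for the central
               tendency check.
    """
    # Overall mean across every score from every rater.
    overall = mean(s for scores in ratings.values() for s in scores)

    flags = {}
    for rater, scores in ratings.items():
        m = mean(scores)
        sd = stdev(scores) if len(scores) > 1 else 0.0
        if m - overall > threshold:
            flags[rater] = "possible leniency bias"
        elif overall - m > threshold:
            flags[rater] = "possible strictness bias"
        elif sd < 0.5 and abs(m - scale_mid) < 0.5:
            # Almost no spread, clustered at the scale midpoint.
            flags[rater] = "possible central tendency bias"
    return flags

# Bob scores everything far below his peers; Cara marks every
# question a "3". Both patterns get flagged, Alice does not.
flags = flag_rater_bias({
    "Alice": [3, 4, 3, 4, 3],
    "Bob":   [1, 2, 1, 1, 2],
    "Cara":  [3, 3, 3, 3, 3],
})
```

A flag is only a prompt for a closer look, of course – a rater whose scores run low may simply be reviewing a genuinely weaker performer, which is why the comparison against other raters of the same employee matters.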
Comparisons can be helpful when making ratings. But the contrast effect is too much of this particular good thing – it causes raters to overuse comparisons when making their scores.
Take Mike, for example. Mike is very detail-oriented, but slightly less so than his coworker Sharon. The contrast effect might cause Mike’s boss to rate him low because the boss can’t help comparing him to Sharon. In this way, the contrast effect can lead to overestimates or underestimates of a person’s abilities.
Often, performance reviews are made with a particular time frame in mind. Perhaps a supervisor is asked to think about the last quarter or the past fiscal year when making their ratings. The recency bias creeps in when a recent event clouds memory of previous performance. The recency bias leads to overestimates if the person being rated had a recent “good streak.” On the other hand, it leads to underestimates if the person being rated had a recent “bad streak.” Either way, it produces inaccurate ratings, which ultimately makes decision making difficult.
The similar-to-me effect is an interesting concept that shows up both in nature and in the workplace. Birds of a feather flock together – and people are prone to favor someone who is similar to them. Men rate men higher than women. Women rate women higher than men. Older employees rate their contemporaries higher than younger employees. The list of possible similarities is huge. Similarity in age, gender, race, and experience all affect ratings. Even similar work habits, similar attitudes, or similar personalities lead to inflated ratings. The similar-to-me effect is everywhere – it shows up when rating supervisors, rating subordinates, and rating peers.
Measuring employee performance is important – and asking employees to rate one another is a valuable piece of that puzzle. Don’t let rater bias prevent you from using this important information.
One of the best ways to counteract rater bias is to carefully review employee rating data. High-quality performance review software, like Trakstar, is designed to make this a breeze. Performance review software gives you the tools you need to know when rating data might be compromised. In this way, you can prevent your company from using inaccurate data to make the wrong decisions about employee performance.