“You can be the best salesperson in the world, but if you’re alienating your internal or external customers, you’re not our best employee,” says Kathy Baum, HR director for Glacier Hills Retirement Community, in Ann Arbor, Michigan. She became familiar with the 360 a few years ago as HR manager at the National Center for Manufacturing Sciences (NCMS), a nonprofit Ann Arbor research-and-development organization. To identify and evaluate employee behavioral competencies, the NCMS included a 360 element in its evaluation process in 1995. Supervisors selected reviewers for their subordinates, although they allowed the subjects to suggest some changes to the list. Reviewers then rated employees on a one-to-five scale in various categories, and results were presented to subjects in a graphic format.
In line for her first review, “I was terrified,” says Baum. “But when I finished panicking, I became converted to the entire process.” For one thing, she explains, the review carries far more weight if it includes peer opinions. “If you were my supervisor and you were always saying that I needed to do something better, eventually I’d write that off as being picked on. But if my peers were saying the same thing, I’d realize that it’s not just my boss.”
NCMS employees’ merit increases were based 20 percent on the results of the 360 review and 80 percent on direct performance criteria, such as meeting budget or accomplishing a previously stated job goal. In the last two years, the NCMS has stopped doing 360s, citing workforce shrinkage from 120 to 34 employees — a size that made it “hard to keep the personal feelings out” of a 360 review, according to current HR director Sue Cruden. “A hundred people is the breakpoint for a truly objective, anonymous 360,” says Cruden. “Any less and it doesn’t really work.”
In a way, Cruden’s concerns mirror those of the critics of 360s, some of whom claim the reviews are basically popularity contests. While Baum concedes that can be the case, she says companies can prevent this by developing a consistent, repeatable, and fully anonymous system.
“It’s like fire,” says Baum. “If used correctly, it can do wonderful things. If you don’t handle it correctly, it will burn you badly.” Her system compensates for reviewer biases by aggregating at least three reviewers’ scores into one average score in each reviewer category. In addition, anyone giving a particularly low score must give feedback explaining why. The Boeing Co.’s Sears says that during his group discussions with an executive’s direct reports, he encourages participants to create a “consensus picture” of the executive in question, helping to eliminate such “outlying” views.
Despite the success that some companies have with 360s, the commitment managers must make to maintain these time-consuming programs leads other companies to try them and then drop them, says Colleen O’Neill, talent-management leader for Mercer. She sees another problem when companies aren’t “clear about what the main objective is for the 360. Is it just developmental, or will it have an effect on pay? If it [has an effect], that’s going to really change what you’re measuring.”