July 11, 2007

Designing the 360 Feedback Survey

For managers who would like to know more about item writing, scale construction, and survey length, and how these issues affect the feedback process, here are the basics of designing and administering 360 surveys.

360 items are usually grouped into categories called competencies. Competencies are the knowledge, skills, and abilities required for success and are a useful way to communicate performance expectations.

Most 360 survey items are written in a multiple-choice format: respondents choose one response for each item, such as Agree or Disagree.

Survey designers follow several guiding principles when writing the items. Survey items should:

- Be clearly and concisely written
- Describe only one behavior or skill, not personal characteristics
- Assess what they are intended to measure (items are generally piloted extensively and statistically analyzed)

Many 360s include additional items written as open-ended questions. Raters like these types of questions because this format gives them an opportunity to provide feedback in areas not captured on the multiple-choice survey items; they like being able to illustrate or emphasize points in their own words.

360 participants also like open-ended questions because the responses provide rich supplemental feedback and frequently clarify ambiguities or inconsistencies in the ratings.

Rating Scales
Rating scales are used to capture raters’ perceptions about whether, or how well, the manager being rated demonstrates the surveyed behaviors and skills.
Most scales associate numbers with anchors (for example, 1 to 5, where 1 = Strongly Disagree and 5 = Strongly Agree); these numbers are used to compute a score. Some scales use only verbal descriptors, such as Strongly Disagree, and do not associate the verbal rating with a numerical value; these descriptors are, however, later converted into numerical values for reporting purposes.
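As a minimal sketch of how this conversion works (the five-point scale and anchor labels below are illustrative assumptions, not a standard), verbal ratings can be mapped to numbers and averaged into a score:

```python
# Hypothetical five-point scale: map each verbal anchor to a number.
ANCHOR_VALUES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def item_score(responses):
    """Convert a list of verbal ratings to numbers and average them."""
    values = [ANCHOR_VALUES[r] for r in responses]
    return sum(values) / len(values)

# Three raters answered one item: (4 + 5 + 2) / 3
print(round(item_score(["Agree", "Strongly Agree", "Disagree"]), 2))  # 3.67
```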

Scales can differ in the number of points and in the choices that are included. Generally, scales range from three to 15 points. Most 360 designers use a five-point scale, or they use four or six points so that there is no middle point. Eliminating the middle point counters raters' tendency to overuse the safest choice on the scale, the middle or average rating.

It’s often debated whether to include a Not Applicable (NA) or Don’t Know (DK) rating choice. The rationale for including one is that raters need a way to flag items that aren’t relevant or that they haven’t had a chance to observe.

The advantage of including NA or DK as a rating choice is that these responses are not computed in the item’s average score. When there is no NA or DK option, raters often choose the middle point of the scale to express Not Applicable or Don’t Know; this can lead to confusion about what the middle point actually represents.
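Here is a small sketch of that scoring rule (assuming ratings have already been converted to numbers, with "NA" and "DK" kept as strings): the NA and DK responses are simply left out of the average rather than counted as zeros or middle points.

```python
def item_average(responses):
    """Average only the numeric ratings; skip NA and DK entirely."""
    rated = [r for r in responses if isinstance(r, (int, float))]
    if not rated:
        return None  # every rater chose NA or DK, so no score is computed
    return sum(rated) / len(rated)

# Four raters: two usable ratings, one NA, one DK.
print(item_average([4, "NA", 5, "DK"]))  # 4.5, not (4 + 0 + 5 + 0) / 4
```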

Survey Length
The length of a 360 survey affects the rater’s motivation to complete it, the time it actually takes the rater to complete it, and the rater’s overall impressions of the process.

Longer surveys, especially those with more than 100 items, can take up to an hour to complete. And the time it takes to complete 360 surveys can multiply very quickly for boss and peer raters, some of whom receive rating requests from more than ten individuals at one time. Because of this, many companies that use the 360 for all management employees opt for a shorter survey or exclude one rater category altogether, such as peers.
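As a rough back-of-the-envelope illustration (the per-item time below is an assumption for the sketch, not a researched figure), the burden on a busy rater adds up quickly:

```python
ITEMS = 100            # items on a longer survey
SECONDS_PER_ITEM = 30  # assumed average time to read and rate one item
REQUESTS = 10          # rating requests a boss or peer might receive at once

hours = ITEMS * SECONDS_PER_ITEM * REQUESTS / 3600
print(f"{hours:.1f} hours of rating time")  # 8.3 hours
```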

360 Survey Administration
The survey itself can be administered in a number of ways, including one or a combination of the following:

- Paper and pencil (mail or fax responses)
- Telephone
- Disk-based
- Intranet/Internet

Raters are typically given two weeks to complete the survey. Once the established cutoff is reached, a feedback report is generated.
