Good Practice: Using Response Scales

You may wish to use a response scale to ask respondents to rate a particular aspect (for example, how much they agree with a statement).

For more about different kinds of response scale, see the Guide to Likert Scales (in the Downloads box) or the linked web page.

If you are trying to decide whether your response scales should run from negative to positive in a left to right or right to left direction:

  • This is your choice – as long as it is clear.
  • There is conflicting advice on whether you should keep the direction of your scales consistent throughout the whole questionnaire or mix up the order of response scales (so some run negative to positive from left to right, while others run positive to negative).

In some cases mixing up the direction can stop respondents simply filling in a pattern and force them to think about each response. On the other hand, there is no guarantee that respondents will notice the change, which can lead to errors and incorrect data. Also, if you make respondents work too hard, having to re-interpret each question in order to formulate their response, they may give up part-way through your questionnaire. In our experience, mixing up the order of scales like this tends to confuse respondents and lead to unreliable data, so we prefer to keep a standard direction throughout the whole questionnaire.

The length of numeric scales

How long your scale is and how it is ordered will depend on what kind of data you want to get out of it. Think about what it means for your data, the level of detail you need and how you want to analyse the results.

  • Again, expert advice is mixed as to whether you should always include a neutral mid-point option, so that respondents don’t feel they have to adopt a positive or negative response but can sit on the fence. If you want to include a mid-point, five- or seven-point scales often work best.
  • Either way ALWAYS include a ‘don’t know’ / ‘not sure’ option to give respondents a ‘get out’ if they can’t answer the question!

Again, the key thing to think about is what you want to get out of the data (in particular, what data will enable you to meet your original aims and objectives in undertaking the research). Asking colleagues to create sample data for you by piloting the questionnaire will enable you to trial your analysis and get a feel for what the outcomes might look like. This can also help you decide how long a scale to use.

For example, suppose you have a seven-point scale and the response is broadly positive, but spread across the top three categories. Does this give you enough information? How would you summarise it? Would a five-point scale be more useful here? In our experience, if we don’t need very fine discrimination between different positions, a five-point scale is often easier for respondents to complete.
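To make this concrete, here is a minimal Python sketch of the kind of check you might run on pilot data. The responses are entirely invented for illustration, and the positive/neutral/negative bands are one possible way of collapsing the scale, not a fixed rule:

```python
from collections import Counter

# Invented pilot responses on a 1-7 agreement scale (7 = strongly agree).
responses = [7, 6, 5, 6, 7, 5, 6, 4, 5, 6]

# Broadly positive, but spread across the top three categories.
print(Counter(responses))  # Counter({6: 4, 5: 3, 7: 2, 4: 1})

def summarise(score):
    """Collapse a 1-7 score into a coarse positive / neutral / negative band."""
    if score >= 5:
        return "positive"
    if score == 4:
        return "neutral"
    return "negative"

# The coarser summary may be all you actually need for reporting.
print(Counter(summarise(s) for s in responses))
```

Trying this sort of collapse on pilot data can quickly show whether a shorter scale would have captured the same information.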

[Figure: example Likert scale chart]

(By the way, experts suggest that ‘agree to disagree’ scales can encourage a respondent bias towards ‘agree’ responses. They suggest it is often more effective to ask respondents to score responses from high to low.)


Primacy Effect

Beware of the bias which encourages respondents to pick the first option given to them.


Continuous versus ordinal scales

Are you asking respondents about a range of discrete or continuous values?

"Select your age 0-10, 11-20, 21-30?*"

If you are breaking up a series of continuous values, try to make sure that your scale is balanced, doesn’t encourage response bias and is properly calibrated for the respondent cohort (again – think about what the resulting data might look like).

*By the way, watch out for the easy mistake of duplicating values at the end of one range and the start of the next (e.g. 0-10, 10-20, 20-30 – which option would a 20-year-old pick?).
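One way to avoid this mistake is to generate the range labels from a single list of boundaries rather than typing each label by hand. A small Python sketch (the boundary values here are illustrative, not a recommendation):

```python
def range_labels(bounds):
    """Turn boundaries like [0, 10, 20, 30] into non-overlapping labels
    '0-10', '11-20', '21-30', with no duplicated endpoints."""
    labels = [f"{bounds[0]}-{bounds[1]}"]
    for lo, hi in zip(bounds[1:-1], bounds[2:]):
        # Each new range starts one above the previous upper bound.
        labels.append(f"{lo + 1}-{hi}")
    return labels

print(range_labels([0, 10, 20, 30]))  # ['0-10', '11-20', '21-30']
```

With labels built this way, a 20-year-old unambiguously falls into the '11-20' option.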


Balance and bias

How much did you like the activity? a) Loved it b) Really liked it c) Liked it a lot d) Liked it e) Didn’t really like it

This is obviously biased towards positive responses. This is an extreme case, but it is easy for more subtle bias to sneak in. Again, read your questions and response scales critically and preferably also ask someone else to do so.


Scale Calibration

Asking a whole cohort of new undergraduate students to indicate their age on the following scale is likely to give the following result. If you’re interested in picking out mature students, you may need a different scale.

[Figure: example of a badly calibrated Likert scale]


Lack of proportionality

Be aware that rankings give no sense of proportionality: ranking six items in order does not tell you, for example, whether a respondent has a clear preference for one item and is relatively indifferent to the other five, or how much more they prefer option 1 to option 2. (If this is important, consider asking respondents to distribute a number of points, say 50, between the six options.)
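The sketch below illustrates why a points distribution can capture more than a ranking. The option names and point allocations are invented: two respondents produce an identical ranking even though one has a strong favourite and the other is nearly indifferent:

```python
# Each respondent distributes 50 points between six options (A-F).
resp_strong = {"A": 45, "B": 1, "C": 1, "D": 1, "E": 1, "F": 1}  # clear favourite
resp_flat   = {"A": 10, "B": 9, "C": 8, "D": 8, "E": 8, "F": 7}  # near-indifferent

def ranking(points):
    """Order options from most to least preferred."""
    return sorted(points, key=points.get, reverse=True)

# A ranking question would have recorded these two respondents identically,
# hiding the difference in strength of preference that the points reveal.
print(ranking(resp_strong))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(ranking(resp_flat))    # ['A', 'B', 'C', 'D', 'E', 'F']
```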

Again, what you decide may depend on your research question / evaluation plan and what information is important to you. Focus groups and interviews are often more effective at capturing respondents’ strength of feeling or fine levels of discrimination, so you may decide that a broad-strokes questionnaire coupled with some focus groups gives you a better overall picture of what people think.