
Assessment



Students often take their cues about what they need to learn more from assessment than from teaching. Helping students understand the different forms of assessment is part of the process of guiding them through the transition to the postsecondary environment. On our culturally diverse campus, it is also important to support students who are unfamiliar with some of the assessment methods we use.

With the range of assessment methods available in learning and teaching, it is important to give students a clear and transparent explanation of the method used, so that they understand what is expected of them and how to achieve the results to which they aspire. Because assessment is such an important aspect of the learning experience, employing innovative forms of assessment, and assessing process as well as product, can help to ensure the validity and reliability of your methods.

It may be useful to discuss different forms of assessment, including formative and summative, in group tutorials. You can also make the connection to how the different methods help students to think critically and support the development of the broad skill set of a ‘Sheffield Graduate’.

It is also important to advise students what constitutes unfair means in assessment, for example, plagiarism or collusion, and help them develop good academic practice. See Guidance for Staff in the Use of Unfair Means in Assessment.

“We aim to energise student learning through inspiration rather than through the necessity for assessment.” Learning and Teaching Strategy, 2011-16.

Planning and marking assessments

There are two key issues to consider when designing and marking assessments:

  • Validity - that the assessment is measuring what it intends to measure
  • Reliability - that marking is consistent across all assessments

Validity: Assessment and curriculum design

As highlighted by Biggs’ theory of Constructive Alignment, assessment is a key part of curriculum design. Within Constructive Alignment, the purpose of assessment is to measure whether students have achieved the intended learning outcomes, and to what level. It is therefore crucial that learning outcomes clearly state what students are expected to learn in the module, and that consideration is given to how that learning will be measured, so that assessment tasks can be closely aligned with specific learning outcomes. Similarly, assessment tasks and teaching activities need to be closely related, so that the teaching activities support students in achieving the learning outcomes.

Go to the Higher Education Academy website for a comprehensive overview of Biggs’ theory of Constructive Alignment.

Assessment methods can then be identified that meet these learning outcomes and relate to the teaching activities. For example:

Learning outcome: Students will be able to identify different types of unfair means.
Teaching activities: A lecture on the types of unfair means.
Assessment task: Multiple choice exam questions which require identification of different types based on a series of case studies.

Learning outcome: By the end of this session, students will be able to evaluate the impact of the Second World War on local German communities.
Teaching activities: Pre-reading before the seminar, small- and large-group structured discussions during the seminar based on what they've read.
Assessment task: Essay question asking them to evaluate the arguments they have read and put forward their own view.

Go to the pages on the use of technology for assessment for ideas about innovative technological methods you could use when designing assessment tasks.

Defining formative and summative assessment

Formative assessment is designed to help a student learn by “restructuring their understanding/skills and build more powerful ideas and capabilities” (Nicol and Macfarlane-Dick, 2006). It is characterised as assessment for learning. It helps learners by giving feedback at an early stage that can influence the future learning process. It is seen as low stakes because it gives students the opportunity to act on feedback before their final grade.

Summative assessment summarises what a learner has learnt at a particular point in time. It contributes to grades and gauges the student’s ability to meet specified learning outcomes. It is characterised as assessment of learning. Note that the difference between formative and summative assessment lies in how the assessment is used, rather than in the type of student work that leads to feedback (e.g. a piece of coursework could be summative or formative depending on the information given to the student).


Formative assessment for learning            Summative assessment of learning
  • Lower stakes                               • Higher stakes
  • Informs students                           • Sums up achievement
  • Emphasis on feedback                       • Performance indicator
  • ‘High level learning’                      • ‘Low level learning’

Worth considering: Formative and summative assessment are not necessarily mutually exclusive. A single assessment task could be both formative and summative (for example, a mid-term piece of coursework might contribute to a student’s final grade, but might also give the student feedback to consider for a final examination at the end of the module).

Principles of Assessment

The University sets out principles to promote assessment that is fair, valid and reliable. These apply to all methods, for example, essays, presentations, portfolios or exams.

The following University-wide Principles of Assessment are intended to inform the approach to assessment in all departments:

1. Assessment should be valid.

2. Assessment should be reliable and consistent.

3. Information about assessment should be explicit, accessible and transparent.

4. Assessment should be inclusive and equitable.

5. Assessment should be an integral part of programme design and should relate directly to programme aims and learning outcomes.

6. The amount of assessed work should be manageable.

7. Formative and summative assessment should be included in each programme.

8. Timely feedback that promotes learning and facilitates improvement should be an integral part of the assessment process.

9. Staff development policies and strategies should include reference to the development of assessment practices.

Further information about the policies and processes, including more detail around the Principles of Assessment, can be found in the Assessment and Feedback Processes and Policies pages.

Ensuring reliability of assessment

Even when the intended learning outcomes and assessment tasks are clear and closely aligned, it is important to set marking criteria in order to identify whether a learning outcome has been achieved, and to what level. Marking criteria help to ensure consistency, both between markers and within the work of a single marker, because they provide a clear indication of what assessors should be looking for. It is therefore essential that you stick to the marking criteria when marking each assessment, regardless of your personal opinion of the criteria. It is also helpful for students to be able to access the marking criteria so they know what you are looking for.

The University has institution-wide marking criteria that define each level of progression through a programme of study. Some departments, modules, and individual assessment tasks will have their own marking criteria based on the institution-wide set, so it is worth asking colleagues about your departmental processes.

However, it is still possible to interpret marking criteria in different ways:

  • Between markers
  • Between different students' assessments (even if the marker is the same)
  • Between questions
  • Between different years of study (even if the assessment task is the same)

Processes therefore need to be put in place to ensure that markers are consistent. One method is standardisation, where sample assessments are used to demonstrate what the criteria look like in practice and markers are required to mark to the standard set by those samples. This helps to ensure that all markers apply the criteria in the same way.

Another is moderation, which ensures consistency among markers by having them meet to discuss how they have approached assessments. Approaches used include:

  • sampling (where a selection of assessments is looked at by a second assessor to check consistency; the selection usually includes assessments marked at the top, middle and bottom of the grade range - see the sketch after this list)
  • blind/anonymous marking (where assessors don’t know which student’s work they are marking)
  • double blind marking (where two people mark the same assessment without seeing each other’s comments or grading, then meet to discuss).
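
To make the sampling step concrete, here is a minimal sketch in Python of how a top/middle/bottom moderation sample might be drawn. It is hypothetical throughout: the function name, the band size and the marks data are invented for illustration and do not describe a University process.

    def moderation_sample(marks, n_per_band=2):
        # 'marks' maps a script ID to the first marker's grade (illustrative data only).
        # n_per_band is an invented parameter, not a departmental rule.
        ranked = sorted(marks, key=marks.get, reverse=True)
        mid = len(ranked) // 2
        return {
            "top": ranked[:n_per_band],
            "middle": ranked[mid - n_per_band // 2 : mid - n_per_band // 2 + n_per_band],
            "bottom": ranked[-n_per_band:],
        }

    # Hypothetical cohort of eight scripts.
    marks = {"s1": 78, "s2": 72, "s3": 65, "s4": 61, "s5": 58, "s6": 52, "s7": 45, "s8": 38}
    print(moderation_sample(marks))
    # {'top': ['s1', 's2'], 'middle': ['s4', 's5'], 'bottom': ['s7', 's8']}

A second assessor would then re-mark the selected scripts and meet with the first marker to resolve any differences.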

Moderation is particularly important when an assignment is judged to be borderline between two grades, especially if the result may cause a student to fail that assessment.

Departments have a duty to support new staff around moderation:

“Departments are responsible for ensuring that all staff and any postgraduate research students involved in marking and moderation are adequately prepared for this activity, particularly those with less experience or who are new to the department.”

Further information and guidance about departmental responsibilities around moderation can be found in the Assessment and Feedback Processes and Policies pages.

What your students should know about assessment

It is important to make your departmental policy on feedback and assessment explicit to students, including information on the following:

  • methods of assessment
  • how work is marked – anonymous marking, double marking/internal moderation of assessments, marking criteria
  • feedback – what constitutes feedback, how and when it is given
  • reference to General Regulations relating to First Degrees, Calendar
  • deadlines for assessed work, penalties for late submission, extensions
  • departmental policies on unfair means

See also Providing Information for Students in the Processes and Policies section.


IN PRACTICE

Example: Assessment Handbook

Dr Lorna Warren, Senate Award Fellow, Department of Sociological Studies

The Department of Sociological Studies provides an assessment handbook, clearly outlining the types of assessment, expectations, marking schemes and what constitutes unfair means.

Example: Examination manual

Dr Adrian Jowett, School of Clinical Dentistry

The School of Clinical Dentistry has developed an Examination Manual that sets a standard for how assessments should be quality assured. This helps to ensure a consistent approach is taken between different courses and different academic levels.

Example: An aid for checking the consistency of marking

John Wright, Management School

John has developed a package for promoting consistency of marking between multiple assessors when large numbers of students are involved, e.g. core Level 1 modules. Set up in Excel, it enables the graphical distribution of each set of marks to be easily checked for consistency with the overall profile of all marks. No two sets of marks will have exactly the same profile, but provided the number of scripts per marker is large enough, the profiles should be similar. The tool also includes prompts for scores from second markers and the conversion of non-standard marking scales. Staff can then use their professional judgement alongside this tool to ensure consistency.

Marks Profiler

Demo version
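
The comparison idea behind the Marks Profiler can also be illustrated in a few lines of Python. This is a hedged sketch, not the tool itself (the real Marks Profiler is an Excel workbook); the marks, the half-standard-deviation threshold and the variable names are all invented for illustration.

    from statistics import mean, stdev

    # Marks awarded by each marker (illustrative data only).
    marks_by_marker = {
        "Marker A": [62, 58, 71, 65, 55, 68, 60, 74, 66, 59],
        "Marker B": [48, 52, 45, 60, 50, 43, 57, 49, 54, 46],
    }

    all_marks = [m for ms in marks_by_marker.values() for m in ms]
    overall_mean, overall_sd = mean(all_marks), stdev(all_marks)

    for marker, ms in marks_by_marker.items():
        # Flag any marker whose average sits more than half an overall standard
        # deviation from the cohort mean; the threshold is an arbitrary prompt
        # for professional judgement, not a verdict on the marking.
        deviation = (mean(ms) - overall_mean) / overall_sd
        status = "review" if abs(deviation) > 0.5 else "ok"
        print(f"{marker}: mean {mean(ms):.1f} vs overall {overall_mean:.1f} "
              f"({deviation:+.2f} SD) -> {status}")

As with the Excel tool, the output is only a prompt: a flagged marker may simply have marked a stronger or weaker batch of scripts, so professional judgement makes the final call.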

Example: Approaches to assessment and feedback

The iSchool has received high feedback ratings in the National Student Survey. The following are some examples of its approaches to feedback and assessment:

1) The iSchool offers a varied range of assessments, relevant to the wide range of learning outcomes that students are set. Assessment methods include writing essays, writing reports based on a brief from a local company, creating a website or designing a content management system, making a short film, undertaking an evaluation of a local information service, preparing a group presentation, and writing a reflective diary analysing their experiences as a manager. Taught Masters students are assessed by coursework (rather than examination); for undergraduates, some modules are assessed entirely by coursework and the remainder mainly by coursework.

2) The iSchool has standardised coursework and oral presentation feedback forms, which contain a matrix to be marked up and a section for specific comments. This latter section has to be filled in so every student gets specific feedback on their piece of work. Where these are not used, a feedback form is devised which relates to the specific criteria for that assignment.

3) All taught Masters students and BSc Information Management students (i.e. all students other than undergraduate duals) complete a “test” assignment, submitted in Week 4 or Week 5 of Semester 1, which is administered and marked exactly like a “real” assignment. This work is handed back and discussed in individual tutorials, so that students understand how work is assessed in the department. This also provides an opportunity, for example, to give international students feedback on their language skills.

Example: Evaluating learning: Assessing peer assessment

Dr Anthony Rossiter and Linda Gray, Department of Automatic Control and Systems Engineering (ACSE)

This case study describes some ways in which peer assessment is used in the first and second years to encourage ACSE students to develop critical evaluation skills by facilitating active reflection on the quality of both their own work and the work of their peers.

Example: Skills integration for students through in-class feedback and continuous assessment

Presentation by Dr Konstantinos Dimopoulos (City College) at the 6th Annual Learning and Teaching Conference, 2012

This presentation looks at how students can be supported to apply their knowledge and respond to feedback and continuous assessment.

Example: Validity & Reliability in Assessment Planning (YouTube video - Duration: 1:40)

Dr Susan McCahan, University of Toronto

RESOURCES

  • Student Assessment and Feedback Research (password required)

The University of Sheffield commissioned a study of assessment methods and student perspectives on assessment in higher education as part of its response to the National Student Survey (NSS). Although the report looks at 2006 survey results, it offers useful insights into the challenges.

  • QAA (2011). Code of practice for the assurance of academic quality and standards in higher education: Assessment of students.

  • Nicol, D. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

  • Race, P. (2010). Making Learning Happen. London: Sage.

  • Rust, C. (2002). Purposes and Principles of Assessment. Oxford Centre for Staff and Learning Development, Learning and Teaching Briefing Papers.

  • University of Ulster Assessment Handbook (2011).




Comments or suggestions - contact: lets@sheffield.ac.uk

Learning and Teaching Services Processes and Policies:

Assessment Policies