Advances and Innovations in University Assessment and Feedback

Carolin Kreber, Charles Anderson, Noel Entwistle, and Jan McArthur

Print publication date: 2014

Print ISBN-13: 9780748694549

Published to Edinburgh Scholarship Online: January 2015

DOI: 10.3366/edinburgh/9780748694549.001.0001


Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning

Chapter:
1 Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning
Source:
Advances and Innovations in University Assessment and Feedback
Author(s):

David Boud

Publisher:
Edinburgh University Press
DOI: 10.3366/edinburgh/9780748694549.003.0002

Abstract and Keywords

Offers a historical overview of how assessment practices have changed over the past four decades. Boud observes that despite some drastic changes and shifts in positive directions, the emphasis on formative assessment has lessened, in that much assessment now ‘counts’ towards a final mark. He calls for a new agenda in assessment practices, one that focuses explicitly on developing students’ capacity for judgement. This capacity for judgement is associated with the emergent agenda of ‘sustainable assessment’, which suggests that the purpose of assessment is not just to generate grades, or to influence present learning, but importantly to build students’ capacity to learn and assess themselves beyond the current task, for example as they engage in lifelong learning in professional practice.

Keywords: History of assessment, A new agenda in assessment, Capacity for judgement, Learning for the longer term

Introduction

Despite common assumptions that assessment practices in higher education are enduring, the past forty years have seen remarkable changes. A key change has been from the dominance of unseen end-of-year examinations, through the move to ‘continuous assessment’ and on to a range of diverse assessment approaches. Another notable change has been from assessment weightings being regarded as confidential to the transparency of assessment standards and criteria of today. Assessment has thus shifted in some positive directions. Unfortunately, during the same period, the emphasis on what we now call formative assessment has lessened: from a general acceptance that all courses involve the production of considerable written work that is used purely to aid learning, we now have regimes based on the assumption that all assessment must ‘count’ towards final marks or a grade point average.

My aim in this chapter is to briefly sketch these developments with the intention of projecting forward to explore emergent assessment practices. These are practices that move beyond current innovations in areas such as authentic assessment, self- and peer assessment and improved feedback to students. They represent new views of assessment based upon developing students’ capacity for judgement and involve practices that emphasise an active role for students beyond the production of written work. The chapter will explore this emerging agenda and consider what changes might be possible, given the continuing dominance of accountability mechanisms that have had the effect of constraining the development of assessments for learning.

Assessment as Taken-for-Granted

One of the problems of discussing assessment is that we all have a prior or pre-existing conception of what it is and thus an immediate reaction to it, established through our early, formative experiences. Sometimes assessment has touched us deeply; sometimes it has left bruises (Falchikov and Boud 2007). While changes may occur around us, our point of reference is often what assessment was like when we were most influenced by it. This conception can easily get locked in and provide a personal yardstick against which we continue to judge assessment. It is important to resurface these prior events and experiences, as they influence what we regard as legitimate. In some cases, we see them as the gold standard of assessment, in others we resolve never to subject our students to the practices that impacted badly on us.

Many of the changes in assessment that have occurred over the past half century are reflected in my own biography: first, as an undergraduate student in England and then later as an academic, mainly in Australia. There have been minor differences of emphasis between the two countries from time to time, but the main trajectory of assessment is similar. When I entered university there were two main activities that we now label as ‘assessment’. First, there were set tasks that were completed mostly out of class time, but occasionally within it. These were commonly handed in and ‘marked’. A record may have been kept of these marks, but as students we were not very conscious of them beyond the point of the return of our work. Marking usually involved assigning numbers or grades along with some form of brief comment. Work of this kind was commonplace. We completed it because it was the normal expectation of what students did at that time. Second, there were examinations. These occurred at the end of the year and sometimes at the end of each term. These were unseen tests undertaken under examination conditions. No notes were allowed and all information that was needed had to be recalled from memory (Black 1968). No details of examination performance, other than the final mark, were made available to students. Degree performance was based predominantly on final-year examination results. The course I took in physics was part of a wave of innovations in the 1960s in that it included a final-year project, which was marked and contributed in part to the final degree classification.

While different disciplines and different universities used variants on this approach, the variations were minor: a mix of regular marked work, returned with modest comments and not counting towards final grades, alongside grading based on examinations, was commonplace. In the language we use today, there was a clear separation between formative and summative assessment.

For me, the most influential assessment events were not exams, but the ones that did not feel like assessment at all at the time. The first was a project conducted over many weeks in pairs in a laboratory during the final semester of first year in which we had to design and conduct experimental work on a problem, the solution to which was unknown or at least not easily located in texts by undergraduates. It was far from the stereotypical recipe-like lab problem that was common at the time. The second assessment event was a substantial final-year project in theoretical physics that involved me in exploring a new way of looking at statistical mechanics. What these activities did for me was to give me a taste of the world of research, rather than simply more subject matter. They showed me that it was possible to contribute to the building of knowledge in a small way and not just to absorb it. While the experience led to a resolve not to undertake physics research, it was also influential in my becoming a researcher.

During my undergraduate years there was a substantial degree of secrecy about assessment processes. We were not told what the criteria for marking would be and how different subjects would be weighted into final results was confidential. The head of the department in which I studied (Lewis Elton) took the then daring step of formally disclosing the weightings of the elements that would comprise our degree classification for the first time in the year of my graduation. Assessment was secret teachers’ business: it was not the position of students to understand the basis on which they would be judged.

Over the late 1960s and the early 1970s, a campaign for what was known as ‘continuous assessment’ was mounted by student organisations (Rowntree 1977). Their argument was threefold: (1) it was unfair to base degree performance on a limited number of examinations taken at the end of the year or course, as examination anxiety among some students led them to underperform; (2) assessment should be spread throughout a course to even out both workload and anxiety; and (3) multiple points of judgement should be used and final grades should be a weighted accumulation of assessments taken across the curriculum, with the weightings disclosed to students. Assessment for certification moved from one or two points late in a programme to a continuous sampling over the years. The battle for ‘continuous assessment’ was comprehensively won by students, and this practice is now so universal that the term for it is fading from our common language. Later, the massive expansion of higher education that occurred without a commensurate increase in unit resources meant that the amount of regular coursework that could be marked was severely reduced. Continuous assessment commonly transformed into two or three events within each subject per semester. In the Western world, it appears that Oxford University and Cambridge University are rare exceptions that continue to maintain traditional methods of assessment.

Every change in assessment has unintended consequences and the move to continuous assessment has had quite profound ones. First, students have come to expect that all tasks that they complete will contribute in some way towards their final grades. The production of work for the purpose of learning alone, with no extrinsic end, has been inhibited. ‘Will it count?’ is a phrase commonly heard when asking students to complete a task. This shift also indicates a change in the relationship and contract between teachers and students. Trust that work suggested by teachers will necessarily be worthwhile has disappeared in an economy of grades. Second, having separate events for formative assessment and summative assessment has become unsustainable. When all work is summative, space for formative assessment is diminished. Poor work and mistakes from which students could once have learned, and which they could then leave behind, are now inscribed on their records and weighted in their grade point average. Space for learning is eroded when all work is de facto final work. The dominance of the summative is well illustrated by the curious phenomenon, pervasive in the US literature, of referring to everything other than tests and examinations as ‘alternative assessment’ or ‘classroom assessment’, as if tests and examinations are the gold standard that defines the concept of assessment. Anything else is not quite the real thing; it is merely an alternative, or confined to what a teacher might do in the classroom.

The Educational Measurement Revolution

Alongside the primarily social change to continuous assessment, other forces outside the immediate community of higher education were influencing assessment and seeking to position it quite differently. In the 1960s and 1970s, the educational measurement revolution (for example, Ebel 1972) began to influence higher education assessment. The proposition articulated by educational testing specialists from a psychometric background was a simple one. In summary, they regarded student assessment as a folk practice ripe for a scientific approach. If only assessment could be treated as a form of psychological measurement, then a vast array of systematic techniques and strategies could be applied to it. Measurement assumptions were brought to bear on it. The prime assumption was that of a normal distribution of performance within any given group. Whatever qualities were being measured, the results must follow the pattern of a bell curve. If this assumption could be made, then all the power of parametric statistics could be applied to assessment.

The educational measurement revolution was taken up with more enthusiasm in some quarters than others. The impact on psychology departments was high and later medical education was strongly influenced by this tradition, but many disciplines were not touched at all. I recall joining the University of New South Wales in the late 1970s and discovering that grades in each subject in the School of Psychology were not only required to fit a normal distribution, but that this included an expectation that a requisite number of students needed to fail each subject in conformity with the normal distribution. (It took many years to acknowledge that the selection process into higher education – and into later years of the programme – severely skewed the distribution and made these assumptions invalid.) While not all disciplines shared the enthusiasm of the psychologists, norm-referenced assessment became firmly established. Students were judged against the performance of other students in a given cohort, not against a fixed standard.
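To make this concrete, consider a minimal sketch of grading ‘on the curve’. The grade labels and z-score cut-offs below are hypothetical, chosen only for illustration and not drawn from any regime described here; the point is that each mark is judged by its position in the cohort, so a fixed share of any group lands in each band, including the failing one.

```python
# Illustrative sketch of norm-referenced ("on the curve") grading.
# The z-score cut-offs are hypothetical; they only show how a fixed
# share of any cohort is forced into each grade band, including a
# requisite share of failures.
from statistics import mean, stdev

def grade_on_the_curve(marks):
    """Map each raw mark to a grade by its position in the cohort,
    not by comparison with any fixed standard."""
    mu, sigma = mean(marks), stdev(marks)
    def grade(z):
        if z >= 1.5:
            return "HD"   # top tail of the bell curve
        if z >= 0.5:
            return "D"
        if z >= -0.5:
            return "C"
        if z >= -1.5:
            return "P"
        return "F"        # the bottom tail must fail, by construction
    return [(m, grade((m - mu) / sigma)) for m in marks]

# Even a strong, heavily pre-selected cohort yields failures:
cohort = [92, 88, 85, 84, 82, 81, 79, 78, 75, 70]
for mark, g in grade_on_the_curve(cohort):
    print(mark, g)
```

The sketch makes the anecdote’s point: once grades are read off the cohort’s distribution rather than a fixed standard, a heavily selected group will still produce its quota of failures.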

The impact of educational measurement still lingers today. Notions of reliability and validity are commonly used in assessment discussions and the multiple-choice test – a key technique from this period – has become ubiquitous. The metaphor of measurement became entrenched for a considerable time and is only recently being displaced. For example, it is interesting to note that between the first (2000) and second (2006) editions of the UK Quality Assurance Agency Code of Practice on Assessment of Students, the use of measurement in the definition of assessment was removed.

A less obvious influence from this period is that once student assessment became the subject of scrutiny beyond the immediate context of its use, it would forever be the object of critical gaze. Assessment was no longer a taken-for-granted adjunct to teaching, but deserved consideration in its own right. Assessment was to be discussed independently of disciplinary content or the teaching that preceded it. Assessment methods became the focus of attention, as if they were free-standing and the most important element of the process.

Widening the Agenda

Following the period of influence of educational measurement, there have been a number of other shifts of emphasis of greater or lesser effect. The first major one was the incremental move from norm-referenced testing to a criterion-referenced and standards-based approach. It is impossible to date this precisely or even cite key events that mark the transition, but there has been a comprehensive shift, at least at the level of university policy, from judging students against each other to judging them against a fixed standard using explicit criteria. Desired learning outcomes that have to be disclosed to students are widespread as required features of course and programme documentation, and, increasingly, assessment tasks are expected to be accompanied by explicit criteria used to judge whether standards have been met. Prompted by the Organisation for Economic Co-operation and Development (OECD) agenda to create minimum standards for equivalent courses across countries, there have been both national (www.olt.gov.au/system/files/resources/Resources_to_assist_dscipline_communities_define_TLOs.pdf) and international initiatives to document threshold programme standards (http://www.unideusto.org/tuning/).

Linked to this, the second shift of emphasis has been the influence of an outcome-oriented approach and a focus on what students can do as a result of their higher education. Until the 1990s, assessment had focused strongly on what students knew. Students were judged primarily on their understanding of the specific knowledge domain of the subjects they were studying. There was some emphasis on practical activities in professional courses and project work in later years, but the main emphasis was on what knowledge and academic skills students could demonstrate through assessment tasks. In some quarters, this has been represented as bringing the debate about competencies and capabilities into higher education. Vocational education and training systems have established very strong views about organising learning around explicit operational competencies, but higher education has taken a weaker view, which embraces a more holistic approach to competencies. The focus has been on outcomes, but not on reducing these to a behavioural level of detail.

Various initiatives have led to an emphasis on transferable skills, generic attributes or core competencies (for example, Hughes and Barrie 2010) – that is, skills that all graduates should develop, irrespective of discipline. There has also been an increased emphasis on making assessment tasks more authentic (Wiggins 1989) – that is, creating tasks that have more of the look and feel of the world of practice than of activities that would only be found within an educational institution. This involves, for example, replacing the essay with a range of different writing tasks that involve academic writing adapted for particular contexts. Students learn not to perfect the standard academic essay, but to write in different genres for different audiences. This emphasis on authenticity has also permeated beyond vocational courses to ones which may be regarded as more conventionally academic.

These changes to widen the notion of assessment have positioned it as an indicator not of what students know, but of what they can do. And not only what they can do, but what they can do in a variety of different contexts. What is important here is not the various facets of learning, but how they can be put together into meaningful and complex tasks; the kind of tasks that professional practitioners encounter after they graduate.

Dilemmas and Contradictions in Assessment Practice

So, today, we have a single term – assessment – that is normally used without qualification to refer to ideas with quite different purposes. It means the grading and certification of students to provide a public record of achievement in summary form – summative assessment. It also means the engagement of students in activities from which they will derive information that will guide their subsequent learning – formative assessment. However, the tasks associated with each have collapsed together. All set tasks now seem to have a dual purpose.

Severe problems are created by this arrangement, as it serves neither end very well. Let us take two examples. First, for the purposes of certification it may be satisfactory for grades to be used to summarise quite complex judgements about what has been achieved. These can be recorded on a transcript and provide a simple overview of patterns of accomplishment. However, this does not work for formative purposes. A grade or mark has little informational content. A ‘C’ on one assignment tells the student nothing in itself about what might be needed to gain a ‘B’ in the next assessment. Even when detailed grade descriptors are added, they only reveal what has been done, not what needs to be done. For purposes of aiding learning, rich and detailed information is needed about what criteria have and have not been met, what is required for better subsequent performance and what steps a student might take to get there. For certification, summary grades are normally sufficient; for learning, much more detail is needed. Indeed, there is the suggestion in the research literature (for example, Black and Wiliam 1998) that the provision of a grade may distract students from engaging with more detailed information about their work.

There is a second tension between the two purposes of assessment. It involves the timing of assessment. For purposes of certification, in decision-making for graduation, employment and scholarships, assessment needs to represent what a student can do on the completion of their studies. Difficulties that a student may have experienced in earlier stages of their course – and which have been fully overcome – should not affect the representation of what a student will be able to do. The implication of this thinking is that assessment for certification should occur late in the process of study. Returning to assessment for learning, does late assessment help? The answer is, clearly, no. Information for improvement is needed during the process of study, not after completion of the course. Indeed, early information is most needed to ensure misconceptions are not entrenched and academic skills can be developed effectively. For certification purposes, assessment needs to be loaded later in courses; for learning, it needs to be loaded earlier.

While logic might demand that these two purposes be separated so that both can be done well without the compromises that are required when each is associated with the other, it is now unrealistic to imagine that we can turn back the clock and return to a simpler time when different activities were used for formative and summative purposes. The demands that summative assessment cover a much wider range of outcomes than in the past, along with reductions in resources per student, mean that there may be little scope for additional formative activities.

This pessimistic view of the overwhelming dominance of assessment for certification purposes needs to be balanced by the rediscovery and consequent re-emergence of discussion on formative assessment. The review paper by Black and Wiliam (1998) on formative assessment was one of the very few from the wider realm of educational research that has had an impact on higher education. Many authors took the momentum of this initiative to seek to reinstate the importance of formative assessment (for example, Yorke 2003). However, it is difficult to know the extent to which the considerable discussions of formative assessment in the higher education literature have embedded themselves into courses. Like many innovations that have been well canvassed with positive outcomes (for example, self- and peer assessment), there are many reports of practice, but, unlike the initiatives mentioned above, little sense that the uptake of this idea has been extensive.

In summary, it is apparent that the present state of assessment in practice is often a messy compromise between incompatible ends. Understanding assessment now involves appreciating the tensions and dilemmas between demands of contradictory purposes.

The Emerging Agenda of Assessment

Feedback

Notwithstanding the dilemmas and contradictions of two different purposes of assessment operating together, where is the assessment agenda moving and why might it be moving in that direction? If we look at what students are saying, we could conclude that the greatest issue for them is feedback or, rather, their perceptions of its inadequacy. In student surveys across universities in both Australia and the UK, the top concern is assessment and feedback (Krause et al. 2009; HEFCE 2011). This is commonly taken to mean that students are dissatisfied by the extent, nature and timing of the comments made on their work. We should be wary, though, of coming too readily to an interpretation of what is meant. As an illustration, surprisingly, students at the University of Oxford also complained of a lack of feedback, even though they were getting prompt, detailed and useful comments on their work (a defining characteristic of the Oxford tutorial system). However, they were concerned that the formative information that they received, while helping them improve their work, did not enable them to judge how well they were tracking for the entirely separate formal examinations conducted at the end of their second and third years (Oxford Learning Institute 2013).

This concern with feedback has led to a range of responses. At the crudest level, there are stories in circulation of pro-vice-chancellors urging teaching staff to ensure that they use the word feedback at every opportunity when commenting on anything that might be used by students to help them in assessed tasks, so that they remember this when filling in evaluation surveys. More importantly, the concern has prompted researchers to explore differences in interpretation between staff and students as to what they mean by ‘feedback’ (Adcroft 2011). More substantially again, in some cases the concern has led universities to appoint senior personnel to drive improvement and mount systematic and research-based interventions designed to improve feedback in many forms (Hounsell 2007). The initiatives and substantial website developed by Dai Hounsell and his colleagues are particularly notable in this regard (http://www.enhancingfeedback.ed.ac.uk/). Most important for the present discussion, it has prompted scholars to revisit the origins of feedback and question what feedback is and how it might be conducted effectively (Nicol and Macfarlane-Dick 2006; Hattie and Timperley 2007; Boud and Molloy 2013; Merry et al. 2013).

The use of language in assessment lets us down again. As discussed earlier, the term ‘assessment’ is used in everyday language to mean quite different things, but for the term ‘feedback’ the problem is even more severe. We use the term ‘feedback’ in the world of teaching and learning to refer to the information provided to students, mainly by teachers, about their work. This use of the word appears ignorant of the defining characteristic of feedback when used in disciplines such as engineering or biology. Feedback is not just an input into a system, such as ‘teacher comments on student work’. In engineering and biology, a signal can only be termed ‘feedback’ if it influences the system and this influence can be detected. Thus, a heating system switches on when the temperature falls below a given level and switches off when a higher temperature is reached. The signal from the thermometer to the heater can only be called part of a ‘feedback system’ if it detectably influences the output. If we apply this example to teaching and learning, we can only call the ‘hopefully useful information’ transmitted from teacher to student feedback when it results in some change in student behaviour, which is then manifest in what students subsequently do. The present emphasis on what the teacher writes and when they give it to the student needs to be replaced with a view of feedback that considers what students do with this information and how this changes their future work (Hounsell et al. 2008). Feedback does not exist if students do not use the information provided to them (Boud and Molloy 2012).
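To pin down the engineering sense of the term that the thermostat analogy invokes, here is a minimal sketch; the temperature constants and rates are invented for illustration. The sensor reading only earns the name ‘feedback’ because it changes the controller’s behaviour, and that change is detectable in the subsequent output.

```python
# A minimal closed feedback loop, in the engineering sense the chapter
# appeals to: the sensed temperature is fed back into the controller,
# alters its behaviour, and the alteration shows up in later output.
# All constants here are invented for illustration.
def simulate(hours=12, temp=15.0, low=18.0, high=21.0):
    heater_on = False
    for hour in range(hours):
        # The fed-back signal: the current temperature reading.
        if temp < low:
            heater_on = True      # the signal changes the system...
        elif temp > high:
            heater_on = False
        # ...and the change is detectable in what happens next.
        temp += 1.5 if heater_on else -1.0
        state = "on" if heater_on else "off"
        print(f"hour {hour:2d}: {temp:5.1f} degrees, heater {state}")

simulate()
```

On this reading, a teacher’s comment that never alters what the student subsequently produces is an open-loop signal rather than feedback, which is exactly the distinction drawn above.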

We might speculate on how it is that academics who appreciate what feedback is in their own disciplinary area manage to so thoroughly change their understanding of it in teaching; however, it is more fruitful to focus on what the implications of a clearer conception of feedback are for higher education practice. The first implication is that we must focus attention on the student as an entity to be influenced, rather than solely on the teacher seeking to influence. The second is that we must focus not just on the single act of information provision at one single point in time (important as that might still be), but on what occurs subsequently: what effects are produced? When might these effects be demonstrated? Third, we must be conscious that students are not passive systems, but conscious, thinking agents of their own destiny. How can student agency influence the processes of feedback?

My view is that we are at a point of fruitful innovation that will lead on from a reappraisal of what we mean by feedback to concrete changes in practice. The starting point should focus on course design. Changing what teachers do when confronted with student work in itself presents limited options. Looking at where tasks occur, what their nature is, how they relate to learning outcomes and what follows in future tasks during the semester may start to make a difference. Most importantly, change will only occur if there are active steps in place to monitor what changes take place in students’ work following input from others. Information on the performance of later tasks gives invaluable information to teachers about the effectiveness of their earlier inputs. The design of courses might start with consideration of how feedback loops can be adequately incorporated throughout the semester, so that, for example, students can learn, demonstrate their learning, receive comments about it, act on these comments and produce subsequent work on multiple occasions. Claims by students that feedback is inadequate can only be effectively countered by showing that it does make a difference.

Developing Judgement

While feedback might be prominent publicly, a more fundamental issue concerning assessment is emerging. As it becomes increasingly apparent that assessment is probably the most powerful shaper of what students do and how they see themselves, the question arises: does assessment have the necessary effect on students that is desired for higher education? As there are two purposes, it is likely to have two kinds of effect. First, for certification purposes, does it adequately and fairly portray students’ learning outcomes from a course? Second, does it lead students to focus their attention and efforts on what they will most need on graduation? While the generic attributes agenda is focusing attention on the features of a graduate needed across all programmes and how these might be developed, there is an underpinning issue that affects all other outcomes. Namely, how does assessment influence the judgements that students make about their own work? Clearly, graduates need to know things, do things and be able to work with others to get things done. But they also need to be able to know the limits of their knowledge and capabilities and be able to tell whether they can or cannot address the tasks with which they are faced. In other words, they need to be able to make judgements about their own work (Joughin 2009). Without this capacity to make effective judgements, they cannot plan and monitor their own learning or work with others to undertake the tasks at hand. How well do their courses equip them to do this? In particular, what contribution does assessment make?

A precursor to this focus on student judgement is found in the literature on student self-assessment and self-directed learning (Knowles 1973). Particularly since John Heron’s seminal article in 1981 (Heron 1988), there has been a flourishing of studies about student self-assessment, often in conjunction with peer assessment. Unfortunately, much of this literature has been preoccupied with seeking to demonstrate that students can make judgements about their grades similar to those of their teachers (Boud 1995). While many students can do this reasonably well, research has identified that students in introductory classes and those whose performance is weaker than average tend to overrate themselves, and that students in advanced classes and those whose performance is above average tend to underrate themselves. Regrettably, this research focus is an outcome of thinking about certification, not learning. The implicit – and sometimes explicit – aim of this research appears to be to judge whether student marks could be substituted for teachers’ marks. This is, in the end, a fruitless endeavour for at least two reasons. First, students are likely to generate different marks depending on how great the consequences will be for them. Second, the generation of marks does not address exactly what students are and are not able to judge in their own work.

Viewed from the perspective of assessment for learning, the problem changes dramatically. Of course, students on first encounter with new material will not be good judges of their own performance. As at the beginning of the process they will not sufficiently appreciate the criteria they need to apply to their work, it is understandable that they may err on the side of generosity towards themselves – they just do not know that their work is not good enough. As they gain a greater understanding of what they are studying, they will increasingly appreciate the criteria that generate successful work and be able to apply such criteria to their own work. Once they are sufficiently aware of the complexity of what they are doing and are conscious of their own limitations prior to having reached a level of mastery, they will be sparing in their own judgements and tend to underrate themselves. Self-assessment, then, should be seen as a marker of how well students are tracking in developing the capacity to judge their own work. We should not be dismayed that they are over- or underrating, as this is just an indicator of progress in a more important process: that of calibrating their own judgement (Boud, Lawson, and Thompson 2013).
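Seen this way, what matters is the trajectory of the gap between a student’s judgement and an informed external one, not agreement on any single task. A minimal sketch of that idea follows; the marks are invented, and pairing self-marks with tutor marks is just one convenient proxy for calibration.

```python
# Illustrative only: invented (self_mark, tutor_mark) pairs across a
# semester. The signal of calibrating judgement is a shrinking gap
# over successive tasks, whatever its sign on any one task.
tasks = [(75, 58), (72, 61), (68, 63), (66, 64), (64, 65)]

gaps = [self_mark - tutor_mark for self_mark, tutor_mark in tasks]
for i, gap in enumerate(gaps, start=1):
    print(f"task {i}: self-assessment gap {gap:+d}")

# Overrating early on, mild underrating late: both are simply readings
# taken on the way to calibrated judgement.
print("calibrating:", abs(gaps[-1]) < abs(gaps[0]))
```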

Assessment as Equipping Students for Future Challenges

This leads us to a view about where assessment for learning is heading. Unlike those who seek to set standards and articulate competencies to be achieved, my starting point, like that of Barnett (1997), is that the future is unknown and necessarily unknowable to us (see also Kreber in this volume). Acceptance of this creates constraints and possibilities for what we do in higher education. Of course, new knowledge, skills and dispositions will be required by our students in the future that cannot possibly be acquired now. So, whatever else, we must prepare students to cope with the unknown and build their capacity to learn when the props of a course – curriculum, assignments, teachers, academic resources – are withdrawn. What, then, does that imply for what and how we assess?

Returning to our original distinction between the purposes of assessment as certifying achievement (summative assessment) and aiding learning (formative assessment), it is possible to see a third purpose: fostering lifelong learning. It could reasonably be argued that this latter purpose is merely a subset of formative assessment; however, there are merits in separating it out. Formative assessment, both in the literature and in practice, is predominantly concerned with assisting students to cope with the immediate learning demands of what they will encounter in summative assessment. Acknowledgement may be given to its longer-term importance, but strategies to support that are not so common.

To provide a focus for this idea, I coined the term ‘sustainable assessment’. Following the form of a well-known definition of sustainable development, sustainable assessment was described as ‘[a]ssessment that meets the needs of the present without compromising the ability of students to meet their own future learning needs’ (Boud 2000, 151). This does not refer to assessment being sustainable for the staff who have to carry the marking load, though that is also desirable. Rather, it clearly positions sustainability in terms of student learning. It focuses on assessment tasks that not only fulfil their present requirement – to generate grades or direct immediate learning – but also contribute to the building of students’ capacity to learn and assess themselves beyond the current task. Thus, for example, a sustainable assessment task might involve students in identifying the criteria that are appropriate for judging the task in hand and planning what further activities are required in the light of their current performance against these criteria. It does not imply that they receive no assistance in these processes, but it does mean that these are not specified in advance by a teacher. Sustainable assessment may not involve wholesale changes in assessment tasks, but it does require changes in the pedagogic practices that accompany them, especially with regard to feedback. Hounsell (2007) and Carless et al. (2011) have taken the notion of sustainable assessment and applied it further to the practices of feedback.

An Agenda for Assessment Change

Feature 1: Becoming Sustainable

As discussed above, acts of assessment will need to look beyond the immediate content and context to what is required after the end of the course. A view is needed that is not a simple projection of present content and practices, but encompasses what is required for students to face new challenges. A key element of this must be a strong focus on avoiding the creation of dependency on current staff or courses. While assessment to please the teacher supposedly no longer takes place, any residue of this must be addressed. The more insidious challenge is to ensure that assessment does not involve always looking to teachers for judgement. Multiple sources to aid judgement must become a normal part of assessment regimes.

Feature 2: Developing Informed Judgements

As argued before, students must develop the capacity to make judgements about their own learning; otherwise, they cannot be effective learners, whether now or in the future. This means that assessment should focus on the process of informing students’ own judgements, as well as on others making judgements on their work for summative purposes (Boud and Falchikov 2007). The development of informed judgement thus becomes the sine qua non of assessment. Whatever else it might do, this is needed to ensure that graduates cope well with the future. We should be aware that summative assessment alone is too risky and does not equip students for new challenges. Of its nature, it tends to be backward-looking, and it is not a strong predictor of future accomplishment. Assessment, then, is more important than grading and must be evaluated on the basis of what kinds of students it produces. Of course, opportunities for developing informed judgements need to be staged across a programme, as isolated opportunities to practise this will not plausibly lead to its development. Therefore, thinking about assessment across course modules becomes not just desirable, but essential.

Feature 3: Constructing Reflexive Learners

We have come a long way from when assessment was a secretive business to which students were blind. Transparency is now a key feature. However, there is a difference between openness and involvement. If students are to develop their own judgements and if assessment events are the focus of such judgements, students need to become active players. They need to understand what constitutes successful work, be able to demonstrate this and judge if what they have produced meets appropriate standards. Students must necessarily be involved in assessment, because they need to know how to do it for themselves. Assessment, then, needs to position students to see themselves as learners who are proactive, generative and drive their own learning. An example of this is the use of rubrics. Providing a rubric that specifies learning outcomes and the criteria associated with each task, for tutors to use when marking, offers limited scope for reflexivity: students follow a path set by others. If, in contrast, the task prompts students to construct and use a rubric, they become actively involved in making decisions about what constitutes suitable criteria. This requires them to demonstrate that a learning outcome has been addressed, identify signs in completed work that indicate what has and has not been achieved and then gather evidence from other parties through seeking and utilising feedback. Fostering reflexivity and self-regulation is not something to be relegated to a limited number of tasks, but should be made manifest through every aspect of a course. The programme and all its components need to construct the reflexive learner.
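By way of illustration only (the outcome, criteria and field names below are invented, not drawn from the chapter), the contrast is between a rubric handed down as a fixed lookup and one the student must construct and then test against evidence:

```python
# Hypothetical contrast between the two uses of rubrics described above.
# A tutor-supplied rubric is a fixed mapping that students merely follow:
tutor_rubric = {
    "argument": "claims are supported by cited evidence",
    "structure": "sections follow a logical sequence",
}

# A student-constructed rubric makes the learner decide what counts as
# success for an outcome, and name where corroborating judgements of
# their work will come from.
def build_rubric(outcome, criteria, evidence_sources):
    return {"outcome": outcome,
            "criteria": criteria,
            "evidence_from": evidence_sources}

my_rubric = build_rubric(
    outcome="write for a non-specialist audience",
    criteria=["jargon is explained on first use",
              "a reader can restate the main claim after one pass"],
    evidence_sources=["peer read-through", "tutor comments on draft"],
)
print(my_rubric)
```

The difference lies not in the data structure but in who authored the criteria and who must then seek the evidence.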

Feature 4: Forming the Becoming Practitioner

Finally, assessment needs to shape the becoming practitioner. All students become practitioners of one kind or another; it is only in particular professional or vocational courses that it is known what kind of practitioner they are likely to become. There are common characteristics of all those who practise in society: they address issues, they formulate them in terms of addressable problems and they make judgements. Assessment to this end needs to help students calibrate their own judgements (Boud, Lawson, and Thompson 2013). Learners act on their belief in their own judgements; if these are flawed, it is more serious than particular gaps in knowledge. Assessment, then, needs to contribute to students developing the necessary confidence and skills that will enable them to manage their own learning and assessment. Understanding is not sufficient; showing that they can perform certain tasks is not enough. Capable beginning practitioners need to be able to become increasingly sophisticated in judging their work. In particular, they need to be able to do so when working effectively with others, in order to assist each other in their learning and mutually develop informed judgement.

This view provides a substantive agenda for further changes in assessment. Any one of the particular elements mentioned above can be seen in the literature, but they are rarely seen in concert, even less so across a curriculum.

Conclusion

In conclusion, what would assessment that helped meet future challenges look like? It would start by focusing on the impact of assessment on learning as an essential assessment characteristic. It would position students as active learners, seeking an understanding of standards and feedback. It would develop their capacity to make judgements about learning, including that of others. It would involve treating students more as partners and less as subjects in assessment discussions. And it would contribute to building learning and assessment skills beyond the particular course.

Of course, the first question to be asked is: would such assessment practice be more demanding for teachers? It would require us to think much more clearly about what changes in students we expect courses to influence. It would also require an initial investment in redesigning courses, as well as a redistribution of where we focus our efforts. But following this initial adjustment, it could potentially lead to a more satisfying use of time, with less time and energy spent on repetitive tasks and more on what really makes a difference.

References


Adcroft, A. 2011. ‘The Mythology of Feedback.’ Higher Education Research and Development 30, no. 4: 405–19.

Barnett, R. 1997. Higher Education: A Critical Business. Buckingham: The Society for Research into Higher Education and Open University Press.

Black, P. J. 1968. ‘University Examinations.’ Physics Education 3: 93–9.

Black, P., and D. Wiliam. 1998. ‘Assessment and Classroom Learning.’ Assessment in Education 5, no. 1: 7–74.

Boud, D. 1995. Enhancing Learning through Self Assessment. London: Kogan Page.

Boud, D. 2000. ‘Sustainable Assessment: Rethinking Assessment for the Learning Society.’ Studies in Continuing Education 22, no. 2: 151–67.

Boud, D., and N. Falchikov. 2007. ‘Developing Assessment for Informing Judgement.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 181–97. London and New York: Routledge.

Boud, D., and E. Molloy, eds. 2013. Feedback in Higher and Professional Education. London: Routledge.

Boud, D., R. Lawson, and D. Thompson. 2013. ‘Does Student Engagement in Self-Assessment Calibrate their Judgement Over Time?’ Assessment and Evaluation in Higher Education. Accessed December 10, 2013. doi: 10.1080/02602938.2013.769198.

Carless, D., D. Salter, M. Yang, and J. Lam. 2011. ‘Developing Sustainable Feedback Practices.’ Studies in Higher Education 36, no. 5: 395–407.

Ebel, R. L. [1965] 1972. Essentials of Educational Measurement. Englewood Cliffs: Prentice-Hall.

Falchikov, N., and D. Boud. 2007. ‘Assessment and Emotion: The Impact of Being Assessed.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 114–56. London: Routledge.

Hattie, J., and H. Timperley. 2007. ‘The Power of Feedback.’ Review of Educational Research 77: 81–112.

Heron, J. 1988. ‘Assessment Revisited.’ In Developing Student Autonomy in Learning, edited by D. Boud, 77–90. London: Kogan Page.

(p.31) Hounsell, D. 2007. ‘Towards More Sustainable Feedback to Students.’ In Rethinking Assessment for Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–33. London and New York: Routledge.

Hounsell, D., V. McCune, J. Hounsell, and J. Litjens. 2008. ‘The Quality of Guidance and Feedback to Students.’ Higher Education Research and Development 27, no. 1: 55–67.

Hughes, C., and S. Barrie. 2010. ‘Influences on the Assessment of Graduate Attributes in Higher Education.’ Assessment and Evaluation in Higher Education 35, no. 3: 325–34.

James, R., K. L. Krause, and C. Jennings. 2009. ‘The First Year Experience in Australian Universities: Findings from a Decade of National Studies.’ Accessed December 10, 2013. http://www.cshe.unimelb.edu.au/research/experience/docs/FYE_Report_1994_to_2009.pdf.

Joughin, G., ed. 2009. Assessment, Learning and Judgement in Higher Education. Dordrecht: Springer.

Knowles, M. S. 1973. The Adult Learner: A Neglected Species. Houston: Gulf Publishing Company.

Merry, S., M. Price, D. Carless, and M. Taras, eds. 2013. Reconceptualising Feedback in Higher Education. London: Routledge.

Nicol, D., and D. Macfarlane-Dick. 2006. ‘Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.’ Studies in Higher Education 31, no. 2: 199–218.

Oxford Learning Institute. 2013. ‘Assessment and Feedback.’ Accessed May 16, 2013. http://www.learning.ox.ac.uk/support/teaching/resources/assess/.

Rowntree, D. 1977. Assessing Students: How Should We Know Them? London: Harper and Row.

Wiggins, G. 1989. ‘A True Test: Toward More Authentic and Equitable Assessment.’ Phi Delta Kappan 70, no. 9 (May): 703–13.

Yorke, M. 2003. ‘Formative Assessment in Higher Education: Moves towards Theory and the Enhancement of Pedagogic Practice.’ Higher Education 45, no. 4: 477–501.