Assessment is, in many ways and for many people, the single most essential thing that a University is about. Assessment, properly considered, is not simply a matter of how we scrutinize the performance of students in specific tasks during their student careers. Indeed, it can equally well be related to the much more fundamental issues of access and of widening participation. Many of these issues are themselves determined by assessments made outside of, and prior to, any student’s University life. Further still, assessment and its surrounding practices and issues apply not only to students but also to faculty, who are themselves now routinely assessed, and who have always been assessed, especially when seeking their academic post in the first place. Thus, from the reading of UCAS or other application forms from prospective students, via the scrutiny of the CVs of prospective colleagues at job interview, on to the monitoring processes that govern the progress of both student and staff, and to the summations of a career in some academic obituaries (and sometimes equally in obituaries of those directly and indirectly affected by University life), assessment looks to be a basic and thoroughgoing aspect of what we do. In some ways, assessment more or less permeates University life and activity in our times.
In this chapter, I want to consider the very foundation of the idea of assessment itself, and its widespread ramifications and effects on the daily life of the University. The reason that we should carry out such an investigation into the now large body of philosophies of assessment is straightforwardly given: my case will be that we have made a rather fundamental shift, especially in the last third of the twentieth century, and one that has had effects beyond those foreseen when we made the changes. The shift in question is one that makes a move away from a system that we described by the term of ‘examination’ towards what we now know as the science of ‘assessment’. That this is a science is, at least ostensibly, entirely indubitable: we have Centres for Research into and for Assessment up and down the land, and we have graduate and other programmes of study in it. We even have pro-vice-chancellors whose brief places assessment at the core of their day-to-day governance activity. Assessment itself can now be assessed and accredited, and can find itself both governed and governing, in various ways. This chapter will in turn therefore be a kind of assessment of assessment, so to speak; or, to put it more neutrally, it will be a scrutiny of the consequences of our move towards assessment as a routine aspect of all practices within (and occasionally beyond) a University.
The shift from examination to assessment is far from being a simple, neutral change of terminology; and the growth of the ‘science’ of assessment is likewise far from being a simple, neutral encoding of activities after the fact. That is to say, once we have agreed that it is fundamentally a science of sorts, it follows that the science of assessment does not rest on a description of practice, but rather strives to codify an ideal and agreed prescription for ‘best practice’, and to garner support for assessment methods that have a reasoned, verifiable or ‘scientific’ foundation. This becomes a whole philosophy of assessment; and it is this – as well as the political consequences of that philosophy – that I want to look at closely in the context of the University that I have been trying to describe in these pages.
It would be an oversimplification, and indeed a misrepresentation, of the case that I am about to advance here to suggest that I want to advocate a return to ‘examination’ systems. It is unquestionably true that the shift to assessment is, in many ways, a profoundly positive and progressive thing. However, I will argue that this change from examination to assessment, though certainly inspired for the most part by the best of intentions, has turned out to be regressive in some of its fundamental aspects, and also that it has been consistent with the very authoritarian ideology that, as we have now seen, already dogs the questions around our leadership at every level, both within and beyond the University as an institution. That implicitly individualistic authoritarian ideology afflicts society as a whole, of course. More pertinently for our present concerns, it not only impacts negatively upon the University as an institution, threatening its very credibility as a source of judgement and legitimate practices, but it also damages, perhaps beyond repair, the possibility of our students ever gaining access to a justice and freedom constituted through the free play of a critical knowledge that might be founded in ideas of how we ‘judge’ most justly. Adopted in the interests of a hoped-for ‘democratization’ of the examination business, assessment – in its current practices and governing philosophies – nonetheless impairs our abilities to found a democratic community of shared interest.
In what follows here, I will offer an analysis of the philosophy of assessment in its current most widespread forms and in the light of its emergent ideology. In doing this, I will argue that the predominant form of thinking about assessment contrives to evacuate our Universities of the activities (most specifically, activities of the exercising of judgement) associated with critical knowledge, replacing that instead with a set of practices and beliefs that seek to prioritize the efficient and controlled management of information. This is consistent with the tendency that we have seen – and that I have already analysed to an extent in preceding chapters – towards the commodification of education and the no-longer-creeping-but-rather-galloping marketization of the University and its core work. I will then turn from the analysis of some shortcomings to a suggestion for how we might better proceed.
It used to be the case that the most important thing that a University teacher did was to examine. University life consisted, broadly, of three to four years in which the student engaged with her or his teachers, with work in the laboratories, with the resources of the library, and with other students, peers and colleagues, in both formal and informal activities whose summative force is best caught in the term Bildung – a kind of formation of the self, grounded in the possibilities of edification that are themselves guaranteed through our social and mutual interactions with each other. Along the way, the student would undertake a good deal of work, sometimes on a continuous weekly basis (writing essays, conducting laboratory experiments and so on). This work would be graded, and the grades would give the student a guide to their standing and progress; but they would not, in the first instance, determine final results or degrees.
Then, at the end of this period of study and intellectual growth, the teacher took it upon themselves to ask the student to show their knowledge of a discipline in some detail, and at the levels of intellectual sophistication required to gain the candidate their degree. What was being examined here was a combination of various things: competence in the field, such that the student could demonstrate a familiarity with the basic materials constitutive of the field or discipline of study; an ability to comprehend how the discipline ‘works’, in terms of its legitimized protocols or normative and agreed practices; cleverness, in showing how they could manipulate materials and offer inventiveness and new discovery; and a basic sense that the relevant work had been covered in some breadth and depth, perhaps further embellished with some extra-curricular aspects of an education in more general terms: Bildung. In this case, should the final results be unclear in any way, the work that had been done on continuous weekly assignments, and graded, across the years could be looked at; and, at this stage, the coursework (as we would now call it) would play a part in helping determine a degree result.
This all sounds rather vague. However, regardless of how vague it sounds, it ‘worked’ for a long period of time and was widely accepted practice. It was even accepted when the examiner realized that, in many cases, what was at work here was a mode of judgement that might have been based upon some aesthetic categories (of ‘the beauty of the well-formed argument’, for example). That is to say, the element of alleged ‘subjectivity’ in the governing principles of ‘examination’ was not widely regarded as especially troubling; after all, it was thought, the examiner was examining precisely because they were already rather expert in the field and knew its protocols and expectations extremely well, indeed so well that they would also be able to recognize the value of a novel thought that stepped outside of the usual protocols.
This means that, with regard to this procedural mode and understanding, the examiner, as the embodiment of their discipline and institution, had an authority, and was expected to exercise that authority in their judgement. There was an expectation that objectivity would also be maintained, of course. It would be unacceptable for an examiner, as a Marxist (let’s say), to downgrade an English Literature essay on the grounds that it did not take a sufficiently ‘revolutionary’ perspective on the poetry with which it dealt. To this extent at least, the professionalism of examining would be maintained; and I, as an authoritative examiner, would exercise personal judgement, based on my experience and authority in the field, while also eschewing personal bias. I would judge the work against the protocols and practices of the discipline, and not against my own personal ideas.
Further, a system of ‘blind’ and anonymous double-marking, in which two examiners grade the same work independently of each other, complemented in turn by accreditation through a further examiner external to the institution (drawn from another University), would also help to iron out bias and prejudice. The external examiner would not only help adjudicate, but would also be in a position to compare performance between this University and the institution in which they routinely examine as one of the internal examiners, thus striving to ensure a modicum of comparability in degree validations among different Universities nationwide. In short, it was a serious business, and colleagues were enjoined to work collaboratively in order to make it work.
But when we say that all of this ‘worked’, we also have to ask some corollary questions. Above all, what exactly is the system that is working here, with its wheels oiled by this complex examining process? What, in short, is the fundamental thing that is ‘working’? What is it that this mode brings about, what is it that it sustains, and how might it advance in any serious way the purposes of the University? There have been a number of complaints about the system of examining as I describe it above, of course. Three things in particular stand out: a) that it reproduces clones of the examiners; b) that it at least potentially lacks objectivity and formal verification procedures; and c) that its processes are obscure, partly because the judgements made are essentially ‘occasional’, demanded by the ad hoc nature of treating every examination activity or submission on its own terms, on its own occasion and as it arises.
Let us consider these objections in turn. First, we can look at the claim that examination is concerned above all to reproduce the already existing body of knowledge, such that the student essentially becomes a repetition or reflection of the teacher. Writ large, the argument suggests that the system is designed to produce the next generation of lecturers and examiners. Writ larger still, the argument is that this system is about the essential preservation of existing privilege, by a procedure that is designed to ensure its onward self-reproduction. The allegation here is that the student ‘secretly’ knows that the teacher wants an examination submission in which the student essentially confirms the teacher’s own thoughts. Were this to be true, the consequences would clearly be serious. In common parlance, of course, this was referred to as ‘spoon-feeding’ the student, who would ‘regurgitate’ the food offered. Sickening, if so; and, perhaps more importantly, such a system would necessarily downgrade the work of any and every individual who is not destined to become the next generation of academics. The system thus looks self-perpetuating and self-validating, not to mention troubling in terms of its intrinsic politics, designed as it is to sustain privilege.
More fundamentally, what is at stake is a system whereby teachers ‘teach to the test’ in order to get predetermined and desired results. The roots of this, in Britain at least, lie in the mid-nineteenth century. In 1862, Robert Lowe, then Vice-President of the Committee of Council on Education in Lord Palmerston’s administration, promoted what was called the ‘Revised Code’ governing the costs of education. Fundamentally, the Code – a product of the Newcastle Commission’s review of the State’s commitment to the principles of providing a mass education – advanced the idea that schools should be ‘paid by results’: if children attended well, the school would receive some funding; and if the pupils also passed some tests in reading, writing and arithmetic, then the school would see its funding very much enhanced.
We should note something about this: whenever we see this system of governance, whereby State interest in and payment for education are dependent upon performance or results (or, as in more recent times, the perpetuation of league tables and their respective standings), we should recall that its foundational roots are to be found not in any modern demand for improvement in education, but rather in a cost-cutting exercise from 1862. Further, we should also always remember that neither Lowe nor Palmerston, the earliest architects of a system of ‘payment by results’, was a great supporter of democracy or of the general democratization of society that we might now think of as being related to a widespread and generally free education system. If a University education has anything to do with the freedom and justice that we associate with increased democratization, then it would follow that ‘payment by results’, in any form or in any sphere of University activity, would not be our preferred mode of governance or finance.
More immediately pertinent for the present argument, it was inevitable, as Matthew Arnold and others pointed out, that in such a system teachers would, quite reasonably, organize the work that they do with students entirely around the predicted demands of the test itself. This is not only reasonable in the face of an ideology of ‘payment by results’; it is almost a requirement of teaching in such a situation. Examination here is something for which the student or pupil is to be ‘drilled’, as a series of repeated exercises to be gone through. The effect – though not necessarily the purpose – of such a state of affairs is indeed to produce a certain degree of educational conformity.
At one level, a certain degree of conformity in education should be regarded as uncontroversial: we should probably strive to ensure that all conform to the belief that, in arithmetic, two plus two equals four, for instance. However, demands for conformity are not always so innocent. Consider the question of childhood handwriting. Here, an expectation of conformity is rather more troubling, for the motor activity of the child’s body becomes involved when there is an expectation, or even a demand, for a certain orthography. For some individuals, the neatness of handwriting is a physical matter of just such motor control. As we now also know, alongside the possibility of dyspraxia, certain cognitive conditions can obstruct the easy practice of reading (dyslexia) or of computation (dyscalculia). By and large, however, the conformity in question here is relatively untroubling. It is indeed good if we generally conform to the view that twice two is four, if only because it will allow us to compute degrees of economic equality and inequality in the wider society of which the University and its students are potentially leading participants.
In the more advanced setting of the University in recent times, the question of a potentially unwelcome conformity was not driven in the first instance by financial consequences; rather, it was allegedly driven by matters pertaining to class interest. The danger seen was that an education at advanced level, especially in the arts and humanities, was deemed to reinforce class prejudices by cloning the faculty. The student ‘passed’ to the extent that they had successfully internalized the values of the teacher; and that teacher, themselves the product of earlier conformist thinking, was typically middle-class. Examination was the means of legitimizing middle-class values as normative. Examination, in these terms, was clearly seen to be potentially damagingly ‘political’, or at least to be a practice shaped and given norms by a tacit political ideology; and so the move to call its protocols and practices into question derives from a sense of required respect for other class positions, for multiple points of view, and for an awareness of diversity among the student body. Unsurprisingly, this questioning dates from the expansion of the University sector, the beginnings of a substantial increase in a student population drawn from diverse backgrounds, and the settling in of the post-Robbins era institutions.
The process of University expansion in the UK that we usually trace back to Robbins in 1963 had in fact already begun prior to the commissioning of the Robbins Report, and was partly initiated by the new settlement at the end of the Second World War. Indeed, a number of the Universities usually thought of as ‘post-Robbins’ initiatives had already opened (Keele, for instance, opened in 1949 as the University College of North Staffordshire, before being renamed the University of Keele in 1962; Sussex opened in 1961; East Anglia in 1963; Essex had been planned for opening in 1961; and so on). However, a yet more significant thing brought about by Robbins was the granting of degree-awarding powers to a number of Colleges of Advanced Technology. The constituency for these new institutions, which then started to flourish and grow at a rapid pace in the later 1960s and through the 1970s, was much more diverse than had previously been the case in the more homogenized University sector. Further, we already had, at this time, relatively large numbers of sizeable ‘civic’ institutions as well. The class composition of the student body, then, was becoming increasingly mixed.
In an earlier age, when University education was more solidly a preserve of the upper-middle class and aristocracy, it was obviously not considered to be a matter of troubling concern that those attending would have their existing class positions and prejudices confirmed through examination. That was, in fact, largely the point. Examination was the process by which they were validated, their identity legitimized, precisely as members of a class or at least of a social elite. Now, however, a constituency drawn from a much more diverse background caught the spirit of the times, and began to question – to judge adversely – the centrality and normative standing and authority of Establishment values. In the University, these Establishment values were thought to be encoded in the system of examination. If examination was the means whereby the Establishment would sustain itself by demanding conformity to its norms and values, then ‘to pass’ was to internalize and therefore unquestioningly to endorse those values. In an expanded system, with greater diversity and a burgeoning sense of democratization, this quite rightly came under extreme pressure and speculatively critical scrutiny.
The second ground for concern was that the system lacked objectivity. In some ways, this follows directly as a consequence of the first concern discussed here. If it is the case that examination produces conformity, it follows that the values inscribed within the examination system are not themselves subject to the kinds of scrutiny that might permit substantive change. The examination is a kind of absolute authority here, not subject to criticism or questioning, not open to critical assessment of its own procedures. The kind of ‘new thinking’ that might be available from the new diverse student body would not be allowed to disturb the secure truths already established by those in authority – the middle-class teachers, legitimized by their position not as teachers but as representatives of an Establishment, and enforcers of its values. It is then a short step to suggesting that the already established values are not only old-fashioned but also that they are specific to a particular class and thus essentially subjective, endorsed by a necessarily partial (and thus blinded or at least myopic) point of view. This is what we have already seen in the figure of the Jean Brodie of my previous chapter.
The fear here is that teaching and examining are done according to the silent rules that govern what Pierre Bourdieu once called the ‘aristocracy of culture’. There is a parallel between the mercantile world and the world of culture. Both rest on kinds of capital; and, in both, those who control the capital have a certain position of authority and power. In the academy and, indeed, in all areas of the public sphere dominated by questions of intellectual capital, the members of the aristocracy of culture, Bourdieu argued, have their positions of power and authority not because of anything they actually do, but rather simply by dint of who they are. They are essentially always right: the essence of who they are is identified with what are proposed as eternal values, and these values in turn are what constitute the very identity – the being, not the doing or actions – of the aristocracy.
Thus, it would follow, in this account, that the hypothetical ‘aristocrat-teacher’ of this set-up has no need to justify their judgements in examining: the judgements are intrinsically correct because of who is making them. The judgement is a manifestation of the intrinsic rightness of the teacher themselves as an individual. It simply endorses again their identity, takes no significant account of anything else, and certainly takes no account at all of the possibility of historical change. It is only the vulgar – the examinees, from this newly diverse range of backgrounds – who have to define themselves by their actions. The fact of having to ‘prove’ themselves not by what they are but by what they do, in and by examination, is ipso facto proof of their intrinsic vulgarity, and thus a manifestation of the fact that they are excluded from the aristocracy, and rightly ‘judged’ by that self-sustaining aristocracy.
That is, the vulgar are thus excluded unless and until they ‘prove’ themselves by passing the exam, which means confirming the subjective identity of the teacher. The structure necessarily endorses a hierarchical view, but one based upon subjective being and not upon performance or activity or the action of the examinee. ‘Vulgar’ here, deriving from the Latin vulgus, meaning ‘of the common people’, is a term that, for political reasons, is to be rehabilitated in a move against the acceptance of the class structures that tacitly shape examination ideology.
Thirdly, all of this is, unsurprisingly, obscured from immediate view by the ‘vagueness’, as I already described it, of the system as a whole. The examinee does not know what is required of them, for there can be no published criteria for the examination process. What is examined is not what one does, but rather how ‘what one does’ reveals ‘who one is’. Thus, the examinee has to second-guess what is going on; and there can be no published criteria, for the simple reason that the criterion for passing depends not upon action but upon being, upon the identification and consolidation of a pre-existing identity, the identity of members of the Establishment or ‘aristocracy of culture’. In some ways, the change that was required here is the most far-reaching. In an ideology of ‘openness’ that emerges essentially from ideas of self-revelation (deriving either from the ‘letting it all hang out’ themes of the hippie parlance of the period or – more sinister – from a post–Cold War anxiety about espionage and subterfuge), there grows a demand for something called ‘transparency’. Transparency will ensure that nothing untoward is going on, that there is no class bias or prejudice of any personal kind, that every judgement will be validated and justifiable; and, above all, that judgements can be measured and legitimized by reference to criteria for examination that are fully out in the open, known by and available to all participants in the process.
The inevitable demand for change from all of this negative examination ideology goes hand in hand with the growth and distribution of University education, with a watering-down of class prejudices and Establishment certainties, and with an ostensibly democratic demand for an opening of the doors of opportunity to all. In many ways, this last aspect of the change is the most telling: it relates to issues of access. The ‘examination’, as opposed to a system of ‘assessment’, was essentially considered a kind of bar to further progress: intrinsic to its system is the requirement of failure, in order to allow a number to proceed further, ‘qualified’ now to various ‘degrees’ by their examination. The exclusivity of the process was seen to be consistent with the closing of doors, the closing of opportunity; and the very movement – a politically inspired movement – of opening the doors and opening new Universities required a different, even opposite, system. Instead of measuring qualification by failure, the idea now was to assess and to measure the extent of what participants could do, rather than to discover their limitations and what they could not do, or to judge who they are. Assessment is the first ideological step towards what is now termed ‘competency criteria’, the latest manifestation of what has become, essentially, a disregard for ‘qualification’ and the authorities invested in such qualifications.
In the contemporary world, we still live under the very same 1862 ideology of Palmerston’s administration, whereby we organize education in terms of ‘payment by results’; but we refer to it now as ‘competitiveness’, and signal its force through the existence and encouragement of ‘league tables’ at regional, national and global levels, and through competitive bidding for limited funding for all of our educational activities. Bids for such funding, of course, have themselves to be assessed; and we also now have a large armature of ‘Peer Review Colleges’ and the like, designed to regulate the competition. It might be noted, in passing, that peer review, in these situations, is a system that requires the academic community to enforce cuts upon itself: it is now our peers who judge us as lacking, either in our research bids or in our research assessments. This is a further example of what I have called a delegating of the blame for what is going on: if we now fail to find adequate funding, it is because of our peers and their judgements about us; and the governments that actually impose the cuts remove themselves from responsibility. The single most important thing to note here is that, in order to secure funding – sometimes for even basic work – we are required to participate in the competition; but the rules governing that participation require that it is we ourselves who inflict the financial cuts. That is to say, we are required to internalize the ideology, like the good cadres that we saw at work in my chapter on leadership.
This generalized competitiveness is now so intense, and so finely granulated, that not only do we have league tables, but we even have league tables of league tables. (Is the Shanghai Jiao Tong index ‘better’ than the Times Higher Education World University Rankings? Or vice versa?) In this, it is important that we ensure a certain ‘profile’ from our institutional ‘results’: the more high-quality degrees we award in our individual institutions, the greater the prestige of the institution – and the obvious financial consequences follow on from this. As in 1862, it becomes almost incumbent on us to have an eye on the future safety of our institutions; and that will now require careful attention to matters of degree results, and to ensuring that we have sufficient numbers at a high degree of excellence at the end of our programmes.
We are moving towards the establishment of a normative acceptance of ‘assessment’ as a replacement for examination. Assessment is also how we are enjoined to judge institutions themselves; and thus, as a practice, it permeates our system as a whole. It will necessarily give a positive view of the institution, since what it measures is the positive aspect of achievement, rather than the more negatively inflected measure of failure to qualify. The change involved is more than a simple change of nomenclature: it is also a change of ideology.
An examination, technically defined, is the ‘testing of the proficiency or knowledge of students or other candidates for a qualification’. In other words, it operates primarily as a kind of gate or bar, barring some people from being ‘qualified’ to do something, while allowing others to practise. It is concerned with qualification and thus with quality. To assess is (again, technically) rather vaguer: it is ‘to estimate the size or quality of something or someone’; and estimation, its central defining term, requires quantification (or measurement). The intended consequence of the change from examination to assessment is one that is determined by the desire to let more people ‘cross the bar’, as it were; and it will do so by prioritizing questions of measurement.
The idea – entirely admirable in principle at least – is to widen opportunity for progression such that more people can go further with their education without being excluded through a failure to qualify. In passing, however, we should note that we lose the determinations of quality and of the attendant legitimizations provided by ‘qualification’; and we now replace the solidity and assurance that this gives with the rather more vague idea of ‘progression’. In principle, progression implies an ever-onward and positive movement; but it also indicates that the qualification (or endpoint, point of ‘arrival’) is less important than the ‘journey’ that the student now makes. Further, the University career becomes but a stage in another, longer ‘journey’, the ideology of which we have already seen in the chapter on experience.
I have already indicated that the shift comes about because of the quite proper demands for the massification of the higher education sector; but, despite the primacy of that demand, the change here has little to do with increased democratization. There are a number of reasons for the growth of higher education worldwide at the end of the twentieth century; and all such reasons are intrinsically political, but with a rather small ‘p’. That is, the growth is not governed by any serious (and probably consensually agreeable) demand for a better educated citizenry, but rather by more local reasons pertaining to political preferment in elections and the like. It may be the case that, as in the 1980s and early 1990s, especially in the UK but also in other advanced economies, there was a strong political need to reduce the numbers figuring on public registers of youth unemployment; it may be the case that the ‘modernization’ of a general economy is tied, for political reasons and especially in those cultures whose industrial base is eroded or non-existent, to a supposed ‘knowledge-economy’; or it may simply be that the University is now seen as itself an ‘industry’ of sorts, requiring ‘growth’ to justify itself in political cultures that believe economic growth to be more important than the sustainability of an ecology. In all these and similar cases, the driver for change is therefore not primarily pedagogical but political; and there is thus a primarily political consequence of this change. It follows, however, that the political change attracts further pedagogical turbulence in its wake, as I shall show.
It would indeed be a fine thing to have more people more highly educated, for, as Aristotle believed, knowing is eudaemonic: it makes you feel good and improves the quality of a life. We might even go so far as to argue that, indeed, knowing such as this improves the quality of lives or of our living together. However, such well-being, grounded in the idea of qualification and quality, is not our priority in the present climate. Rather, in the conservative ideologies that drive the now corporatized model of the University, it is taken for granted (or, rather, rashly imagined) that there is some direct link between a University degree in a specific subject and highly paid employment in a field related to that same subject. The journey here takes us straight from University study in a discipline to paid employment in that same discipline; and the only real difference is that, in the earlier stage, the student pays their fees while, in the later stage, they are paid for the application of the same work that was done in study-form. Thus, the argument goes that education is indeed eudaemonic, but only in the sense that people feel good when they are more highly paid than others. Such a view presupposes a highly atomized society, in which lives interrelate only through market mechanisms: the individualized ‘atoms’ collide only when they compete for the greater individual benefit or profit.
The case that drives the argument of this present book, however, contrasts profoundly with this. Not only does it accept that there is indeed no such direct link (not everyone following an engineering degree becomes an engineer; not everyone doing English becomes a poet or an English teacher; and so on), it also accepts that the quality of life in question is determined not simply by a kind of neo-Hobbesian greed or demand for individual advancement over others. However, the case I put forward here is that, even if a graduate (or anyone else, for that matter) is unemployed, it is better for the life of the public sphere that, as a citizen in that public sphere, they are well-educated and thus well-qualified for taking as full a part as possible in that democratic civic community. In this way, a more general eudaemonia becomes possible.
The sad fact today, however, is that although more and more people attend institutions that are designated as a University (or have the fabled ‘student experience’), it does not follow that we are thereby substantively educating more people: rather, we are engaged in a primarily political process whose determinations are not primarily ethical, nor, actually, to do with the quality of life. As many will acknowledge, in an age of mass education, when there has been a systematic and repeated reduction in the unit of resource, it becomes increasingly difficult, if not impossible, to attend to particulars – including individual students carrying out specific and particular exercises or work – in ways that were the norm before. Of course, it is also difficult to acknowledge this, for in doing so, the lecturer opens themselves to the charge that they are admitting to not doing their job properly. And, crucially, as we also know, in the culture of ‘payment by results’, lecturers as much as students are ripe for assessment, with their performance to be monitored and measured.
The industrialization of the University, however, driven by codes of ‘efficiency’ and ‘performativity’ – as if we were a widget-business – tends to make the expected codes of conduct and behaviour within the institutions approximate precisely to the conditions that govern such industrial businesses: the task becomes one where, tacitly or not, we are expected to shift units in a highly productive fashion (lots of undergraduates and graduates) with maximum quality of output (highly classified degrees) and consequent sustaining of the brand (your own University name goes here, usually with a strapline indicating excellence in some generic way).
This – the brand and our sustaining of it – is what will be ‘examined’ now in the marketplace in which the University is to find and make itself. One consequence is a growing expectation that teachers will be prepared to observe a primary allegiance to the institution and its institutional brand rather than to the scholarly discipline and its protocols, of which they should properly be guardians. At the level of assessment, the task now is not only to give more people access to our mythic ‘student experience’, but also to measure or assess the quality of that experience and, crucially, to ensure that it is as homogeneous as possible and as highly evaluated as possible. If there is heterogeneity, there will be potential ground for complaint in that some people are being given ‘better’ (biased) treatment than others; and, of course, it is precisely this that was wrong with ‘examinations’. In scrutinizing this student experience, we need to assure the same excellent quality for all. How, then, do we discriminate among participants?
For many, the answer to this question is straightforward: we do not and should not discriminate in any serious fashion at all. We should not award classified degrees; but we should rather limit ourselves to a transcript that describes work done and, at most, a local and rather general grading that is not very finely granulated. In this, there is a transparency of sorts, and there is no validation of an aristocracy of culture. We simply indicate that the student in the case has more or less satisfactorily completed certain requirements. However, this does not yet answer the problem regarding the ‘cloning’ of students, in which the student essentially rehearses what the teacher has said. To address this, we do indeed need some level of discrimination and distinction.
To maintain transparency and the supposed democratization of this system, while also addressing the issue of cloning, we need to publish the criteria that need to be fulfilled for the ascription or awarding of each grade: ‘grade-description’ is the technical term for this. Information in the form of such description becomes the driving force for this aspect of our move towards assessment. We inform the student of the criteria for each grade by providing a clear or transparent description of what is required to secure the grade; and the student, in principle at least, could thereby effectively grade themselves, for all they need do is compare the submitted work with the published criteria, match it up and pronounce the grade. We thus also eliminate the possibility of any human interference (such as the activity of external judgement) from the process: the straightforward means of elimination of potential human error turns out not to be the elimination of error, but elimination of human intervention. In sum, it is in principle possible for the student to secure their own ‘payment by results’: the result in question has its clearly designated ‘price’, as set out on the label marked ‘grade-descriptor’. Yet this itself, apart from an obvious queasiness about the validities of such self-assessment, raises a crucial issue: the issue of supposedly transparent information and its political import.
The demand for transparency – ostensibly an ethical good, ensuring that nothing untoward or covert is going on – becomes a key driver for the prioritization of information (which can be transparently given) over knowledge (which is, of necessity, less secure and murkier, a matter of dialogue and debate). There are large implications for this.
Information has become our poor substitute for knowledge, in exactly the same way that transparency has become our poor substitute for truth. The two, combined, form a deadly conjunction through which any demand for justice – which depends upon human intervention and judgement – can be safely circumvented. Instead of the difficult work of judging that would be required for any proper ‘examination’ of whether a specific outcome is just or not, we have a self-perpetuating and self-validating system against which there can be no real appeal, for the system’s legitimacy is given and guaranteed because it is (allegedly) transparent and replete with information. The relation of knowledge to information, as the relation of truth to transparency, is related to our central question in this chapter: how and why and what do we judge?
The so-called ‘knowledge-economy’ that is allegedly the main economic determinant for these changes in our practices is, as it happens, no such thing, for what we have in our time is not an increase in knowledge but an increase in information, aided and abetted by technology. The political economy, in general, has little time for knowledge (which tends to be provisional, relatively unclear, open to argument and debate); but it is by contrast obsessed with information (which at least has the appearance of stable certainty and resembles the solidity of the Gradgrindian ‘fact’). Further, it is the very structure of assessment (as opposed to examination) that encourages the confusion or confounding of knowledge with information, such that the key to success becomes one of having access to information, sometimes processing it, always ‘managing’ it, but rarely ever thinking about it or knowing anything as a result of finding it. Many cannot distinguish between knowledge and information – one reason why plagiarism is rife, of course. There is an ideological dimension to this as well, related to technology.
Andrew Abbott, in a speech called ‘The Future of Knowing’ to the University of Chicago Alumni on 6 June 2009, makes a distinction between what I have called information and knowledge, but he recasts it in his own terminology as a distinction between knowledge and knowing. The latter – knowing – is what a University should be about; however, the former – knowledge (or what I call information) and our supposed measuring of it by assessment – is what structures all our teaching, especially via the prioritization of assessment over examination. Abbott points out that the present generation of students is the first to have gone through almost all their education in an electronic and computer-driven world. He argues that, in this world, knowledge has become something that students think of as being ‘contained’ in the data on the Web and that, essentially, students do not know how to manipulate that knowledge to make it into knowing.
We can easily recognize this as an ostensibly conservative neo-Platonic argument about technologies of memory: Plato, in Phaedrus, questions the technology of writing, saying that it damages human memory. Writing, it is alleged, provides a kind of repository of knowledge that can be located outside of the self, a self whose identity, in an oral culture, is given precisely by the interiorization of knowledge and the necessary memorialization of it. While an oral culture identifies the self with her or his body of remembered knowledge, a literate culture divorces the self from knowledge; and the feared result is a loss of the faculty of memory itself, memory which is vital to self-knowledge as much as it is to the everyday business of practical living. Walter Ong once argued something similar, suggesting that ‘modern’ thinking (he means post-Renaissance thought) is shaped by what he calls ‘place-logic’. As he puts it, in an argument concerning the technology of print and our earlier shift from oral cultures to literacy: ‘We ourselves think of books as “containing” chapters and paragraphs, paragraphs as “containing” sentences, sentences as “containing” words, words as “containing” ideas, and finally ideas as “containing” truth. Here,’ he concludes, ‘the whole mental world has gone hollow.’1
Abbott, however, takes the logic of this further. He conducts a series of classroom experiments, through which he discovers some interesting things about how his students tend to read. First, many read electronically: that is, they do not have the physical book, but read e-copies of the work on screen. Further, when reading for study, they often ‘read’ by cutting and pasting. In the texts that they study they come across passages that they think contain keywords; they then highlight these passages online and paste them into a Word document. They then construct occasional sentences to see if they can link the pasted collage of passages.
Asking his students how they read, and whether they prefer print to screen, he finds some interesting results. He gives the example of a reader who describes the process of reading online very well, explaining how he has gone through chapter one of The Great Gatsby. Then, the student writes:
I finish a page and there is a link at the end of the page to connect me to the next chapter. I double click it but before I can go on to the next webpage I am shown a Google ad with an opportunity to win a getaway cruise ship online. Reading a hardcopy of the novel would have saved me from this absurdity.
Yet the absurdity, of course, is the point, as Abbott argues: webpage design is structured to reduce the space for genuine independent thinking: instead, we ‘surf’; and we surf in order to be persuaded to buy things. Abbott says:
My point is that our students have been brought up spending much of their time – the time that we spent reading magazines, second-rate novels, and the occasional piece of fine literature – surfing an internet that has been optimized in terms of these retail-oriented principles of web-design. That’s where their model of cognition is formed. Ours was based on rubbish texts, to be sure. But at least they were texts. The current generation of students has been raised on a cognitive form that is deliberately designed to be as indulgent, as ‘user friendly,’ as preorganized as is humanly possible, all in order to hold the reader’s attention long enough to sell him something.2
These, of course, are the students that we are now enjoined to bring into the University: consumers, rather than students. The commodification of knowledge here is validated by procedures of assessment that do not require the student to demonstrate ‘knowing’, as Abbott terms it, but rather to demonstrate solely that they have gained access to the database of ‘knowledge’ (or, in my own terms, ‘information’), and that they have then manipulated or ‘managed’ that knowledge in its organization of cut-and-pasted parts into a new whole.
The economy in question here is not in any serious way a ‘knowledge-economy’; and it is not governed by the exercise of critical judgement that is of the essence of any form of assessment (or, indeed, examination). Rather, what is at stake here is the ‘cloning’ of shoppers, so to speak. The task, in fact, is to prioritize rapid consumption of unexamined information and, correspondingly, to de-legitimize the slow and inefficient use of time that is required for us to ‘assess’ a text or any other kind of information. Consumerism such as this justifies itself simply by looking at growth in sales: it needs no further philosophical validation. As the self-proclaimed ‘realists’ would have us believe, this is just how it goes in our contemporary sphere, and we should learn to live with it or to adapt to it. For us, however, in this present argument, it is vital to note that the reduction of knowledge to information is consistent with the demise of any form of assessment at all.
It may be that we have inadvertently found, here, the reason for another aspect of the ‘economy’ of the knowledge-economies: inflation. Specifically, it begins to look as if our mechanisms almost essentially require that we show such inflation in terms of grades awarded for work. If we make a move away from ‘qualification’ and its root in something called ‘quality’, to arrive at something called ‘quantification’ (or ‘estimation’) and the question of measurement; and if, further, it is increasingly incumbent upon us to attend to ‘the brand’; then, ipso facto, it makes sense to give the results of the measuring in rather inflated terms. Thus, it is not so much the ‘knowledge’ that grows, but the ‘economy’ itself; and the word for that is, simply, inflation.
Behind the turn to assessment, then, driven as it is by a system of ‘payment-by-results’, it is entirely rational that grades should be inflated in various ways – many, if not all of them, entirely legitimate. The legitimacy lies in the fact that what we are now enjoined to do is not to examine knowledge, but rather to record the management of information. If there is, in an assessed piece, a body of information of a measurable quantity or size, and if there is also a managing of that information that is consistent with the procedures described in the grade-descriptors, then the likely consequence is, indeed, this kind of inflation. The number of ‘first-class’ degrees rises; the ‘brand’ prestige rises; payment – in terms of intellectual, academic, and financial capital – is secured. The inflationary cycle is then repeated, in more and more bloated form. In all of this, however, the amount of education in question may not have risen at all: that is, now, an entirely separate matter from the recording of processes in which the managing of bits of information, and the transparency with which such managing and recording is done, has become paramount.
One of the main issues affecting the question of examination and assessment is legitimation: how do we ensure that people are being graded properly, and therefore being given the opportunities that assessment was designed to offer them? As I indicated above, in the times of ‘examination’, this was straightforward (if also, as we now know, troubling and concerned with exclusivity). Judgement was key; and the judgements in question were grounded in prior experience and the authority that such experience gives. The judgement, in short, is legitimized by two things: the qualification of the examiner, and their accumulating and accumulated experience. This, however, turns out to be precisely the problem: the judgement, determined by human input in this way, is not guaranteed to be ‘objective’ and neutral. Humans rarely are ‘neutral’, especially when judging matters in fields where they have expertise and experience, given that it is precisely such experience that constitutes their authority and their identity. Their ‘identity’ is thus confounded with their ‘experience’ or authority.
In the first instance, then, examination is potentially contaminated by bias and prejudice. There is a necessary determination to try to preclude such poor judgements. The consequence might respectably turn out to be argument and debate, even a thematization within the discipline precisely of the terms and nature of the debate. Indeed, at one time, this is exactly what happened, when the specific ideological bias of particular critical positions, especially in the arts and humanities disciplines, was exposed. We once called this ‘the theory wars’, in which there were not only ‘competitions’, as it were, between various theoretical positions but also a much more fundamental battle between those, on one hand, who denied that there ever even was a theory governing their position and those, on the other, who saw all positions as being ‘situated’ within presiding ideological stances. These ‘wars’ never really resolved anything, largely because the opposing camps essentially ignored each other’s work.
Moreover, at the institutional level, no argument was ever really joined at all; rather, the consequence was an argument that suggested that, rather than adjudicate between the two camps, we should adopt the more ‘nuclear’ option and remove the possibility of human bias entirely. Thus, the so-called ‘theory wars’ became just another paper or module within the degree or disciplines, ripe for assessment. How do we remove human bias, systematically? We do that – we did that – by delegitimizing the prior knowledge, experience and, indeed, qualifications of the examiners. This is why I refer to it as a nuclear option, a razing of the ground itself. That qualification and that experience – indeed, behind this the very institutional authority of the University as a whole – now become the problem that assessment will counter and overcome; and so, instead of benefiting from it, we establish an allegedly ‘neutral’ system, based upon the supposed self-evident ethical goodness of immediacy and, above all, of transparency.
We now live in a culture that has no time for professional experience or knowledge. Perhaps the main issue here is again that of time: we live in a kind of foreshortening of time itself, and, as I have indicated before in this book, the result is that we give no time for learning or teaching or thinking. In line with the immediacy of electronic forms of communication, we also want our assessments to be immediate, which in some ways means also ‘unmediated’. The most common form this now takes is the entirely reasonable demand for a fast turnaround of assessed work submitted by students: it is indeed right that this work should be assessed with a high priority and returned promptly to the student. However, immediacy also means much more than this. At its extreme limit, it means that we should not be assessing now work that was done a year or two years ago: that is to say, the moment for assessing definitively is when the module itself finishes. No time is to be given for any further meditation, or work, or reflection on that work. For the Quality Assurance Agency for Higher Education (QAA), this immediacy of assessing was once deemed to be best practice; but it meant that the student was effectively precluded from making cross-references among or between separated modules of study. Denied that possibility, the student is also denied the chance to make their own judgements about priorities in the wider scheme of their degree. The thought to be assessed, then, is also thought that cannot be ‘mediated’ by the student who takes the time to think more deeply about the work being done, the discoveries being made.
Against any sense of a judgement that can be made in a proper and mediated fashion – that is, a fashion that takes time – we have been told to prefer a judgement that can be as quick and as efficient as the delivery of a webpage at the touch of a button. The result is all; the mediation – or study – required to get there is eliminated in the demand for immediacy and transparency. For judgement, we no longer call on human intervention; rather, we make appeals instead to an abstract system, devoid of the possibilities of contamination by human thinking. In this, we are no longer professors, but rather (and rather insultingly) ‘human resources’, operating within a prescribed system; and, as human resources in relation to assessment, we become not examiners but ‘operatives of the assessment function’.
This, of course, now goes well beyond the confines of the academy; and it may well be the case that this discourse originates elsewhere and has been inappropriately imported into the academy. In the language and norms of ‘human resources’, it is important that, instead of attending to a candidate’s curriculum vitae, say, which would reveal prior experience, we turn to ‘competency criteria’. That is to say: the CV might possibly prejudice an employer to favour one candidate over another, precisely because they have demonstrated the requisite experience for the job in hand, while other candidates may not. Such a move, it is argued, potentially prejudices me as employer against the less well-qualified candidate – that is, it does so unless I ignore that experience and authority, establish a ‘ground zero’ basis for comparison and turn to assess competence instead of prior qualification. I then, of course, need to set competency criteria; but the primary assumption is that these criteria will somehow themselves be ‘neutral’, and that, of course, is impossible. The criteria, set by me as employer, are inherently biased by my subjectivity. The only way around this would be to eliminate my own subjectivity as well and to become ‘the employer function’. This way, we effectively strive to eliminate humanity entirely from the process and from our relations with each other. Thus, also, assessment and judgement can now only be assessments of processes and not of content. The ideology contained in the very terminology of the ‘HR’ language tends to ignore such a difficulty; yet it is the basic difficulty in question more generally, for it is the difficulty regarding legitimation.
In passing, let us note the further development of the trajectory that I have traced here. I have already argued that we have moved from an interest in knowledge to an interest in the management of information. This move, however, to ‘competency criteria’, reflects a further shift. No longer are we interested in information itself (the CV, for example); rather, we will develop a bare questionnaire that allegedly allows candidates – regardless of their knowledge or experience or authority in a field – to show a supposed competence. We might pass over the fact that all that this shows is competence in filling out a form; or, perhaps, we should not so glibly pass that fact over. This – the ability to operate within an abstract system by manipulating forms – is exactly what our emergent bureaucratic cultures now prize. As with employment assessment, so also with assessments in the University that has been infected with this pernicious language and modes of thought.
Within the University, however, there are yet more pressing immediate concerns. The question for the student, once they receive the grade, is not any longer ‘What can I now do? What am I qualified to practise?’ or ‘Where did I go wrong?’, but rather, ‘Did the examiner get it right?’ In other words, now that any principle of legitimation by ‘qualification’ is removed from the scenario, we can now also examine the examiner. The examiner’s knowing, experience and authority – their qualifications – are now, literally, out of the question; but the examiner themselves is very much in question. So, we have a system where the examiner, in principle, can be examined – by a second-order examiner (who may be the student, but may equally be a further institutional ‘authority’, such as a ‘quality assurance agency’). However, it follows at this point that the same logic would apply, surely, to this second-order examiner; for this second-order examiner is, or should be, at least in principle, subject to precisely the same kind of scrutiny in turn. In the end, we face the ancient question of who judges – Juvenal’s Quis custodiet ipsos custodes? – and we should add the prime question posed by Cicero, the question of who stands to benefit from the judgement made: Cui bono? To avoid the obvious infinite regress, we simply and at a stroke get rid of the ‘who’ here. The answer is: the system and processes of examining themselves. We are no longer as interested in the content of assessment as in its processes, its carcinogenically proliferating and bureaucratically self-justifying modes.
In relation to this, consider here the arguments advanced by Sally Brown, for example, an eminent pro-vice-chancellor in a large institution in the UK (one that prides itself on widening participation in University education, and on teaching, learning and assessment). In September 2008, in a piece in Times Higher Education, she argued that, for today’s student, ‘the value of work is tied to the weight of assessment’, with the unsurprising – if rather shocking – consequence that ‘students regard marks as money’. Her response to this is not that we should aim to correct or even to question for a moment such a narrow view of assessment and its signifying ‘currency’ in marks or grades; rather, Sally Brown appears simply to accept it. So, she argues, ‘if we want to influence student behaviour we need to indicate the value we place on certain types of activity by weighting marks towards what we regard as important’. At one level, of course, such a statement is entirely non-controversial. In assessing someone’s familiarity with nuclear physics, say, we may well place more emphasis and give more marks to their description of the Large Hadron Collider than we will to the accuracy with which they have numbered the pages of their scripts.
Yet, in this context, remember, marks are currency, and this affects how we can understand the question of value. Essentially, we will see here that we have an attempt to legitimize the translation of quality into quantity; and, crudely, the question now for the student is not a question about ‘knowing’ at all. Instead, the focus is on an entirely different question about the ‘value-for-money’ that they are getting for their investment in the University degree programme. Do the ‘marks’ given by the institution represent the proper value in the eyes of the consumer? Behind this, we can easily see the threatening shape of ‘inflation’ once again; but the inflation, while affecting price (or marks) has now no relation at all to value; and this is so, paradoxically, precisely in the middle of a discourse regarding the value of assessment itself as a practice.
Sally Brown is absolutely right in stressing the importance of assessment; but the logic of the position, at least as advanced in her argument, is one that does not value assessment for what it can do, but rather values the processes and procedures of assessment for the granting of wishes to candidates. It is a ‘purchaser-provider’ model of assessment; and the resulting danger is that students can be told – or even that they should be told – exactly and only what they want to hear, or what they have ‘paid’ for.
The argument advanced by colleagues such as Brown depends on an idea of the University as a repository of data, which we call knowledge, and which the student buys or, better, invests in: the University as ‘bank’, so to speak. She subscribes to the prioritization of what Abbott called ‘knowledge’ (my information), and her prioritization of assessment over all else precludes the very possibility of ‘knowing’. The key thing here now is the quality of the University brand: its value in the marketplace. Of course, in the UK, we have a means of legitimizing this quality: the Quality Assurance Agency.
This logic is one where knowledge is measurable and quantifiable; and we have simple ways of legitimizing our evaluations and measurements. We argue that the transaction that goes on here is transparent: that it becomes supposedly self-evidencing. The way we do this is through the proliferation of those QAA-required grade descriptors, whose function is now clear. They are there to indicate to a student the ‘price’ of each grade that they will achieve. Thus, while an ‘A’ grade shows or ‘contains’ qualities x, y, z, p, q and r, a ‘B’ might only contain x, y and p. In this, what is happening is that assessment is reduced to a legal and mechanical process: it is no longer a matter of judgement or legitimation at all, in fact, much less ‘qualification’ to practise something (as in an exam).
The examiner is now the pure functionary of a system, and they are also to be held to account for the way in which they operate the system. This last aspect is what we usually recognize as an intrinsic ‘appeals’ procedure, itself now backed up in the UK by the large and expensive armature of an Office of the Independent Adjudicator. This Office can explore processes and procedures of assessment, ensuring that institutions do what their schedules and assessment protocols say they do; but it cannot reassess. There is logic here: they cannot reassess because the question of the human judgement that awarded the grade has been entirely eliminated from what is seen to be the substantive business of assessment itself. There is nothing to reassess other than the mode in which the system of assessment has been operated. Given that the Office of the Independent Adjudicator, as a court of last appeal (before legal processes themselves actually begin), now stands over the system, those ‘operating’ the system – teachers and assessors – lose their own authority and standing. Once more, authority is to be vested in the management of processes and a transparent system for ensuring their transparency.
In assessment, we are in a position where nobody judges, in fact; and this is a perfect description of a bureaucracy. We have established instead a system, based purely and simply on a crude logic of mercenary exchange – marks are money, remember – and the task of the examiner, and the student likewise, is to preserve the sanctity of the system itself. ‘Save the banks’, even if that means damning the community that the banks are there to serve. It’s a bit like any mercantile system: no single individual is in charge; but nonetheless, there are classes of people that rise to the top. In this case, the class in question is the managerial class: the bureaucrats who devise the systems but who never claim any conscious agency. They see themselves – with good reason – as being slaves to, or at least honest servants of, the systems as well.
We have lost the very possibility of critical knowing here; we have lost the possibility of genuine exploratory dialogue and debate about value. Thus, in this state of affairs, we do not have assessment at all – remember, assessment means estimating quality – for we have no one actually estimating anything. We have replaced examining with something closer to bureaucratic monitoring. This, the other side of transparency, is surveillance. It is to this that I now turn.
Delegitimized, de-authorized: this is the version of human being that I am about to pass on to my assessed students. But let us look, in these final remarks, at what is at stake in this new orthodoxy. We can begin from the work of Phil Race, an eminent figure in the new managed assessment structures that cripple British and other institutions. He argues that there are three types of learner: deep learners (the kind that actually get to know things, but with a tendency to specialize and get a bit lost in thought); surface learners (those who skim the edges of lots of things, without ever really getting to grips with first principles on anything); and strategic learners (those who know what they need to do, pragmatically, to pass exams and get good grades, and who behave accordingly). Although he actually seems to favour this third type, his case is that, in the move to the plethora of new ways of assessing everyone, we must strive to be just, to avoid systematically favouring one type of learner (for example, she who is ‘good at exams’) over another.
So, logically, given that there are multiple types of learner, it follows that we should multiply our modes and manners of assessment according to this. In such a multiplication, everyone, regardless of their standing or character-type, can ‘come to market’, as it were. Instead of having ‘finals’, that one big blow-out of formal exams after years of time spent in study, thinking and (all being well) learning, we now have continuous assessment as well. This was the first diversification of how we measured performance (and that phrase is itself now telling: it is a machine that is performing, and we are but cogs within it). Race lists at least fourteen kinds of assessment that he urges us to use. Here they are: exams; open-book exams; structured exams; essays; reviews or annotated bibliographies; reports; practical work; portfolios; presentations; viva voce exams or orals; student projects; posters or exhibitions; dissertations or theses; and work-based learning.
Now, with this proliferation of assessments, happening continuously and virtually constantly, every single move a student may make is monitored and assessed: everything becomes measurable and thus needs its ‘code of practice’, its mode of operation. The student cannot move for assessment – and importantly, she has no time to learn, to study. She is, in fact, radically disabled from learning, for she must always instead show how she is behaving in terms of the logic of whatever assessment her work or present activity is geared towards. That trusty old foot-soldier, Private Study, is dead; General Knowledge will soon follow to the grave, for such knowledge is not specific to the matter being monitored through these diverse assessment practices.
As if this were not enough, Race argues that the student must also now replicate all this, this time in the form of self-assessment. That is to say, she must internalize the ideology, or put herself under these same forms of scrutiny. It is important to remember, though, that what is at stake in all this is not the human person making judgements; but rather the preservation of the sanctity of the system. So, rather than ask, ‘Have I got the content of this test right?’, the question becomes, ‘Am I doing this kind of assessment properly? What are its protocols?’ We sometimes make the mistake of calling this ‘enabling’. Actually, of course, it is a structure that we have seen in fiction: Winston Smith, tortured by surveillance in George Orwell’s 1984, is told by O’Brien that he must learn to love Big Brother – not to pretend to love Big Brother, but actually to do so. What is enabled by it, what is made operational by it, is the system of overall surveillance, now made more efficient because individual students can internalize it.
My joke is that ‘private study’ is dead; but this signals something more important than a poor joke. What I am getting at here is that the realm of the private is now also under threat. The systems that I have explored elsewhere in this study have revealed that prevailing ideas of the University have done extensive damage to the idea of the public sphere, atomizing society into individual acts of purchase. However, when the public sphere is so roundly attacked, directly or indirectly, and when that attack is carried on in tandem with an ideology of ‘transparency’, the result is also an attack on the realm of the private. The question is whether we can maintain the idea that education, or something that we might call the life of the mind, can ever be allowed the private space and time within which to flourish or to be enhanced or to find edification. Are we allowed now to have our own thought? When everything we do is to be assessed, and when we need to keep ourselves under surveillance to ensure that we can guarantee that we are doing things according to the presiding published and transparent criteria (or ‘grade-descriptors’), then we have lost the sense of ourselves as private individuals entirely. We are now (at best) representatives of something else, something more abstract: we become ‘agents’ of the existing social order. This is yet more totalitarian than the very system of examination that we tried to escape so long ago.
As if to prove this, Phil Race suggests, finally, that after all that external assessment has been redoubled at the level of self-assessment, we need to add the final twist of peer-assessment. In the situation in which we find ourselves, this maps very easily right on to those feared characters in 1984, the child Spies who inform on their own households. As in that novel, the end result is the same: a demand for complete conformity and totalitarian homogeneity, where the possibility of independent thought is as crushed as possible, usually under the Newspeak that we would now recognize in its forms of ‘Managese’. Thus, something that begins with the entirely admirable drive to find ways of rewarding diversity ends up by instilling a normative power of conformity. We are now reduced, all of us involved in assessment, to being agents of a presiding bureaucracy and system. This, of course, is not to say that good forms of assessment – those that can enable people to learn and to find authority for their autonomous activity – do not happen; but it is to claim that such good assessment happens despite the prevailing ideology of assessment whose effect, if not purpose, is to preclude the possibility of genuine social and cultural justices and freedoms. These things require the intervention of human judgement, with all its attendant risks; but we should recall that one of the first principles governing the University, at least as I am arguing for it here, is precisely the search for justice, and that search, like any quest, involves risks of getting things wrong. This, however, is also one of the conditions of our being or becoming human at all within a civic culture; and that culture shows its civilizing force in its benevolence and grace in the face of possible error. We usually just call this something like discussion or debate; and, in the University, we call it research, learning, teaching – and the search for good judgement.
In the end, the whole ideology of assessment goes hand in hand with a surveillance society. It is as if we cannot trust our students to become independent citizens, with thoughts of their own: rather, we have to make sure that all thought is managed, all criticism is reduced, and all people are constantly keeping themselves under surveillance. Needless to say, this is anathema to the very idea of academic freedom.
At the core of the whole issue is the lack of trust. It is not just that the prevailing ideologies do not trust our students to become independent citizens. More than this, it is also the case that we do not trust teachers. Yet more, it is also the case that we trust neither students nor teachers to become the kind of independent citizens that ‘we’ want. Of course, the ‘we’ in question here needs to be identified; and it usually can be identified in terms of whatever is the presiding centre of power and authority in a society. In many cases, that will be government; but in too many cases, the government itself is not as independent as it might be of others who determine the ‘mood’ of the nation, including, for instance, various media outlets that are closely identified with specific business interests. In the UK, for example, it is difficult for any party to gain political power unless it has the backing of News International media; and that is a business interest that does not have, as its primary aims, the advancing of the kinds of autonomous independent search for freedom and its extension that I have characterized as a primary goal of the University.
In the face of this lack of trust – a lack that was enthusiastically encouraged in the UK in the 1980s, when the Thatcher government was determined to ensure that there would be no alternative sources of authority in society to rival the government’s own claims and grasp on social control – it was decided that we really needed a mechanism to restore trust, or to ‘assure’ the population of the ‘quality’ of what was going on in Universities. Thus the QAA was born, first of all as a sub-agency of HEFCE and becoming, inevitably, an independent body after the Labour administration came to power in 1997. The QAA does what its title suggests: it acts as an agency (but an agency answerable to whom?), is concerned with ‘assuring’ (not with ‘ensuring’) and is organized around a general idea of ‘quality’ (but a quality that is to be measured and thus rendered into quantity).
QAA is certainly concerned with standards. However, the one abiding slippage that seems to found all its activity is the slippage that confounds ‘standards’ with ‘standardization’. Thus it is that, in collaboration with the Higher Education Academy, through which it encourages the internalization of its norms by new lecturers, it endorses the activity of setting things like ‘benchmarks’ that will help us to assess performance, not just in the graded performance of student activities but also in just about everything else. The benchmark then becomes a standard; and the standard then becomes something that needs itself to be ‘standardized’ across and between institutions. The inevitable drive here is towards homogenization. Once this becomes normative, we have the enormous armature of practices that require us to be standardizing everything.
This helps explain why modules have to be computable and exchangeable: their ‘content’ needs to be standardized in terms, say, of the time it will take a student to engage with the work each week. The demands of exchangeable currency – those CATs that I discussed earlier in this book – require that we know the size and currency value of the tokens (modules) that we now insert into the economy of the degree. Yet more fundamentally than this, we have to be ‘assured’ that if a module in final-year bioscience takes ten hours of a student’s time, then it is somehow equal to a second-year module in economics that also takes ten hours. The actual content of what is being done for those ten-hour periods is erased in all of this; and, again, we are left with an abstract equality that bears no relation to actual experience or actual activity. This affects, and infects, assessment, whose quality is now focused around these standardized practices and measures. In the end, the idea is that we should ideally be assured that ‘a First is a First is a First’: no matter if it is a First in Medicine from University X or a First in Comparative Literature from University Y. If a First is a First is a First, then the business community of employers (and the taxpayer) knows what it is getting when it employs the First-class graduate.
Gertrude Stein, the somewhat abstract author of modernist texts such as Tender Buttons, wrote that ‘a rose is a rose is a rose’. Hearing this, Ernest Hemingway replied that ‘a rose is a rose is an onion’. Here, we have two different attitudes to the ways in which language might relate to reality. Stein’s language is hypnotic and lulls us into a rhythm whereby the semiotic aspect supplants the semantic: the way that something is chanted becomes the message itself. Hemingway (paradoxically, given his usually rather bare style) stresses the inevitable metaphorical nature of the semantic itself, and draws attention to the necessity to establish difference as the very foundation of our making semantic sense at all. The key question for us, however, is as basic as this: why do we want to believe that all Firsts are the same, as if there existed somewhere an absolute essence of ‘the First’? Importantly, even if there were such an essence, it would now be an essence given to us by the processes of crudely abstract mathematical standardization, and not by any more material authority, such as that of an experience grounded in and displayed by acts of judgement.
Is there a way beyond this? In concluding here, let me state a fundamental principle that I suggest as a governing purpose of assessment. Assessment is about legitimate authorization. Many will agree with something as basic as this; but many will also misunderstand it in terms of thinking that I am arguing for an assessment that is concerned to ‘enable’ in a very general sense. I need to offer slightly more precision, and will do so by looking at a determinedly ‘enabling’ model of assessment that has already gained significant traction in the United States and that threatens to surface in the UK and elsewhere: the assessment that is based in the idea of the ‘Ability-Based Learning Environment’ (ABLE).
ABLE has been pioneered especially in Alverno College, a liberal arts college in Milwaukee. Alverno places its emphasis on what it describes as learning those abilities that students need in order to be able to put their knowledge to use. In principle, this sounds fine, of course. In practice, we need to examine what it entails. ABLE is codified, for there are precisely eight specific abilities that are highlighted and made central to the entirety of the College’s practices. The eight abilities are: communication; analysis; problem-solving; valuing in decision-making; social interaction; developing a global perspective; effective citizenship; and aesthetic responsiveness.
There is obviously nothing objectionable, at least in principle, in our students in general having certain kinds of ‘ability’ in all these things. However, the question I raise is a simple one: what happens to the specifics and particularity of our different kinds of knowledge when they are subsumed under these more generic and generalizable abilities? Is it the same thing when I consider ‘effective citizenship’ in a class on maxillofacial surgery and when I consider it in my teaching of Joyce’s Ulysses, say? My point is simple: as with the demands of the QAA in the UK, we have here a drive towards a homogenization of the student body in terms of an eventual assessment practice that will no longer measure proficiency in a field, but rather will attend to the kind of person that we now are. Education becomes a means to produce a specific kind of human being or human behaviour, and one that is homogenized as far as is possible according to the determining whim of an overarching ideology.
There has been a drive in the UK in recent times that actively parallels Alverno’s ABLE education. In the UK, we have been asked to prioritize something called ‘transferable skills’ or, more pointedly, just ‘skills’ in all of the modules that we propose for validation. In other words, we cannot teach the particulars of our field now without ensuring that those particulars themselves are in some ways subservient to this skills agenda. These skills tend to be rather vague and generic: it is not the case that the government has suggested that an English department, say, should help its students to develop a skill in Althusserian Marxism, or Derridean deconstruction, or in the analysis of Ancrene Wisse or Paradise Lost – or, indeed, in anything that might be thought of as specific to English. Rather, English modules, ‘licensed’ by QAA demands, cannot be taught unless they demonstrate, for random examples, skills in ‘teamwork’ or ‘problem-solving’ or ‘effective communication’ or ‘leadership’ and so on. These randomly selected skills, however, start to look less random once laid out: they are broadly recognizable as the skills supposed to be central to effective business management. That is to say: English is here reduced to or translated into ‘Managese’, and the potential ‘critic’ becomes, instead, a skilled ‘manager’.
I have suggested that the ABLE agenda and the skills agenda are effectively as one, but with different explicit formulations. ABLE wants to produce a particular kind of graduate. Explicitly, ABLE does not use anything like a traditional examination. Its curriculum, it says, does focus on measurement or quantification, but it stresses that this is always ‘measurement that’s about you, and only you’. In its explanation of the ABLE curriculum, it states that ‘The lessons you learn are applicable in real life, they become part of who you are’; and this ‘real life’ is one that is ‘competitive’ but focused specifically on three key areas: ‘the worlds of work, family, and civic community’. In passing, let us note that Alverno therefore presents itself as not a part of ‘real life’; real life happens outside the College, and is a space of competition involving work, the family and civic community. ABLE assessment is thus guided by whatever ideology it is that shapes a real-life world that is apart from this learning; and this is why the College has to stress that the learning that is done within its walls is ‘relevant’ to a reality elsewhere.
The learning becomes practical, but only in the sense that it ‘prepares’ the student for the competitions that it claims to be constitutive of reality; and the assessment is practical, too, but only in the sense that what is assessed – by the student who keeps herself under scrutiny – is the extent to which she fits in with the ethos of competitive work, competitive family, competitive civics.
Our own UK skills agenda performs the same function. The skills agenda helps divert attention away from the specifics of academic or intellectual content; it places a responsibility for self-monitoring upon the learner; and it requires a mode of assessment that drives us towards the priced evaluation or the quantified evaluation not of academic work but of personhood in a marketplace. This, I hope needless to say, raises questions of ethics and morality. The question is whether it is the responsibility of the University to produce the good consumer, as this ideology seems to propose; or whether we might think of education at this level as more edifying than this, or at least as endowed with a greater scope and ambit.
Alverno might find a justification for what it does in its religious foundation: it is a College founded by and grounded in the beliefs of the Franciscan Order, with its explicitly Catholic ethos determined by the School Sisters of St Francis, who chartered it in 1887. Outside of this religious kind of foundation, however, and in the more general tendency towards assessment as self-monitoring, we have a parallel of Catholic self-examination going on; and the project is one where we are driven to conform to an alleged external standard, really a standardization. The incipient totalitarianism of assessing persons – as our potential labour force especially – rather than performance is, at least, potentially pernicious. Yet this is what lies behind our transferable skills/transferable knowledge agendas.
Let us rehabilitate assessment; but let us rehabilitate it as something that enables a student to engage more deeply in the first instance with their field of study. There will no doubt be adjacent skills that the student learns; but these will vary, of necessity, with the individual student and with what they may bring to their programmes of study. More importantly, it is an error of political importance to confound the assessment of an academic performance with the assessment of an individual person in terms of the kind of person that we ‘produce’ from our institutions. In the end, we do not ‘produce’ at all in this way; but we can teach, and we can assess – and even examine – what we teach. That is to say, we can legitimize certain kinds of thinking; and we can, through that legitimization, bring our students to the point where they can exert their own authority. In this way, they do not become ‘agents’ of a government agenda, but rather they become authorities in their own right. Further, in this way, instead of government assessing its population, the people can properly assess its government. Assessment, if it works at all, can measure its own success by considering how well the graduate can learn to judge critically the world that they inhabit and can help to invent; assessment works, that is, if it searches for and tries always to extend justice.
This is at the root of an assessment grounded in justice and in democratic extensions of freedom; it is this that we should encourage. Within the classroom, we can do this by attending more directly to modes of assessment that are specific to, and that address the particularities of, our separate disciplines and academic practices.
Sally Brown writes that ‘your intended learning outcomes should serve as a map of your teaching programme’. This mechanical procedure reduces us to a human resource, and refuses the organic life of the mind or of learning. Only a bureaucratic mentality could come up with such crude and unhelpful ideas. It is intended to ensure that there are no loose ends, no ‘play’ within the engine-like mechanisms that ‘drive’ us and our ‘motivations’, and that everything can be accounted for. Yet more importantly, it commits the fundamental error – a kind of category mistake – that sees the University as an agent of governing ideologies; and sees the role of assessment as one that polices the kind of social agents that we produce, ensuring that we produce only people capable of conforming to whatever government, business or other external forces demand. An assessment that is grounded in the legitimization of our students’ authorities will help us to a different kind of outcome: one where democracy and freedom can be extended, and where assessment becomes, genuinely, a matter of radical empowerment.