Talking Up, Dumbing Down

To many academics of the older generation the equation ‘more equals worse’ in the context of university admissions and teaching class sizes was a self-evident truth. It therefore followed that the target of processing 50% of the population through university, adopted by Government more than a decade ago without serious political dissent, would necessarily result in a lowering of standards, especially since resources for higher education ‘in real terms’ were declining over the same period. But there were as many younger colleagues who believed that a diversification of learning techniques and computer applications would compensate and prevent any serious risk of falling standards. The headlong rush to confer university status on former polytechnics and colleges of further education, institutions that had hitherto been seen as fulfilling an important but different role in tertiary education, simply in order to accommodate increased numbers of aspiring graduates, was rightly seen as evidence of an increasing concern with style over substance.
An issue that has particularly exercised students in view of increased fee levels is the quality of teaching that they receive, a complaint for which they certainly have good grounds. Around 1980 the Arts Faculty at Edinburgh University conducted a teaching contact-hours survey which showed that the Faculty average was 200 hours per year per member of teaching staff in all the mainstream departments like History, English, Philosophy or Classics. In specialist subjects like Oriental languages the figure was understandably much higher because of intensive language tuition, but in few departments was it lower. In the past ten years in my own department that figure has fallen to no more than 70 or 80. In both preliminary and honours courses the number of lectures has fallen to a third of the 1980s curriculum, radically reducing the depth to which a subject can be investigated, a problem that has been compounded by the division of year-long courses into term-length ‘modules’ for the benefit of visiting overseas students.


University graduates after a degree ceremony

Honours courses in my own department that were taught in two two-hour sessions throughout the year and examined in a three-hour paper in Finals have been reduced to thirty contact hours over one semester, examined by a two-hour paper requiring just two questions to be answered. Although students are required to take more modules than before (though not twice as many), this must represent a dilution of content and depth of study. Furthermore, since advanced courses cannot require previous completion of another course, because short-term visiting students would thus be excluded, the principle of progressive depth of study from third to fourth year in the Scottish system has been lost. Finals is no longer a measure of attainment after four years of progressive learning, but the outcome of accumulation of the requisite number of modules.
Concomitant with these changes was the abolition of the ‘proceed to Honours’ threshold. Formerly students in most Arts-based areas in Edinburgh University, and doubtless elsewhere, were required to achieve a threshold mark or average of around 55% in their second year examinations in order to proceed to Honours in their chosen subject. Those who failed to meet this target were thought likely to struggle at Honours level, and were channelled instead towards the General degree, which was normally then completed in three years. With four classes of degree at Honours it was argued that it was unreasonable to preclude students from proceeding simply because they might not achieve the equivalent of a second class degree. The fact that virtually no-one ever gets a Third these days will doubtless be taken by the optimists as a vindication of their confidence in rising standards. But the fact remains that with the abolition of the threshold it became easier to progress to Honours, leaving academic staff to massage the outcome.
Tutorial groups in the 1980s and 90s never aspired to the pairs of my Oxford undergraduate days, but they were still small. I remember being told when I arrived in Edinburgh that the ideal tutorial group was five: ‘three Scots for ballast (the Scots, especially ‘wee Marys’, were notoriously taciturn in tutorials), one English public school boy to provoke the Scots, and one American to provide the ideas’. The caricature was not especially flattering to the native contingent, but it actually worked quite well. Nowadays, tutorials of a dozen or more, held fortnightly, simply do not permit meaningful discussion and allow too many students to ‘free-wheel’ in the crowd. It has also become commonplace for all preliminary level tutorials to be handed over to teaching assistants, generally post-graduate students, paid at a niggardly rate, which they are glad to accept for the money and experience. Now in principle there is nothing wrong with postgraduates gaining teaching experience; being closer to the students in age may give them some advantages and make them more accessible. But the tendency has been for too much of this basic tuition to be handed over to short-term tutors and for the permanent staff to abdicate responsibility for preliminary tuition.
The issue of declining standards in universities starts at the admissions stage. In some institutions, as the number of universities increased, entry standards measured in terms of ‘A’ levels or Scottish Highers undoubtedly fell, and periodically the national press would report how one of its journalists had gained entry to read sociology or chemistry at the University of Potters Bar with two fails and a Boy Scout’s proficiency badge in knot-making. Even in older institutions like The University of Edinburgh (as it re-branded itself to distinguish it from the Other Universities in Edinburgh) it slowly became apparent after initial evasiveness that differential admissions standards were being applied in favour of applicants from what were perceived as disadvantaged school backgrounds, that is, from those that did not have a tradition of grooming university entrants like the fee-paying independent schools, or where for any candidate to have shown the slightest academic inclination would have been in the teeth of the prevailing culture. For applicants with better grades who were turned away this policy must indeed have seemed like the ultimate in totalitarian social engineering. In fact a measure of personal preference and prejudice had been endemic in admissions policies for years, though in favour of applicants from a more privileged sector. I recall vividly attending an interview in Cambridge, when the previous candidate was politely ushered from the room with the words, ‘glad to have met you, Carruthers, and do convey my regards to your father’, before the interviewer turned to me and curtly demanded to know my name. The problem is, of course, that once one departs from academic grades as the primary criterion it becomes impossible to justify publicly the selection of one candidate in preference to another.
The real effect, however, of flexible entry standards was obscured by the fact that year upon year ‘A’ level grades kept rising, notwithstanding the perception of employers and university teachers that their new recruits were increasingly illiterate and innumerate. Anyone daring to suggest that examination standards were falling was pilloried as a Jonah who was decrying the hard-earned achievements of the nation’s youth. Of course entry standards are no assured means of estimating the likely achievement at degree level: there is no evidence to suggest that high achievers at ‘A’ level or Highers consistently perform better at university or that lower grades at ‘A’ level cannot be redeemed three or four years down the line. In fact, it was argued that unless there had been an upward trajectory or ‘incremental improvement’ in student achievement, university teachers had little to take credit for.
The problem has always been that there is no absolute gold standard by which levels of achievement can be measured. Some years ago it was widely recognised in British universities that the First Class degree was being awarded to too small a percentage of the graduating cohort, sometimes as few as two or three per cent. Accordingly, Boards of Examiners were encouraged to be more generous, and not to set unattainably high standards for the First Class. The system used for marking was widely recognised as contributing to the problem. Examiners in the Arts and Humanities disciplines in particular were notoriously reluctant to give marks above the 70s: perfection was simply not attainable in history or literature, it was argued, by contrast perhaps with some science subjects in which 100% might be possible, in practice as well as in theory. The problem was that a grudging First Class mark of 72%, just two per cent above the First Class threshold, would hardly have the same impact overall as marks spanning a ten-mark range for lower classes. So to achieve a First Class degree on aggregate required a very substantial predominance of individual papers at First Class standard. At the opposite end of the scale, a fail mark within a span, say, from 0 to 40 would have a much greater impact in pulling down the overall result. There would be little dispute today that many universities, and more especially some departments within universities, were at one time notoriously mean in the award of First Class honours, and some relaxation in the standard was not unreasonable. Otherwise the progressive demise of the Third, now virtually obsolete, meant that in many subjects the outcome was effectively between an Upper or Lower Second, casting doubt over the merit of having a four-tier classification system at all.
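The arithmetic is readily illustrated with some purely hypothetical figures. A candidate whose five papers were marked 72, 65, 65, 65 and 65 would average 66.4, comfortably within the Upper Second band: the solitary First Class paper has lifted the aggregate by barely a mark and a half. A candidate marked 72, 72, 72, 72 and 30, by contrast, would average 63.6: a single disastrous paper has dragged four First Class performances down into the same band. Hence only a very substantial predominance of First Class papers could yield a First on aggregate.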
There can be little question, however, that an increasing obsession in recent years with ‘performance indicators’ and league tables has resulted in a significant measure of grade inflation across the board. This was evident from personal experience of the award of postgraduate research studentships in Scotland over a twenty-year period, until Wendy Alexander, after devolution, decided to re-volve responsibility for Scottish awards, hitherto determined by an independent Scottish committee, into the British Academy scheme. These awards had been made on the basis of the student’s track record as an undergraduate, the quality of the research proposal, the appropriateness of the department in which that research would be undertaken and of the available supervision, and the referees’ reports. But above all, the calibre of the degree achieved at undergraduate level inevitably counted heavily in any system of award based upon academic merit and achievement. In the early ’80s the number of First Class degrees among candidates for awards was significantly smaller than the total number of awards available, so that once those candidates with Firsts whose applications overall were deemed to be of the requisite calibre had been allocated awards, there remained a further twenty or thirty out of just over a hundred to be distributed to the candidates in the next highest category. In terms of their degree records, this meant sifting in order of priority those whose degrees had been ranked by their institutions as ‘near misses’ for a First, as well as some of the adjacent ‘top Upper Seconds’. Twenty years later virtually no candidates were ranked as ‘near misses’, but the number of Firsts among the candidates now exceeded the total number of awards available by fifty per cent. Even to suggest that institutions had simply succumbed to grade inflation, raising those who would formerly have been awarded top Upper Seconds and ‘near misses’ to Firsts, would have provoked the outrage of the politically correct. But faced with such an increase in excellence, what was the Board to do? First, the application documents were modified, requiring the degree-awarding institution to indicate whether the First achieved was an ‘Outstanding First’, a ‘Clear First’ or a ‘Borderline First’ (or, as one cynical academic put it, a First-Class First, a Second-Class First or a Third-Class First). Then, in the ordering of the applications, candidates who had achieved a preponderance of First Class papers or examined elements were selected first, as opposed to those whose transcripts showed fifty per cent or less of First Class marks but who nonetheless had achieved a First Class result on the system of classification applied by the awarding institution. Surprisingly enough, this process immediately reduced the number of applicants in the top category to significantly below the number of available awards, leaving the ordering of the remaining applications an exercise not unlike that formerly adopted for assessment of the ‘near misses’. What was surprising was that so many Firsts could be awarded on the basis of a minority of First-class work, in some cases less than 40% of the overall portfolio. Nor were the new universities especially culpable; one of the most glaring offenders proved to be a major civic institution of considerable antiquity and distinction from the south of England. This, however, was not just a subjective impression of falling standards in the mind of a disgruntled academic. The ground rules for the award of Firsts were clearly stated by the awarding universities, and by comparison with the 1980s a significant number of Firsts were now being awarded on the basis of a minority of First-class units of assessment, where hitherto they had not been.
Universities naturally will continue to insist that they have maintained and indeed enhanced standards, and will dismiss critics as curmudgeons who simply have a nostalgic and unrealistic belief that things ain’t what they used to be. But surreptitious changes in marking scales, as a result of which it is possible to obtain a First on the basis of a minority of First Class marks, are seriously devaluing degrees compared to those awarded a generation ago. In Edinburgh the University’s marking scale was changed to create three grades of First Class: A1 or ‘brilliant’, numerically marked 90-100; A2 or ‘excellent’, 80-89; and A3 or ‘very good’, 70-79. This system obviously had the effect of disproportionately raising the weight of First Class marks in any aggregate when balanced against just two Second-Class bands and one Third, each of similar numerical span.
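Again the effect is easily illustrated with hypothetical figures. Two scripts that would formerly have been marked 74 and 58 average 66, an Upper Second; mark the stronger script at 92 on the new scale and the same pair averages 75, a First. No comparable stretching was applied at the lower end, where the Second Class bands retained their ten-mark spans.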
Universities continue to argue that the External Examiner system is a safeguard of standards. It is not, for the simple reason that External Examiners can only adjudicate within the terms of reference that are set by the university itself. The External Examiner may exercise his or her authority to raise or lower an individual mark for an answer or for a paper, but if the university has devised a system of aggregation whereby three First Class papers out of ten will be sufficient to gain the candidate a First Class degree, then the External can do nothing about it beyond registering his or her views for the record. It is hard to avoid the conclusion that in the end the whole system of class divisions will be abolished in favour of a simple numerical classification, not unlike the North American Grade Point Average system. Then at least the anomaly of one candidate at the very top of a class being classified together with another who just scraped into the same class would be overcome. Equally a candidate who just scraped home would not be disproportionately distinguished from one who did not, since the marginal difference between a 4.9 and a 5.0 would be self-evident to all. Employers in particular would be able to judge for themselves how important the difference in academic performance should be considered.

University students sitting an examination

It is not, however, simply by manipulating the assessment structure that degree standards have been massaged. More recently academics have been subject to overt instructions that are designed to enhance the grades of poorly-performing students, often under the umbrella guise of concessions for dyslexia. In the University of Edinburgh for several years some examination scripts were imprinted with the following formula: ‘This student has specific learning difficulties. He/she should incur no penalties for poor spelling, grammar, punctuation and structure in examination scripts, unless these are being directly assessed and are core to an understanding of the course.’
For those of us for whom language is a means of communicating ideas and information, garbled grammar, incorrect punctuation and spelling, and lack of structure are major impediments to understanding; and without that understanding it is hard to make an assessment of what the candidate might have achieved had he or she not suffered from these ‘learning difficulties’. But perhaps one should not be surprised at such directives, when the academic administrators who authorised these concessions to illiteracy had themselves issued a set of criteria for the award of First-class degrees that included the notion that to achieve such distinction candidates would be expected to show real ‘flare’ (sic) for their subject.
It is of course entirely proper that the maintenance of standards should be at the forefront of a university’s concerns. To the uninitiated it might appear self-evident that maintaining standards requires recruiting and retaining staff of outstanding calibre, and attracting students of the highest academic potential. Quite rightly it is no longer acceptable that staff should lack aptitude for teaching or administrative competence. Equally it is entirely proper that ‘research’ should no longer be an excuse for indolent self-indulgence, caricatured by Laurie Taylor for many years on the back page of the THES in the figure of Dr Piercemuller, who took himself off to the south of France every summer. Certainly research should have clear objectives and should be carried out effectively within a reasonably defined schedule. I am still appalled to find, whenever I am asked to contribute to an edited volume, how many academics are constitutionally incapable of meeting a deadline, and how those who deliver promptly are disadvantaged by their contributions becoming dated through editorial concessions to the slowest contributors. It need not follow, however, that research should only be targeted at commercially or economically beneficial objectives. Pure research on superficially esoteric topics should still have a role within a university of international stature, though it does not follow that the public purse should pay for such endeavours. The real problem with the trend towards ‘accountability’ and ‘transparency’, concepts much beloved of Vice-Chancellors and their acolytes, is that what passes as such is seldom amenable to close scrutiny, and generally amounts to no more than superficial spin and cotton-wool opacity.
The problem, and perhaps a symptom of the contemporary malaise, is the increasing chasm between window-dressing and reality. In recent years an elaborate Quality Assurance industry has burgeoned, generating a paper-trail of documentation which purports to ensure the maintenance of academic standards, equal opportunities, wider access and a host of other politically-correct concepts. All may be entirely laudable in themselves, but the generation of league tables and codes of good practice without effective means of ensuring their application and delivery is worthless at best and dishonest at worst. My own department was subject to teaching quality assurance exercises, in line with Government policy. Surprising as it might seem to the layman, this did not actually involve anyone sitting in on lectures or tutorials to ensure that the teachers were competent, well-prepared, and audible rather than mumbling in a post-alcoholic haze through lecture notes written thirty years ago. What was required instead was endless documentation listing, for example, the ‘learning outcomes’ of every course taught in the department (not ‘objectives’ or ‘methods’ and quite obviously not requiring an elegant command of the English language), among which inevitably were those formalised concepts of ‘generic’ or ‘transferable’ skills (such as literacy, numeracy and a capacity for articulate communication) which a generation ago it was assumed students might have acquired in thirteen years of primary and secondary education. Of course a skilled scriptwriter trained in creative writing can readily string together a text of breathtaking opacity that uses all the right buzz words at the appropriate points, but this does not guarantee that what is delivered in the lecture room or laboratory has the slightest resemblance to the propaganda. And no-one could accuse the department of misrepresentation, since the documentation was almost totally devoid of meaning in the first place.
Nonetheless, at the cost of cancelling classes and even courses, six months was devoted to producing no fewer than fifty substantial ring-binders of this nonsense, and the department was duly pronounced ‘commendable’ across the board. Thank God, one might have thought, that at least we can now get back to the task of teaching. But no, that was the external quality assurance exercise. In the following session we were subject to an internal quality assurance exercise with internal as well as external assessors, in which the requirements were just sufficiently different to require a redrafting of all the documentation. And just to prove that you can never have enough of a good thing, the next session saw an institutional review of the utmost importance, and after your recent experience, which department could be better placed to act as guinea-pig? Meantime, of course, the department’s research rating fell, which naturally was the responsibility of staff who had evidently been culpably neglecting their research.
The vacuous nature of this whole charade was exemplified only too clearly by the appointment of external assessors to conduct the first of these reviews. Plainly too few academics had been recruited by the Quality Assurance Agency, with the result that only one of the three appointed in this instance was even an archaeologist, competent enough personally, but with a period specialism that had never been taught in the department. The others were historians, and doubtless excellent historians. But in a discipline in which laboratory-based teaching and fieldwork form a substantial part of the undergraduate curriculum, as required by national ‘benchmarking’ standards, it is profoundly worrying that experience in these areas was not more prominent in the panel’s expertise. As Head of Department at the time I was stunned by the inappropriateness of the assessors, and indicated immediately to the University authorities that I was contemplating an appeal, as was permitted. As the deadline for registering objections approached I had second thoughts, and when challenged at the eleventh hour by a senior internal official as to why I had not proceeded, I explained my reasoning. First of all, I pointed out, to claim that one assessor was inappropriate might be met with some concession; suggesting that all three were unsuitable, even if true, might look a trifle immoderate. Then again, in view of the consistent under-resourcing by the University of my department over twenty years, our omissions and shortcomings would be only too evident to any assessors who were remotely familiar with the discipline, and we would be in for a very rough ride. On the other hand, and here I warmed to my theme, with this panel there was just a chance we could pull the wool over their eyes and muddle through. This assessment of the Realpolitik of the situation, I have to say, seemed to mortify the senior official, whose less than coherent response was to the effect that we couldn’t do that. But we could, and did, with resounding success. It is true that some sections of our submission had to be revised, playing down the element of mea culpa where we thought the omission might pass unchallenged, and leaving only a measure of self-deprecation for credibility’s sake. Even in army inspections they used to leave gravy on just one plate in the canteen so that the inspectors could have the satisfaction of finding it.
The Research Assessment Exercise, whereby league tables for research excellence are drawn up every five or six years, is almost as flawed, even though it is sustained by what purports to be academic peer review. The assessment is allegedly of research of international or national importance, though with characteristic British self-confidence in our supreme capacity to judge our own achievements, no johnny foreigners are invited to be members of the adjudicating panels to introduce any jarring measure of external objectivity. One fatal flaw is that there is no guarantee of a common standard between subject panels, so that one discipline may emerge as significantly more self-satisfied and generous with its grades than another. In consequence, within an institution one department may be deemed more reputable in research than another simply because of the varying standards applied by their disciplinary panels. None of this would matter greatly, were it not for the fact that research ratings are one of the criteria factored into the resource allocation model. Collectively, therefore, it might benefit a subject discipline if its national panel engaged in a hearty dose of grade inflation, except that if other panels behave likewise the net effect within an essentially limited higher education budget is largely neutral. The only discipline that might lose out is the hypothetical one whose panel sticks rigorously to its professional and academic standards.
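A hypothetical illustration makes the point. Suppose funding were distributed in simple proportion to ratings, and three disciplines were each rated 4: each would receive a third of the available pot. If all three panels inflated their grades to 5, each would still receive a third, and nothing would change. But if two panels inflated and the third held to its standards, the honest discipline’s share would fall from a third to four-fourteenths, and it alone would pay the price of its rigour.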
British universities have for many years now relied upon the principle of academic peer review to guarantee maintenance of standards, and in particular in the evaluation of research proposals for funding by the Research Councils. Publication in a peer-reviewed international journal is regarded as the gold standard of research productivity. I have no doubt that in disciplines like physics, mathematics, philosophy and others that are studied with distinction internationally, and where even top-flight researchers may only be acquainted by reputation with each other’s work, this system may genuinely enshrine impartial evaluation of research. But in smaller disciplines, where the active research community is much smaller and more tightly knit, I have misgivings about the merits of peer review. It could stifle research initiatives by limiting funding support to institutions with an established track record in the field or to those who are familiar to the peer review cabal. It could militate against the prospects of support for younger scholars whose research might encroach upon or challenge the reputations of established authorities. It is hard to offer an effective alternative, but there does seem to be a case for greater monitoring of the system by peers from beyond the discipline in question. Quis custodiet ipsos custodes?
Throughout my academic career it has been axiomatic that universities were engaged in research, the belief being, certainly justified in my own discipline, that access to the cutting edge of research informed and stimulated the teaching and learning processes. Pressure from within the institution on staff to accept more students has been particularly keen in the area of postgraduate recruitment, especially of overseas students, who bring maximum fee income. In my own experience it was hard to resist applications, even when the proposed area of research was not one in which the department had any specialist expertise. In a small department, furthermore, there was always the real risk that staff departures would leave research students without adequate supervision, in at least one instance to my knowledge for a protracted period that would have given the student ample grounds for appeal, had the outcome not been successful.
The research environment was a key element in recruiting outstanding staff, and was certainly a greater attraction than the financial rewards of university teaching, which have fallen well behind the civil and public service careers that once were second choice. Salary levels, and more especially the lack of resources in universities, are now a serious disincentive to recruitment of the best talent in some fields, and will surely have a devastating effect within a generation on senior academic leadership in currently unfashionable disciplines. In the 1980s it was a common statistic that 70% of a university’s income was spent on its staff, and that its staff were in effect its most valuable asset. Now that figure has been eroded by a very considerable margin, and the responsibility for this must rest squarely not just with Government but with Vice-Chancellors, who have cynically pegged the pay awards for which they are responsible at around the level which could be afforded by those institutions that had received the lowest level of supplementation of their annual grant. Since the level of annual supplementation is very much a case of swings and roundabouts, it should not have been impossible for our chief executives to devise a means of balancing out the funding of the salaries budget. Instead they chose invariably to follow a policy which not only guaranteed that the percentage of income devoted to staff would fall, but also, of course, incidentally provided opportunities for redistributing resources to other activities at the expense of front-line teaching staff.
It is sad that the gap between senior university management and teaching staff has become quite so wide. Being the constant purveyor of bad news is hardly an enviable task, but the reluctance of senior colleagues to set foot in many of the departments affected by their executive policies, other than those whose fashionable or trail-blazing activities attract widespread media attention, is hardly calculated to foster the Dunkirk spirit. What is unforgivable is the manner in which our Vice-Chancellors, or Chief Executives, as many of them now prefer to be regarded, have accepted massively disproportionate salary increases relative to their own staff. Accused by the unions of ‘awarding themselves’ excessive pay increases, the Vice-Chancellors have insisted that they are not responsible for setting their own pay levels, their performance being monitored by the Universities’ Councils or Courts and Finance Committees. To anyone who is familiar with the workings of senior university management the pedantry of this disingenuous argument does not deserve serious consideration.
To those outside academia, employed in the ‘real’ world of commerce and industry, life in the academic sector is inevitably portrayed as undemanding and stress-free. In fact, there can be few circumstances more stressful than being constantly frustrated in what you know could be achieved, particularly in matters to which you are deeply committed. Much of my time was spent typing out course handbooks and photocopying handouts for student classes, which I was quite prepared to do, if my employer believed that my salary was best invested in this way, but I did object when it was then implied that any shortfall in research output was through my neglect or omission.
Sadly, it is not just academics who suffer the consequences of declining levels of resourcing in universities. The immediate victims are quite plainly the students themselves, who are expected to pay much more for much less than their predecessors received a generation ago. It may be true that learning media have diversified, and that back-up materials of greater sophistication are now available on the departmental web site. But this merely compensates for the decline in library provision, and reducing staff:student contact time defeats the purpose of a university education, which is supposed to offer more than a distance learning programme. In the end it is the institutions themselves that suffer the most. It has been said many times in recent years that academic staff morale has fallen to an all-time low. That may be true in some quarters, but it was not my own perception of the situation, which was more serious than that. My belief is that staff commitment to their institutions has plummeted because they have consistently and repeatedly been treated so shabbily by both Government and more particularly by their own senior management. Troops will march steadfastly in the face of adversity if they believe that their generals are with them. But if they see their leaders as self-serving individuals whose commitment to staff and students’ interests has declined in inverse proportion to the rise in their own salaries, then they too will eventually find other rewards for their endeavours. It is hardly surprising that one anonymous wit recently described some Vice-Chancellors as being like politicians but without the integrity. Academic staff are bright enough to recognise that there are very few checks upon what they do and how conscientiously they do it. They recognise only too well that the current obsession with quality assurance mechanisms is mostly window-dressing, without real substance and without sanctions for non-compliance. Accordingly those who are alert enough to have a life beyond the university’s cloisters, either within their professional discipline or in the wider academic or professional world, know that they can, like their Principals, pursue their own interests unchecked whilst drawing their salaries from an institution that knows no better. The concept of collegiality that sustained the universities in former years has been utterly destroyed by the cynicism of Vice-Chancellors and senior administrators. Staff who formerly delivered hours of service over and above their notional contractual commitment out of a sense of obligation to their discipline and to their students now recognise that the university’s response to their years of dedication is exemplified by the pay offer made in the Spring of 2009, which reached the rock-bottom level of 0.3%, a figure which even in a recession amounts to a pay cut in real terms.
In other words, you can expect to be treated with contempt. And in response to this derisory offer, the University and College Union, like its predecessors a broken reed of industrial representation, wringing its limp wrists, has tripped itself up over a data management problem that has meant deferring a ballot on strike action until it is too late to matter.
So if university staff are inhibited from speaking out for fear of disciplinary action by their employers, or because they know that they will be publicly pilloried for appearing to denigrate the efforts of their students, what can be done to remedy the decline in standards generally and the devaluation of university degrees? Staff who have taken early retirement are inhibited from speaking up by clauses in their retiral package preventing them from expressing publicly any views that are critical of the university (not simply preventing disclosure of the terms of their retiral package). In any event, as the paying customers, students themselves can probably make a greater impact on university senior management than their teachers, and in some instances evidently are now doing so. But in the end it is politicians who must recognise the reality of the situation and provide the external stimulus for reform. What is needed is a Commission of Enquiry at the highest level to examine the state of universities nationally, and to investigate not just the issue of declining standards but the conditions that have brought this situation about. Nearly half a century after the Robbins Report of 1963, too many fundamental principles have been eroded or overturned without the consequences being fully thought through or made sufficiently clear publicly. Major issues, like the number of institutions entitled to degree-awarding status and the fairest method of funding students through university, as well as issues relating to admissions policies, the standard of teaching provided and the standard of degrees awarded, and the obsession with spurious league tables and Quality Assurance exercises, all need urgently to be re-examined. Whether the present or next Government has the capacity or will to set such an enquiry in motion, of course, is another matter.
An academic career, of course, is not all unmitigated misery and disappointment. I shall always cherish the progress of students from their first uncertain weeks at university to the confidence of graduation, and the blossoming of research talent as they completed postgraduate degrees. I took particular pleasure in the first appointment of one of my former students to a chair in archaeology. I took great satisfaction from having had a large postgraduate school working on the archaeology of Atlantic Scotland, and from the advances that together we have made in the study of the prehistory of that region. I doubt whether this would rank very highly in the Research Assessment Exercise, whose published commentary seemingly rated Scottish archaeology as of ‘sub-national’ significance. But I am content to have made a small contribution to ‘sub-national’ archaeology. As for my efforts as a teacher, I have never considered my impact on students as very significant. The best today are as good as they ever were: the problem lies only in the long tail of those who should never have been dragooned into university courses to which they are ill-suited and in which they are not interested. The best will always succeed, in spite of their teachers rather than because of them. But whether they will wish to proceed to posts in our dumbed-down universities themselves is another matter altogether.