EDDIES IN THE CURRENT? TRENDS IN HONOURS DEGREE CLASSIFICATIONS IN ENGLAND, WALES AND NORTHERN IRELAND

Visiting Professor, Lancaster University

A PAPER PRESENTED ON 9 DECEMBER 2008 AT THE SRHE CONFERENCE, LIVERPOOL
A five-year run of honours degree awards from
institutions in England, Wales and Northern Ireland, spanning
the years 2002-03 to 2006-07, is analysed. Whilst the general
picture is of an upward trend in the percentage of "good
honours degrees", at sub-sectoral level the pattern of trends
differs in some respects from that derived from the preceding
eight-year run of data.
"Grade inflation" is perceived as a longstanding
problem for education at a variety of levels and across national
systems. In the UK, for example, there is an annual ritual when
the results of public examinations are announced, in which claims
that standards are declining are countered by claims that improved
grades are a consequence of improved teaching and greater diligence
on the part of students. As regards higher education in the UK,
there are analogous claims of slipping standards when summaries
of honours degree results are published by the Higher Education
Statistics Agency (see, for example, Attwood, 2008). There has
been a longstanding belief in some quarters of the US that grade
inflation is endemic. Adelman (2008) argues that this is due to
increases in grades awarded in elite institutions and the disproportionate
attention that such institutions command in the media.
There is a variety of definitions of "grade
inflation" in the literature (see Yorke, 2008, p.108ff).
Some are naïve; others acknowledge the complexity that is
inherent in the construct. Even if one defines grade inflation
fairly neutrally in terms of an increasing divergence between
the grade awarded and the actual achievement (with the former
exceeding the latter), there are embedded assumptions about demographic
equivalence, the baseline for measurement, the relationship between
achievement and grade, and the stability of what is being measured.
Despite the use of "subject benchmarks" (see www.qaa.ac.uk/academicinfrastructure/benchmark/default.asp) as points of reference for higher education curricula in the
UK, the exercise of institutional autonomy undermines the possibility
of arriving at definitive conclusions as to the causes of changes
in grading outcomes across the higher education sector. There
are simply too many variables in play.
The JACS categorisation of academic subjects
Academic subjects in the UK are categorised by the
Higher Education Statistics Agency [HESA] according to the Joint
Academic Coding System [JACS], with the categorisation being possible
at different levels of "granularity". In the present
paper, the coarsest level of granularity has been used. This represents
a preference for largish numbers in institutional subject disciplines
over the fineness of detail that is bought at the expense of statistical
robustness. At the start of the academic year 2002-03, JACS replaced
the original subject codings used by HESA. The change had two
facets: first, the subject classification was changed and, second,
the outcomes of joint-honours and combined subjects honours degrees
were roughly apportioned to the relevant constituent subject headings
(they had previously been swept up into a composite grouping of
combined programmes). This meant that, under JACS, there would
be a discontinuity with respect to the trends that were computed
for the academic years 1994-95 to 2001-02.
Trends in the award of "good honours degrees", 1995 to 2002
The "good honours degree" (an upper second
[2.1] or a first class honours degree) is often taken as a yardstick
of success, in that it opens doors to careers and other opportunities
that would generally remain closed to graduates with lower classes
of honours (ie lower second [2.2] and third class honours). The
third class honours degree is an endangered species, judging by
the decline in the use of that category which is, nevertheless,
a passing grade. It makes sense, therefore, to focus attention
on the boundary between upper and lower second class honours,
and to use as an index of trend the percentage of awards above
the boundary. The percentage is calculated with reference to the
total number of honours and "pass" degrees awarded:
100 x (N firsts + N 2.1s) / (N firsts + N 2.1s + N 2.2s + N thirds/pass)
This index omits unclassified degrees, since across
the system there is a scattering of programmes that award degrees
on only a non-honours basis (the number of these has diminished
over time). "Pass" degrees are awarded to students whose
achievements on an honours programme narrowly fail to satisfy
the criteria for honours: this may be due to deliberately opting
not to do the honours project or dissertation, and/or because
performance in one or more curricular components falls below an
acceptable standard. For reasons of this kind, pass degree awards
are included in the denominator of the ratio. (There is, in practice,
some blurring arising from variations in institutional practice
in the reporting with respect to the pass and unclassified categories,
and consequently some error: however, the method chosen minimises
this.) The trend is computed according to the formula:
(% "good degrees") = (m * year) + constant
with the trend being the slope [m] of this linear
regression equation. The trend is expressed as the averaged annual
change (in terms of percentage points) in the percentage of "good
honours degrees" awarded. Its statistical significance depends
on the closeness of the sequence of the data-points to a straight
line (see the Appendix to this paper).
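The index and trend defined above can be sketched computationally. The following is a minimal illustration using purely invented award counts (they are not HESA data), with the slope obtained by ordinary least squares; the function names and figures are assumptions made for the example.

```python
# Illustrative sketch of the "good honours" index and its linear trend.
# All award counts below are invented for demonstration; they are NOT HESA data.

def good_honours_pct(firsts, upper2, lower2, thirds_pass):
    """Percentage of 'good honours degrees' among honours and pass awards:
    100 x (firsts + 2.1s) / (firsts + 2.1s + 2.2s + thirds/passes)."""
    return 100.0 * (firsts + upper2) / (firsts + upper2 + lower2 + thirds_pass)

def trend_slope(years, pcts):
    """Least-squares slope m of pct = m*year + c, i.e. the averaged annual
    change in percentage points."""
    n = len(years)
    ybar = sum(years) / n
    pbar = sum(pcts) / n
    num = sum((y - ybar) * (p - pbar) for y, p in zip(years, pcts))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

years = [2003, 2004, 2005, 2006, 2007]
awards = [  # (firsts, 2.1s, 2.2s, thirds + passes) per year, invented numbers
    (90, 410, 380, 120),
    (95, 425, 370, 110),
    (100, 440, 360, 100),
    (105, 450, 355, 90),
    (110, 465, 345, 80),
]
pcts = [good_honours_pct(*a) for a in awards]
m = trend_slope(years, pcts)
print([round(p, 1) for p in pcts])  # → [50.0, 52.0, 54.0, 55.5, 57.5]
print(round(m, 2))                  # → 1.85 percentage points per year
```

As the paper notes, the statistical significance of such a slope depends on how closely the data-points hug a straight line, and with only five points a trend must be quite pronounced before it reaches significance.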
Data regarding the classifications of honours
degrees awarded between 1995 and 2002 were supplied by HESA. Analyses
showed that, across the higher education sector in England, Wales
and Northern Ireland, there was a general shift towards the upper
end of the honours classification scale (Yorke, 2008). (Data from
Scottish institutions were not included in the analyses because
of the different approach in Scotland to the award of honours.)
The rate of rise varied with broad subject area and institutional type.
Unexpectedly, the rise was much stronger in
the elite "Russell Group" universities than in other
institutions and, on the relatively limited evidence available
from the Higher Education Statistics Agency regarding entry qualifications,
there seemed to be no reason to conclude that entry qualifications
constituted an important factor in the trend in honours degree
classification (Yorke, 2008, p.92ff). Adelman (2008) shows that
there has been a similar effect in elite institutions in the US,
and that across the great swathe of less-prestigious institutions
the grade-point average has remained fairly steady.
Figure 1. Trends in the percentage of "good honours degrees" awarded in the years 1995 to 2002, by institutional type.
Coll = institutions that were not universities in 2002;
New = universities designated as such following the 1992 Education
Act; Old NotR = pre-92 universities, but not in the Russell Group;
Russ = Russell Group universities.
Alli Med = Subjects allied to Medicine; Bio Sci = Biological Sciences; Agr = Agriculture & related subjects; Phy Sci = Physical Sciences; Mat Sci = Mathematical Sciences; Com Sci = Computer Science; Eng & T = Engineering & Technology; Arc = Architecture, Building & Planning; Soc Stu = Social Studies; Law = Law; Bus & Ad = Business & Administrative Studies; Mas Com = Mass Communication & Documentation; Lan = Languages; Hist & Ph = Historical & Philosophical Studies; Cre A&D = Creative Arts & Design; Edu = Education.
Trends in the award of "good honours degrees", 2003 to 2007
Data are now available from HESA which cover
the five-year span between academic years 2002-03 and 2006-07.
These have enabled trend analyses to be reinstated. The recent
computed trends are less likely to exhibit statistical significance
because of the smaller number of data-points compared with those
available to the previous analysis.
Between 2002 (the start of the academic year 2002-03
in which awards were made) and 2007, many colleges (particularly
those with broad portfolios of disciplines) became universities,
and in the present analysis have been subsumed into the "new
universities". The specialist institutions focus on Art &
Design, Teacher Education and Agriculture, and so the "specialist
institutions" group produced data relevant to only a few
of the JACS-designated broad subject areas. As with the previous
analyses, some institutional mergers took place during the period
in question: these are likely to have introduced some discontinuity
into trends, thus reducing the possibility of the trends reaching
statistical significance. Further, the University of Cambridge
changed its system of reporting honours degree classifications.
Figure 2 shows the percentage of good honours degrees awarded, by broad institutional type. It is evident that there is a relationship between this percentage and institutional type.
Figure 3 shows the respective trends over the
five-year period. Compared with the results from the previous
eight-year run of data, there is no strong pattern though, when
all results are combined, the shift in the percentage of "good
degrees" tends to be upward. In considering these results,
it needs to be borne in mind that the numbers of awards relating
to cells in the Figure can be quite small, and that too much should
not be read into trends in such cells. A good example is in Creative
Arts & Design, where the bulk of enrolments are to be found
in the new universities and the specialist institutions. Hence
the overall trend is determined mainly by the results from these
institutions, with the other institutions contributing relatively little.
Figure 2. Percentages of "good honours
degrees" awarded in 2007, by institutional type.
Abbreviations as for Figure 1, save that Spec
= specialist institution.
Figure 3. Trends in the percentage of "good
honours degrees" awarded in the years 2003 to 2007, by institutional
Abbreviations as for Figure 1, save that Spec
= specialist institution.
Possible influences on trends
There are many possible contributing influences
on the percentage of "good honours degrees", and it
is naïve to collect them together under a blanket condemnation
of "grade inflation".
Rises in the percentage of "good honours degrees"
may be attributable to, inter alia:
Improvement in teaching quality.
Increased student diligence.
"Strategic" students (ie students
who opt for modules in which they can expect to obtain a high
level of returnmeasured in terms of gradingfor their
investment of effort: see Johnson, 2003, for an example).
Learning outcomes and explicit criteria.
If students know clearly what is expected of them, they will focus
their work so as to achieve the best result they can. Quality
assurance considerations have been instrumental in focusing on
the need for assessments to be as explicit as possible, and for
a close alignment between curricular content, pedagogy and assessment
(Biggs and Tang, 2007).
Increased use of coursework (using the
term in a broad sense). Coursework can, if tasks are well constructed
and rendered relatively secure from plagiarism and other forms
of deception, lead to a better indication of student attainment
than can formal examinations: coursework has been shown to give
rise to higher marks than such examinations (Bridges et al, 2002;
Simonite, 2003; Yorke et al, 2000). A broadening of the
range of coursework demands could also be a contributory factor.
Changes in award algorithm. "Benchmarking"
of award outcomes against cognate institutions has shown on occasion
that students may be being disadvantaged compared with their peers.
Institutions have on occasion felt it appropriate to adjust the
way in which awards are determined in order to fall into line
with their comparators. Such adjustments are more likely to edge
classifications upwards than downwards.
League tables. "Good honours degrees"
figure in a number of "league tables", or rankings,
of UK institutions. Institutions for which a league table position
is deemed to be of significance in marketing are perhaps particularly
susceptible to the implicit pressure to boost their position,
and assessment practice, not necessarily at the level of the institution, may be influenced despite the attentions of external examiners.
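The "changes in award algorithm" point above can be made concrete with a purely hypothetical sketch. The weights, class boundaries and borderline rule below are invented for illustration and do not represent any institution's actual algorithm; the point is that adjusting any of these parameters (eg increasing the weight on final-year marks, or widening the borderline zone) tends to edge classifications upwards rather than downwards.

```python
# Hypothetical honours award algorithm (illustrative only): classify a
# weighted mean of level-2 and level-3 module marks, with an invented
# borderline uplift rule. No real institution's rules are represented.

BOUNDARIES = [(70, "First"), (60, "Upper second (2.1)"),
              (50, "Lower second (2.2)"), (40, "Third")]

def classify(level2_marks, level3_marks, w2=0.25, w3=0.75, borderline=2.0):
    """Return an honours class for the weighted mean of module marks."""
    mean2 = sum(level2_marks) / len(level2_marks)
    mean3 = sum(level3_marks) / len(level3_marks)
    overall = w2 * mean2 + w3 * mean3
    for cut, label in BOUNDARIES:
        if overall >= cut:
            return label
        # Invented borderline rule: uplift when within `borderline` points
        # of a boundary and at least half the level-3 marks fall in the
        # higher class.
        if overall >= cut - borderline:
            in_higher = sum(1 for m in level3_marks if m >= cut)
            if in_higher * 2 >= len(level3_marks):
                return label
    return "Pass"

print(classify([58, 62, 60], [61, 59, 63, 60]))  # → Upper second (2.1)
print(classify([68], [70, 70, 65, 70]))          # borderline uplift → First
```

Raising `w3`, lowering a boundary, or widening `borderline` would each shift some candidates into a higher class, which is how adjustments made to "fall into line with comparators" can feed a rising trend.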
Student achievement, as indicated by the honours degree classification, may be adversely affected by:
Distractions from teaching. The roughly
quinquennial Research Assessment Exercise [RAE] is a potent influence
on institutional activity. The increasing expectations laid on
academics to be entrepreneurial may be another influence.
Student part-time employment. The evidence
suggests that a low level of part-time employment whilst studying
full-time is not deleterious to academic performance, but that
higher levels can have an adverse effect. (See for example Brennan
et al, 2005; Pascarella and Terenzini, 2005).
There is some ambiguity about the effect of
some changes on student achievement, since what may have a positive
effect in one context may have an adverse effect elsewhere. Two examples follow.
Shift in institutional provision (eg
course or departmental closures). RAE outcomes that have been
relatively poor in some universities have led to the closure of
departments and/or the reassignment of staff to other academic
areas. In the case of science-based subjects, this may have led
to a concentration of the most able students in a smaller number
of institutions, with other students shifting into applied or
combined programmes, perhaps in other institutions.
Entry profiles of students. As well as
the preceding point, entry profiles evolve with governmental and/or
institutional policy. Demographic projections, such as that of
Bekhradnia (2006), are harbingers of future shifts which could
have consequences for institutional award profiles.
At root, it's about standards
The evidence suggests that, although the current
of rising percentages of "good honours degrees" is broadly
continuing to flow, the more recent results point to some eddies
in which the direction of flow is reversed. This is particularly
noticeable in the Russell Group of universities, where the strong
upward trend over the period 1995-2002 has been reversed in a
number of subject areas. The reasons for the shifts in trend cannot
be determined from the data; further study is needed to identify
whether there are any particular influences at work: ceiling and/or
norm-related effects on grades and "regression towards the
mean" could be making a contribution.
There is always a temptation to look for a simple
causality for rising grades. If "the cause" can be identified,
then the problem can be fixed. However, the discussion in the
preceding section, which could have been extended, shows
that grade-outcomes are susceptible to influences of varying kinds
which in turn have varying provenances. There is no simple sectoral
"fix", since the multiple influences will have weights
that differ according to the context. It is likely that a rising
trend in an institution whose entry profile reflects a strong
commitment to widening participation arises from a different concatenation
of influences than a similar trend in a research-led university.
The underlying issue is that of academic standards.
These evolve over time, in response to developments in subject
areas, expectations of the higher education system, and so on.
A truly self-evaluating institution keeps a watch on its performances
and how these relate to its aims and objectives: for the purposes
of this paper, the particular performance in question is the summation
of a host of student achievements. These, in turn, can only be
interpreted against curricular expectations, pedagogy and assessment
methods, both within the institution and between institutions.
The potential of benchmarking activity, on both an intra-institutional
and an inter-institutional basis, is readily apparent.
The kind of analysis presented in this paper
(which takes some time) can be undertaken within the institution,
though some cohort numbers will be too small to permit statistically
robust conclusions to be drawn. This may not matter greatly, since
institutional self-evaluation is inherently formative and hence
tolerant of a lower level of reliability than would be needed
for summative judgement. Institutional self-evaluation, done properly,
is not an easy option but a demanding and intellectually rigorous activity.
Borrowing from Auden's poem 'The Question':
To ask the hard question is simple;
Should not academics relish the challenge of
hard questions, such as those pertaining to standards?
References
Adelman, C. (2008) Undergraduate grades: a more complex story than "inflation". In L.H. Hunt (ed), Grade inflation and academic standards. Albany, NY: State University of New York Press, pp.13-44.
Attwood, R. (2008) Rise in proportion of firsts to 13% renews inflation debate. The Times Higher Education, 17 January.
Bekhradnia, B. (2006) Demand for higher education
to 2020. At www.hepi.ac.uk/downloads/22DemandforHEto2020.pdf
(accessed 27 October 2008).
Biggs, J. and Tang, C. (2007) Teaching for
quality learning at university (3rd ed). Maidenhead: SRHE
and Open University Press.
Brennan, J., Duaso, A., Little, B., Callender,
C. and van Dyck, R. (2005) Survey of higher education students'
attitudes to debt and term-time working and their impact on attainment.
London: Universities UK.
Bridges, P., Cooper, A., Evanson, P., Haines,
C., Jenkins, D., Scurry, D., Woolf, H. and Yorke, M. (2002) Coursework
marks high, examination marks low: discuss. Assessment and
Evaluation in Higher Education 27 (1), pp.35-48.
Johnson, V.E. (2003) Grade inflation: a crisis
in college education. New York: Springer.
Pascarella, E.T. and Terenzini, P.T. (2005)
How college affects students: a third decade of research.
San Francisco: Jossey-Bass.
Simonite, V. (2003) The impact of coursework
on degree classifications and the performance of individual students.
Assessment and Evaluation in Higher Education 28 (3), pp.459-70.
Yorke, M. (2008) Grading student achievement:
signals and shortcomings. Abingdon: Routledge.
Yorke, M., Bridges, P., Woolf, H. et al.
(2000) Mark distributions and marking practices in UK higher education.
Active Learning in Higher Education 1 (1), pp.7-27.
I am grateful to Harvey Woolf for comments on
an earlier draft.
HESA cannot accept responsibility for any inferences
or conclusions derived from the data by third parties.