The marking boycott and the future of higher education

From 6 November, members of the University and College Union (UCU) will begin a marking boycott throughout the UK. This is a cause of huge regret and discomfort to all academic staff, but university management nationally is taking a hard-line stance and lecturers have been left with no other viable course of action. Since our first professional concern is for the education of our students – we, not the senior managers, are the ones who work with and for students every day of our professional lives – it is important that students understand the reasons for the boycott. There are several, but perhaps the top three are these:

  1. University managers are threatening to cut academic pensions by a third, endangering current teaching provision.
  2. Student ‘tuition’ fees are rising at an accelerating rate but management is spending less and less on tuition.
  3. Added to rising tuition fees, this proposed cut threatens to discourage today’s students from becoming tomorrow’s lecturers, which endangers higher education in the future.

1. Universities UK (UUK), the national body which represents university managers, has decided that the lecturers’ pension scheme (USS) needs to cut academic pensions by up to a third (though some special pension provision for university managers will be untouched). The Universities of Oxford and Warwick have broken ranks with UUK and say that the planned reforms are financially unnecessary and will damage university education by making the career unattractive, but so far UUK is refusing to negotiate. As students know all too well, this country is seeing accelerating rises in ‘tuition’ fees because of decisions taken by the government and UUK (in the teeth of opposition from lecturers and the UCU), and we think that cutting the investment in the university teaching profession is no way to preserve world-class education.

2. This boycott is therefore also about the proper use of student fees. Overall pre-1992 university income has risen by a quarter during the last five years. But as student fees rise to create this huge bulge in university income, investment in teaching in older universities is actually falling (post-1992 institutions are investing more than pre-1992 institutions, and their pensions are not under the same threat, which raises uncomfortable questions of comparison). Yet at the same time, there seems to be no shortage of money to spend on ‘re-branding’ our universities, on creating new ‘directorates’ and other higher-level management functions, or on paying 50% bonuses to the pension fund’s investment manager, who earns £900,000 a year. That can’t be the right way to spend the income from the massive and increasing debt that government and university managers are piling onto all our students. The priorities are wrong: the priority should be students and their education, and that means spending their fees to invest in the lecturers who teach them.

3. The proposed changes to pensions offer students little incentive to go on to a Master’s or doctorate with the idea of becoming lecturers themselves. Some in government are talking openly about lifting the fee cap altogether, and when that happens, who but the very richest could afford to build up a debt of £150,000 or more simply to get a first lecturing job, if they have no hope of a decent pension at the end of it? The consequences of this decision for future generations – for the children of today’s students – could be truly catastrophic.

We are undertaking this marking boycott in the hope that students will appreciate that to protect the education they are currently enjoying, to make it possible to continue offering that education in the future, and to apply pressure on management to invest their ‘tuition’ fees in their actual tuition, we have to fight now. We hope that the boycott will be very short, so that it does not affect students too much. It grieves us enormously that we are forced by management into taking a form of action that hurts the people with whom we have no quarrel, and whose interests – now and in the future – we are fighting for. But we feel that limited disruption now is preferable to the catastrophe that could follow if we do not act for the sake of future generations.

On trigger warnings

Picasso, ‘Guernica’ (1937). Museo Reina Sofía, Madrid.

I don’t think we properly understand what follows when someone says ‘That offends or upsets me’. If, in conversation with a friend, we are told that we have caused offence, it is natural that we should apologize and try to avoid the topic in future. In a more general sense, most of us learn – to give just one example of our complex moral education – that passing critical comment on a stranger’s physical appearance is likely to cause offence or upset, and so we avoid doing it. And so it might seem instinctively right for groups of students in some US universities today to call for ‘trigger warnings’ – a tip-off that there may be upsetting topics of discussion coming up – before lectures.

Both the New York Times and the Guardian have in the last few days reported on campuses across the US where such requests are not only being made but supported by the administrative apparatus of the university (as distinct from its academic staff). At Oberlin College in Ohio, for instance, a guide has apparently been issued to staff, instructing them to ‘Be aware of racism, classism, sexism, heterosexism, cissexism, ableism, and other issues of privilege and oppression. Realize that all forms of violence are traumatic, and that your students have lives before and outside your classroom, experiences you may not expect or understand.’ The New York Times suggests that this discourse has its ‘ideological roots in feminist thought’, but we should be more precise than that: it is from the discourse of intersectionality. The list of -isms, the talk of ‘privilege’, and the suggestion that a professor might not be able to understand something that a student might tell them about their personal experience, are all strong indicators of intersectional discourse, which privileges the authentic experience of the individual to such a degree that it preaches a maximally sceptical attitude to the possibility that Person B can understand anything about Person A for the simple reason that they are different people. (I’ve discussed this elsewhere on my blog.) But irrespective of its origins, what are we to make of these calls for ‘trigger warnings’? What are the effects of granting or denying them in university courses?

Freud’s couch

Naturally, I’m sympathetic to the idea that we shouldn’t upset people unnecessarily, which is to say wantonly. I don’t want to tap into people’s personal trauma simply in order to make them unhappy. But we are quite prepared to upset a friend – and doctors their patients – when some benefit clearly follows. If a friend needs to address a profound problem in a relationship, it often falls to us as friends to articulate something that would be forbidden to most people, and we air the issue because we feel that, if our friend confronts it, it will ultimately help them. Doctors and psychoanalysts, too, address indelicate and upsetting matters in order to find a way through them, to a solution. And this, I suggest, is the model of university education. The world is highly adept at closing its mind to difficult questions, preferring to let the established models run, but university education encourages – in fact requires – critical thinking. Sometimes, as in the pure sciences, the encounter with knowledge is not emotionally challenging; but sometimes, as in the humanities, it can be. The first problem with the ‘trigger warning’ discourse is, then, a misconception of what a university education is about. Irrespective of the discipline, it is about challenging ossified ways of thinking, and opening the mind; and when the kind of knowledge encountered in a discipline touches directly on matters of the human, it is possible – even likely – that that challenge may have an emotional effect on the student. But the same is true when we say to a friend ‘your relationship is over, isn’t it, and you have to come to terms with it’: we know that it will make our friend cry, but that the articulation of that truth will ultimately have been helpful. That future perfect tense, the ‘will have been helpful’, is the tense in which education, difficult conversations, surgical operations, psychological treatment, and so on, operate. But instead of thinking of education in this way, I think that people calling for ‘trigger warnings’ are mistaking it for something else. And while I understand why they should do so – our shared ideological investment in the pursuit of enjoyment rather than contemplation makes it difficult to act in any other way – I think that it needs to stop.

The mistake is, I suggest, that education is considered, by the ‘trigger warning’ liberals, to be a form of entertainment, which operates in the present rather than the future perfect tense (it is fun now). We are used to hearing ‘trigger warnings’ before TV shows or in the little notes that film censors helpfully leave for us on DVD cases or the black screen before a film starts in the cinema. And since for most people, most of the time, TV and film are entertainment rather than education, that doesn’t seem entirely inappropriate. If I go to the cinema in order to wind down from a hard day, there are times when I want something that will comfort me, and make me relax in a rather brainless way – so it’s good to know that there won’t be graphic violence on screen (because I don’t like it). Similarly, if an individual has strong views on swearing or sex, it’s reasonable for them to expect that someone will warn them against watching a particular film or show. But we should bear two things in mind. First, art (including forms such as TV and film) is not purely entertainment. In fact I think that entertainment is just an occasional byproduct of art, albeit one that the culture industry – which of course wants to sell us things – has over the last century so exaggerated that even ostensibly intelligent historians of art seem to believe that enjoyment is its main quality. And before a liberal tells me to ‘check my privilege’, I should point out that I’m not making an ivory-tower statement here: I think that the normal, profoundly moved general-public response to films such as Schindler’s List suggests that ‘enjoyment’ is rather far from a non-university-educated person’s encounter with art. So that’s the first point: art isn’t principally about entertainment. And the second point is that education isn’t about entertainment at all, any more than surgery or a difficult chat with a friend is. Some people do enjoy it as well, but in a pure, abstract sense: they take pleasure in it on its own terms, simply for being what it is, for its own sake.

An operation in Norway, 1898

If you want to study medicine it’s no good, from a purely pedagogic perspective, saying that you don’t want to watch human bodies being cut open. I certainly wouldn’t want to watch that myself, but that’s why I didn’t study medicine: I know that it would upset me and I avoid it. Now, I could say, along the same lines, that if you want to study humanities it’s no good saying you don’t want to discuss distressing issues. Distressing issues are to the humanities what corpses are to medicine. But I don’t want to pursue that line directly (which, it seems from the newspaper reports, has been one that US professors have, quite understandably, pushed). Though I think there’s an essential validity to it, I’m not in favour of confronting the ‘trigger warning’ liberals and saying, ‘Grow up: don’t be an idiot; grow a spine: a bit of a shock will make you stronger.’

Let’s turn the matter round instead, and rather than attacking the people calling for ‘trigger warnings’, end by considering the issue I raised at the start of this post: what are the consequences of calling for ‘trigger warnings’? I suggest that ‘trigger warnings’ would be, in fact, a violent influence in the classroom, or in the total discourse of a society. ‘Trigger warnings’ in a university setting would infantilize, for a start, by refusing to treat students as grown-ups. Universities should not be censoring students’ access to knowledge, because doing so does them a disservice. In this respect, the immaculately intersectional–liberal guidance note at Oberlin College is a spectacularly foolish error. Yes, there are individuals who don’t want to discuss rape, abortion, sodomy, racism, misogyny, violence, suicide, torture, sexual abuse, anti-Semitism, homophobia, clitoridectomy, incest, or paedophilia, let alone the foreign policy of Israel or religious intolerance, and so on. There may even be people who don’t want to look at Picasso’s Guernica because it reminds them of some traumatic experience in a war zone. But there are many others for whom the exposure to rigorous thought on these matters is vital, and in fact the central mission of universities, properly understood. Some issues in the world, including many I’ve just given in my arbitrary selection above, are so difficult to discuss (like the friend’s bad relationship, except scaled up to the size of an entire country, continent, hemisphere) that they are repressed by individuals or banned by a larger national or international community. For many students, the opportunity to discuss these matters in detail is a liberation; for many, a university lecture or seminar can be the first moment in their lives when they gain access to an emancipatory discourse. One person in a class might be upset by a discussion of marital rape, but for the thirty people who have never even thought that it might exist, the discussion makes an appreciable contribution to their formation as individuals. To deny such people that discussion on grounds that it might offend someone is to deny them the hope of freedom. Which is the greater evil here?

Finally, the liberal’s call for a ‘trigger warning’ is not very different from a conservative saying that it’s unseemly or gauche or ill-bred to talk about money troubles or personal anxieties. Both the conservative and the liberal are saying, quite openly, ‘I do not want to listen to what you are saying, even if you are saying it in a measured and sensitive manner, because I simply do not want to hear things that disturb my equanimity. Your suffering is, to me, less important than my comfort.’ Again, if we are talking about film or TV, there may seem to be less of a moral problem here, although if people refused to encounter things like Schindler’s List because they would simply rather not know about it, then I think we do have quite a serious problem: the history of Europe’s slow owning-up to the Holocaust is an important warning of what happens when ‘trigger warnings’ are granted at a societal level. But we’re not talking about film or TV; we’re talking about universities. And to say that we don’t want to discuss clitoridectomy or Israel’s attitude towards the Palestinians because it might upset someone is to do something profoundly illiberal. That ‘I don’t want to listen’ is the statement of someone in a position of relative power who refuses to listen to, and therefore to understand, sympathize with, or help, someone in a weaker position. Indeed, the ‘trigger warning’ call is simply the obverse of the familiar intersectional insistence that Person B can’t understand Person A. If we privilege authenticity of experience, and claim that nobody can understand anybody else’s suffering, it is a very small step to say ‘Since I can’t understand your suffering, I don’t want to listen to it either. I’m very happy, thanks very much, not knowing about your suffering: to hear about it would spoil my day, and since I can never understand you, what’s the point of you telling me anyway?’ As so often, liberals need to see the profound conservatism that lies behind their posturing, and the lack of sympathy that is the basis of their ‘caring’. It is selfishness dressed up as sensitivity, and far from being a nice idea, something that could only do good, it is rather revolting, and could potentially do considerable harm.

So stop calling for ‘trigger warnings’. Don’t say that you don’t want to know. Don’t say that you can never understand because you lack the authentic personal experience. Listen instead. Be upset, if need be. Being upset can be productive. Then use the knowledge you’ve gained, painfully if necessary, to contribute positively in the world.

Brief thoughts on Lily Allen

Lily Allen’s ‘feminist’ video ‘Hard Out Here’ is old news, a couple of months old, but I’m behind the times and I’ve just encountered it, and its critical responses, today. Its flaws have already been amply pointed out, with particular attention paid to the video’s exploitative attitude towards black women, who are merely sexual objects here. But so far I haven’t seen anyone critiquing the music.

I don’t think that the video, shown below, is remotely critical of the sexist ideological universe it is ostensibly critiquing. The lyrics do, it’s true, give us a sarcastic view of pop videos and popular cultural attitudes to women, but the video still provides the ideologically necessary titillation. The music contributes powerfully to this. In form (the way verse and chorus alternate conventionally) and content (the shape of the melody and the song’s harmonic features) it is extremely simple and digestible: the music is bright, perky, pleasant, unchallenging, and so readily accessible that its essential musical point can be grasped by many millions of people within a few seconds (the YouTube video has had nearly 16 million views to date).

This simplicity of form and content, the catchiness of pop music, its immediacy and appeal, is what makes it the favoured commodity form for music in advanced capitalism. Really Lily Allen’s song is just like everything else in this regard. Pop music seems to offer the instant gratification that we are taught that we desire from music, without being ultimately satisfying (because if it satisfied us, we’d stop consuming it). For much of the time, when songs have any evident content, it is, as a first-year student of mine noted in a presentation last term, some more or less bland statement about love, money, sex, or drugs. These bland statements tend to meander around a narrow space bounded by a conservative ideological frame. We should all pursue the One; we should want money but not too much, because that makes us nasty, shallow people, when instead we should be pursuing the One; we can be terribly liberal about sex, and even allow it between persons of any age or race or sex, because we’re so liberal, but ultimately the One is the person we should focus our sexual interest on; and drugs are our permitted transgression, the thing that gives us edge, but without posing a real challenge to convention. The simple appeal of the music that presents this network of messages serves the purpose of sugaring the ideological pill: it’s a positive pleasure to internalize these ideological messages.

Back, finally, to the specifics of Lily Allen’s song, which presents itself as being antagonistic to modern forms of pop-culture sexism. Overall, the interaction of music and video does little more than provide the ideologically normal objectification of women with a nice reassurance that it’s ironic, so we can feel good about ourselves while nothing at all is changed. The catchy nature of the tune absolutely contributes to this: it says ‘Look, it’s really easy and pleasurable to be a feminist: not only will the experience be perky and pleasurable, but you don’t even have to change the materials you consume – you can still look at close-ups of women’s backsides being sprayed with champagne as they mock-fellate a bottle, because it’s all a joke. If it gives you a hard-on, it’s OK: it’s a feminist hard-on!’ It’s as cynical as a major corporation saying that the purchase of their goods is somehow a way of giving to charity, a way of making us feel good about preserving the status quo. Yet I suspect you’re unlikely to read a critique of ‘Hard Out Here’, or Jessie J’s ‘Do It Like a Dude’, or any other ‘critical’ pop that targets the musical materials in the same way, because there is an unquestioned assumption that because pop appeals to millions, it must be ‘democratic’. I think instead we need to return to the old Marxist position, nicely developed by Adorno, that what we’re mistaking for democratic reach is the poisonous and anti-progressive narcotic effect of a musical opiate for the masses.

The authenticity condition

Among that group of people on the liberal left who present themselves as policers of privilege, or as ‘intersectional’ ‘allies’ and so on, there is a prevalent belief that guides pretty much all else, but which should be rejected. It is the belief that unless A is a person of type X, A cannot understand X. Only a transsexual/poor person/etc. can understand the experience of transsexuals/poor people/etc. Let us call it the ‘authenticity condition’, the condition that the only person qualified to speak on X is someone with a direct personal (i.e. ‘authentic’) experience of X. Someone raised this point elsewhere on my blog, in response to a post on transsexualism, and Paul Bernal makes it in an otherwise fairly unobjectionable post on the often invisible privilege that enables some people to get on in life at the expense of others.

A focus on ‘privilege’ (which I put in scare quotes because the judgement of what counts as privilege in the context of the kind of liberalism I’m talking about is often quite tendentious) often goes hand in hand with this insistence on the authenticity condition, and the latter tends to devalue any of the insights of the former.

It should be immediately obvious that the authenticity condition is false. At root, it says nothing more than that person A can have no direct and unmediated experience of the thoughts or feelings of person B. Quite so: only the individual has unmediated access to those thoughts and feelings. (It is so banal that it is hardly worth saying.) But the individual themselves can mediate those thoughts and feelings. B can say to A ‘I feel happy because someone gave me a book’, and as long as A has had some experience of happiness of their own, and knows what a book is, A can grasp a mediated sense of what B is feeling. Furthermore, once A knows that B is made happy by being given books, it’s possible for A to make interpretations of B’s feelings even without B directly reporting them, or to act in such a way as to make B happy even without B saying how to do it. So, for instance, if A sees B on another occasion reading a book, A might very well think that B is happy, because they have a book; if A sees B unhappy, they can hand over a book in the expectation that it might lead to an increase in happiness. B can correct this assumption if the relationship between books and happiness turns out to be wrong (the book might be by Dan Brown, for instance), but it’s a fair assumption to make on the basis of A’s previous knowledge.

Sometimes the knowledge gained before the event of an interpretation is really quite considerable. Consider the extensive and complex knowledge that a GP has of thousands of kinds of ailment. If B walks into Doctor A’s consulting room reporting a particular sort of discomfort, Doctor A has no direct and unmediated access to what B is reporting, but on the basis of what they do know about medicine, Doctor A can diagnose an illness and prescribe a treatment. The doctor doesn’t need to be able to feel directly what B feels, because B can report it. Sometimes, a patient might not be able to fully articulate the quality of a feeling they are experiencing, but a doctor can extrapolate from their knowledge to determine accurately what the patient is suffering from, even though the patient doesn’t realize that the twinge two inches to the left is more significant than the more assertive one to the right.

According to the authenticity condition, the doctor would have to say ‘Since I cannot directly experience your thoughts and feelings, I’ll have to ask you to diagnose and treat yourself. It would be improper of me to do otherwise, because I would be presuming knowledge of your situation which I can’t experience at first hand. I would be impugning your experience and declaring that my own is superior to yours. In this context the only proper thing for me to do is, therefore, to shut up’.

Of course the liberal defenders of the authenticity condition would make no such demand of a doctor. The doctor has knowledge, access to what is currently understood to be true about medicine, and it is right for that knowledge to be applied, however imperfectly, to the treatment of patients. In the case of doctors, and pretty much nothing else, the majority of people are still happy to subscribe to the notion that there is Truth, an explanation for things which lies outside of ordinary everyday experience, and which has enormously useful explanatory power for our everyday lives. But the essential premise of liberalism, and particularly of the authenticity condition, is the assertion that there is no truth: there are only particular individual experiences, which only individuals can articulate.

Pretending to make do without truth

The particular historical cause for the emergence of this view was the experience of the twentieth century in the West, in which the kinds of Truth proclaimed by the Nazis, Soviets, and Americans (fascism, socialism-in-one-country, and capitalism) were seen to cause unparalleled human suffering. Alongside the rejection of fascism and socialism, the liberal consensus made an unconscious pact to reject the notion of Truth too, except in certain special cases: medicine is one admissible form of truth, and science more generally is given this credence too. Until recently, it was also fairly common for governments to be granted the privileged status of guardians of Truth, which they protected by means of ‘national security’ protocols and secret services, but since leaks about the unjustifiable practices of the security services have appeared, governments and intelligence agencies have lost their status as guardians.

We should not be misled by the abuses of those who have access to different forms of truth into thinking that Truth itself should be rejected. Even a hundred Harold Shipmans would not make it reasonable to suppose that medicine itself is untrustworthy and murderous. Nor should we be misled into thinking that in rejecting fascism and socialism we got rid of all Truth claims from global politics. One Truth got left behind, which claims to be the best explanation for how a peaceful, democratic, healthy, and prosperous civilization can run. It has increasingly many critics, but this Truth, which we call capitalism, is still given fundamental credence.

Truth is opposed to the authenticity condition, because Truth is by definition an understanding which has the potential to be universally applied. Medicine claims to explain universally what is wrong with sick humans. Alternative forms of medicine contest those claims to truth, but they make their own truth claims. Patients judge between the competing claims. That is how truth works. Crucially, the truth claims of the competing medicines are accepted as such: people know that they are weighing claims to universal truth, not just listening to individual opinions. A doctor doesn’t claim that antibiotics will help in accordance with the authenticity condition, but in accordance with a relationship to Truth.

As individual humans, we don’t have unmediated access to the experience of others. We find it very easy, however, to extrapolate from our own experience, by means of a fairly elementary logic. We suppose that our own experience is true to itself, and we assume the potential of universalizing that experience so that, with adjustments, we can do our very best to understand the thoughts and feelings of the people around us. Call it sympathy or empathy, call it Mitsein if you want to be Heideggerian about it. In each case, it is an interaction motivated by a commitment to Truth, not the authenticity condition. To insist that only X can speak of X is to deny the possibility of human sympathy and interaction, of one person attempting to work out, on the basis first of reflecting on their own subjective experience, and second of listening to or observing another person, what another subject is feeling. To insist on the authenticity condition is to say that unless A experiences B’s sadness, A has no idea what it means to say that B is sad, or cannot imagine how B feels. It’s ludicrous. It’s inhuman. It should be rejected.

Putting truth back

Non-transsexuals can gain a mediated understanding of the experience of transsexuals if they listen to the accounts of transsexuals. They can only get this understanding through mediation, but that’s not because they’re non-transsexuals. It’s because nobody can gain unmediated access to the experience of another person. Transsexual A can only understand the position of Transsexual B through mediation too. They might need less of an explanation of the basic setup than Non-Transsexual C, but we are not talking here about a difference in quality but only in quantity of knowledge. In principle, Non-Transsexual C can become quite as knowledgeable about the transsexual experience in general as any transsexual – or more so. Similarly, a doctor can understand the experience of lung cancer superbly well even without having lung cancer. A priest can understand the feelings of a person preparing for death even without being in the last moments of life themselves. A historian can understand the subject position of people long dead. And so on. The Truth being appealed to here is that human experience is universal, that the only identity that counts is ‘being human’, and that every other kind of identity, however much it is pressed onto one culturally, is plastic, and can be reshaped or rejected.

So, finally, the authenticity condition must be rejected because, far from representing a view of the world that entirely denies the possibility of Truth, it actually supports the capitalist Truth. One of the principal strengths of late capitalism is the way that it has externalized human nature, in the sense that it makes us believe that the things that ‘make us who we are’ – things like musical taste, clothing, hairstyle, the kind of holidays we take or books we read, etc. – are ‘out there’, not within us, and specifically ‘out there’ in a market situation, so that we can buy a huge number of objects that help us to make sense of ourselves as individuals. It is even fair to say that our experience of ourselves is, thanks to late capitalism, essentially mediated too, by external commodities. In order to feed its circulation, capital likes us to generate ever-proliferating lists of identities, all of which need to be mediated by commodity purchase, none of which can be allowed to be universalized. Often, in the areas of human experience that liberals are particularly keen to police, these identities come under the capacious umbrella of the term ‘minority’. This is a non-analytic term, which says nothing about the experience or power of the people within it. The Queen is in a minority of one among 70 million, but she is hardly a ‘minority’ in the sense that people habitually use the term. But the non-analytic term ‘minorities’ has replaced useful analytic ones like ‘working class’, with the effect that the policing of language to do with ‘minorities’ has overtaken the focus on much more significant problems to do with class (an issue on which Mark Fisher, among others, has written extremely persuasively recently). Analytic terms like ‘working class’ of course are used to explain human experience in relation to a form of Truth (in this case an anti-capitalist one), but the authenticity condition denies the possibility that anyone who is not working-class (or, like Russell Brand, has lost the authenticity of their class by earning lots of money) can speak about the subjective experience of the working classes – and so it kills the possibility of an emancipatory politics.

Why do people bother with something that is not only foolish but also destructive to the progressive cause? Well, the authenticity condition certainly provides a comfort blanket for people who want to feel uniquely important, and who feel that their views must be heard because they are the only people with a right to speak. But it is at bottom a selfish condition, a condition which professes to ignore the possibility that human beings can talk to one another and see the world from another person’s perspective. It is fundamentally opposed to all forms of solicitude and love. It is a Thatcherite’s wet dream. Those who wish to defend this oppressive mode of thought should face up to the consequences of their intellectual commitment.

Brain scans prove there is no difference between male and female brains

Neural map of a typical man’s brain. Photograph: National Academy of Sciences/PA

There is nothing a science journalist likes more than research which seems to prove an unexamined cultural prejudice. Today’s big story on the ‘proof’ of the difference between observable connexions in male and female brains is a case in point. After studying over 1,000 brains from people across a range of ages, scientists have discovered, as the Guardian reports it, ‘what many had surely concluded long ago: that stark differences exist in the wiring of male and female brains’.

Two things should immediately give pause here. First, there are no wires in human brains. The centuries-old popular reliance on the metaphor of the machine, now more usually imagined specifically as a computer, is no doubt heuristically useful, but it produces an intellectually sloppy short circuit (to keep the metaphor going): if the brain is wired, then it’s fixed, it has an essential form aligned to its function – it is, in short, natural. (The wiring metaphor is not the Guardian columnist’s invention, incidentally: one of the study’s authors is quoted talking about the ‘hardwiring’ of women’s brains. And it’s an extremely common metaphor in scientific discourse.)

Second, when science ‘proves’ something that ‘many had surely concluded long ago’, it is fair to ask whether this is because the scientists have proceeded on the basis of an untested assumption, which their badly constructed experiment and fallacious inferences then ‘prove’. That is a central argument of Cordelia Fine’s Delusions of Gender, and it is easy to see how the present study could be open to similar criticisms. In this case, as so often, the prejudice follows this syllogistic pattern:

  1. Men and women are essentially different
  2. Men and women are natural
  3. Therefore the differences between men and women must be natural

The conclusion of the present bit of research – that male and female brains are ‘wired’ differently – circularly establishes (1) above. But it is a contention of a differently motivated body of knowledge, which questions prejudices rather than trying to find proof for them (namely gender studies), that while there are unquestionably differences in the behaviour and patterns of thought, etc., in adult men and women, those differences are not natural but constructed by highly complex and largely invisible social pressures. The gender-studies syllogism is something more like this:

  1. Men and women appear to be different
  2. Men and women are a product of their experience as well as of their genetic inheritance; they are the products of a culture acting on nature
  3. Therefore the apparent difference between men and women must be constructed

The most significant line in the Guardian report is one which is not picked up by the authors of the study:

Male and female brains showed few differences in connectivity up to the age of 13, but became more differentiated in 14- to 17-year-olds.

Neural map of a typical woman’s brain. Photograph: National Academy of Sciences/PA

That really is very interesting, to anyone willing to pause for thought. Let us allow that the observed differences in adult brains are significant, and that brain science is capable of communicating details of value (though there is considerable scientific scepticism on this point). Those differences are not manifested until the age of 14–17. It follows that the assumption that girls and boys below that age are ‘essentially’ different, ‘because their brains are wired differently’, is unsupported by the evidence. It is wrong to suggest that boys and girls have a ‘natural’ difference, which can be traced to brain design, because the study does not support such a claim. On the contrary, if we think that gendered difference is explicable only by brain design, we ought to conclude from this study that there should be no difference, at least no difference occasioned by brain design, between boys and girls. We should expect boys and girls up to the age of 14–17 to behave the same way as each other, want to wear the same clothes and do the same activities, etc. – i.e. not to exhibit stereotypical gendered behaviours. Those stereotyped behaviours (‘logical thinking’ for boys, ‘intuitive thinking’ for girls, in the words of the study’s authors) simply should not manifest themselves until puberty. So let’s end all talk of boys and girls being differently gendered.

But it is also necessary to ask why the difference observable in adult brains starts to manifest in the 14–17 age bracket. One answer is that the hormonal changes of puberty ‘must influence it’. That ‘must’ is quite a considerable assumption, and without extensive evidence, it cannot be sustained. The gender-studies assumption would be that it is only by the age of 14–17 that the cultural pressures to conform to certain gendered behaviours can start to have their effect – maybe in profitable interaction with the hormonal changes of puberty. Perhaps the state of puberty enables the brain to refashion itself in order to fix these observable adult differences. But there is no reason to suppose that it is the biological experience of puberty alone which causes those changes to take place. Of course, a scientist desperate to show that gendered differences are ‘natural’ will seek a purely natural explanation of this change. And yet their own study is fighting against them.

If – as I am sure is the case – the scientists believe that boys and girls exhibit different gender behaviours, they can find no explanation for this in the brain science of their study. That is to say that the brain science of this study does not explain the difference they observe, and that anyone can observe, in the gendered behaviours of boys and girls. Let us generalize that statement just a little bit: the brain science of this study does not explain the observable difference between gendered behaviours at all. Boys and girls do exhibit differences in gendered behaviour, after a while (certainly by the age of 13 it is pronounced), but the brain science does not have an explanation for that. And yet the scientists assume that their brain science does explain the difference between the gendered behaviours of adult men and women. Reports on this study therefore hold, unwittingly, to two irreconcilable claims, which can only be explained by a refusal to acknowledge the prejudice that is absolutely driving the inferences from the study: first, that the brain science doesn’t prove gendered difference, and second that the brain science does prove gendered difference.

The majority of reports on scientific studies of human behaviour and culture suggest that science has ‘proven’ an orthodox cultural assumption. This provides good copy because it’s nice for the layperson, who shares these orthodox cultural assumptions, to feel that someone clever has spent a few million quid investigating the matter with expensive machines and a research team of other very clever people. It allows people to say ‘Well I could have told you that!’ in the pub, and feel very contented. Good public ‘impact’, no doubt. But when the assumptions underlying the study in general, the specifics of its design, and the science’s blindness to its own internal contradictions are so patently dubious, there is no reason at all to be persuaded by its claims. Yet again, I’m left feeling that, brushing aside the collegial pieties about humanities scholars respecting scientific research (when there is no expectation that this respect should be returned), the old ‘two cultures’ argument has some legs left. A major part of humanities research and teaching continues to depend on a thoughtful critique of the hugely invidious cultural assumptions which science of this sort unquestioningly bolsters.

In defence of an instrumental view of higher education

Readers of this blog know that I’m vituperatively opposed to contemporary political attitudes to higher education, which see it as nothing more than a machine for growth, a productive sector of the economy which should be intellectually docile and do its ideological master’s bidding. And some people who come to my blog before they come to the musicology textbook, written by my colleagues in the Royal Holloway Music Department, which I edited with Jim Samson a few years ago (An Introduction to Music Studies), are surprised, even disappointed, by some of the things I wrote in the four-page introduction. The start of another academic year, when this book tends to be pulled out for discussion with first-years, has made me reflect once more, with the usual stoicism, on why I let this into print, and I thought that this time I would blog about it. Maybe students, or scholars writing similar introductory textbooks, will find the ruminations useful.

Students frequently ask me what the hell I was thinking when I wrote this. I seem so often to be siding with an instrumental view of higher education: the view that education’s function is to make people more employable, not better equipped to reflect on their world and the people around them. How could I do something so ghastly, so cowardly? The first thing to say, I guess, is that of all my publications, this introduction contains the least thought. It is purely functional prose, written to order: I was in the first year of my job, fulfilling a duty to my department and the publisher, and not in a position to make a scene in the introduction to a book that was a major collaborative enterprise in my new home. I didn’t intend to write a manifesto.

But the second, and much more substantial, defence is that, actually, the instrumental view is true, though of course it’s only part of the story, and it has to be looked at the right way round. The wrong way is the currently dominant Westminster view: the idea that universities ought to train the human resources for business, so that our economy remains competitive. On that view, universities are performing a service for business. But there is nothing ignoble in a prospective student saying ‘Music is the greatest love of my life, and I can think of nothing more wonderful than doing a degree in it. But I’m from a fairly poor background, I’m taking on £40K or so in debt, and I know that there aren’t all that many jobs in music, so I need to know: I’m not making a stupid mistake, am I? Should I be doing something more like maths or science, or is it OK for me to do this thing I love?’ It’s quite clear that such a student wants to read for a music degree for the love of learning about music; but as well as learning about music, they do want to be able to exist in material comfort and security in a world which requires them to work, and in addition to a university’s principal purpose, to add to the store of human knowledge, it also has a vital instrumental function in the service of individuals, who want to be able to get a job. That’s the instrumental view that I think should be defended, not the view that universities owe a service to business.

Many people feel that subjects like music, art, literature, and so on are somehow cheapened if they are brought into contact with the question of money. Well, it must be pleasant not to have to worry about money, but for the majority it is a matter of huge importance whether they can find good employment. And as I wrote the introduction, I had in mind A-level students, pre-major US students, or even interested parents, browsing the book with just this sort of anxiety: they are, or their child is, interested in doing a music degree, and they want to be reassured that it’s not going to be a disadvantage in the long term. The last thing such a reader needs is a musicology textbook that opens with an eloquent paean to the liberal virtues of a humanities education, the good it will do to your beautiful soul, and so on. They know all that already – that’s why they’re interested in the first place – but they want the additional, vital, final reassurance that it’s OK to pursue this. In this context there’s room for crude, bland reassurance, in the language of the job centre advert and the office-job application CV. So I wrote that ‘a music degree offers a more genuinely useful training for graduate life than might at first be imagined’, and that it will ‘provide a breadth of training in transferable skills that will make you particularly valuable to other professions as a music graduate’. And I did so hoping that some people might think ‘Oh, thank god, it’s not just a pleasant opportunity for the rich who can afford to idle away three years thinking about the difference between a Corelli and a Handel concerto grosso: it’ll do me just as much good as a Maths degree, and I can do what I love!’

Admittedly, if I had to write this again today, I’d choose different words from time to time. But I’d make the same general points again. Reviewers have sneered at what I wrote. But, respectfully, I’d suggest that to sneer at this kind of reassurance is to do a disservice to precisely the students who need our support most, in order to be able to enter into an intellectual world that can radically change their lives. Not to descend to the instrumental level of the opposition to their taking the subject in the first place – not to talk, that is, the shabby language of economic utility – would be, in the introduction to an introductory textbook, effectively to say that musicology is only for the elite. Because it is only a student from an elite background, who has sufficient family experience of university, who needs no such reassurance. The comprehensive-school kid, or their parents, on the other hand, might need it – though if they don’t, they can skip the reassurance and get into the meat of the introduction, the pages that detail the contents of the book.

If the introduction has provided ammunition for a single kitchen-sink altercation with a defensive family that is trying to prevent their child from studying music – whose emancipatory potential I laud in virtually everything else I’ve written – then I’ll be glad to have written it. In fact I’d do it again.

Why I am striking

Today UCU, Unison, and Unite union members are going out on strike across universities. I am joining them. Why, when my own salary, like that of all permanent, full-time academic staff, is excellent? Is this a case of whingey lecturers greedily looking for even more money? Well, obviously, no. I’m striking for something else – though striking against a 13% real-terms pay cut is also a vital act of support for other public-sector workers engaged in the same struggle.

I’m striking against the iniquity of a university system in which ideologically complaisant vice chancellors and their circles of higher-management stooges, collectively dedicated to transforming the public good of education into a source of private profit, see their salaries rise spectacularly (the average for VCs is now around a quarter of a million pounds a year) while administrators, librarians, IT service staff, and all the hundreds of support staff who make university campuses such pleasant and well-functioning places see their pay stuck below a living wage. And junior academics, fresh from the PhD at the end of their increasingly expensive student careers, are on exploitative contracts, often zero-hours, sometimes altogether unpaid, having to piece together teaching jobs in two, three, or four universities in far-flung corners of the country, living and working on trains, and perhaps earning no more than £5,000–6,000 in a year for their efforts. The casualization of the academic workforce has reached such a terrible state that the average academic salary is now around the same as or below the theoretical ‘minimum’ academic salary – the figure that is considered the lowest possible offering for new, full-time lecturers. And as students pay more and more for their degrees, universities disburse less and less of it to the teachers and support staff who make them such wonderful and life-changing places to be.

The university system is in real danger of collapse, and it is the fault not only of governments, who since the 1970s have consistently pressed for the transfer of as many public things as they can into private hands, but also of the senior management of universities. Today’s strike is part of a bigger struggle. Few students really appreciate this danger, and even fewer people in the country at large.

For more background on some of the issues, written in lucid, cool, and richly enjoyable prose, read Stefan Collini’s latest piece (one of a long series, all worth reading) in the London Review of Books: http://www.lrb.co.uk/v35/n20/stefan-collini/sold-out.

Witnessing enlightenment in Arnold Wesker’s ‘Roots’

Jessica Raine as Beatie Bryant. Photo © Stephen Cummiskey.

The Donmar Warehouse’s current staging of Arnold Wesker’s psychologically acute and consummately realized play Roots is, among other things, a beautiful riposte to opponents of forms of utopian thinking that draw their strength from the emancipatory potential of high art.

At the centre of the play is Beatie Bryant (acted very affectingly by Jessica Raine), who has grown up among decent and well-meaning but fundamentally rather simple and unreflective Norfolk people. Before the play begins, Beatie has escaped from this into the world of her cultivated, metropolitan boyfriend Ronnie, and after three years of living with him she is returning to Norfolk, both to share the life-changing effects of his wisdom and aesthetic tastes, and to introduce him to her family.

The lives of ordinary people in Wesker’s Britain in 1959 are neither romanticized nor impugned. He observes in a note to actors and producers that ‘the picture I have drawn is a harsh one, yet my tone is not one of disgust […]. I am at one with these people: it is only that I am annoyed, with them and myself.’ A great deal of this annoyance is directed at the meagre mass-produced slops of the culture industry. The people are tired, bored, inexpressive, uncommunicative, drones. ‘Oh yes,’ Beatie says, and as she might still say in 2013, ‘we turn on a radio or a TV set maybe, or we go to the pictures – if them’s love stories or gangsters – but isn’t that the easiest way out? Anything so long as we don’t have to make an effort.’

The banality of their culture makes understanding of their own situation, and the ability to empathize with and act to help others, almost impossible. At first, this banality is played for laughs. Early in the play, Beatie’s sister Jenny and brother-in-law Jimmy discuss a pain in his shoulders. Jenny reports her mother as saying that ‘some people get indigestion so bad it go right through their stomach to the back’, and when Jimmy says that’s daft, she says that she told her mother as much, but was told ‘Don’t you tell me […] I hed it!’. Jimmy retorts ‘What hevn’t she hed.’ When the shoulder pain is mentioned again, first Beatie and then, much later, her mother make precisely the same responses to the same stimuli. We laugh at their inability to respond differently at different times, instead repeating exactly the same form of words, but stopping with the laughter is only ‘the easiest way out’. The consequence of the family’s inability to be original, even about relatively trivial things, is a general failure of apprehension and communication.

Linda Bassett as Mrs Bryant. Photo © Stephen Cummiskey

Beatie’s parents don’t speak to each other in a meaningful sense, and are afraid of communicating with anyone else. Mrs Bryant doesn’t know how much Mr Bryant earns above the pocket money he gives her to provide food for the family. He, too, is ill, yet he won’t tell the doctor, because to speak to a professional of his illness would be to acknowledge that his body is the only thing he can sell to provide for his family. The new farm manager notices his increasing decrepitude, and puts him onto casual labour. This is, as Beatie alone can say, a catastrophe for the family. Yet they will not discuss it. Jenny says ‘It’s no good upsettin’ yourself Beatie. It happen all the time’, and her mother (Linda Bassett, tragically reprising something like her role in BBC TV’s ‘Grandma’s House’) dismisses it and changes the subject: ‘We’ll worry on it later. We always manage. It’s gettin’ late look.’ Not just economic difficulty but personal sorrow, too, proves impossible to articulate. Mr Bryant can only say of a much-loved neighbour’s death that it is ‘rum’, while his wife weeps silently, her back to the audience and her daughter, at the kitchen sink. The impoverishment of their language limits and ultimately incarcerates them: without the resources to grasp their own situation or that of others, they are completely incapable of sustaining the effective and caring social relations which, as a family, they fool themselves that they already somehow enjoy.

Beatie believes that, with her lover Ronnie, she has found a means of escaping all this, and she wants to share it with her family. At bottom, the means of escape is the pursuit of a social-democratic politics, and she breathlessly summarizes Ronnie’s political vision to her mother as she dances to a recording of Bizet’s L’Arlésienne suite (this protracted exchange between the production’s two most compelling performers provides a moving conclusion to Act 2).

Socialism isn’t talking all the time, it’s living, it’s singing, it’s dancing, it’s being interested in what go on around you, it’s being concerned about people and the world.

Far from the caricature of the elitist entertainment of the ruling class, high culture for Beatie is ‘not highbrow, but it’s full of living. And that’s what […] socialism is.’ It is this which has enabled her escape into a relationship with a man whom she presumes to know everything. He is an idealist who thinks the poor can be lifted out of their material predicament, a socialist – even a communist. Such people are, even today (especially today), ridiculed for their infantile, unrealistic leftism. If only they would live in the real world instead of their ivory towers, preaching their high culture, they would see that theirs is an essentially patronizing attitude. The poor don’t need Bizet, and Shakespeare isn’t going to do them a blind bit of good. All these high-culture theorists like Adorno (whose views of the culture industry we may presume that Ronnie has taken on board) just provide a reassuring leftist fig-leaf for an essentially elitist programme of cultural domination. Wesker provides that perspective in the play, but his presentation is subtle.

In an attempt to get her family to think and offer an opinion on something, Beatie poses a very interesting moral problem in the form of a weird tale. There is a river with two huts on each side, and a ferryman operates a service between the banks. On one side live a girl and a wise man, on the other two men, Archie (whom the girl loves, but who does not love the girl) and Tom (who loves the girl, but whom the girl does not love). The girl hears that Archie is going to America, so she wants to cross and spend one last night with him. The ferryman says he’s willing to take her, but she must remove all her clothes first. She asks the wise man what to do and he says ‘you must do whatever you think best’. She decides to strip, and the ferryman takes her across the river. She spends the night with Archie (who doesn’t love her but wants to take advantage of her), but he has gone – abandoning her – in the morning. In despair, naked, she runs to Tom (who loves her) and asks for help, but ‘soon ever he knowed what she’ve done, he chuck her out’. The question she poses is: who is to blame for the girl’s plight? The interpretation she offers, with Ronnie’s imprimatur, is that the girl is to blame only for taking off her clothes, and in that she is guiltless because she is in love. After that she is the victim of two phoney men: one who takes advantage of her, one who professes to love her but won’t help her – with the latter being the more culpable because he was the last person she could turn to.

Photo © Stephen Cummiskey

The morality tale is, of course, a kind of key to the play, but Wesker avoids crude parallels. Beattie is the girl who has crossed the river – a river that inevitably suggests either the Rubicon, since the crossing is a decision she cannot undo, or, given the fatalism and the ferryman, the Styx – laying herself bare to what will follow. For the entire length of the play we wait, with the family, for Ronnie to turn up, but, like Godot (though to more redemptive effect), he does not. Instead he sends a letter, ending their relationship and saying ‘It wouldn’t really work would it. My ideas about handing on a new kind of life are quite useless and romantic if I’m really honest.’ In this he is Archie, taking his pleasure before abandoning the girl. Beattie is stricken, but her family (here taking the part of Tom) is no help. Her mother attacks her outright, because Ronnie’s promised redemption has been ‘proved’ wrong, while her siblings simply want to know more about the details of the former relationship. For all their love of her, they will not – cannot, because they do not have the language to do so – help her.

But the crucial figure in the morality tale, and the one that nobody, not even Ronnie, discusses, is the wise man. He represents the other aspect of Ronnie: not the lover but the man who is supposed to know. He knows about art and politics; he knows how to change the world; he knows how to rescue people from the poverty of their environment and set them free. Yet the wise man’s advice smacks of Polonius (‘to thine own self be true’), and is as stupid as his: ‘you must do whatever you think best’ might be correct in a purely formal sense, but aren’t the wise meant to see through difficulties and present a way ahead for those who are less wise? If his advice is empty, then so, we are normally told, is the advice of all such posers who fancy themselves to know it all. The wise man here is the perfect embodiment of the cultured elitist who, at best, simply chucks attractive but useless baubles at the mass of humanity, which needs much more.

Yet in rejecting Beattie, and in abandoning his belief that high culture and the ‘new kind of life’ can help the poor, Ronnie performs an essential function. Beattie’s family are, as we have seen, prevented by their inarticulacy from breaking out of their symbolic bonds; Ronnie seems to offer the cultural and linguistic key, the means for a universal enlightenment and emancipation. The problem is that Beattie believes that this power lies in him. She believes that there is a position of absolute knowledge, a person who has all the answers. By declaring to her that this absolute knowledge is phoney, by abandoning her to her former prison, Ronnie acts out the even more troubling truth that there is no such position of absolute knowledge, no form of expression that can entirely free us from the limitations of our symbolically hedged-in world. Not even, to pick two plausible examples out of the air, a university tutor or a TV scientist. Of course the conservative tutor who refuses the challenge to think theoretically possesses no absolute knowledge; it is doubtful whether such a person is an intellectual at all. But not even the theorist, not even the most profound and radical thinker has all the answers. We’re on our own. And yet there’s hope.

Learning, Beattie says, is not the accumulation of facts: on the contrary, it is the action of asking questions, forever. The idealist Ronnie gives up, because he sees that what he thought was reliable knowledge of a utopian possibility is useless. (People who say they were radical in their youth but gave it up to become ‘realistic’ when they got older are just like Ronnie.) But Beattie finds her voice. Throughout the play she has quoted Ronnie’s maxims and opinions, and been ceaselessly ribbed for it. Then, after his rejection of her, she starts to offer her own opinions for the first time. She analyses her family’s tragedy, and sees a way out.

Bernard Levin described this moment, Beattie’s ‘final triumphant budding’, as ‘the most heart-lifting single moment I have ever seen upon a stage.’ But her enlightenment requires a double rejection, first of her family, which trapped her in inarticulacy, and then of her lover, whose presumed wisdom had led her to depend too much on another, on a subject supposed to know. One rejection is hard enough: leaving behind the limiting expectations of a family background, say, and allowing university to open one’s mind. But to reject the thing that seems to promise salvation, and to acknowledge that there is nobody to rely on and nothing to depend on but a need for constant questioning: that is a daunting prospect.

In Beattie we see what Ronnie, and every conservative, fails to see, because they cannot see past the failure that opens up the possibility of triumph (the good old Beckett line of which Žižek is so fond: ‘Try again. Fail again. Fail better.’). Enlightenment comes. High culture emancipates. It is the culture industry, not high culture, that insults, impoverishes, and traps the poor. Standing apart from her family, who cannot understand what is happening to her, or how it could help them too, Beattie begins to live for herself. She has no answers any more, neither her family’s simple ones nor Ronnie’s impossible ones, but she has learned to see life, to articulate her own position and its suffering. She has obtained the only kind of wisdom that counts.

Daniel Barenboim and music’s emancipatory symbolic violence

Daniel Barenboim has, for the second year in succession, provided the most life-enhancing musical experiences of the London summer. Last year it was his conducting of the Beethoven symphonies (intercut with Boulez) with his West-Eastern Divan Orchestra. This year it was Wagner’s Ring with the Staatskapelle Berlin, which concluded last night with a Götterdämmerung of exceptional power.

Power, strength – violence, even, though not in the most immediate meaning of the word – are markers both of his style of conducting and of much of the music of which he is a most celebrated interpreter. He is a fascinating conductor to watch. Sometimes – sufficiently often for it to be striking – he seems not to conduct at all, standing motionless and letting the orchestra get on with it. At other times, he simply gives a very clear beat with his baton, leaving his left hand free to rest in his pocket or on his heart, or to wipe his forehead. When he presses his left hand into service there is the usual panoply of sweeping and coaxing gestures, but some of his motions are quite obscene: he seems at times to be stabbing, punching, choking, swatting, pulling with both hands into his groin (the video above, from last year’s Beethoven cycle, allows glimpses of this, though I strongly urge watching him live). Critics of classical music, and particularly of composers such as Beethoven and Wagner, might be quick to snipe: ‘well, that kind of violent gesture suits this fascist, patriarchal, imperialist music very well’. Such people are right about the violence, but wrong about everything else.

In a useful little book on violence (Violence: Six Sideways Reflections), Slavoj Žižek invites us to reflect on three quite different forms that it takes. The first form, subjective violence, is the kind of violence that most people mean, most of the time, when they describe things as ‘violent’: kicking a dog, breaking a window, killing people with drones. A second form, systemic violence, is the kind of violence that a political, economic, or social system does in order to maintain itself in a position of strength. This is the violence of a system that drives people to food banks because it wishes to shrink the level of state support for low earners, or which makes pariahs of people for not marrying, or for not subscribing to one particular religion or another, and so on. The third kind, symbolic violence, is seen in statements such as St Paul’s ‘there is neither Jew nor Greek, there is neither slave nor free, there is neither male nor female’, or Martin Luther King’s ‘I have a dream’. Such use of language does symbolic violence to societies which seek to limit human potential to the arbitrary labels their culture supplies. Symbolic violence inverts the logic of the playground taunt, ‘sticks and stones may break my bones, but words will never hurt me’: no, words really do hurt, whether the symbolic violence is upholding a particular social order (by labelling you a ‘scrounger’, ‘gay’, ‘unAmerican’…) or working against it (‘I will not leave this bus seat, even though I am black’, ‘we’re here, we’re queer, we will not disappear’). Symbolic violence is an important tool of both repressive and emancipatory actors in any society.

Daniel Barenboim

There is ample symbolic violence in Barenboim’s music-making, which I think his conducting style partly makes visible. Like his idol, Furtwängler, but arguably even more insistently, Barenboim establishes a firm, immovable, strongly articulated pulse with those clear beats of his right hand – but then changes the pulse, entirely without warning, and often very drastically. His performances are at the same time the slowest and the fastest, the softest and the loudest, and the most homogeneous and the most instrumentally distinct (since his control of dynamics and orchestral balance is also hugely surprising and original) that audiences are likely to encounter anywhere except on Furtwängler recordings. Within the symbolic structure of music, both conductors are doing symbolic violence, but for neither is this simply a matter of exaggeration for effect.

The symbolic violence in Barenboim’s performances is not generated by him, but only revealed by him: it is in the music already. When Wagner refuses to resolve the ‘Tristan chord’ at the opening of Tristan und Isolde, and instead gives us the same upwards-striving motion twice more on successively higher pitches, he does symbolic violence to the expectation, into which we are conditioned from when we first hear a lullaby or nursery rhyme sung to us, that the dominant chord will resolve onto the tonic; conversely, when Beethoven dramatizes precisely such conventional resolutions with an excessive rhythmic and timbral vigour that no composer before or since has ever approached, he does symbolic violence to a system which seeks to conceal – to naturalize, to make transparent – the extent to which it is shaping our desire for certain kinds of fulfilment.

Conventional wisdom – the commonly used euphemism for ‘the hegemonic ideology of a society’ – has it that classical music is, by its very nature, a problematic thing. Governments and media outlets which call it ‘elitist’, and academic ethnomusicologists who implicate it – and sometimes even its present enthusiasts – in centuries of Western imperialism, all share this logic. They claim that it is an expensive pleasure (though this is pretty unconvincing: I have already blogged about how much cheaper it is to attend the opera than to see a live football match); they imply that it requires a private-school education to appreciate, that it sustains a social elite by cultivating ‘cultural capital’ (see my posts on the Great British Class Survey for more on this), or that, as what we might today call ‘soft power’, it is a tool of imperialist oppression. There is no question either that capitalist class hegemony and imperialism depend on an adaptable combination of symbolic, systemic, and subjective violence, or that cultural artefacts can be bent to repressive political ends; but the assumption that the ‘meaning’ or ‘function’ of classical music is always and forever limited by the meanings or functions it has had or served in the past is ludicrous. Classical music is, like every symbolic system, what its users make it; and in any case it is, within the terms of its own symbolic language, perpetually in a state of resistance to regulation. As anyone who goes to an opera or classical concert will know, the accusation that the majority of the audience, even in the ‘expensive’ seats, are representatives of the nation’s elite is difficult to credit. But even if it were true, that would not be because the music is somehow, fundamentally, ‘elitist’. Floating signifiers – the signs and language we use in every moment of our lives, classical music not least – change their meaning whenever the context of their presentation changes. Consider the radically different meanings of the word ‘gay’ in the early twentieth and the early twenty-first centuries (or, going in another direction, of ‘queer’). Something which was once used to perform repressive symbolic violence can be re-quilted to perform emancipatory symbolic violence. Anyone who thinks that classical music is always a tool of economic or imperialist oppression, to be judged sombrely wherever it is encountered, is, for reasons I don’t fully understand, mistaking classical music for capitalism and Maxim guns.

Barenboim has some naive views. One species of them concerns his West-Eastern Divan Orchestra, ostensibly composed of musicians from otherwise warring factions in the Middle East, and intended to show by example how music can heal the world’s wounds. The liberal Left is often quick to sneer at him for this, but I see no reason to. What looks naive from one perspective looks brutal, radical, from another. It is naive, too, to say that black and white people deserve equal civil rights, but the ongoing struggle in the US – where legislation to safeguard that state of affairs once again proves to be weak in the face of a bigger systemic violence – shows that such naive vision is always required, especially when the emotional force of the call can have such a powerful effect in uniting people around a cause. And one of the things that music – Wagner’s and Beethoven’s more than most – can do is to stir emotionally.

Wilhelm Furtwängler

Barenboim’s idolization of Furtwängler is a further indication of his naivety: was not this a conductor who remained in a high-profile musical position in Nazi Germany, rather than fleeing the regime? Furtwängler himself was politically naive, believing, as Barenboim seemingly does, that the music he made had some kind of potential to act as a check on barbarity, on stupidity, on inhumanity. (Barenboim also, not unrelatedly, plays Wagner in Israel, a country which cannot tolerate the composer and his ‘essential’ anti-Semitism: turning to the audience, he simply says, in Hebrew, ‘leave if you want to’.) Furtwängler was a fool, it seems: his musicking didn’t help. But the subjective violence of Hitler’s war engine was too great to be stopped by anything short of superior subjective violence. Yet symbolic violence can work in different circumstances: at the fall of the Berlin Wall, say, when Leonard Bernstein conducted Beethoven’s Ninth Symphony to articulate the reconciliation, or today, when the greatest repressive forces in the West are sustained by systemic rather than subjective violence (although protestors who are kettled and charged at by police horses will be aware that, in extremis, the state is not shy of resorting to the good old threat of subjective violence when circumstances dictate).

The gestures, the ideas, the musical effects: Barenboim is arguably the greatest living conductor (certainly the greatest current conductor of Wagner, and possibly the greatest ever) because of, not in spite of, his naivety. He is simple enough, childlike enough, to suggest that, despite the appearance of being a white, middle-class, Western avocation, classical music can easily be an emancipatory political force, an essentially oppositional culture quite distinct from other musical forms (which sit more comfortably with the prerogatives of capital circulation). Bullshit, scream the liberals, and he gives them a cold, implacable stare.

Daniel Barenboim is a fool. And he is right.

Demonization of anti-capitalist cultural forms

[This is the last of three posts on the Great British Class Survey. To access the others, go to the introduction here.]

My final concern arises from the attitude to forms of culture that this study reflects. Specifically, it reflects a demonization of a large number of truly counter-cultural forms, which is to say highbrow culture. By counter-cultural I mean, quite simply, running on lines that are counter to the general motions of society; and since our society is conditioned by capital and the social relations it feeds on, counter-cultural forms are those which cause problems for capital. Highbrow culture is counter-cultural in two specific and important senses. First, it has always been, and remains, a problem for capitalist selling: it simply is not as attractive, or as easy to reproduce, as capitalism wishes its commodities to be. The question of quality or intelligence or value is immaterial here: capitalism simply can’t sell ancient Greek poetry for as quick and big a profit as it can sell football. Second, highbrow culture offers the possibility of disrupting the class position of members of society, and specifically of allowing the lower classes to rise above their beginnings.

The demonization of highbrow culture is widespread. It informs programming on television and radio, curriculums in schools, even, ludicrously, the decisions that highbrow cultural organizations take about what they are going to stage or exhibit (can’t be too highbrow, darling: that simply won’t do). And the reasons for the demonization seem, on the face of it, as so often, to be good ones: highbrow culture has traditionally been associated with the rich and powerful. But it isn’t any more, and this is a paradox of which the liberal attitude is often ignorant. The richest and most powerful, people like the ex-Etonian David Cameron, are more likely to disdain highbrow culture – and particularly musical culture – than to enjoy it, and it’s aspirational lower-class Tories like Michael Gove who are more likely to be seen watching Wagner – doubtless with genuine love for the music – at Covent Garden. (And the tickets, incidentally, might cost no more than football: see my blogpost here.) This paradox – that the rich seem to be increasingly opposed to highbrow culture – serves a specific political and economic purpose, which will become clear below (essentially, it helps keep the poor in their place).

The demonization of highbrow culture covers over two huge errors: first, the error of essentialism (i.e. supposing that there is an unbreakable link between certain people and certain characteristics: here, a link between poshness and highbrow culture); second, the error of supposing that people are always representative of the situation in which they find themselves. Hand in hand, these errors help to sustain the abject position of the poor by denying them access to at least the possibility of deciding for themselves whether highbrow culture has anything to offer.

One of the greatest and most emancipatory benefits of highbrow culture is that it strikes those who were not born into exposure to it – i.e. those from the lower classes, whose diet might be Agatha Christie and the Bee Gees rather than Virgil and Beethoven – as something profoundly Other, something to which they can aspire, something that can lead them, as a beacon, to a better life. It is perhaps the most staggering stupidity of the contemporary liberal attitude that so many people fail to see this. In precisely the same way, the democratic freedoms of the West are often seen by the abject subjects of Middle Eastern dictatorships, and the like, as a political consummation devoutly to be wished. Again, the liberal attitude is to feel guilty about the very freedoms that our patronizingly viewed Other regards as so very desirable. The only thing that could possibly be wrong about enjoying freedom or high culture would be to deny it to others. Yet this is precisely what the liberal attitude dictates. Hide high culture away; listen to pop; don’t try to foist our cultural attitudes (pro-women, pro-gay, multicultural) onto the Other.

But disdaining highbrow Western civilization merely because it is (falsely) assumed to have an essential relation to an exploitative elite is, quite simply, selfishness. It leads to the evisceration of school curriculums and the demonization of the arts. Extending the gift of civilization to the whole world – not to force-feed anyone, but at least to allow people the chance to decide for themselves whether they want it – is surely a better course to take. But the attitude reflected by studies like the Great British Class Survey serves one purpose only: to further the motion of modern capitalism.

The highbrow tastes that are implicitly disdained by this study, and which explicitly lift people into a higher class even when their actual material conditions are much worse than those of the people around them, are quite simply the cultural interests that capitalism has always found very difficult to sell, relative to the attractive objects of mass consumption peddled by the culture industry. The demonization of these cultural forms – so extensive that even an ex-Etonian prime minister can largely disdain them – is an essential function of the command to consume. And hand in hand with the demonization of the cultural artefacts themselves goes the demonization of those who wish to make them, and rational thought about them, available to as many people as possible, because they believe them to have the potential to emancipate.

I should say that I don’t mean, as the authors of this study do, that listening to Beethoven or reading Virgil will make your life more secure and you a better and more socially important person. It won’t. Absolutely not. Cultural capital buys you nothing and doesn’t improve your morals or social standing. It’s worthless as a commodity. Don’t waste your time learning about Beethoven or Virgil if you want to be a better or richer person. It has no direct causal connexion with the increase or decrease of personal significance or riches. But it might just lead you into thinking about things you hadn’t thought about before. Pursuing it might open doors that your birth and the wealth of your parents could not. It might lead you, for instance, if all around you are low-skilled workers in dead-end jobs, to imagine a different career path, one focused on teaching, research, arts administration, museum curating, performance, journalism, creative writing… But the moment you tick more than three boxes of highbrow cultural interests on the online calculator, you are catapulted into a higher social class. And the higher you go, the more guilty you should feel.

The calculator is worse than useless. It is a depressing reflexion of the present attitude in liberal Western society towards the possibility of properly understanding, and then overturning, our current system. And until we can cast off the assumption of an essential relation between highbrow culture and privilege, and the concomitant view that working-class people have working-class attitudes and cannot articulate for themselves a way out of their position which draws on the exquisite resources for self-understanding that highbrow culture offers, we are doomed to perpetuate the wretched class system of capitalism, which remains, even after this study, not seven classes but two: those who are exploited (a class that is growing) and those who exploit.