Intellectual freedom was the founding principle of the Institute for Advanced Study in Princeton – a confidence we now lack, as two new books show
By Simon Ings
IN 1930, the US educator Abraham Flexner set up the Institute for Advanced Study, an independent research centre in Princeton, New Jersey, where leading lights as diverse as Albert Einstein and T. S. Eliot could pursue their studies, free from everyday pressures.
For Flexner, the world was richer than the imagination could conceive and wider than ambition could encompass. The universe was full of gifts and this was why pure, “blue sky” research could not help but turn up practical results now and again, of a sort quite impossible to plan for.
So, in his 1939 essay “The usefulness of useless knowledge”, Flexner listed a few of the practical gains that have sprung from what we might, with care, term scholastic noodling. Electromagnetism was his favourite. We might add quantum physics.
Even as his institute opened its doors, the world’s biggest planned economy, the Soviet Union, was conducting a grand and opposite experiment, harnessing all the sciences for their immediate utility and problem-solving ability.
During the cold war, the vast majority of Soviet scientists were reduced to mediocrity, given only sharply defined engineering problems to solve. Flexner’s better-known affiliates, meanwhile, garnered reputations akin to those enjoyed by other mascots of Western intellectual liberty: abstract-expressionist artists and jazz musicians.
At a time when academia is once again under pressure to account for itself, the Princeton University Press reprint of Flexner’s essay is timely. Its preface, however, is another matter. Written by current institute director Robbert Dijkgraaf, it exposes our utterly instrumental times. For example, he employs junk metrics such as “more than half of all economic growth comes from innovation”. What for Flexner was a rather sardonic nod to the bottom line has become for Dijkgraaf the entire argument – as though “pure research” simply meant “long-term investment”, and civic support came not from existential confidence and intellectual curiosity, but from scientists “sharing the latest discoveries and personal stories”. So much for escaping quotidian demands.
We do not know what the tightening of funding for scientific research that has taken place over the past 40 years would have done for Flexner’s own sense of noblesse oblige. But this we can be sure of: utilitarian approaches to higher education are dominant now, to the point of monopoly. The administrative burdens and stultifying oversight structures throttling today’s scholars come not from Soviet-style central planning, but from the application of market principles – an irony that the sociologist Lawrence Busch explores exhaustively in his monograph Knowledge for Sale.
Busch explains how the first neo-liberal thinkers sought to prevent the rise of totalitarian regimes by replacing governance with markets. Those thinkers believed that markets were safer than governments because they were cybernetic and so corrected themselves. Right?
Wrong: Busch provides ghastly disproofs of this neo-liberal vision from within the halls of academe, from bad habits such as a focus on counting citations and publication output, through fraud, to existential crises such as the shift in the ideal of education from a public to a private good. But if our ingenious, post-war market solution to the totalitarian nightmare of the 1940s has itself turned out to be a great vampire squid wrapped around the face of humanity (as journalist Matt Taibbi once described investment bank Goldman Sachs), where have we left to go?
Flexner’s solution requires from us a confidence that is hard to muster right now. We have to remember that the point of study is not to power, enable, de-glitch or otherwise save civilisation. The point of study is to create a civilisation worth saving.
A twice-weekly academic writing group set up for PhD students and early career researchers at Oxford University has been credited with boosting productivity and reducing stress.
The group’s founder is Dr Alice Kelly, the Harmsworth Postdoctoral Research Fellow at the Rothermere American Institute. In a guest post, she tells the story of the writing group:
‘Most people need structure, accountability and discipline if they are to work productively. But this is exactly what disappears when highly qualified, often perfectionistic people start the rewarding but lengthy and lonely PhD process.
This is especially true in the humanities, where, in contrast to the more communal research environment that scientific teams enjoy, study is often solitary. I believe that universities can, and should, do much more to generate a sense of group motivation, camaraderie and peer support among early career scholars in the humanities.
I convene a group of postgraduate students and early career researchers to write together for three hours twice a week. After coffee, I ask everyone to share their goals for the first 75-minute session with their neighbour. Goals must be specific, realistic and communicable, such as writing 250 words or reworking a particularly problematic paragraph. I set an alarm and remind everyone not to check email or social media. When the alarm goes off, everyone checks in with their partner about whether or not they achieved their goal. After a break, we do it again. After our Friday morning sessions, we go for lunch together. And that’s it.
Yet the impact of the group in terms of writing productivity, reducing student stress and promoting a sense of community has been profound – beyond what even I had anticipated when I first introduced these sessions at the interdisciplinary Oxford Research Centre in the Humanities (TORCH) in October 2015. Since its beginning, the group has been enormously popular and is always oversubscribed. I have become convinced that such writing groups are an affordable and highly effective way of reducing early career isolation and improving mental health, and could be implemented more widely.
Participants reported the positive effects in two anonymous surveys for our humanities division. They value the sessions’ imposition of routine, realism about expectations and embodiment of the principle that thinking happens through, not before, writing (known as the “writing as a laboratory” model). Respondents were pleasantly surprised at their own productivity. One said: “I never thought I could accomplish so much in one hour, if I really committed, without interruptions”.
Another said: “It seems I lost the fear of finishing things when I was surrounded by other people.” Participants also reported adopting their newly established good habits outside the sessions.
Most evident, however, was respondents’ improved sense of morale and peer support. One noted that “the PhD can be such an isolating experience; it’s very calming to come to a place where, twice a week, we’re reminded that working independently doesn’t have to mean working alone”. Another referred to the group as “an invaluable resource that should be mandatory for all PhDs”.
The writing group offers, for six hours a week, what most workers get every day: a start time, a stop time and peer pressure not to procrastinate on the internet. Over a term’s worth of attendance, this produces serious results.
One participant had “rewritten a draft thesis chapter, written a conference abstract, edited two reviews for an online publication, finished two book reviews and edited several chapters of a volume”.
My role in the group varies between friend, peer, disciplinarian, mentor, stand-in supervisor, and a regular fixture offering some stability and continuity. If people don’t show up, I hold them accountable. If they are struggling with a piece of writing, I talk them through it.
The group has unexpectedly become an informal forum for all the academic questions we’re not sure who else to ask about, and has therefore had a serious impact on pastoral care through peer support.
As someone who worked long hours through the four years of my PhD – in exhausting periods of “binge writing” and unnecessarily time-consuming revisions – I am now a vocal advocate of short bursts of focused attention and writing as a routine practice, with mandatory time off from academic work.
One survey respondent noted that the group “has given me the sense that I have a working week and am not expected to work 24/7; it has helped me treat my degree as a job”.
As the group has developed, I have investigated strategies to make the sessions more effective. One idea was to organise a manual or sensory activity (colouring in or listening to music, for instance) during the break; another was to make participants set regular goals on index cards and to add a gold star when they achieve them.
Writing marathons – two three-hour sessions, separated by lunch – are useful for meeting end-of-term deadlines. The combination of accountability and reward (group celebrations at the end of a goal period, or when somebody submits their dissertation) motivates participants both to push themselves and to be pleased with their progress.
There is surprisingly little literature on the benefits of writing in group settings. Very helpful texts, such as Eviatar Zerubavel’s The Clockwork Muse (1999) and Paul Silvia’s How to Write a Lot (2007), advocate scheduled writing, goal setting and monitoring progress, but do not address the high level of self-discipline that regular independent writing demands – discipline that a group can readily supply.
Meanwhile, the literature considering writing groups, such as Rowena Murray and Sarah Moore’s The Handbook of Academic Writing (2006) or Claire Aitchison and Cally Guerin’s volume Writing Groups for Doctoral Education and Beyond (2014), promotes them for collaborative writing or peer review purposes, rather than improved morale and community.
Amid mounting demands for “outputs” and increasing evidence of chronic stress and mental health problems among academics, having an academic writing group at every university could be a simple yet powerful way of making the task of writing more productive and rewarding for the next generation of scholars.’
Few have savaged lecturers as brutally as the Enlightenment-era printmaker William Hogarth. In Scholars at a Lecture, the presenter reads from his prepared text, his eyes down, indifferent to his audience. The budding academics are no more impressive; those in thrall to the lecturer’s nonsense have slack faces with lolling eyes and open mouths. The others don’t offer any critique but yawn, doze, or chat idly among themselves.
In The Reward of Cruelty, the lecturer, who cannot be bothered to rise from his chair, pokes lackadaisically at Tom Nero, a criminal whose body has been turned over for dissection. The audience shows little interest. Hogarth’s most damning image of a lecturer, however, depicts one who does inspire his audience, but to dubious ends. In Credulity, Superstition, and Fanaticism, a minister thunders on about witches and devils as parishioners swoon with terror. The minister’s text quotes II Corinthians: “I speak as a fool.”
Hogarth’s satirical prints repeat a common belief about lectures: Those who claim the lectern are dullards or charlatans who do so only to gratify and enrich themselves. The louder they speak, the more we should suspect them. True knowledge, he implies, does not come from speechifying blowhards and self-satisfied experts, but from practical observation of the world.
This distrust has now spread to the lecturer’s natural habitat, the university. Administrators and instructors alike have declared lecturing a stale teaching method and begun advocating new so-called content delivery techniques, from lab- and project-based learning to flipped classrooms and online instruction, that “disrupt” the sage-on-the-stage model. While these new methods work well, we should not completely abandon the lecture. It remains a powerful tool for teaching, communicating, and community building.
Detractors claim that lectures’ hierarchical, inflexible, and unidirectional structure doesn’t suit our dynamic, crowd-sourced world. To be sure, lectures are top-down affairs that occur at fixed times. But these aspects are assets, not weaknesses: they benefit both students and instructors.
Lectures are not designed to transmit knowledge directly from the lecturers’ lips to students’ brains — that idea is a misconception, exacerbated by the problematic phrase “content delivery.” Although lecturers (hopefully) possess information that, at the beginning of a lecture, their students do not, they are not merely delivering content. Rather, giving a lecture forces instructors to communicate their knowledge through argument in real time.
The best lectures draw on careful preparation as well as spontaneous revelation. While speaking to students and gauging their reactions, lecturers come to new conclusions, incorporate them into the lecture, and refine their argument. Lectures impart facts, but they also model argumentation, all the while responding to their audience’s nonverbal cues. Far from being one-sided, lectures are a social occasion.
The regular timing of lectures contributes to their sociality, establishing a course’s rhythm. The weekly lecture, or pair of lectures, draws students together at the same time and place, providing a set of ideas to digest while reading supplementary material and breaking into smaller discussion sections. Classrooms are communities, and typically lectures are the only occasion for the entire group to convene physically. Remove the impetus to gather — either by insinuating that recorded lectures are just as effective or by making the lecture optional — and the benefits of community disappear.
One common lament among university students is a sense of social isolation during the school year. While lectures won’t necessarily introduce students to their best friends or future partners, they do require attendees to get dressed, leave the house, and participate in a shared experience. This simple routine can head off loneliness and despondency, two triggers and intensifiers of depression.
Further, dismissing the value of attending lectures only encourages students to skip weeks of classes and then frantically try to catch up later, a destructive cycle that compounds loneliness with anxiety. Students not only fall behind but also miss opportunities to speak with their peers and instructors.
Like a metronome, lectures regularly punctuate the week, grounding other elements of students’ lives by, for instance, encouraging regular sleep schedules and study periods, which can also reduce anxiety and stress.
Audiences outside academia clearly understand the benefits of collective listening. If public lectures did not draw sizable crowds, then museums, universities, bookstores, and community centers would have abandoned them long ago. The public knows that, far from being outdated, lectures can be rousing, profound, and even fun.
The attack on lectures ultimately participates in neoliberalism’s desire to restructure our lives in the image of just-in-time logistics. We must be able to cancel anything at the last minute in our desperate hustle to be employable to anyone who might ask. An economic model that chops up and parcels out every moment of our lives inevitably resists the requirement to convene regularly.
Critics frequently complain that lectures’ fixity makes it difficult for students to work. We should throw this argument back at those who make it: the need for students to work makes it hard for them to attend lectures. Work, not learning, is the burden that should be eradicated. Education is not an errand to be wedged between Uber shifts; it represents a long-term commitment that requires support from society at large. This support is thinning; eroding the legitimacy of lecturing makes it thinner still.
Neoliberalism has also made it hard to recognize the work students perform in lectures. Many critics dismiss lecture attendance as “passive learning,” arguing that students in lectures aren’t doing anything. Today, declaring something passive completely delegitimizes it. Eve Chiapello and Norman Fairclough argue that activity for its own sake has become essential to personal success: “What is relevant is to be always pursuing some sort of activity, never to be without a project.” Indeed, in our constant scramble to project adaptable employability, we must always seem harried, even if our flailing about isn’t directed toward anything concrete. Without moving around or speaking, lecture attendees certainly don’t look busy, and so their activity gets maligned as passive, unproductive, and, consequently, irrelevant.
But lecture attendees do lots of things: they take notes, they react, they scan the room for reactions, and most importantly, they listen. Listening to a sustained, hour-long argument requires initiative, will, and focus. In other words, it is an activity. But today, the act of listening counts for very little, as it does not appear to produce any outcomes or have an evident goal.
No matter how fast-paced the world becomes, listening will remain essential to public dialogue and debate. As Professor Monessa Cummins, department chair of classics at Grinnell College, states:
Can [students] listen to a political candidate with a critical ear? Can they go and listen to their minister with an analytical ear? Can they listen to one another? One of the things a lecture does is build that habit.
Discussion sections after lectures always reveal the expert listeners. They ask the best questions, the ones that cut straight to the speaker’s main themes with an urgency that electrifies the whole audience, producing a flurry of excited responses and follow-up questions. Good listening grounds all dialogue, expands our body of knowledge, and builds community.
Although they probably haven’t thought about it in these terms, many of the lecture’s critics would favor one side of Aristotle’s scheme of knowledge, which separates theory from practice. Historian Pamela H. Smith succinctly describes the difference: theory (episteme, scientia) describes knowledge based on logical syllogism and geometrical demonstration. Practice (praxis, experientia) encompasses things done — like politics, ethics, and economics — or things made — technē, which require bodily labor.
Before the modern era, technē was widely denigrated. Smith writes, “Technē . . . was the lowly knowledge of how to make things or produce effects, practiced by animals, slaves, and craftspeople. It was the only area of knowledge that was productive.” Today of course, the tables are turned; technē’s productive quality elevates it above supposedly impractical theory. Indeed, under capitalism, anything that doesn’t immediately appear as productive, even if only in the most superficial way, is dismissed as a waste of time.
Good lectures build knowledge and community; they also model critical civic participation. But students have suffered a wide variety of bad lecturers: the preening windbag, the verbatim PowerPoint reader, the poor timekeeper who never manages to cover all the session’s material. Lecturing does not come naturally and can take years to master, yet very few instructors have the opportunity to learn how to deliver a good lecture. Given the outsize emphasis many universities place on publication and grants — not to mention the excessive workloads, low pay, and job insecurity the majority of instructors face — lecturers have little incentive to invest the time and effort it takes to gain these skills.
Meanwhile, active learning partisans sometimes overlook the skill it takes to conduct their preferred methods effectively. Becoming a good lab instructor, project facilitator, or discussion leader also takes time and practice. In addition to bad lectures, I’ve sat through plenty of bewildering labs and meandering seminars. Because these teaching methods have the guise of activity, however, their occasional failures are not dismissed as easily as those of the supposedly passive lecture.
Lecturing is increasingly impugned as an inactive and hierarchical pedagogical method. The type of labor demanded in the lecture hall — and the type of community it builds — still matters. Under an economic system that works to accelerate and divide us, institutions that carve out time and space to facilitate collectivity and reflection are needed more than ever.
Rees explores the opportunities and risks that cutting-edge science presents.
02/21/2017 10:24 am ET
Lord Martin Rees is an astrophysicist and the former master of Trinity College, Cambridge. He sat down with The WorldPost for a wide-ranging interview, which has been edited for clarity and brevity.
Alexander Görlach: Out of all great transformations we are going through, from climate change to artificial intelligence to gene editing, what are the most consequential we are about to witness?
Martin Rees: It depends on what time scale we are thinking about. In the next 10 or 20 years, I would say it’s the rapid development in biotechnology. We are already seeing that it’s becoming easier to modify the genome, and we heard about experiments on the influenza virus to make it more virulent and transmissible. These techniques are developing very fast and have huge potential benefits but unfortunately also downsides.
These techniques are easily accessible and easy to handle; the necessary equipment is available at many university labs and many companies. And so the risk of error or terror in these areas is quite substantial, while regulation is very hard. It’s not like regulating nuclear activity, which requires huge special-purpose facilities. Biohacking is almost a student-competitive sport.
I am somewhat pessimistic, because even if we do have regulations and protocols for safety, how would we enforce them globally? Obviously we should try and minimize the risk of misuse by error or by design of these technologies and also be concerned about the ethical dilemmas they pose. So my pessimism stems from feelings that what can be done, will be done ― somewhere by someone ― whatever the regulations say.
Görlach: Do you fear that this could happen not only in the realm of crime ― if we think of so-called “dirty bombs,” for example ― but could also be used by governments? Do we need a charter designed to prevent misuse?
Rees: I don’t think governments would use biotech in dangerous ways. They haven’t used biological weapons much, and the reason for that is that the effects are unpredictable.
Görlach: That brings recent Hollywood blockbusters like “Inferno” to mind, where one lunatic tries to sterilize half of mankind through a virus.
Rees: Several movies have been made about global bio-disasters. Nevertheless, I think it is a realistic scenario, and I think it could lead to huge casualties. A disaster such as the one in “Inferno,” like a natural pandemic, could spread globally. The consequences of such a catastrophe could be really serious for society. We have had natural pandemics in historic times ― the “black death,” for example. The reason that governments put pandemics ― natural or artificially produced ― high on their risk register is the danger of societal breakdown. That is what worries me most about the possible impact of pandemics. This is a natural threat, of course. But it is aggravated by the growing possibility that individuals or small groups could manufacture a more lethal virus artificially.
Görlach: So when speaking of the age of transformation, aspects of security seem paramount to you. Why is that?
Rees: We are moving into an age when small groups can have a huge and even global impact. In fact, I highlighted this theme in my book Our Final Century, which I wrote 13 years ago. These new technologies of bio and cyber ― as we know ― can cause massive disruption. We have had traditional dissidents and terrorists, but there were certain limits to how much devastation they could cause. And that limit has risen hugely with these new bio and cyber-technologies. I think this is a new threat, and it is going to increase the tension between freedom, security and privacy.
Görlach: Let’s look at another huge topic: artificial intelligence. Is this a field where more uplifting thoughts occur to you?
Rees: If we stay within our time frame of 10-20 years, I think the prime concerns about A.I. are going to be in the realm of biological issues. And everyone agrees that we should try and regulate these. My concern is that it will be hard to make effective regulations. Outside biological consequences, in the long term, of course we need to worry about A.I. and machines learning too much.
In the short term, we have the issue of the disruption of the labor market due to robotics taking over ― not just factory work but also many skilled occupations. I mean routine legal work, medical diagnostics and possibly surgery. Indeed, some of the hardest jobs to mechanize are jobs like gardening and plumbing.
We will have to accept a big redistribution in the way the labor market is deployed. And in order to ensure we don’t develop even more inequality, there has got to be a massive redistribution. The money earned by robots can’t only go to a small elite ― Silicon Valley people, for instance. In my opinion, it should rather be used for the funding of dignified, secure jobs. Preferably in the public sector ― young and old, teaching assistants, gardeners in public parks, custodians and things like that. There is unlimited demand for jobs of that kind.
Görlach: But robots also potentially could take on the work of a nurse, for that matter.
Rees: True, they could do some routine nursing. But I think people prefer real human beings, just as we’ve already seen that the wealthiest people want personal servants rather than automation. I think everyone would like that if they could afford it, and everyone in old age would like to be cared for by a real person.
Görlach: In your opinion, what mental capacities will robots have in the near future?
Rees: I think it will be a long time before they will have the all-round ability of humans. Maybe that will never happen. We don’t know. But what is called generalized machine learning, having been made possible by the ever-increasing number-crunching power of computers, is a genuine big breakthrough. These structures of machine learning are a big leap, and they open up the possibility that machines can really learn a lot about the world. It does raise dangers though, which people may worry about. If these computers were to get out of their box one day, they might pose a considerable threat.
Görlach: In your opinion, what sparks new innovation and ideas? Will A.I. and machines foster these processes?
Rees: Moments of insights are quite rare, sadly. But they do happen, as documented cases suggest (laughs). There is a great saying: “Fortune favors the prepared mind.” You have got to ruminate a lot before you are in a state to have one of these important insights. If you ask when the big advances in scientific understanding happen, they are often triggered by some new observation that in turn was enabled by some new technological advancement. Sometimes that happens just by a combination of people crossing disciplines and bringing new ideas together; sometimes just through luck; sometimes through a special motivation that caused people to focus on some problem; sometimes by people focusing on a new problem that was deemed too difficult previously and therefore didn’t attract attention.
Görlach: Would you say a collective can have an idea or that only individuals have ideas?
Rees: Many ideas may have depended on the collective to even emerge. In soccer, one person may score the key goal. That doesn’t mean the other 10 people on the team are irrelevant. I think a lot of science is very much like that: the strength of a team is crucial to enable one person to score the goal.
Görlach: Do natural sciences and humanities have the capability to tackle the challenges occurring from these transformations?
Rees: The kinds of issues we are addressing in Cambridge involve social sciences as well as natural sciences. As I said before, because of the societal effect, the consequences of a pandemic now could be worse than they were in the past, despite our more advanced medicine. Also, if we are thinking of ecological problems like food shortages, the issue of food distribution is an economic question, as well as a question of what people are ready to eat. All these things involve fully understanding people’s social attitudes. Are we going to be satisfied eating insects for protein?
Görlach: With the rising amount of aggregated data, it becomes increasingly difficult for the humanities to keep up with natural sciences. How can we synchronize the languages of different academic fields in this era of big data?
Rees: Great question! There are impediments caused by disciplinary boundaries, and we have to encourage people to bridge these. I am gratified that we have some young people who are of this kind: philosophers who are into computer science or biologists who are interested in system analysis. All these things are very important. I think here in Cambridge, we are quite well-advantaged because we traditionally have the college system whereby we have small academic groups in each college. Each of these colleges is a microcosm, so all disciplines cross somewhat. It is therefore particularly propitious as a location for the development of cross-disciplinary work.
Görlach: The blessings of modern innovation seem to be ignored by many policymakers; we see a retreat from globalization and a retreat from digitalization. Is it a disconnect between science and the rest of society?
Rees: The misapplication of science is a problem, of course, as is the fact that science’s benefits are unevenly distributed. Some people don’t benefit, such as traditional factory workers. If you look at the welfare of the average blue-collar worker and their income in real terms ― in the U.S. and in Europe ― it has not risen in the last 20 years; in many respects, their welfare has declined. Their jobs are less secure, and there is more unemployment. But there is one aspect in which they are better off: information technology. IT has spread far quicker than expected and has brought advantages to workers in Europe, the U.S. and Africa.
Görlach: But surely globalization made many poor people less poor and a few rich people even richer.
Rees: Sure, I guess this statement can be made after 25 years of globalization. But it should also be addressed that we now witness a significant backlash in many places in terms of Brexit or the presidential election in the U.S.
Görlach: How drastically do you think these developments will affect science, the attitude toward it and its funding?
Rees: Many of the people who use modern information technology, such as cellphones, aren’t aware of the immense technological achievements behind it. These developments can be traced back to scientific innovations made decades ago, funded mainly by the military or the public. People may not be aware of this, but they appreciate it. So it’s unfair to say people are anti-science. They are worried about science because indeed there is a risk that some of these technologies will run ahead faster than we can control and cope with them. So there is reasonable ground for some people to be concerned ― for example, about biotech and A.I.
But we also have to bear in mind that for technology to be developed, it’s necessary ― but not sufficient ― for a certain amount of science to be known. There are areas of technology in which we could have forged ahead faster but haven’t because there was no demand. Take one example: it took only 12 years from the first Sputnik to Neil Armstrong’s small step on the moon ― a huge development in 12 years. The motivation for the Apollo program was political, and it came at huge expense. Or take commercial flying ― today, we fly in much the same way we did 50 years ago, even though in principle we could all be flying in supersonic aircraft.
These are two examples where the technology exists but there hasn’t been a motive ― neither political nor economic ― to advance these technologies as fast as possible. In the case of IT, there was the obvious demand, which exploded globally in an amazing way.
‘There are areas of technology in which we could have forged ahead faster but haven’t because there was no demand.’ ― Lord Martin Rees
Görlach: Living in a so-called post-factual era, what are “facts” to you as a scientist?
Rees: In the United Kingdom, those who voted for Brexit did so for a variety of reasons. Some wanted to give the government a bloody nose; others voted blatantly against their own interest. The workers in South Wales, for example, benefited hugely from the European Union. There is a wide variety of motives, but I don’t think people would say that they voted against technology.
Görlach: Still, there is this ongoing narrative about the fear of globalization and digitalization, and that would also imply the fear of technology.
Rees: Sure, but that is oversimplified. We can have advanced technology on a smaller scale. I don’t think you can say that technology is always correlated with larger-scale globalization. It allows for robotic manufacturing, and it allows for more customization to individual demand. The internet has allowed a lot of small businesses to flourish.
Görlach: But there seems to be an increasing disconnect in many societies regarding the consensus on which facts matter and how facts are perceived.
Rees: To understand this attitude you are expressing, we have to realize that there aren’t many facts that are clear and relevant in their own right. In most cases, I think people have reason to doubt. Most economic predictions, for example, have pretty poor records, so you can’t call them facts.
In the Brexit debate, there were a lot of valid arguments on both sides, and you can’t blame the public for being skeptical. This is also true for the climate debate. It is true that some people deny what is clear. But the details of climate change are very uncertain. Even those who agree on all the science will differ in their attitudes toward the appropriate policy. That depends on other things, including ethics. In a lot of recent debates, people agreed about the science. They disagree about the appropriate policies deriving from those facts. For instance: how much constraint are we willing to exercise, in order to facilitate the life of generations to come? Opinions differ hugely.
‘In the Brexit debate, there were a lot of valid arguments on both sides, and you can’t blame the public for being skeptical.’ ― Lord Martin Rees
Görlach: But how then do you judge the developments we now see in many Western societies?
Rees: I think these developments are partly caused by new technologies that have led to new inequalities. Another point is: even if it hasn’t increased, people are now more aware of inequality. In sub-Saharan Africa, people see the kind of life that we live, and they wonder why they can’t live that kind of life. Twenty-five years ago, they were quite unaware of it. This understandably produces more discontent and embitterment. There is a segment of society, a less-educated one, that feels left behind and unappreciated. That is why I think a huge benefit to society will arise if we have enough redistribution to recreate dignified jobs.
Görlach: What political framework do you think of as an ideal environment for science?
Rees: In the Soviet Union, they had some of the best mathematicians and physicists, partly because the study of those subjects was fostered for military reasons. People in those areas also felt that they had more intellectual freedom, which is why a bigger fraction of the top intellectuals went into math and physics in Soviet Russia than probably anywhere else, before or since. That shows you can have really outstanding scientists surviving in that sort of society.
Görlach: So the ethical implication is not paramount to having “good” science after all?
Rees: I think scientists have a special responsibility to be concerned about the implications of their work. Often an academic scientist can’t predict those implications. The inventors of the laser, for instance, had no idea that their technology would be used for eye surgery and DVD players, but also for weaponry. Among the most impressive scientists I have known are the people who returned to academic pursuits after the end of World War II with relief but remained committed to doing what they could to control the powers they had helped to unleash.
In all cases, the scientists supported the making of the bomb in the context of the time. But they were also concerned about proliferation and arms control. It would have been wrong for them to not be concerned.
To make an analogy: if you have a teenage son, you may not be able to control what he does, but you sure are a poor parent if you don’t care about what he does. Likewise, if you are a scientist, the ideas you create are your offspring, as it were. Though you can’t necessarily control how they will be applied, you nonetheless should care, and you should do all you can to ensure that the ideas you have helped to create are used for the benefit of mankind and not in a damaging manner. This is something that should be instilled in all students. There should be ethics courses as part of all science courses in university.
‘How much constraint are we willing to exercise, in order to facilitate the life of generations to come? Opinions differ hugely.’ ― Lord Martin Rees
Görlach: What, then, is your motivation as a scientist?
Rees: I feel I am very privileged to have consistently, over a career of nearly 40 years now, played a part in debates on topics that I think are writing the history of science in this period. As we make great, collective, scientific progress, we are able to confront new mysteries that we couldn’t even have addressed in the past. Many of the questions that were being addressed when I was young have now been solved, while today’s pressing questions couldn’t even have been posed back then.
Of course the science I do is very remote from any application, but it’s of great fascination and a very wide audience is interested in these questions. It certainly adds to my satisfaction that I can actually convey some of these exciting ideas to a wider public. I would get less satisfaction if I could only talk about my work to a few fellow specialists, so I am glad that these ideas can become part of a broader culture.
Görlach: What is the best idea you ever had?
Rees: I don’t have any sort of singular idea, but I think I have played a role in some of the ideas that have gradually formed over the last 20 or 30 years about how our universe has evolved from a simple beginning to the complex cosmos we see around us that we are a part of. For me, the social part of science is very important ― many ideas emerge out of discussion and cooperation and, of course, out of experiments and observations.
The old idea of the symbiosis between science and technology ― that science eventually leads to an application ― is far too naïve! It goes both ways, because advances in academic science are themselves facilitated by technology. We only moved beyond Aristotle by having much more sensitive detectors and by being able to explore space in many ways. If we didn’t have computers or ways of detecting radiation, we would have made no progress, because we are no wiser than Aristotle was.
Görlach: Lord Rees, thank you very much for your time.
The appropriate reaction is to be happy when people share their good news, right? Like when that friend from graduate school emails to say he’s just won a tenure-track professorship at [insert name of elite institution]. You’re not supposed to think: “Damn, that position should be mine.” Or when you run into a colleague in the corridor who is perky because her book manuscript has just been accepted by a major scholarly publisher, you force a smile but you’re really thinking, “I deserve a contract. What have I got to show for my hard work?”

Any of this sound familiar? To paraphrase John Lennon, maybe you’re just a jealous researcher.
Passion can be great, but not when it comes to jealousy. The word “jealous” implies a profound sense of bitterness toward someone who has something you want — and, perhaps, something you feel you deserve.
Very little has been written on jealousy in academic life, and yet, anecdotal evidence suggests that it is prevalent in our profession. This is unsurprising. As Chronicle readers are well aware, academe today is a place of increasingly precarious employment conditions — where the “publish or perish” mantra is more relevant than ever and the pressure to win grant money has reached fever pitch.
In such an environment, it’s little wonder that jealousy can take hold. I’ve certainly felt my share (and I herewith apologize for privately cursing those of you who got positions and/or book contracts that I wanted). Jealousy may come with the academic turf but that’s rarely a good thing. So what can we do to better manage our envy at all stages of the academic career?
Accept that you’re jealous. Don’t blame your envious feelings on a bad day or a bad week (though you may well be having one or both of those). Most important, don’t pretend those feelings aren’t there, and that everything’s fine.
Just say the words — “I’m jealous” — and you’ll be in a position to tackle the rest of these suggestions.
Recognize the state of the industry. Academe is competitive. That is as true in Australia (where I’m based) as it is everywhere else. Hundreds of seriously bright, seriously hard-working and seriously determined scholars can apply for a single position. The days of graduate students completing a doctorate and walking into a tenure-track position without working up a sweat are gone. (Did they ever exist?)
And let’s face it, academe has never been a level playing field. Consider factors such as nepotism and ideological bias. Consider, too, the myriad forms of discrimination (sexism, racism, homophobia, ableism). The mere existence of campaigns such as “Why is my curriculum white?” (in Britain) suggests that even the more “progressive” parts of the academy still have some serious work to do.
Recognizing that academe is competitive and riddled with inequalities won’t (and shouldn’t!) make you smile. Nevertheless, it should help alleviate any fixation you might have on a person or persons who’ve reaped the rewards that you so fervently desire. You’re one of countless people around the world who are trying to develop (or enhance) their research careers — there’s not just one or two of us.
Focus on you. Seems obvious, right? We all know that progress in your academic career is all about putting your best foot forward. You constantly need to argue why you are the best person for this professorship, or why your project should receive funding above all others.
Remind yourself regularly: It’s unhelpful to direct resentment, envy, and anxiety at someone who has achieved some measure of success that you desire. When you do that, your energies — the ones that you could be putting into your own research and career progression — are instead being channeled into that other person. You’re focused on what they’ve done, what they’ve achieved, when what you really need to do is figure out the steps you need to take to get where you want to go.
Protect your reputation. You’re not putting your best foot forward when you rant to other researchers about how Professor X got the promotion that should have come your way.
At best, that kind of talk can paint you as sour and uncollegial. At worst, it can make you seem unprofessional and worth avoiding. I once heard a colleague indulge in just such a rant and I emerged from it thinking: “Well, what’s that person saying about me behind my back? Why would I want to work with them?”
If you do genuinely feel the need to voice your grievances, perhaps speak with a mental-health professional. I say that without a shred of sarcasm. Your mental health is paramount, plus confidentiality is assured.
Develop interests outside the academy. I almost screamed when my therapist made that recommendation. Did he realise how busy I was — what with writing lectures, marking term papers, answering student queries, producing job applications? How could I have any kind of life outside the university?
Yes, all of those activities take time. Plenty of time. Yes, they must be undertaken if you hope to avoid dropping off the scholarly radar.
That said, developing interests outside the seminar rooms and grant-writing workshops can help prevent you from becoming too entangled in the daily grind of scholarly life. Outside interests can help alleviate the strain and tension that can manifest in emotions such as jealousy.
And so, I took my therapist’s advice and purchased a gym membership. I assure you, there’s nothing like sweating it out on the leg press to temporarily block out those multiple deadlines and the heavy teaching week. Stressed about that seminar paper? Work it out on whatever hobby allows you to step outside of your academic persona.
Given that our industry is increasingly volatile, jealousy is an almost inevitable aspect of academic life. But we don’t have to let it overwhelm us.
Jay Daniel Thompson is a cultural-studies researcher who teaches at the University of Melbourne and at Victoria University in Australia. He is also a freelance writer, editor, and blogger.
Outraged headlines erupted when students launched a campaign to challenge the great western philosophers. We went to the source of dissent – London’s School of Oriental and African Studies – to investigate
“They Kant be serious!”, spluttered the Daily Mail headline in its most McEnroe-ish tone. “PC students demand white philosophers including Plato and Descartes be dropped from university syllabus”. “Great thinkers too male and pale, students declare”, trumpeted the Times. The Telegraph, too, was outraged: “They are said to be the founding fathers of western philosophy, whose ideas underpin civilised society. But students at a prestigious London university are demanding that figures such as Plato, Descartes and Immanuel Kant should be largely dropped from the curriculum because they are white.”
The prestigious London university was the School of Oriental and African Studies (Soas). It hit the headlines last month when journalists discovered that students, backed by many of their lecturers, had set up a campaign to “Decolonise Our Minds” by transforming the curriculum. So shocking did the idea seem of a British university refusing to teach Plato, Locke or Kant that the story was picked up by newspapers across the globe. BBC2’s Newsnight debated whether “universities should eschew western philosophers”. This predictably generated more outraged headlines when one of the guests, sociologist Kehinde Andrews, denounced Soas as a “white institution” and the Enlightenment as “racist”.
For academics and students at Soas, the press coverage itself is the cause of outrage. “When the report came out that we were trying to take white men off the table, it was just bewildering because we had no intention of doing that,” says Sian Hawthorne, a convenor of the undergraduate course World Philosophies, the only philosophy degree that Soas provides. “Our courses are intimately engaged with European thought.”
“We’re not trying to exclude European thinkers,” says a second-year doctoral student, and a member of the Decolonising Our Minds group. “We’re trying to desacralise European thinkers, stopping them from being treated as unquestionable. What we are doing is quite reasonable.”
So what is the truth behind the headlines? Will philosophy students at Soas really not be taught Aristotle and Kant? Do the students and academics have a point that the curriculum is “too white”? And what should be the place of European philosophy, and European philosophers, in an age of globalisation and of a shifting power balance from west to east?
I went to Soas to talk to students and academics. “That’s the one thing,” one student told me, “that no journalist has so far done.”
The School of Oriental and African Studies was founded in 1916 “to secure the running of the British Empire”, as historian Ian Brown puts it in his history of the institution. Its aim was to provide “instruction to colonial administrators, commercial managers, and military officers, but also to missionaries, doctors and teachers”. Soas taught them the local languages as well as providing “an authoritative introduction to the customs, religions, laws of the people whom they were to govern”.
Today, of the more than 6,000 students at Soas, almost half come from abroad, from 130 countries, and more than half are black or minority ethnic. Far from teaching students how to administer the empire, the school now helps develop independent, postcolonial societies. It sees its mission also as providing a critique of empire, and of its continuing legacies, a view that extends to the very top of Soas management. “Our minds are colonised, absolutely,” says Deborah Johnston. Johnston is no student, nor even a mere academic, but the pro-director of learning and teaching, one of the most senior management figures at Soas. She continues: ‘‘In most UK universities there has been a dominance of European thought. That’s why we need to do work to decolonise the curriculum, and our minds.”
For some, such views emanating from the very top of the institution entrench the belief that, in the words of an academic at another London college, “Soas is the most politicised of British universities”. Others, however, see the problem not as one of an institution that is too politicised but as one that has not yet rid itself of the ghosts of empire. The curriculum, such critics claim, is still too rooted in a colonial view of the world, too stuffed with European thinkers, and too blind to African, Asian and Latin American thinkers.
Neelam Chhara is a third-year politics student at Soas, and the Student Union officer for “equality and liberation”. “On my course in political theory,” she says, “we discussed 26 thinkers. Just two were non-European – Frantz Fanon and Gandhi.”
Such “frustrations with our curriculum” led students to set up the Decolonising Our Minds group. “We thought: why not show what an alternative curriculum could look like by hosting thinkers and academics that didn’t centre on Europe like our curriculum was doing.”
Meera Sabaratnam laughs when I tell her about Chhara’s reading list. “That’s two more non-Europeans than when I was taught political theory in my undergraduate PPE at Oxford.” Sabaratnam is a lecturer in international relations at Soas. As an institution, it is, she says, much better than most universities. For instance, 39% of academic staff are of black or minority ethnic background – more than three times the figure for British universities overall. Nevertheless, she supports the Decolonising Our Minds campaign. “It is necessary to talk about colonial legacies and to look at how colonialism and racism impact the institution.”
The argument for a more diverse curriculum seems reasonable, indeed unquestionable. After all, philosophers and thinkers come not just from Europe. There are great non-European intellectual traditions, myriad philosophical schools from China, India, Africa and the Muslim world, many of which have shaped European philosophy. Three years ago I wrote a book on the global history of ethics, called The Quest for a Moral Compass, which drew not just on European philosophers, but also on the works of Mo Tzu and Zhu Xi, Ibn Rushd and Ibn Sina, Anton Wilhelm Amo and Frantz Fanon, Sarvepalli Radhakrishnan and Fung Yu Lan. All these different thinkers, I wanted to show, can be woven into a single but complex narrative through which we can rethink global history.
And yet, the debate about a “diverse curriculum” is not as straightforward as one might imagine. Few would contest the idea that European thinkers should not be on the curriculum simply because they are European. But of the major European philosophers that often dominate reading lists – such as Plato, Aristotle, Descartes, Locke, Hobbes, Kant, Rousseau, Nietzsche, Arendt or Sartre – how many are there simply because they are European rather than because their ideas merit study?
Sabaratnam acknowledges the problem. “Framing a course is primarily about content: what are the issues that need to be taught, and who can speak interestingly about those issues? How many European thinkers you include and the balance between European and non-European thinkers is an academic decision. If you want to understand political theory, you can’t avoid engagement with Kant, Hegel and so on.”
“But,” she adds, “that can’t be the be-all-and- end-all.” There has, she insists, “to be a parallel debate about diversity and representation. There is value in having non-European thinkers and women on those reading lists.”
If European thinkers should not be on reading lists simply because they are European, should non-Europeans be included just because they are non-European, solely for the value of increased diversity? Kwame Anthony Appiah, professor of philosophy and law at New York University, and last year’s Reith lecturer on Radio 4, is sceptical. He teaches a course on global ethics, which includes European, Chinese, Arab and Indian thinkers. The key question for him, however, is not “Is the curriculum sufficiently diverse?” but “Is any particular thinker worth studying?”
“If they were uninteresting or unimportant,” he observes, “it would not be much of a defence to say, ‘They are Arab or Chinese and make the course more diverse.’”
The difficulties in thinking about a diverse curriculum can be seen in the founding statement of the Decolonising Our Minds campaign. It does not say: “We need to expand our curriculum to include philosophers from across the globe”. Rather, it insists (under the heading “Decolonising Soas: Confronting the White Institution”) that, “If white philosophers are required, then to teach their work from a critical viewpoint.” This suggests that not having white philosophers should be the default position. This might not quite be “students demanding white philosophers be dropped from university syllabus”, as the newspapers claimed, but it’s not that far off.
“When you put it to me like that,” says Sian Hawthorne, “yes, I think that is problematic. However, I take a more generous reading of that statement as saying whomever is taught, whoever’s work is drawn on, it must always be dealt with critically. That is one of the first principles of a university education.”
The students themselves told me that they had not realised what the statement actually said, and would change it.
Do we need to be particularly critical of white philosophers, I asked Hawthorne. Yes, she replied, because “whiteness has been engaged in perpetuating forms of oppression and marginalisation and exclusion”. Does she think that all European philosophy is tainted by racism and colonialism? “Yes. There’s plenty of evidence to demonstrate this.”
But by insisting that the work of all white philosophers, from Aristotle to Arendt, from Socrates to Sartre, should be seen as tainted by racism, is she not confusing ideas and identity? Is she not falling into the same trap as racists, suggesting that because one possesses a particular identity, so one’s ideas are necessarily distinct, and linked to that identity? A philosopher is white so his or her ideas are contaminated.
Hawthorne rejects the criticism, and uses as an analogy the way that academics look upon the work of the German philosopher Martin Heidegger. Heidegger was one of the most influential 20th-century philosophers, having shaped the ideas of a host of thinkers such as Hannah Arendt, Jean-Paul Sartre and Jacques Derrida. He was also a Nazi with repulsively antisemitic views. The discovery of Heidegger’s nazism and antisemitism has led to much debate about how to treat his philosophical ideas.
“Do we deal with Heidegger?” asks Hawthorne. “I think we must. But we must do so in the understanding that he was a Nazi. We don’t not read his texts. But we read them carefully. That should also be the case with white philosophers. Just because they’re white doesn’t mean that they’re written off. But we need to be careful.”
This, though, is a false analogy. What concerns many about Heidegger is not his skin colour or his identity but his political views. Asking whether Heidegger’s Nazi views should affect the way that we understand his philosophical ideas is different from insisting that, because Aristotle or Kant or Arendt were white, we should be careful in the way we read their writings.
“Whiteness is not a useful category when talking of philosophy,” says Appiah. “When people speak, they speak ideas, not identity. The truth value of what you say is not indexed to your identity. If you’re making a bad argument, it’s a bad argument. It’s not bad because of the identity of the person making it.”
Perhaps the fiercest debate about European thought emerges in the battle over the Enlightenment, that sprawling intellectual, cultural and social movement that spread through Europe during the late 17th and 18th centuries, and was the harbinger of intellectual modernity. There is no period of history that has been more analysed, celebrated and disparaged. Unlike, say, the Renaissance or the Reformation, the Enlightenment is not simply a historical moment but one through which debates about the contemporary world are played out. From the role of science to the war on terror, from free speech to racism, there are few contemporary debates that do not engage with the Enlightenment, or at least with what we imagine the Enlightenment to have been. Inevitably, then, what we imagine the Enlightenment to have been has become a historical battleground.
“It’s become familiar to think of the Enlightenment as special,” Hawthorne suggests, “because it’s a constitutive narrative for how the west understands itself.” The Enlightenment, in her view, provides a myth, a creation story, that the west tells itself about what makes it more civilised and the rest of the world more barbaric.
Yet, for much of the past two centuries, the Enlightenment was seen as central to the values of the left, and of those challenging western imperialism and injustice. As the late Marxist historian Eric Hobsbawm put it, “All progressive, rationalist and humanist ideologies are implicit in it, and indeed come out of it.”
More recently, however, many on the left have argued that the Enlightenment, far from being a resource for those challenging colonialism, is itself a colonial project. Enlightenment universalism, such critics argue, is racist because it seeks to impose western ideas of rationality and objectivity on other peoples. “The universalising discourses of modern Europe and the United States,” Edward Said argued in his book Culture and Imperialism, “assume the silence, willing or otherwise, of the non-European world.” It is an argument central to the Soas campaign.
Soas academics and students argue that Enlightenment thinkers had a highly restricted notion of freedom; freedom as “the property of propertied white men”, as Meera Sabaratnam puts it. John Locke is widely regarded as having provided the philosophical foundations of modern liberal conceptions of tolerance. Yet he was a shareholder in a slaving company. Immanuel Kant, often seen as the greatest of Enlightenment philosophers, clung to a belief in a racial hierarchy, insisting that “Humanity is at its greatest perfection in the race of the whites” and that “the African and the Hindu appear to be incapable of moral maturity”.
“Enlightenment philosophers make arguments about knowledge and reason setting us free, and laud the values of liberty,” Hawthorne observes, “at the very moment that colonial enterprises and the slave trade are expanding. Those very same arguments are summoned to justify Europe’s so-called civilising mission and make claims about European superiority.”
The British historian Jonathan Israel, now professor of modern European history at Princeton University, is perhaps the most important contemporary scholar of the Enlightenment. Over the past decade he has published an extraordinary trilogy of books, Radical Enlightenment, Enlightenment Contested and Democratic Enlightenment. The size of Israel’s labours is eye-catching. Each work in the trilogy runs to almost 1,000 pages; in total there must be close to 2m words here. There are few who better understand the Enlightenment.
Like many before him, Israel lauds the Enlightenment as that transformative period when Europe shifted from being a culture “based on a largely shared core of faith, tradition and authority” to one in which “everything, no matter how fundamental or deeply rooted, was questioned in the light of philosophical reason”. Yet, Israel is also deeply critical. At the heart of his argument is the insistence that there were actually two Enlightenments. The mainstream Enlightenment of Locke, Voltaire, Kant and Hume is the one of which we know, and of which most historians have written. But it was the Radical Enlightenment, shaped by lesser-known figures such as d’Holbach, Diderot, Condorcet and, in particular, the Dutch philosopher Baruch Spinoza, that provided the Enlightenment’s heart and soul.
The two Enlightenments, Israel suggests, divided on the question of whether reason reigned supreme in human affairs, as the Radicals insisted, or whether reason had to be limited by faith and tradition – the view of the mainstream. The mainstream’s intellectual timidity constrained its critique of old social forms and beliefs. By contrast, the Radical Enlightenment “rejected all compromise with the past and sought to sweep away existing structures entirely”.
I talked to Israel about the Soas debate. The argument that the Enlightenment is racist, he suggests, comes from a one-eyed view, the selective picking and choosing of certain individuals and quotes. Such critics see only the more conservative mainstream figures, such as Locke, Kant and Hume, and ignore the thinkers of the Radical Enlightenment, an approach that Israel calls “seriously obtuse”. The Radical Enlightenment, he observes, “was condemned by all European governments and by all churches, because in principle it insisted on the universal and equal rights of men and the full emancipation of the black population”.
In 1770 a remarkable polemic against colonialism and slavery called Histoire philosophique des deux Indes (The Philosophical History of the Two Indies) was published. Written by a number of Radical thinkers including Raynal, Diderot and d’Holbach, it was both a study of Europe’s relations with the East Indies and the New World and an encyclopedia of anti-colonialism. Arguing that “natural liberty is the right which nature has given to everyone to dispose of himself according to his will”, the book both prophesied and defended the revolutionary overthrow of slavery: “The negroes only want a chief, sufficiently courageous to lead them to vengeance and slaughter… Where is the new Spartacus?”
The Histoire was astonishingly successful, published in more than 50 editions in at least five languages over the following 30 years. But it was only one of many such radical tracts, including d’Holbach’s Système social, Tom Paine’s Rights of Man, and the works of Condorcet and Diderot. “This current,” Israel argues, “was totally at odds with all forms of imperialism, colonialism and racial discrimination or prejudice.”
The Radical Enlightenment was “without question the starting point for the anti-colonialism of our time”. In Israel’s view, what he calls the “package of basic values” that defines modernity – toleration, personal freedom, democracy, racial equality, sexual emancipation and the universal right to knowledge – derives principally from the claims of the Radical Enlightenment.
Israel is sympathetic to the demand that university curricula be diversified. “There is a strong case for studying non-European traditions as an essential part of any philosophy teaching course.” But, he points out, such a global view began in the Radical Enlightenment itself. “Many radical enlighteners believed their anti-Christian naturalism had powerful roots in medieval Islamic philosophy. They also had strong affinities with Chinese Confucianism. They were free of the Eurocentrism that marked the mainstream Enlightenment of Voltaire, Montesquieu, Hume and Smith.”
“I wouldn’t want to go up against Jonathan Israel,” laughs Sian Hawthorne. “He is probably the foremost thinker on the Enlightenment. All I would say in response is that there is no single thing that you can point to and say ‘That’s the Enlightenment’.”
That, however, is a view that fits more comfortably with Israel’s notions of the two Enlightenments, the mainstream and the Radical, than it does with the claim that “the Enlightenment is racist”.
Hawthorne is right, however, to point to Locke’s failure to challenge slavery and to Kant’s racial anthropology. Such views do seem shocking today. But they seem shocking because of the transformation in consciousness brought about in large part by the Enlightenment itself. In most societies and traditions, European and non-European, the kind of ethnocentrism expressed by many mainstream Enlightenment thinkers was the norm. The Enlightenment helped change that. “I don’t know where you’d get the powerful tools for criticising European colonialism if you did not have the Enlightenment,” observes Appiah. “The modern idea of equality, the modern critique of inequality – much of the materials for that idea and for that critique come from that period.”
One does not have to rely on historians like Israel or philosophers like Appiah to make that point. It was made also by the very people who suffered under the yoke of European colonialism and sought to cast it off.
Today, most people know of the French and American revolutions, two great social tumults whose reverberations we still feel. Few know of the other great revolution of the 18th century – the one in Haiti that began in 1791 and culminated with independence in 1804.
In 1791, a mass insurrection broke out among Haiti’s slaves, upon whose labour France had transformed Saint-Domingue, as it called its colony, into the richest island in the world. It was an insurrection that turned into a revolution, a revolution that defeated the three greatest armies of the age – the French, British and Spanish – to become the first successful slave revolt in history, a revolution that was to shape history almost as deeply as those of 1776 and 1789.
The slaves were led by Toussaint L’Ouverture, a self-educated former slave, deeply read, highly politicised and possessed of a genius for military tactics and strategy. He was the “Spartacus” for whom the European radicals who wrote the Histoire philosophique des deux Indes had pined.
Toussaint’s greatest gift, perhaps, was his ability to see that while Europe was responsible for the enslavement of blacks, nevertheless within European culture lay also the political and moral ideas with which to shatter the bonds of enslavement. The French bourgeoisie might have tried to deny the mass of humanity the ideals embodied in the Declaration of the Rights of Man. But Toussaint recognised in those ideals a weapon more powerful than any sword or musket or cannon.
From Toussaint L’Ouverture to Nelson Mandela, for two centuries those battling against European power and racial oppression looked to the Enlightenment ideals as the fuel for their struggles. Today, most of those struggles and movements have disappeared. As a result the meanings of “radicalism” and “decolonisation” have withered, and come to mean something very different and much more tame than they did half a century or a century ago. Shorn of the social movements that gave Enlightenment values their radical edge, those values have lost much of their meaning. That today so many should so easily dismiss the Enlightenment in the name of “decolonisation” tells us more about the shaky foundations of contemporary radicalism than it does about the Enlightenment.
The one word that Sian Hawthorne returns to again and again is “dialogue”. “We’re not used to seeing the world as the world. We keep cutting things up and segmenting them. Too often we don’t see the entanglements between European and non-European philosophies. What’s missing is dialogue.”
“Dialogue” is one of those words, like “diversity”, that can mean all things to all people. It is often used to describe shallow, skating-on-the-surface conversations which give the impression of an exchange but which touch upon nothing substantive. It can also mean proper, dig-deep contestations through which we test each other’s ideas and in which we show ourselves willing to be uncomfortable as we ourselves are tested. In universities, and in society at large, there is today too little of the latter and too much of the former; too little real engagement and too great a desire to stay within our comfort zones.
There is much on which I disagree with the Decolonising Our Minds approach. I disagree with its concept of “whiteness”, with the characterisation of the Enlightenment as “racist”, with the understanding of what “European thought” constitutes, with what it means to “decolonise”. What I admire, though, is the openness to have this debate, and to engage in the kinds of conversations I had with both students and academics. I spent an afternoon discussing, debating and disagreeing with Meera Sabaratnam. At the end, she said: “The discussion that we’re having now is exactly the kind of discussion that it should be possible to have at universities.” On that, I could not agree more.
A different philosophy: six key texts
1. Mo Tzu, Basic Writings
(Columbia University Press)
Most people know of Confucius. They should know of Mo Tzu. Though he lived a century after Confucius, he has a claim to be China’s first true philosopher. Unlike Confucius, Mo Tzu engaged in an explicit reflective search for moral standards and gave tightly reasoned arguments for his views. He defended a universalist vision, arguing that the moral interests of strangers are as important as those of our tribe. He proposed a form of what we now call “consequentialism”, the idea that an act should be judged primarily by its effects, which was remarkably sophisticated for its time. The conservatism of Confucianism, and its cultivation of the moral character necessary to rule, to administer and to follow, won the favour of the Chinese state. The radicalism of Mo Tzu was forgotten and suppressed. Only fragments of his writing remain.
2. Ibn Rushd, The Decisive Treatise
(University of Chicago Press)
The Andalusian Muslim Ibn Rushd (1126-1198), often known in the west as Averroes, was the last of the great classical Islamic philosophers. Through his commentaries on Aristotle, he became more influential on western philosophy than on Islamic thought. Central to Ibn Rushd’s work was the relationship between philosophy and religion and the insistence on the compatibility of reason and faith. Perhaps his two most important works are The Incoherence of the Incoherence and The Decisive Treatise. The first is a response to the great theologian al-Ghazali and his attack on reason in his book The Incoherence of the Philosophers. The second is a defence of the role of reason in a community of faith, in which Ibn Rushd argues that it is God who commands humans to employ reason and not just faith.
3. Abu’l ‘Ala al-Ma’arri, The Book of al-Ma’arri
(New Humanity Books, 2015)
Today, we have become used to thinking of the Islamic world as insular, hostile to reason and freethinking and with a single, unquestioned view of God and the Qur’an. But in the first half-millennium of its existence, especially during the Abbasid period (750-1258), there was within the Islamic empire an extraordinary flourishing of philosophical debate and of freethinking. The most important of the freethinkers was Abu’l ’Ala al-Ma’arri, an 11th-century Syrian poet and philosopher, renowned for his unflinching religious scepticism:
“They all err – Muslims, Jews,
Christians, and Zoroastrians:
Humanity follows two world-wide sects:
One, man intelligent without religion,
The second, religious without intellect.” [from The Two Universal Sects]
There are very few English translations of his work. There is NYU Press’s recently published edition of his Epistle of Forgiveness (sometimes compared to Dante’s Inferno) and this short selection of his poetry.
4. Jonathan Israel, Radical Enlightenment
(Oxford University Press)
A groundbreaking study of the “other Enlightenment”, not the Enlightenment of Locke, Hume, Voltaire and Kant, but that of Spinoza, Bayle, d’Holbach and Diderot, a half-underground movement whose radicalism, according to Israel, has deeply shaped modern conceptions of freedom, liberty, equality and tolerance.
5. CLR James, The Black Jacobins
Trinidadian CLR James was one of those towering figures of the 20th century who is all too rarely recognised as such. Novelist and orator, philosopher and cricket lover, historian and revolutionary, Pan-Africanist and Trotskyist – few modern figures can match his intellectual depth, cultural breadth or sheer political contrariness. The Black Jacobins tells the story of the Haitian revolution and of its tragically flawed leader, Toussaint L’Ouverture. Decades before historians such as EP Thompson began producing “history from below”, CLR James told of how the slaves of Haiti had not been passive victims of their oppression but active agents in their own emancipation. It is a work of biography and social history, not of philosophy, but central to the narrative is the importance of ideas, especially the ideas of the Enlightenment, as weapons for social transformation.
6. Frantz Fanon, The Wretched of the Earth
(Penguin Modern Classics)
A classic of the anti-colonial struggle, The Wretched of the Earth has since become the bible of postcolonial literature. Fanon’s admirers see him as giving succour to the view that European thought is destructive of non-European peoples and cultures. His critics focus on his celebration of violence as redemptive. Fanon’s work is in fact more subtle than either side allows. Born in Martinique, Frantz Fanon was a psychiatrist and revolutionary and a key figure in the Algerian struggle for independence. The Wretched of the Earth was written when he was dying of leukaemia and is a searing indictment of the dehumanising trauma of colonialism on the colonised individual, culture and nation. KM
I’ve been thinking a lot recently about how the social sciences are proving too slow to catch up with developments in digital technology. This means that engagements with new possibilities are often piecemeal and ad hoc, pushing the threshold of innovation in methods while methodological and theoretical discussion lags further behind. We see changes at the level of platforms, infrastructures, devices and practices which allow new techniques to be developed, but discussion of the implications of these techniques – how we should understand what they’re doing and how they fit with older and more established techniques – struggles to catch up.

I’ve argued that the reasons for this lie largely in the structure of scholarly communication. The proliferation of publications – an estimated 28,100 journals publishing 2.5 million articles a year – encourages specialisation in both writing and reading. I’ve watched this happen at first hand with the asexuality literature, which has grown from literally a handful of articles seven or eight years ago into a topic that would now have to be my primary focus if I were to ‘keep up with the literature’. This is a microcosm of a much broader trend which I think it’s important for us to understand.
This is compounded by the norm of much longer articles in many social science disciplines compared with scientific reports. The imperative to ‘keep up with the literature’ militates against exploration and experimentation, while established forms of social scientific writing make it difficult to get important technical details into substantive articles in mainstream disciplinary journals. Furthermore, publication is slow, and this compounds the inter-journal competition that inculcates intellectual conservatism all round: it discourages epistemic risk-taking on the part of those seeking to be published in the highest-status journals, and encourages instrumental strategies from lower-status journals seeking to raise their impact factors. The more that’s published, the more fiercely markers of prestige are fought over to ensure that one’s intellectual wares stand out in a crowded marketplace.
Established structures of scholarly communication engender slowness in catching up with technical developments. Is the solution, therefore, to find structures which facilitate faster communication? Two obvious examples stand out here: open science practices and social media. It’s surely a positive thing that open science is becoming better established within the social sciences, with journals like Big Data & Society requiring authors to publish datasets, and the self-archiving of pre-prints becoming an established practice, now mandated in the UK in the case of papers. Likewise, I think it’s a good thing that social media has been taken up by so many social scientists. It reduces the opportunity costs of exploring outside one’s own area: it’s much less onerous to follow a data science blog than it is to keep up with the latest data science papers. As a corollary, it also makes it easier to form connections outside one’s own circles, both by providing things in common to talk about and simply by making contact with these people possible in the first place.
But the idea that these practices would fix the problems of scholarly communication appears to me to rest on a fallacy: that ‘slow’ communication is problematic because it entails friction and lag in what would otherwise be ‘fast’ communication. If we break down the barriers, will everything flow more freely and these seemingly intractable problems be solved? There’s a rich imagery of ‘fast’ and ‘slow’, ‘open’ and ‘closed’ lurking in the background here, one we need to be critical of on a political level. But in a more prosaic sense, I think it straightforwardly distracts from the fact that the problem of slow scholarship isn’t simply a structural matter – it is not the case that the established system of scholarly communication (aided and abetted by the incentive structures of the contemporary academy) moulds academics to be ‘slow’, such that if we ‘hack the system’ it might then mould them to be ‘fast’.
Under present conditions, I can see how ‘open science’ might lead to all sorts of new pathologies, particularly if the transition from ‘filter then publish’ to ‘publish then filter’ is tied up with the commercial logic of platforms like Academia.edu, Mendeley and now SSRN. If the monetisation of these platforms depends on user attention and user data, it stands to reason that engineering strategies which maximise both will become a commercial imperative, if they haven’t already (and we shouldn’t underestimate how long tech companies can be propped up with capital while making zero profit). The proposition, entirely reasonable in itself, that non-traditional forms of influence should be incorporated into scholarly metrics is likely to compound such a move, naturalising the algorithmic black boxes of social media and open science platforms and creating new forms of prestige for fast scholars.
These mechanisms might not dominate the platforms, but the idea of fast, free, open scholarly communication allowing a million flowers to bloom away from the disciplinary structures of the contemporary academy is a dangerous illusion. It represents the common sense of the ‘market’ – the epistemic superiority of the crowd – creeping into how we view scholarship. We need to be profoundly critical of how attention, reward and hierarchy work on these new platforms, without jettisoning their affordances entirely in our rush to critique. I’m not saying we shouldn’t use social media, only that we shouldn’t culturally embrace it as the superior ‘new’ in relation to the inferior ‘old’. It should be both/and rather than either/or. This will be much harder, I think, if we continue to think in terms of ‘fast’ and ‘slow’, at least as an abstract dichotomy applied to complex systems.
Nonetheless, I do think we need somehow to hack the structures of scholarly communication if the social sciences are to keep up with emerging technologies in anything more than a narrowly technique-driven way. But rather than ‘fast’ and ‘slow’, we should perhaps see this in terms of ‘collaborative’ and ‘atomised’: resisting the algorithmic incentives of platforms while embracing the affordances they offer for new forms of working together, even within the constraining structures of the accelerated academy.