
Table of Contents:

Preface …………………………. the future is a foreign country too

Introduction …………………. I wouldn’t start from here…

Prologue ……………………….. beware the naysayers!

Part I ……………………………………….. bones of contention: who do we think we are?

Chapter One ………………….. aimless weather: why I’m no longer an atheist

Chapter Two ………………….. the stuff of dreams: why I’m no longer a humanist

Chapter Three ………………… apes of wrath?

Part II ……………………………………… regimes of the old malarkey: what in the world are we doing?

Chapter Four …………………. keep taking the tablets

Chapter Five …………………. roll up the red carpet

Chapter Six …………………… all work and no play

Chapter Seven ……………….. lessons in nonsense

Chapter Eight ………………… the unreal thing

Chapter Nine …………………. the price of everything

Chapter Ten …………………… the clouds of not knowing

Preface: the future is a foreign country too

“I’m a pessimist because of intelligence, but an optimist because of will.”
— Antonio Gramsci

It was summer 2006 when I last properly travelled. Disembarking first in Athens and then, a few months later, in Beijing and Mumbai, I spent the summer months in three very different countries. Once great civilisations of the ancient world, all were now facing momentous turnabouts. One was about to enter a shattering era of decline (not that this was evident fifteen years ago), while conversely the others were at the start of an historic upturn: a pair of tigers recovering their strength after a long slumber and readied for a new ascendancy on the world stage. So one thought I carried with me was which of the two would make for the less objectionable future superpower. It was not a happy question, of course, since like many people I’d rather there were no superpowers at all, but we also have to be realistic.

My visits to these extraordinary countries of the East had been an unforgettable experience. We had journeyed across landscapes of inexpressible beauty, and visited some of the world’s most ancient temples, palaces and mausoleums. I’d eaten often strange but mostly very delicious food, and delighted in so many other oddities of two distinctive and complicated cultures. For all these positives, however, both stays had also perturbed me greatly and in unexpected ways.

I had, for instance, fully anticipated that China, being a one-party and (notionally at least) Communist state, would be quite evidently so, with a highly visible police and military presence, and a population fearful that careless words might lead to sudden arrest and ‘re-education’ behind the razor-wire of some distant internment camp. Thousands of Chinese dissidents are indeed dealt with by such brutal tactics 1, as the Chinese people I met were well aware. Not that China is alone in operating secret or semi-secret detention centres, aka “black sites”, for those who are in effect political prisoners. Gitmo at Guantánamo Bay bears the proud motto “honor bound to protect freedom”: ostensibly protecting American freedom by incarcerating without trial, in violation of human rights and international law, those the authorities deem a threat. In China, I was occasionally informed, all those interned at the ‘re-education camps’ are extremist members of a dangerous religion called Falun Gong. For Falun Gong we might read “terror suspect”. And doubtless a few of those who guard the Chinese black sites feel “honour bound” too.

The Chinese state is certainly repressive but not overtly so. Greatly to my surprise, my host and his friends spoke freely not just in private but (more surprisingly) in public too; our conversations regularly straying off into the perilous waters of politics, economics, and the rights and wrongs of Chairman Mao. One evening I also spoke with this little group of friends about (what we call) the massacre at Tiananmen Square, and though each was personally too young to remember the events, all agreed that the story reported in the West was a distortion. The students had attacked the army first, they told me in turn, and the soldiers were forced to defend themselves. To underline this point it was my host who reminded me of “the tank man”, that incredibly brave soul who directly confronted an entire column of tanks of the People’s Liberation Army. Apparently the Chinese had watched the same footage (although not in its entirety, I presume). The soldiers were just trying to go around him, my host explained, with the others nodding agreement… but then, as we know, half truth is untruth.

As in the West, the Chinese bourgeoisie seemed unduly trusting of their government; reluctant to protest against the excesses of their own authorities not principally out of fear, but more straightforwardly because their own lives are rather comfortable and contented. The government enjoys eudaimonistic legitimacy: conditions in China being very good, historically considered, and certainly for those lucky enough to move within the comparatively affluent circles of the Chinese middle classes – and my friends’ families were all within the lower echelons of that circle. The extremes are invisible: the hardships of the sweatshop workers and the worst of the slums hidden away; the heavily polluted industrial centres also well off the tourist trails; and regions where dissent is most concentrated, such as Tibet, strictly off-limits to nearly everyone.

The big giveaway came only after I’d arrived at the border, crossing from the mainland into Hong Kong and suddenly held up by long queues at the checkpoints. It was here that I spoke with an English couple who were leaving after a commercial visit to the nearby electronics factories. They told me they were both delighted to be leaving, completely dismayed by what they had witnessed: fourteen-year-old girls on production lines working sixteen hours for ten dollars a day. When I asked why the British firm they represented didn’t buy their components more locally, they shook their heads and told me that it’s impossible to compete. And the queues at the checkpoint? Necessary precautions to hold back a flood of Chinese refugees who were desperate to join us.

India was a totally different story. In India the privation and misery are never very far beyond the hotel door; they are ubiquitous. So the most deeply shocking revelation about India (a revelation for me at least) was how an upwardly mobile and already affluent few are able to look right past the everyday squalor: as unmindful as apathetic to its overwhelming ugliness and stench.

If I may briefly compare India to Tanzania, the immediate difference was an alarming one. For modern India is, in countless ways, a comparatively wealthy nation with a growing middle class, a great many of whom are already earning considerably more money than I ever will, whereas Tanzania remains one of the poorest nations on earth. Yet, leaving aside the similarities in terms of the obvious lack of infrastructural investment (which is bizarre enough given the gaping economic disparity), there was, at least as I perceived it, a greater level of equality in Tanzania: equality which made the abject poverty appear less shocking (after a while at least) if no less degrading. So India sickened me in a way that Tanzania had not, remaining as she does more ‘Third World’ than one of the poorest and most ‘underdeveloped’ nations on earth.2

The overriding lessons from these journeys were therefore twofold. As a traveller to China, I had been greeted and treated quite differently to those who visited the former Eastern Bloc countries. No doubt thousands of undercover spies exist, but in general this modern Chinese totalitarianism is slicker and more quietly efficient: the cogs of a police state meshing and moving but barely visible and mostly unheard. So China revealed how authoritarian rule can be installed and maintained with comparatively little in the way of outward signs. For instance, I saw fewer CCTV cameras in Tiananmen Square than I would have expected to find in Trafalgar Square.3 And on our many journeys across the country, we encountered no roadblocks or random checkpoints. Indeed, my entry into China had been far easier than my departure from Heathrow. The reason behind this being as clear (at least on reflection) as it was deeply troubling: that, as Orwell correctly foresees in Nineteen Eighty-Four, any forward-thinking police state must sooner or later aim to abolish thoughtcrime altogether.

From India, the important lesson had been much plainer, and my thoughts were firmed up after a conversation with an Italian stranger on our flight home. “We must never let this happen to our own countries,” he told me solemnly, almost as if aware in advance of the impending financial attack which is now impoverishing our own continent.

On returning, I decided to start work on a book: not about the journeys themselves, but one less directly inspired by them.

Fifteen years on, the future does not look especially prosperous for those in either the East or the West. But there does appear to be a convergence of sorts: the worst elements of modern China and India coming West, and, in exchange, the worst elements of our broken Western socioeconomic systems continuing to be exported far and wide. Simultaneously, however, the desire for major political change is now arising in many nations. So broadly in the book, I challenge the direction the world is heading, looking forward to times in which people East and West might choose to reconfigure their societies to make them fit our real human needs much better.

In brief, the book tackles a range of interrelated subjects: from education and debt (closely linked these days), and advertising and mental well-being (linked in another way, as I hope to show), to employment practices and monetary systems – these issues are covered in Part II. In Part I, the larger questions of how we view our own species, and its relationship to other species and to Nature more broadly, are considered. It is a quest for answers which includes a different, but closely related, question – what do science and religion have to tell us about this blooming, buzzing confusion and our place within it?4

The book is entitled finishing the rat race, since this is not merely desirable but, given the political will, and driven by the careful but rapid development and application of new technologies, certainly an achievable goal for every nation in the twenty-first century. It would mean, of course, a second Enlightenment, and unlike the first, one that blossoms over the whole world. Before this can happen, however, we collectively must grasp how perilous the political situation has become, whilst reminding ourselves always that the darkest hour is before the dawn.

1“China is thought to have the highest number of political prisoners of any country in the world. Human rights activists counted 742 arrests in 2007 alone. More recent estimates have put the number between 2,000 and 3,000. There is no way of knowing the total behind bars for “endangering state security” – the charge which in 1997 replaced “counter-revolution” in the Communist criminal code.”

From an article entitled “A welcome move, but thousands remain political prisoners” written by Paul Vallely, published by The Independent on June 23, 2011. http://www.independent.co.uk/voices/commentators/paul-vallely-a-welcome-move-but-thousands-remain-political-prisoners-2301417.html

2 I make these statements on the basis of what I witnessed first-hand. There is however a purely quantitative method for comparing relative economic inequality known as the Gini index, based upon a remarkably simple and elegant formula generating a single number ranging from 0 (for perfect equality) to 100 (for perfect inequality, i.e., all the income going to a single individual) – a small illustrative sketch of the calculation follows at the end of this note. What is not so straightforward however is precisely how the statistics are determined for each of the different countries. So instead of one Gini index you will find (if you decide to look) that there are a number of alternative ones: the two main ones being produced by the World Bank and the CIA. But is either of these a truly reliable indicator, using figures independently arrived at irrespective of any political motivations? Given the organisations involved we surely have good reasons to be doubtful. And so what does it really tell us then when we learn that according to the CIA, at least, India is one place ahead of Tanzania and two places ahead of Japan? You can find the full CIA rankings at this link: https://www.cia.gov/library/publications/the-world-factbook/rankorder/2172rank.html Note that the date of the information varies considerably from country to country.

Likewise, what are we to judge when the World Bank indicator provides figures for India (33.9) and China (42.1) but offers no figures for Tanzania or Japan (to continue the comparisons from above)?

You can find the full UN World Bank ratings at this link: http://data.worldbank.org/indicator/SI.POV.GINI
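By way of illustration only – this is a minimal sketch of the textbook mean-absolute-difference definition of the Gini index, not the World Bank’s or the CIA’s survey methodology, and the incomes are invented – the calculation mentioned above looks something like this:

```python
# Minimal sketch of a Gini index calculation (0 = perfect equality,
# 100 = perfect inequality): the mean absolute difference between all
# pairs of incomes, divided by twice the mean income. Illustrative only;
# the incomes below are made up and this is not any agency's official method.

def gini(incomes):
    n = len(incomes)
    total = sum(incomes)
    if n == 0 or total == 0:
        return 0.0
    # Sum of |x - y| over every ordered pair of incomes
    abs_diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    mean_income = total / n
    coefficient = abs_diff_sum / (2 * n * n * mean_income)
    return coefficient * 100  # scale to the 0-100 index quoted above

print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))    # 75.0 -> approaches 100 as the group grows
```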

3“Britain has one and a half times as many surveillance cameras as communist China, despite having a fraction of its population, shocking figures revealed yesterday.

There are 4.2million closed circuit TV cameras here, one per every 14 people.

But in police state China, which has a population of 1.3billion, there are just 2.75million cameras, the equivalent of one for every 472,000 of its citizens.”

From an article entitled “Revealed: Big Brother Britain has more CCTV cameras than China” written by Tom Kelly, published in The Daily Mail on August 11, 2009. http://www.dailymail.co.uk/news/article-1205607/Shock-figures-reveal-Britain-CCTV-camera-14-people–China.html

“With more than 100 such devices [i.e., CCTV cameras] Shetland (population 23,000) has more surveillance cameras than the entire San Francisco police department, which has just 71 CCTVs to cater for a population of 809,000.

So is Shetland an extreme one-off example? Hardly. The UK not only has more CCTV cameras than the world’s biggest dictatorship, China, we also have more cameras per person than anywhere else on the planet.”

From an article entitled “CCTV Britain: Why are we the most spied on country in the world?” written by Fergus Kelly, published in The Express on December 4, 2010. http://www.express.co.uk/posts/view/215388/CCTV-Britain-Why-are-we-the-most-spied-on-country-in-the-world

Both of these articles were published a few years ago, whereas I, of course, had visited China fifteen years ago. There is plenty of evidence and every reason to suppose that mass surveillance has since increased there too.

4 I have stolen the phrase here from William James’ famous remark about how “The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion.” From The Principles of Psychology, ch.13, “Discrimination and Comparison”. http://psychclassics.asu.edu/James/Principles/prin13.htm

I wouldn’t start from here…

The following article is the Introduction to a book entitled Finishing The Rat Race.

All chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

The first truth is that the liberty of a democracy is not safe if the people tolerate the growth of private power to a point where it becomes stronger than their democratic state itself. That, in its essence, is fascism – ownership of government by an individual, by a group, or by any other controlling private power…. Among us today a concentration of private power without equal in history is growing.

— Franklin D. Roosevelt 1

*

Talk of revolution is very much out of vogue. Instead, we look back on the late sixties, when its prospect was the brightest in living memory, with nostalgia and wistful detachment. Certainly it is true that we pay homage to the civil rights movement and tribute to its lasting achievements, but little else remains – that sexual liberation happened to coincide with the invention of the pill was surely no coincidence!

Tragically, what started out as glorious peaceful sedition – an anti-war, anti-establishment, anti-capitalist upwelling that had genuinely threatened the existing order – finished up largely as a carnival: ultimately the dark carnival of Altamont 2 and the depravity of the Manson Family murders 3. And with this, the path to social justice was promptly cordoned off. The revellers mostly went home, cut their hair, removed the flowers and beads to keep as mementos, and then looked ahead to another fad. All of which is unsurprising. After all, why jeopardise the comforts and security won during the heated post-war struggles in the slim hope of a resoundingly radical victory?

If history teaches anything – other than its central thread that empires rise and fall – is it not that the toppling of entrenched political regimes or even of diabolical tyrannies, whether by violent means or more peaceable ones, ends too often with the emergence of new regimes as tyrannical and entrenched as the ones they replaced? True or false (and how to decide anyway?), what matters is that the modern tendency is to believe this is the case: thus, contrary to Marx’s bold forecast, the age of revolutionary upheaval appears over, or – in the West at least – perpetually stalled, with political quietism established as the norm – don’t worry, I shall go on shortly to contradict myself!

Indeed, our acquired taste for conservatism has usefully served the interests of the ruling establishment throughout my adult life, a period lasting three decades in which time its creed became ever more rapacious. ‘Conservatism’ has in fact been transformed well beyond any easy recognition. Adapted in the eighties, it came to serve the demands of a rising corporatist class which, like various species of shark, is itself compelled to move restlessly forward or perish. As the Red Queen tells Alice in Through the Looking-Glass, “it takes all the running you can do, to keep in the same place.” 4

To these ends traditional conservatism, which tries to engender forms of social stagnation, has been entirely superseded by neo-liberalism; today’s predominant, in fact unrivalled, politico-economic ideology with its overarching quasi-conservative doctrine of minimal ‘state interference’.  In practice this involves a combination of wholesale privatisation with swingeing cuts to public services and welfare. Inculcated by economics departments throughout the land, it has been implanted as a monoculture within our institutions of government, as within the plethora of foundation-funded think tanks and policy forums from whence it originally sprang (most notably The Adam Smith Institute and the Aspen Institute).

All distinguished economists, senior politicians, civil servants and mainstream journalists (the latter three more than likely indoctrinated through courses on Philosophy, Politics & Economics (PPE) at Oxford – with stress here very much placed on the ‘E’ of neo-liberal economics 5) are attuned to the belief that, in the words of its great trailblazer, Margaret Thatcher, “there is no alternative”. And luminaries of the new economics turn to historical precedents to buttress their pervasive doctrine; every kind of planned redistribution of wealth and resources (i.e., any conceivable alternative to their own ‘free market’ absolutism), irrespective of competency or goodwill, they say, has been doomed to failure.

The communist experiments of the Soviet Union and Mao’s China – the examples they single out (continuing to do so long after the fall of both regimes) – did indeed result in catastrophes, both of production and of the supply of goods. And if, indeed, the only foreseeable alternatives to neoliberalism were thoughtless reruns of the Soviet model or Maoism, this line of criticism could hardly be gainsaid; in reality, however, the vast majority of the world already subsists, living in dire poverty and likewise deprived of basic resources, although not under socialism, but in strict adherence to the ‘free market’ directives extolled by the self-same experts. China, on the other hand, which remains autocratic and to a great extent a centrally planned economy, is evidently booming – but that’s for a different debate (suffice to say here, I certainly do not propose we follow their example).

In reality, neo-liberalism is an exceedingly cruel doctrine, and its staunchest proponents have often been candid about administering what they openly describe as their economic ‘shock therapy’ – although this label is generally attached when the treatment is meted out to the poorest nations. To soften its blow in other instances, a parallel ideology has arisen. The principle of so-called meritocracy provides the velvet glove when this same iron fist of laissez-faire fundamentalism is applied throughout western democracies. You get just as much as you deserve and this is best ensured by market mechanisms.

But finally, the socio-economic pendulum has swung to an extreme. Today, even in the comfortable West, income and wealth inequality have grown to unprecedented levels. Our societies appear to be in the process of rupturing just as they did less than a century ago on the eve of the most destructive war in history. Meanwhile, the ‘progressives’, who long ago ditched the dog-eared pamphlets of revolutionaries, remain captivated by the spell of the glossier portfolios of the meritocrats.

Having inveigled their way into both political wings – becoming the new left and new right – they now hope to persuade us that ‘centrism’, founded on strict meritocratic principles, remains the single viable – since least ‘extreme’ – vision for democracy. Mostly stuck on the lower social rungs, however, we, the people, are clearly restless. For the moment we moan and groan impatiently, but that moment is set to pass. Calls for fundamental social change are gaining strength and I dare to predict that we are on the brink – for better or for worse – of an altogether seismic shift.

Jordan Peterson is famously critical of ‘ideology’. He has a particular disdain for Marxism, Stalinism, Nazism, Postmodernism, Feminism – in fact, any ism. Instead, he argues that the individual is sovereign, that ideology should be renounced, and that, quote, “If we each live properly, we will collectively flourish.” So what is ideology? And what leads Jordan Peterson (and others) to believe he is somehow above it all?

*

So how do we break free of the spells that bind us – the increasingly entangled entrapments of technology, money and work? There are really only two approaches we can take. Either we turn inwards, as an increasing number are doing, to try to rediscover who we are through methods of deep introspection. Or, confronting external reality head-on, we engage in collective acts of defiance, since our true strength lies in numbers.

There are good arguments for both approaches. The boundary between the subjective and objective is infinitely thin and I address this more fully in the chapters ahead. To repeat an old rallying slogan: the personal is the political! This cannot be said often enough.

My greatest concern is that we should not remain passive. Clear and unshakeable demands are urgent, since power concedes nothing without a demand. But again, introspection is invaluable in this regard – for how can we better understand what we truly want without solidly comprehending who we really are? Any hope of shaping a better future nevertheless lies in collective hands and depends upon acts of solidarity.

The alternative is grim. Besides the prospect of new kinds of techno-tyranny, failure or refusal to react decisively will exacerbate the troubles that already plague us; ones forecast by Erich Fromm in the conclusion to his book The Sane Society:

In the 19th century inhumanity meant cruelty; in the 20th century it means schizoid self-alienation. The danger of the past was that men became slaves. The danger of the future is that men may become robots. True enough, robots do not rebel. But given man’s nature, robots cannot live and remain sane, they become “Golems”; they will destroy their world and themselves because they cannot stand any longer the boredom of a meaningless life. 6

Fromm’s vision is the best outcome, not the worst. For it wrongly presumes, as many still do, that the ruling class has no agenda of its own. In fairness, he lived in a different age: a time before the significant rise of today’s postmodern, globalist (supranationalist as opposed to internationalist), corporatocratic, neo-feudal, technetronic, technocratic age – I have chosen each of these words with care, since each reveals a different facet of the grand design. Hold the thought, because I’ll come back to it.

In Europe, America and much of the rest of the Western world, the entire political system is captured by variants of what would traditionally be labelled ‘right-wing’ or even ‘extreme right’. However, this is not the old-style extremism of Hitler or Mussolini, which was built upon the foundations of bombastic nationalism, but a new brand that cleverly disguises itself as non-ideological, tolerant or even moderate – I heard political commentator Tariq Ali once refer to it as the ‘extreme centre’. This is actually the best description we have.

This new extremism chooses new methods to promote and protect its crony insiders. It says sorry but we (meaning ‘you’) just have no choice – there is no alternative! – and these other chaps are more valuable, and simply “too big to fail”, before confirming, more or less as an aside, that democracy wasn’t working in any case. Meanwhile, it also finds new justifications for engaging in aggressive foreign wars that we are told have no relationship to the old wars of conquest and exploitation. War today becomes nothing more than a matter of preemption, or if that fails to impress the grumbling populace, a means of humanitarianism. However, the new extremism finds old and very well-tested excuses when it comes to clampdowns on our individual freedoms at home, with the main one being, ironically enough, to protect us from ‘extremists’.

Were the ruling class more candid about their true intent (and the broader agenda is gradually emerging as an open secret) then we would have heard plenty by now about the coming dawn of what ought to be straightforwardly called fascism (Trump was not an aberration, but a symptom), except that aspiring tyrants, for self-evident reasons, cannot be expected to speak too loudly about their grandest ambitions. Even so, the quickening steps on our road to serfdom are becoming harder to deny.

*

Some years ago I had been thinking up names for an envisaged progressive political movement when, after realising that all of the traditional labels – ‘people’s’, ‘popular’, ‘democratic’, ‘freedom’, ‘revolutionary’, etc – were already irreparably sullied, it occurred to me that in our memetic age something snappier might be more suitable. Something along the lines of ‘system reset’, although without the Maoist overtones! Briefly that led me to consider the familiar three-fingered salute on every computer keyboard, Ctrl-Alt-Del: a consideration that altogether stopped me in my tracks.

In fact, picking apart the elements, Ctrl-Alt-Del already represents the three-pronged assault we are increasingly subjected to: the plutocrats using precisely these three strategies to oppress and dominate. First comes Ctrl, by means of propaganda and censorship, with the steady encroachment of mass surveillance into all areas of our lives (the panopticon), and arguably too with the mental health crisis and widespread prescription of ‘chemical cosh’ opiates and more Soma-like SSRI antidepressants.

In a recent study by scientists at the University of Chicago, it was found that rats given anti-anxiety medications were less inclined to free a companion in distress, presumably because they didn’t have the same ability to feel empathy:

Next is Alt (i.e., alteration), with the rollout of GMOs in agriculture and of transhumanism, which opens the door to many developments including the advent of designer babies by means of gene editing and the literal rewiring of human consciousness. Finally there is Del (delete), by virtue of ‘population control’, which is a shorthand euphemism for the desire to dramatically reduce human numbers.

Nick Bostrom is a philosopher with deep scientific and technical training 7, who aside from being Director of the Future of Humanity Institute at Oxford University is also co-founder of the World Transhumanist Association (renamed Humanity+, Inc.) as well as an acknowledged inspiration for Elon Musk and Bill Gates. 8

Bostrom clearly stands at the forefront of the methods of Ctrl and Alt, being a leading proponent both of total surveillance and of transhumanism, which is basically eugenics 2.0: enhanced by virtue of refined genetic manipulation and accentuated through interfacing with machines. As Bostrom’s Humanity+ announces its own intentions:

What does it mean to be human in a technologically enhanced world? Humanity+ is a 501(c)3 international nonprofit membership organization that advocates the ethical use of technology, such as artificial intelligence, to expand human capacities. In other words, we want people to be better than well. This is the goal of transhumanism.

‘Better than well’ is putting it extremely mildly. If you read past the opening statements then you quickly appreciate that the final goal is nothing short of total mastery of biology in order to achieve absolute control of human life and everything in the biosphere. Advocates of such godlike dominion over Nature should perhaps attend to the writings of Mary Shelley and Johann von Goethe. For Bostrom, with his outspoken desire to install mass surveillance to save the world, I also recommend a healthy dose of Orwell.

It is almost tempting to think that the choice of Ctrl-Alt-Del was meant as a piece of subliminal predictive programming, except that the man credited with its origins is an IBM engineer called David Bradley, who says it was not intended for use by ordinary end users but as a helpful tool for software designers. Curiously, however, as Bradley also says (see the interview embedded below): “I may have invented control-alt-delete, but Bill Gates made it really famous.” 9

The section above was previously posted on October 14th 2020 as part of an extended article entitled the united colours of Bilderberg — a late review of Montreux 2019: #7 global system reset.

*

Civilisation stands on the brink. A radical transformation is coming; that is inescapable. The old patterns can no longer sustain us either materially or spiritually, and this seldom confessed truth is perfectly well understood by the ruling class who have already constructed the road ahead to an envisioned future and presented us with roadmaps.

Eager to keep as much control over everything as possible – call it ‘full spectrum dominance’ as the military arm of their military-industrial-financial complex does – they have long since spread their tentacles into every conceivable area of politics and society more generally. This has been achieved primarily through the agency of huge foundations which indirectly support a network of think tanks, policy forums, NGOs and so forth: a stealth takeover, spreading into every nook and cranny of public life. By cloaking their real intentions under the guise of internationalism (or “globalisation”, or “global governance”) and environmentalism (or “sustainability”) since the 1970s and long before, the mainstream left has today become as sold out to the same ruling powers, and in consequence as unimaginative and non-progressive, as the right.

And the ruling class is the master of illusion that has slowly perfected its talent for deception and manipulation. Unlike the Wizard of Oz, who has a certain homespun wisdom, it has nothing real to offer in exchange for our deepening servitude. But the racket persists because the majority of us have become so intellectually impoverished we somehow cannot imagine any better alternative.

But finally, the system is crumbling apart altogether. One way or another, and very soon, it will have to be replaced. The ruling class, interested first and foremost in maintaining and increasing their power and privilege, already understand and acknowledge this, seeing that they must resort to some form of neo-feudalism, or creeping fascism, if you prefer (more below), which in any case they also see as “the natural order”.

For now, we find ourselves in the midst of a desperate fight to preserve our remaining wealth and freedoms. The onslaught facing those of us in the West already seems a relentless one. As we enter the most important period of world history since the Second World War, this immediate fight is political, and involves us in the perennial Marxist dispute over the control and allocation of material resources. By contrast, the longer-term battle assaults our humanity at the most fundamental levels, since it threatens to strip us of autonomy over our own minds and bodies: policing our thoughts and finally altering our biology down to the molecular level. This last step of transhumanism seeks the literal melding of humans to artificial technology. Bizarre, certainly, like the worst science fiction dystopia, yet this is what the billionaires are seriously into, and what they are beginning to discuss publicly at gatherings like the World Economic Forum.

The infrastructure for this coming era of tyranny has been installed, or is already close to completion: a mass surveillance panopticon; the arming and privatisation of the police (in America this militarisation being more starkly evident); the emergence of secret courts and draconian legislation (America’s NDAA 2012 arguably the most egregious example so far). In short, we see the emergence of a revised judicial framework that prosecutes whistleblowers for treason and charges dissenters as terrorists.

It is also a shift that coincides with our “age of austerity”, which is again gamed to ruin the already destitute, while simultaneously it undermines the middle class. An economic coup de grâce following four decades of more gradual decline: incomes continue to be reduced in real terms thanks to stagnant wages and zero interest on savings, neither of which can keep up with the demands of rising costs of living. But all stages of this ongoing decline – a more or less controlled collapse – are facilitated by the most sophisticated systems of mass propaganda ever devised. The internet is owned by the same billionaires, as is the bulk of the corporate media. Free speech was snuffed out years ago.

Incidentally, for those who feel that ‘fascism’ is too strong a word, or too vague, and too freely bandied around by the doom-mongers who proffer nothing but “a counsel of despair”, there is another post (which is essentially the book’s final extra chapter) where I try to explain at greater length why we need to keep using the word, no matter how badly misappropriated and damaged it has become over time.

A brief aside: the vitally important lesson to be learned from the rise of the Nazis (as well as the other fascist governments of the twentieth century) is not that monsters are sometimes capable of holding an otherwise educated if unwitting public in their thrall, but that fascism is most vigorous when it feeds on the pain and fear of a desperately struggling population. It is when economies are ruined that fascism almost spontaneously arises, just as flies rush to a rotting corpse. As for the monsters, it may be that many of them do not appear much like monsters at all. As Hannah Arendt, who is best known for coining the phrase “the banality of evil”, wrote after she saw Adolf Eichmann testify at his trial in 1961:

The trouble with Eichmann was precisely that so many were like him, and that the many were neither perverted nor sadistic, that they were, and still are, terribly and terrifyingly normal. From the viewpoint of our legal institutions and of our moral standards of judgment, this normality was much more terrifying than all the atrocities put together, for it implied — as had been said at Nuremberg over and over again by the defendants and their counsels — that this new type of criminal, who is in actual fact hostis generis humani [“enemy of mankind”], commits his crimes under circumstances that make it well-nigh impossible for him to know or to feel that he is doing wrong. 10

Today, Hitler strikes us as an absolutely ridiculous and grotesque figure. He is the epitome of evil; the devil incarnate. His chubby pal Mussolini appears no less ranting and raving mad. The very fact I have included any reference to them in my argument already weakens it: the first person to mention Hitler being the loser in all our debates today.

Indeed, when it comes to any appraisal of Hitler and Mussolini, an extraordinarily difficult task presents itself in simply disentangling the caricatures from the men themselves. So unfortunately, we are unable to see these demagogues through the eyes of their contemporaries. We ought to be periodically reminded therefore – pinching ourselves if necessary – how throughout Europe and America both men were not only presented as respectable, but feted as great statesmen. Hitler was lauded by Time magazine and the Daily Mail; he was good friends with Henry Ford and King Edward VIII; financially supported by Prescott Bush, father of George H. W., and by the then-Governor of the Bank of England, Montagu Norman. Prior to – and also during the war – fascism met with great favour amongst the highest echelons of the ruling class: aristocrats and plutocrats falling in love with fascism, because fascism is inherently plutocratic and aristocratic. 11

So while any mention of fascism as a major political force seems anachronistic, and no-one outside the thuggish gangs of neo-Nazis and white supremacists openly calls themselves fascist today, it remains the dirty mainstream secret of an astonishingly recent past. Tragically, its mainstream political force did not expire, however, with the deaths of Hitler and Mussolini; it changed its name and its modus operandi, but little else.

The steady rise of this postmodern, globalist, corporatocratic, neo-feudal, technetronic technocracy is, as I say, an open secret. Saying you don’t like my characterisation is a bit like saying you don’t like the colour of the sky! Indeed, half of these identifiers are ones coined, or at least preferred, by the world shapers themselves – the globalist plutocrats who so love technocracy. Certainly, you may raise a challenge that we are now beyond postmodernism, the irony of which ought to raise a little smile if not a full-blown chuckle, whilst it may also be admitted that ‘corporatocracy’ and ‘neo-feudal’ are pejorative terms. What is harder to ignore is the stench of decay under our peasant noses, although dutifully the pliant hordes will often hold their noses with considerable gratitude.

The majority has always behaved this way, although history was reshaped regardless and in spite of such widespread propensity for Stockholm syndrome. As Goethe wrote: “None are more hopelessly enslaved than those who falsely believe they are free.” 12

*

Addendum: A republic of the new malarkey

A map of the world that does not include Utopia is not worth even glancing at, for it leaves out the one country at which Humanity is always landing. And when Humanity lands there, it looks out, and, seeing a better country, sets sail. Progress is the realisation of Utopias. 13

— Oscar Wilde

*

“What is the meaning of life?” is an unintentionally hilarious question: so abstruse and rarefied that it awkwardly bumps into the authentic experience of being alive before meandering off with eyes barely lifted from its own navel. It is just too damned philosophic! And yet there is a related though ineffable question that does respectfully and more intelligently seek an answer, and so at a primordial and existential level a kind of paradox confronts us daily. This paradox is indeed a source of much merriment.

But then, this question, which is hardly raised in polite company, finds a more permissible everyday enquiry: “what is the purpose of life?” A question, I think, we all ask ourselves from time to time, and one that takes its lead from the Socratic challenge: the search for self-improvement through self-examination. More confrontationally, you may have faced interrogation along the lines of: “so what are you doing with your life?” The implication here, of course, is that something purposeful needs to be done in life, whereas just drifting along without a clear purpose or goal is completely unacceptable.

In the modern world this belief is common sense. By contrast, pre-modern humans mostly lived from day to day – as we all did until comparatively recent times – but still we forget how ‘purpose’ is not an ordinary and natural consideration, and not one that those in primitive societies would actually understand, but a later invention. Civilisation gave birth to ‘purpose’ in the abstract, and then once we had acquired aspirations of ‘purpose’, ‘meaning’ arose as a more diffuse back-projection.

And formerly, religion was the wellspring we drew upon to make determinations about our ultimate significance, and so answers to questions of ‘purpose’ and ‘meaning’ were entirely contingent upon ordained beliefs about the divine and about morality. Today, with no gods to bother us, we might suppose the invitation simply to eat, drink and be merry would be sufficient, and yet few appear fully satisfied in following this straightforward directive; a nagging doubt persists that we may still be here for some higher purpose – or, failing that, that we can reinvent one anyway. Put differently, we have a tremendous longing for ‘worth’.

Unfortunately in our valiant attempt to save the world from the most egregious of religious doctrines, the cure becomes rather too clinical. In practical terms utilitarianism has stolen religion’s mantle and this numbs us in a peculiar way. With notions of ‘purpose’ and ‘worth’ necessarily adapted to fit the new paradigm, and with no better yardstick these have become equated, almost unavoidably, with notions of being socially useful in one way or another. Finally, morality, which is inherently unquantifiable, might be conveniently cut away too leaving usefulness above all else apprehended as good, virtuous and valuable. This is where utilitarianism logically leads and it is how modern society trains us to feel. What is your contribution? This is really the measure of man today.

Of course, tracing the lineage, we see utilitarianism is actually the bastard child of science – a quasi-Newtonian calculus misapplied to happiness such that all human relations can be narrowly reduced to a cost-benefit analysis. We have adopted this approach primarily because of its origins: science works! But science in turn depends upon reductionism. It maps reality, and as with every other map, does this by craftily omitting all of the detail of the actual territory; this refined attention to very specific elements is what makes all maps and scientific models useful. Utilitarianism reduces everything to usefulness.

Moreover, by successfully measuring all of creation, including each particle of our own nature, in the strict but narrow terms of what is scientifically quantifiable, we have accidentally impaired ourselves in another way. Through the high-magnification lens of science, we have learned to see trees, flowers, birds and all other creatures as cellular machines programmed and operating purely to survive and reproduce. This is a partial truth, of course, for no matter how high our magnification, science sees the world through its glass darkly, and at another level we remain keenly aware that the universe is not a wholly dead and lifeless automaton that endlessly recycles itself through ingestion and procreation. That there is more ‘meaning’ to life.

Back in the real world, the trees, the birds, the sky and the stars above that enthralled us as children are no less wondrous even if, as adults, we remain too incurious to reflect upon their immanent mysteriousness. Indeed, not only life, but sheer existence is absolutely extraordinary and beyond all words. This we know at one level – call it ‘the unconscious’ for lack of a better term – with unflinching certainty. Importantly, and aside from death, it is the only substantial thing we can ever know for sure. The poets keep vigil to this spectacularly simple truth and are endlessly enraptured by it.

Thus the gauche and frankly silly question “what is the meaning of life?” has actually never gone away, but now hides out of bemused embarrassment in the more or less unconscious form of “what is my social function in life?” Life may be just as meaningless as it is mechanical, the acceptable view goes, but we can surely agree on the seriousness of this meaninglessness and on the importance of making a worthwhile contribution. Robots in particular just need to get with the program!

When philosopher and spiritual teacher Alan Watts advises that “The meaning of life is just to be alive”, what does he mean precisely? Iain McGilchrist, who first studied literature before retraining in biology and becoming an expert in brain lateralisation, tackles this question and also considers more broadly how our pursuit of meaningful goals is related to happiness and a fulfilled life:

McGilchrist is also concerned by how meaning has been crushed by the ultra-capitalism of the West, with its destructive obsession with the efficient use of ‘human resources’ and the commensurate micro-management of the modern workplace:

*

A few years ago a friend said that, like him, I too was fed up with the old malarkey. What you want, he proposed, is “a republic of the new malarkey”! Well, since life always involves a certain amount of malarkey, then maybe this is the best we can finally hope to achieve. But then, continuing the theme, I wondered, why not aim instead for “a republic of the least malarkey”? After all, ask most people (myself included) if the world might be improved and they will generally say yes, but then ask how, and answers typically become trite and (for want of a better word) utopian. ‘Make poverty history’ is a perfect example. Remember that one? Some of us once marched under banners demanding that we ‘make poverty history’ – yes, but how? ‘Give peace a chance’, we might add – but again, getting no closer to ending the daily carnage of the forever wars.

Ask most people (again myself included) to explain the nitty-gritty of how we might make our societies better and we probably feel dumbstruck by the complexity and overwhelmed by the confusion of potential outcomes. We simply don’t quite know precisely what we want, or, better put, how to bring about the necessary changes – or at least never precisely enough to outline effective measures. Our problem, in one sense, is that positive action becomes difficult. After all, the world is a deeply and inherently puzzling place and so figuring the best course can be an inordinately difficult task.

But then ask an alternative question and you immediately receive better answers. Ask, for instance, what our society least needs and many people can instantly pull up a fairly detailed list of complaints. Pointing out stupidities, asinine rules, debilitating conventions, especially wherever our personal development is stunted or our lives are hamstrung; this comes perfectly naturally. Finding faults is just so much easier than offering details for improvements or formulating solutions. “It is very easy to criticise”, people often say, which is itself a criticism! But why? Why the eagerness to dismiss this one faculty common to all? Wouldn’t it be better to exploit it?

Which brings me to the establishment of “a republic of the least malarkey”: a society constructed with the very deliberate intention of avoiding too many negatives – negatives being that much easier to put your finger on, and, crucially, to agree about. So why not make this our ambition? To set forth boldly to junk all nonsensical burdens and impositions because, aside from their counterproductivity, any such transparently pointless impediments are generally as tedious as they are odious. Time is too precious to be needlessly wasted on nonsense.

*

A corresponding political movement would aim at an intelligent and humane transformation, turning away from the current drive to structure societies on the proclaimed basis of optimising efficiency and productiveness – with rigidly imposed structures that inevitably hamper the human imagination whilst infringing our most basic right: the inalienable right to be free-thinking human beings. Surely this is the most fundamental of all rights. And what of our other inalienable right, which is to be freely-acting creatures, so far as is practicable without infringing the freedoms and rights of others?

All of this is a kind of ‘liberalism’, although of a very rough, unpolished form. Together with the Golden Rule, ‘liberalism’ of some kind is vital, presuming we wish to live in a freer, saner and more tolerant society. Indeed, if we ever seriously decide to construct a better world for ourselves then freedom for the individual must remain the paramount concern, but so too is ensuring a nurturing and protecting society. I feel obliged therefore to add a few important caveats. As the poet and English civil war polemicist John Milton wrote:

For indeed none can love freedom heartily, but good men: the rest love not freedom, but license: which never hath more scope, or more indulgence than under tyrants. 14

The great danger of liberalism, as Milton says, is that, inadvertently or otherwise, licence may be granted to tyrants, and then one man’s ‘freedom’ offers legitimacy to, since it is reliant upon, another’s debasement and servitude. Sadly, this has been the primary mode of liberalism as it has existed until now, in spite of the warnings of more thoughtful liberals who, from the outset, asserted loudly that unfettered individual liberty is entirely at odds with freedom that serves any common interest.

Elizabeth Anderson, Professor of Philosophy at the University of Michigan, is the author of “Private Government: How Employers Rule Our Lives (and Why We Don’t Talk about It)”, a book about the tyranny of the corporate workplace, from non-disclosure agreements to punitive, restrictive work conditions and censorship. Here she discusses with journalist Chris Hedges how the libertarian model is cruel and why liberalism has historically defended the rights of capital above the rights of the individual:

Today’s self-proclaimed (neo-)liberal thinkers are misguided in another crucial and related way. Their emphasis on freedom of the market has dispelled one system of serfdom only to replace it with another that, although superficially different, is comparably repressive: the exaltation of the market to the rank of our new lord and master brings tyranny of more cleverly concealed designs.

What the liberal too often and rather too conveniently overlooks is that money, besides being an inherently utilitarian artifact, is a thoroughly and indivisibly social instrument too. Money is not some product of private contracts, since these neither supply nor protect its value; rather, since society creates it to lubricate its means for the production and distribution of goods and services, society maintains, in principle at least, complete autonomy over it. Taxation, therefore, isn’t reducible to theft of private property, since money isn’t strictly speaking either private or property.

Nor should money or the profit-making engines called corporations be put on a pedestal: money has no rights at all, only sentient beings can have rights, and likewise, having money ought to accord no special rights or privileges other than in enabling the procurement of stuff. This is what it does and nothing else. Money has been our terrible master, but now we must transform it into a useful servant, striving to break its links to power in every way this can be achieved.

In fact, the decline of money is already happening, and this is rather crucial to understand. Once industrial production becomes fully automated, and services follow, money will lose its primary function, which is as a token of exchange for labour. Without labour there will be no need to reward it. In order to ensure a smooth and humane transition to this future post-wage society (and the robots are coming sooner than we think), we need an honest reflection on our values: values entirely without any pound or dollar sign attached. If we are serious about our collective futures, this fundamental reevaluation of life has to happen without delay and in earnest, long before we are completely freed from the treadmill of work itself.

James Suzman, author of “Work: A History of How We Spend Our Time”, here discusses how work as we know it is really a modern concept that didn’t exist until recently:

But then, the final and complete individual freedom we often claim to desire is only attainable once the reins that harnessed us to work have begun to slacken. Meanwhile, unbridling ourselves of the work ethic, as unavoidable as that is, is no straightforward matter, since it requires tackling opponents on all sides. Both left and right, for contrary reasons, are mindful to keep the workers hard at it.

Indeed, all that ultimately stands between us and this gateway to an unprecedented age of freedom and abundance are two abiding obstructions. The first of these – further advances and refinements to our technology – is certain to arise whatever we decide to do; whereas the second – that invisible but super-sticky glue which binds money to political power – can never be fully dissolved unless we act very decisively to see that it is.

This second obstacle is virtually immoveable, and yet we must finally meet it with our truly irresistible force, if only because tremendous concentrations of wealth and power are overbearingly anti-democratic. In fact they reinforce themselves entirely to the exclusion of the dispossessed, and as the tie between money and power continually tightens, so the world is made captive to a tiny privileged coterie in what are already de facto plutocracies, where the lives of workers increasingly resemble those of more visibly bonded slaves – held captive by chains of debt rather than steel. So long as the economic system is not reformed, we will head unswervingly towards a time when the current labour resource is made totally redundant. If no action is taken, this future prospect leaves us infinitely worse off again.

Moreover, the obstacles we face are interconnected, since for so long as a few moneyed interests hold such an iron grip on political power (as is currently the case), all technological development must remain primarily directed to serve and maintain these special interests. Rather glaringly, government money is today ceaselessly pumped into the giant hands of the military-industrial complex.

Suppose instead that this enormous expenditure on the weapons industry, and thus on weapons research, was redirected to transform methods of energy production and transportation systems. Imagine then how much more wonderful our lives would be had this wasteful investment in destruction already been funnelled into peacetime projects. And here I mean real investment in the fullest, truest sense of time and human ingenuity, rather than simply investment of money – which is only ever a tool, remember.

Full and final severance of financial from political power is extremely hard to achieve, of course, but there is a great deal that could be done to remedy the present crisis. However, to begin to move in the right direction we are compelled first to organise. This is as urgent as it is imperative. Seizing power from the one percent must become the primary goal for all who sincerely wish to usher in a better age.

Here is comedian Lee Camp considering the same issue in one of his “Moments of Clarity”:

*

Effing utilitarianism

If we require a more ideologically framed foundation then I also half-jokingly make the following proposal: our new approach can be futilitarianism. 15 That is, a full one-eighty degree U-turn against utilitarianism and its consequentialist basis, in which ends alone purportedly justify means. Let us instead turn this inculcated foolishness entirely on its head such that, aside from properly disconnecting moral value from mere usefulness, we remind ourselves, as Gandhi correctly asserted, that ‘means’ are also ‘ends in the making’. Thus we grant that, conversely, ‘means’ really can and do justify themselves, intrinsically, without regard to whatever the ‘ends’ may turn out to be. 16

In other words, emphasis should be correctly placed on a person’s sincere endeavour “to do the right thing”, since this is inherently virtuous. In all ethical matters, reciprocity then becomes the touchstone again: that maxim of the Golden Rule, which holds that each should treat the other as one wishes to be treated in return. For the most ancient of all ethical rules remains the wisest and most parsimonious; and it is always better not to fix things that were never broken. 17

Futilitarianism involves an item-by-item elimination of each of our extant but inessential sociopolitical complications: an unravelling of the knots that hogtie us little by little. Beginning from the top, to first free up our financial systems, although not by so-called ‘deregulation’, since deregulation is precisely how those systems became so corrupted; but by dispelling all that is so toxic, craftily convoluted, nonsensical and plain criminal (the last ought to go without saying but evidently doesn’t!). Whilst from the bottom, the goal is to bring an end to the commercialisation of our lives on which our debt-riven (because debt-driven) economies depend: to unwind our ever-more rampant and empty consumerist culture.

In the futilitarian future, security – that most misappropriated of words – would ensure that everyone (not just the super rich) is fully protected against all conceivable forms of harm that can feasibly be eradicated, or – if eradication is not completely realisable – then greatly diminished and/or ameliorated. It will mean the individual is protected from persecution by all agents including the state itself, and will guarantee both the freedom and privacy which permit us to think and act as individuals.

From the outset, therefore, a social framework must ensure basic freedoms by acknowledging and guaranteeing not only civil liberties, but economic rights too. A living income for all, and one that is eventually independent of earned salary. Such unconditional basic incomes are now under consideration, but I advocate a steady move in this direction through instituting a range of measures including extended holidays, reductions in working hours, and the lowering of the pension age. All of this should be achieved on a voluntary basis, since nobody ought to be compelled to remain idle any more than we ought to be compelled to work. In pursuing this goal it is also vital to maintain equivalent, or preferably higher, levels of income.

Ensuring essential economic rights requires universal provision of the highest quality healthcare and optimum social entitlement. Homes and food for all. Clothing and warmth for all. Unpolluted air and clean water for all. In fact, such universal access to every necessity and much else besides is already inscribed in the Universal Declaration of Human Rights (UDHR) under Article 25, which reads:

Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control. 18

The overarching aim is to reconstruct every society (beginning with our own) to eliminate the ills of poverty, because there is ample energy and food – and even non-essential but desirable material goods – for everyone alive in the world today, and much more again.

Emphatically, this does NOT require any form of imposed population controls, since prosperity reliably correlates with population stability (as the declining birth rates across the Western world attest), and so we should resolutely reject the scaremongering about imminent global scarcity of food or other vital resources. In fact, contrary to all the neo-Malthusian prophecies of doom, just as the number of children in the world is peaking 19 we still have plenty of food to go around (lacking only the political and economic will to distribute it fairly) 20, with official UN estimates indicating that we shall continue to have such abundance both for the immediate future and far beyond. 21

Likewise, problems associated with energy production and hazards like pollution need to be tackled as a priority. To such ends, the brightest minds should be organised to find daring solutions to our energy needs – a new Manhattan Project, but this time to save lives. For technology justly configured is the essential key to humanity’s continuing betterment.

In short, the futilitarian cry is “Basta!” Enough is enough! Enough of poverty, and of curable sickness. Enough of excessive hard labour. An end to so much insanity.

The long-term vision might see an international community no longer perpetually at war, nor hypnotised and zombified by the infinitely-receding baubles of our faux-free markets or the limiting and phoney promise of “freedom of choice”. Likewise, it marks a sharp retreat from our red in tooth and claw ‘meritocracies’, offering genuine hope (that most shamelessly abused of all words!) to millions in our own societies who, devoid of respect and finding little evidence of compassion, exist in abject desperation, having long since turned their backs on both politics and society. They numb their hardships with narcotics (criminalised by virtue of that other war against the dispossessed) or, more permissibly – since it is corporately profitable – fill the emptiness with lifelong dependence on legally sanctioned opiates.

Besides the regular bread and circuses of TV, Hollywood and wall-to-wall professional sport, and the rush of high-speed editing and ceaseless agitation offered through CGI, we also have nonstop access to ever more digital pacifiers thanks to iPhones, Candy Crush and TikTok. Driven to worship the tawdry, ours is surely the most distracted and narcissistic of ages. It is self-evident, however, that we are hooked on painkillers because we are so deeply racked with pain. No amount of such distractions can ever satisfy us: the emptiness persists.

Lastly, and should we find a requirement for some pithy and memorable slogan, I propose recycling this one: “people before profits”. Generously acted upon, the rest follows automatically. Or, if such a slogan smacks too much of pleading, then let’s be more emphatic and say, “Power to the people!” Hackneyed, yes, but risible – why risible? “Power to the people” speaks to the heart and soul of what it should literally mean to live in any democracy. Our greatest tragedy is that the people have long since forgotten their birthright.

As playwright Harold Pinter said in the final words of his magnificent Nobel Lecture, delivered in late 2005 when he was already gravely ill with cancer:

I believe that despite the enormous odds which exist, unflinching, unswerving, fierce intellectual determination, as citizens, to define the real truth of our lives and our societies is a crucial obligation which devolves upon us all. It is in fact mandatory.

If such a determination is not embodied in our political vision we have no hope of restoring what is so nearly lost to us – the dignity of man.

Go to first chapter.

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 A warning to Congress that the growth of private power could lead to fascism, delivered by Franklin D. Roosevelt on April 29, 1938.

2 The Altamont Free Concert was held in northern California in December 1969. The security had been given over to a chapter of Hells Angels. It is mostly remembered for violence and a number of deaths, including the murder of Meredith Curly Hunter, Jr.

3 The Tate–LaBianca murders were a series of murders perpetrated by members of the Manson Family during August 8–10, 1969, in Los Angeles, California, under the direction of Tex Watson and Charles Manson.

4 Quote from Chapter 2 entitled “The Garden of Live Flowers” of Through the Looking-Glass (1871) by Lewis Carroll.

5 Wikipedia devotes an entire entry to “List of University of Oxford people with PPE degrees”, which begins:

Philosophy, Politics and Economics (PPE) at Oxford University has traditionally been a degree read by those seeking a career in politics, public life (including senior positions in Her Majesty’s Civil Service) and journalism. This list does not include those notable figures, such as U.S. President Bill Clinton, who studied PPE at the university but did not complete their degrees.

https://en.wikipedia.org/wiki/List_of_University_of_Oxford_people_with_PPE_degrees

6 From The Sane Society, Ch. 9: Summary — Conclusion, written by Erich Fromm, published in 1955.

7 He was awarded a PhD in philosophy, but perhaps a more fitting title is ‘futurist’.

8

Bostrom, a 43-year-old Swedish-born philosopher, has lately acquired something of the status of prophet of doom among those currently doing most to shape our civilisation: the tech billionaires of Silicon Valley. His reputation rests primarily on his book Superintelligence: Paths, Dangers, Strategies, which was a surprise New York Times bestseller last year and now arrives in paperback, trailing must-read recommendations from Bill Gates and Tesla’s Elon Musk. (In the best kind of literary review, Musk also gave Bostrom’s institute £1m to continue to pursue its inquiries.)

From an article entitled “Artificial intelligence: ‘We’re like children playing with a bomb’” written by Tim Adams, published in the Guardian on June 12, 2016. https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine

9 From an article entitled “Ctrl-Alt-Del inventor makes final reboot: David Bradley, we salute you” written by Andrew Orlowski, published in The Register on January 29, 2004. https://www.theregister.com/2004/01/29/ctrlaltdel_inventor_makes_final_reboot/

10 From the Epilogue of Eichmann in Jerusalem: A Report on the Banality of Evil written by Hannah Arendt, published in 1963.

11 The word ‘fascism’ is beginning to be usefully reclaimed: reattached, with careful deliberation and appropriateness, to the situation we find unfolding today. For instance, veteran journalist and political analyst John Pilger writes:

Under the “weak” Obama, militarism has risen perhaps as never before. With not a single tank on the White House lawn, a military coup has taken place in Washington. In 2008, while his liberal devotees dried their eyes, Obama accepted the entire Pentagon of his predecessor, George Bush: its wars and war crimes. As the constitution is replaced by an emerging police state, those who destroyed Iraq with shock and awe, piled up the rubble in Afghanistan and reduced Libya to a Hobbesian nightmare, are ascendant across the US administration. Behind their beribboned facade, more former US soldiers are killing themselves than are dying on battlefields. Last year 6,500 veterans took their own lives. Put out more flags.

The historian Norman Pollack calls this “liberal fascism”: “For goose-steppers substitute the seemingly more innocuous militarisation of the total culture. And for the bombastic leader, we have the reformer manqué, blithely at work, planning and executing assassination, smiling all the while.” Every Tuesday the “humanitarian” Obama personally oversees a worldwide terror network of drones that “bugsplat” people, their rescuers and mourners. In the west’s comfort zones, the first black leader of the land of slavery still feels good, as if his very existence represents a social advance, regardless of his trail of blood. This obeisance to a symbol has all but destroyed the US anti-war movement – Obama’s singular achievement.

In Britain, the distractions of the fakery of image and identity politics have not quite succeeded. A stirring has begun, though people of conscience should hurry. The judges at Nuremberg were succinct: “Individual citizens have the duty to violate domestic laws to prevent crimes against peace and humanity.” The ordinary people of Syria, and countless others, and our own self-respect, deserve nothing less now.

From “The silent military coup that took over Washington” written by John Pilger, published in the Guardian on September 10, 2013. http://www.theguardian.com/commentisfree/2013/sep/10/silent-military-coup-took-over-washington

12 In the original German: “Niemand ist mehr Sklave, als der sich für frei hält, ohne es zu sein.”

From Book II, Ch. 5 of Die Wahlverwandtschaften (‘Elective Affinities’ or ‘Kindred by Choice’) by Johann Wolfgang von Goethe, published in 1809.

13 From The Soul of Man Under Socialism, an essay by Oscar Wilde published in 1891.

14 From Tenure of Kings and Magistrates, written by John Milton, published in 1649.

15 I recently discovered that there is already a name for the kind of social philosophy I have tried to outline here. Apparently it’s called “metanoia”, and that’s fine with me… a rose by any other name. In any case, the term futilitarianism was originally coined as a joke by a friend, suggested as a useful working title to encapsulate the views of our mutual friend, James, the economist, who gets a mention earlier in the book. It was a great joke – one of those jokes that causes you to laugh first and then to think more deeply afterwards. I have kept the word in mind ever since, simply because it fitted so comfortably with my own developing thoughts about life, the universe and everything – thoughts fleshed out and committed to the pages of this book. Of course, neologisms are useful only when they happen to plug a gap, and futilitarianism serves that function. Once I had the word, I wanted to know what it might mean. The joke became a matter for playful contemplation, and that contemplation became what I hope is a playful book – playful but serious – as the best jokes always are.

16    After writing this I came across a quote attributed to Aldous Huxley (from source unknown) as follows: “But the nature of the universe is such that the ends never justify the means. On the contrary, the means always determine the end.”

17 There are many formulations of the Golden Rule, and a multitude of philosophical attempts to refine and more strictly formalise the basic tenet to the point of logical perfection. Kant’s concept of the “categorical imperative” is one such reformulation. But these reformulations create more confusion than they resolve. There simply is no absolutely perfect way to state the Golden Rule and recast it as a solid law. The Golden Rule is better understood and applied as a universal guideline: acting in accordance with the spirit of the rule is what matters.

18    The Universal Declaration of Human Rights (UDHR) is a declaration adopted by the United Nations General Assembly on 10 December 1948. It consists of 30 articles which have been elaborated in subsequent international treaties, regional human rights instruments, national constitutions and laws. Eleanor Roosevelt, first chairwoman of the Commission on Human Rights (CHR) that drafted the Declaration, stated that it “may well become the international Magna Carta of all men everywhere.”

These notes are taken from the Wikipedia entry on the UDHR. http://en.wikipedia.org/wiki/UN_Declaration_of_Universal_Human_Rights

19 We have now reached what is called “peak child”, which means that although the overall population of the world will continue to grow for a few more decades, the number of children in the world has already stopped rising. The global population is set to reach around 10 billion people, due to the “Great Fill-Up”. World-famous statistician Professor Hans Rosling explains this using building blocks to illustrate the point [from 10 mins in]:

20

After 30 years of rapid growth in agricultural production, the world can produce enough food to provide every person with more than 2 700 Calories per day, a level which is normally sufficient to ensure that all have access to adequate food, provided distribution is not too unequal.

From the report of the FAO World Food Summit (Rome, 13–17 November 1996), entitled “Food for All”. http://www.fao.org/3/x0262e/x0262e05.htm#e

21

“[However,] the slowdown [of worldwide agricultural production] has occurred not because of shortages of land or water but rather because demand for agricultural products has also slowed. This is mainly because world population growth rates have been declining since the late 1960s, and fairly high levels of food consumption per person are now being reached in many countries, beyond which further rises will be limited.” – “This study suggests that world agricultural production can grow in line with demand, provided that the necessary national and international policies to promote agriculture are put in place. Global shortages are unlikely, but serious problems already exist at national and local levels and may worsen unless focused efforts are made.” – “Agricultural production could probably meet expected demand over the period to 2030 even without major advances in modern biotechnology.”

Extracts from the Executive Summary of the FAO summary report “World agriculture: towards 2015/2030”, published in 2002. http://www.fao.org/3/y3557e/y3557e.pdf


Filed under « finishing the rat race », neo-liberalism

beware the naysayers!

The following article is the Prologue of a book entitled Finishing The Rat Race which I am posting chapter by chapter.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

“The saving of our world from pending doom will come, not through the complacent adjustment of the conforming majority, but through the creative maladjustment of a nonconforming minority… Human salvation lies in the hands of the creatively maladjusted.” 1

— Martin Luther King, Jr.

*

Two decades ago, relaxing in a local pub at the end of an anti-Iraq War march, I chanced upon a discarded copy of the magazine Red Pepper. Flicking through the pages, I came to a short article written by a person I will refer to only as R: a brave soul who had gone to Baghdad, as the war drums beat loudly, to hunker down as a human shield in the hope that her sacrifice would deter an attack on the city’s civilian population. Impressed by her self-sacrifice but concerned that such goodwill might be hijacked and manipulated to serve the ends of Saddam’s regime, I decided to write a letter – helpfully, there was an email address appended to the article.

To my surprise, I received a very prompt and full reply, and more surprisingly, discovered that R was a Canadian grandmother. Here is part of the reply I received:

Thank you for writing. Your letter gives me courage that there is still time to stop the awful situation. I wish I knew how. But all I can think is that with the majority of the people in the world believing this war is wrong there has to be a way to stop the terrible madness. I am now in Albania. I left Iraq and drove back to France, then flew to Albania as I have a commitment here to build a garden in the centre of this terribly damaged country. I am very torn to have left Baghdad. Some of the friends I travelled with are still there. I am not able to contact them easily except by transmitting messages through the staff at the hotel where we were living. I am very touched by the hotel team when I call because they seem so glad to hear from me and I feel I have done so little.

The following day, March 11th, I wrote back as follows:

Dear R,

How kind of you to return my letter so swiftly. You can hardly imagine how surprised I was to discover not one but two replies to my short note. In some respects I am glad to hear that you have left Baghdad and certainly you have every reason to hold your head high and to tell your grandchildren about the courageous stand you and your friends have taken. Perhaps if you were naïve then that was only in your belief that thousands would follow you into danger, since it is hard to follow your grand commitment (and more importantly, most, like myself, quite frankly lack the courage, if not also the conviction, to do so). The fact that the media were more interested in Gustavo than the human volunteers says much, I feel, for our difficulty in seeing the innocence of others (it is easy to sympathise with a dog who “has no axe to grind” but what motivates the rest of you it is easy to wonder?) And many will be cynical, since it’s hard to comprehend acts of selflessness when you inhabit a world fashioned by the heartless demands of global capitalism.

It is worrying to hear that the other human shields have been moved to “strategic sites”. This was reported on the news and given as the reason why many had already left Iraq, and we have also heard that Saddam used human shields in the last conflict to protect his armaments. I hope that your friends will not allow themselves to be sacrificed to protect Saddam – that would be an appalling tragedy.

Your analysis of the crisis is spot on: “it is unforgivable that men of violence keep each other in power by persuading frightened people that violence is the only path”. We all should act against this barbarism. You have played a big part whereas a million in London have made our voices heard in a smaller way. You ask if I have any ideas. Then may I quote you again: “protest against this war loudly and strongly in whatever way you can”! And here I believe that in Britain more than anywhere we hold the real key. The population is split and it is reckoned that without a second resolution (which in any case will undoubtedly be vetoed by the French) only something like 30% are in favour of war, which means a very sizeable majority remain frustrated. Tony Blair is a frightened man and I don’t know if you saw how badly Jack Straw (our foreign secretary) lost his composure at the UN recently. So the ruling Labour Party is deeply divided (yesterday Clare Short, a cabinet member, described Blair as “reckless”). On top of this there is a groundswell.

Last week hundreds of schoolchildren in Britain abandoned their lessons and took to the streets. In Sheffield they marched into the university and drummed up support from the much older students and then collectively they marched into the city centre. This is unprecedented. And these disaffected groups have such a diverse make-up, crossing the usual boundaries of age, class, or nationality.

These are a few very good reasons for optimism though at heart I confess that I am pessimistic for the simple reason that Blair takes no notice. ONE MILLION march into London and all he does is to acknowledge our right to free speech! That is simply not enough! What kind of democracy is run on the whim of one man? What is needed then is some way of demanding Blair’s attention.

There is a plan that when war begins (as it surely will) people should drop whatever it is they are doing and congregate outside the town hall wherever they happen to be and protest. That we should block the streets, cause peaceful civil unrest, and demand our right to be heard. If this happens then it represents the beginnings of a sea-change in what might loosely be called politics. But will it happen? Will I join the protests? Certainly I support the idea. But success depends on solidarity and a movement of colossal size when probably most (myself included) will stay at our desks (either too disinterested or too cowed to take such daring unilateral action). In any case, when war has begun it will be hard not to think that we have already failed.

Perhaps the best hope then is that we can forestall the war indefinitely – though the date indelibly in the Bush diary is March 17 – but the fact that France, Russia and Germany are refusing to co-operate and that Hans Blix has remained so unflinching throughout keeps the pressure on. We too must try to keep the pressure up, though this is difficult with time running short. One beautiful thing that happened yesterday was that at the end of a TV debate Tony Blair was actually slow hand clapped by the audience – he must be getting the message by now!

Before I finish, may I just ask about Albania? Albania is one of those places that gets forgotten. I have no idea what Albania is like these days (not that I have much idea what Albania was like during the Cold War). Then today I read an article in The Guardian newspaper saying that Britain is intending to send its asylum seekers to camps in Albania. For a government that claims to want “to liberate the people of Iraq” it takes a rather dim view of “illegal immigrants” who are we’re told “an increasing problem”. So we will send them away to camps in Albania, where The Guardian claims, they will be faced with rabies and encephalitis-carrying ticks amongst the other hazards. My government makes me sick. To judge from the tail of your email you have a much better chap in charge of Canada.

I hope that this letter finds you happy and well. I will send it to your old email address since there is nothing urgent contained within its rambling bulk. I hope I haven’t disillusioned you by taking a more pessimistic tone. And thank you for the quote from Lao Tzu (may we all be as wise) and let me finish with another, and one that is perhaps better known:

heaven and earth are ruthless, and treat the myriad creatures as straw dogs

In the words of Philip Larkin, we should be kind to one another, while there is still time.

Warmest regards, James.

Little more than a week later, on March 20th (and so a mere three days after the date anticipated), war on Iraq began in earnest: ‘shock and awe’ missile strikes punishing those down on the streets of Baghdad who had no quarrel with us at all.

As the months passed, increasingly disillusioned with the state of world affairs and depressed by problems at work which were affecting me more personally, I had continued writing to R, who was keen that we should keep in contact. She was still helping out on the garden project in Albania. Eventually, however, the correspondence between us dried up; perhaps the ties frayed because (when I look back honestly) I increasingly presented her with issues and problems, seeking her counsel as a sort of surrogate therapist, instead of maintaining proper relations as a distant friend. In any case, the last reply I received from R began as follows:

You sound like you are in a real muddle.

Suddenly finding you are about to lose your work, part-time or otherwise is disconcerting at the best of times. Indeed, we have never met in person but nonetheless, from your writing and description of yourself you sound like someone deep in thought and short on action. I hope it is not too presumptuous of me to say so. I am a bit of an introvert myself so I can recognize the symptoms. At least I think I can.

So….my best advice of the day is to get out and get in touch with the world. Stay connected. The world is full of good and decent people but you have to seek them out. I get terribly depressed when I listen to the American media talk about Iraq and suggest that an Iraqi life is not worth that of an Americans’. It makes me sick. But as Henry Miller said to Erica Jong…..”don’t let the naysayers get you down”. Life is long and all you can really do about it is get up each day and put one foot in front of the other.

Am I that transparent, I wondered? A few informal letters and I’m an open book! No doubt this is one reason her advice has stuck with me ever since. 2 But the part of her letter that most caught my attention was the quote… “don’t let the naysayers get you down”. I have frequently pondered it since, gradually forming an opinion that leads to a contrary but complementary conclusion. Not that we should let the naysayers get us down, obviously, but that aside from carrying a psychological shield to guard against their highly infectious gloom and doom, we might also take great care to guard against the eternal hope of the yea-sayers.

For though, in the West at least, we are lucky to be alive during times of incomparable plenty and considerable social freedom, not to mention relative peace and political stability, there is a great deal we are justified in feeling miserable and resentful about. Firstly, that this ‘best of all times’ is already under sustained attack, and unless we organise our fight back this decline is likely to accelerate, with both our freedom and relative prosperity withering away altogether. But secondly, that we, the human race, have long since held far greater potential, and might easily surpass this false summit offered by our impressive western civilisations. For it is really not that our ease and pleasure still relies for its purchase on the burdened backs of those who distantly suffer; if indeed it ever truly did. There is no zero-sum game at work in this regard. Moving our slavery abroad has instead created a new and different kind of underclass at home, bringing unprecedented miseries, since these are miseries never before juxtaposed with such comparative wealth.

Not long ago, the vast majority of resources were remote and insecure. Mere survival forced almost everyone into hours of labour that were excessively long and hard. Today, with abundant resources, human labour is being made redundant thanks to new technologies. It is self-evident that we need to find fairer methods for distributing our resources, as well as a sensible approach to maximising the new freedom arising from our gradual replacement by automated systems. Certainly we should not let the Malthusian naysayers get us down, although we must of course guard against Pollyanna optimism too, and especially that of those who tell us to enjoy the good times and stop moaning. For so long as the good times can and should be far better again, then surely moaning is the least we can do. We stop moaning at our peril!

First chapter…

*

1 Martin Luther King, Jr., Strength to Love. Philadelphia: Fortress Press, 1963/1981, pp. 27–28.

2 In the same letter, R also suggested “putting one foot in front of the other” more literally, recommending, to help clear away the cobwebs, that I might like to walk the Camino de Santiago, or the Way of St. James, a major route of Christian pilgrimage which starts from many locations in France, Belgium, Germany or inside Spain itself, extending for over a thousand miles and finishing at Santiago de Compostela, the capital of the Spanish region of Galicia. I have yet to pick up her prescription (though perhaps one day in the future I shall).


Filed under Albania, « finishing the rat race », Iraq

aimless weather

The following article is Chapter One of a book entitled Finishing The Rat Race.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

“Let no one enter here who does not have faith”

— Inscription over the door of Max Planck’s laboratory

*

“In the beginning God created the heaven and the earth. And the earth was without form, and void; and darkness was upon the face of the deep. And the spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.”

These were the words spoken by the astronauts on board Apollo 8 once they had established a lunar orbit, thereby becoming the first humans ever to leave Earth fully behind them. As a literary choice, it was one that inevitably caused considerable irritation, and especially amongst atheists around the world.

Undoubtedly there was more than a little politics involved when it came to the Apollo astronauts making a decision to read passages from the Bible. Given that the Cold War face-off had provided such impetus for the entire space programme, having steadily beaten off the challenge of the godless Soviets, if nothing else the words transmitted a kind of undiplomatic rebuke, redoubled when the Eagle landing module touched down just a few months later and the astronauts’ first duty became to plant the Stars and Stripes in the pristine moon dust. Skipping about in delight, taking some holiday snaps and bringing home a basket full of moon-rocks was no longer enough.

Not that I am trying to rain on anyone’s parade. Far from it. The Moon landing involved not merely a tremendous technical achievement but also a hell of a lot of guts. It was one moment when ordinary Americans had every reason to feel pride. Viewed in an alternative light, however, this towering and singular accomplishment was also the extraordinary end product of many centuries of truly international effort: a high point in a centuries-long science and engineering project set in motion by pioneers like Galileo, Kepler and, of course, Newton, which only then culminated on July 20th 1969 with such a genuinely epoch-marking event that for many minutes the world collectively held its breath … 1

Apollo 8 had been just another of the more important reconnaissance missions necessary to lay the groundwork for the moon landing itself. Another small step that led directly to that most famous step in history (so far), although as a step, the Apollo 8 mission was also breathtaking in its own right. As for the grumbling about the transmission of passages from Genesis, well, the inclusion of any kind of religious element seemed inappropriate to many. After all, science and religion are not supposed to mix; but on top of which, having seemingly gained the ascendancy, why was Science suddenly playing second fiddle again?

Religion, as a great many of its opponents readily point out, is superstition writ largest. Science, by contrast, purposefully renounces the darkness of superstition, and operates solely by virtue of the assiduous application of logic and reason. Science and religion are therefore as incompatible as night and day, and so when it came to the cutting edge of space exploration, just what did the Bible have to do with any of it? Sir Isaac Newton was doing the driving, wasn’t he…?

On the other hand, and playing Devil’s Advocate, why not choose these words? After all, the circumstances lent a strange appropriateness and charge to the plain vocabulary of Genesis: heaven and earth; void and darkness; the face of the waters. A description of the act of creation so understated, and yet so evocative, that it’s hard to recall a more memorable paragraph in the whole literary canon, or one with greater economy. If the astronauts or NASA were endorsing the biblical story of creation that would have been another matter, of course, but here I think we can forgive the perceived faux pas – ‘one false step amidst a giant leap forward for mankind!’

My personal wish is that as Neil and Buzz were setting off to “where no man had gone before”, climbing into their Lunar Landing Module and sealing the air-lock behind them, they might forgetfully have left the flag behind to keep Michael Collins company. Leaving no signs of their extraordinary visit besides the landing section of the strange metal beetle they had flown in, and, beside it, their monumental, and somehow still astonishing, footprints.

*

Very occasionally I happen to meet intelligent and otherwise rational people who make the claim that the biblical story of creation is broadly supported by the latest scientific discoveries. The universe began at a moment, they’ll explain, just as it is written. There then followed a succession of events, leading to the eventual rise of Man. All of this, they’ll insist, accurately checks out with the opening page of Genesis, whilst the theories of modern cosmology and evolutionary biology simply patch in the occasional missing details. And truly, this is a desperate line of defence!

For there is no amount of creative Biblical accountancy – of interpreting days as epochs and so forth – which can successfully reconstruct the myth of Genesis in order to make it scientifically sound. The world just wasn’t created that way – wasn’t created at all, apparently – and creationism, which often claims to be an alternative theory, when it offers no theory at all, also fails to withstand the minutest degree of scrutiny. No, creationism survives merely on account of the blind and desperate faith of its adherents. Here indeed is how a modern cosmologist might have gone about rewriting the Biblical version (if by chance they had been on hand to lend God a little assistance):

“In the beginning God created a small but intense fireball. A universal atom into which space and time itself were intrinsically wrapped. As this primordial fireball very rapidly expanded and cooled, the fundamental particles of matter condensed out of its energetic froth and, by coalescence, formed into the nuclei of hydrogen, helium and lithium. All this passed in a few minutes.

Clouds of those original elements, collapsing under their own weight, then formed into the first stars, the loss of gravitational potential energy heating the gases in these proto-stars to sufficiently high temperatures (many millions of degrees) to trigger nuclear fusion. In the cores of such early giants, the hydrogen and helium were now just beginning to be fused into ever-heavier elements through a series of stages known as nucleosynthesis. Happily this fusion of smaller atoms into increasingly larger ones generated an abundance of energy – enough to keep the core temperature of each star above a million degrees; hot enough to sustain the fusion of more and more atoms. So it was that the hydrogen begat helium, helium begat carbon, and carbon begat oxygen, neon and magnesium… And God saw that it was good.

After a few billion years had passed, these same stars, which had hitherto been in a state of hydrostatic balance – thermal and radiation pressure 2 together supporting the weight of the gases – were burning low on fuel. During this last stage, at the end of a long chain of exergonic 3 fusion reactions, atoms as large as iron were being created for the very first time. Beyond the production of iron, however, nucleosynthesis into even heavier elements consumes more energy than it releases, and so the process of fusion could no longer remain self-sustaining. So it came to pass that the first generation of stars were starting to die.

But these stars were not about to fizzle out like so many guttering candles. The final stage of their demise involved not a whimper, but bangs of unimaginable power. Beginning as a collapse – an accelerating collapse that would inevitably and catastrophically rebound – each star was torn apart within a few seconds, the remnants propelled at hyper-velocities out into deep space. And it was during these brief but almighty supernova explosions that the heavier elements (lead, gold and ultimately all the stable elements in the periodic table) came into being.

Ages came and passed. Pockets of the supernova debris, now drifting about in tenuous clouds, and enriched with those heavier elements, began to coalesce a second time: the influence of gravity rolling the dust into new stars. Our Sun is one star born not from that generation but the next, being one of an almost countless number of third-generation stars: our entire Solar System emerging indeed from a twice-processed aggregation of swirling supernova debris. All this came to pass around five billion years ago, roughly nine billion years after the birth of time itself.”
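As a purely illustrative aside of my own – no part of the rewritten ‘scripture’ above – the bookkeeping that keeps a star’s core hot is Einstein’s mass–energy relation: when four hydrogen nuclei are fused into one helium nucleus, roughly 0.7 per cent of their mass disappears, and it is this small deficit, multiplied by the speed of light squared, that a star radiates away over billions of years. Schematically:

\[
4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e},
\qquad \frac{\Delta m}{m} \approx 0.007,
\qquad E = \Delta m \, c^{2}.
\]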

Now very obviously in this modern reworking there can be no Earth at the time of creation, so the story in Genesis fails to accord with the science right from its outset: from chapter one, verse one. For there is simply no room for the Earth when the whole universe is still smaller than a grapefruit.

I can already hear the protests of course: for Earth we must read Universe apparently, in order to make any meaningful comparison. Okay, so playing along, what then becomes of heaven? For God created both heaven and earth remember. Well, if heaven was once some place above our heads (as it surely was for people living under the stars at the time when Genesis was written) then to accord with the current theories of cosmology, perhaps those who still subscribe to the entire Biblical story imagine its existence as a parallel universe; linked through a wormhole we call death. Truly, the Lord works in mysterious ways!

 *

Some readers will doubtless baulk at the idea of God being the creator of anything, and yet I think we should honestly admit that nothing in modern cosmology with certainty precludes the existence of an original creative force; of God only as the primum mobile, the first-mover, igniting the primordial spark. Indeed, it may come as a surprise to discover (as it did for me) that one of the first proponents of the currently accepted scientific theory – now universally known as the Big Bang Theory – was by vocation a Roman Catholic priest.

Father Georges Lemaître, a Belgian professor of physics and astronomy, having quickly recognised the cosmological possibilities latent within Einstein’s then still novel theory of General Relativity, published his ‘hypothesis of the primeval atom’ in the prestigious scientific journal Nature as long ago as 1931. Yet interestingly, his ideas did not receive much support at the time, in part due to lack of evidence, but also because many contemporary physicists initially rejected all such theories of spontaneous universal origin as being an entirely religious import. But science isn’t built on belief, and so it can’t be held hostage to orthodoxy in the same way that religious conviction can. This is where science and religion absolutely depart. Although, in order to explore this further, it is first helpful to consider two vitally important though rather difficult questions: “what is science?” and “what is religion?”

*

I have a friend who tells me that science is the search for knowledge; an idea that fits very happily with the word’s etymology: from the Latin scientia, meaning “knowledge”. Meanwhile the dictionary itself offers another useful definition: “a branch of knowledge conducted on objective principles involving the systematized observation of, and experiment with phenomena.” According to this more complete description, science is not any particular set of knowledge, but rather a system or systems that aim at objectivity.

Scientific facts exist, of course, but these are simply ideas that have been proved irrefutable. For instance, that the Earth is a ball that moves around the Sun. This is a fact and it is a scientific one. For the most part, however, scientists do not work with facts as straightforward as this. And rather than facts, the most common currency of working scientists is theories. Scientific theories are not to be believed in as such, but a means to encompass the best understanding available. They exist in order to be challenged, and thus to be improved upon.

In Science, belief begins and ends as follows: that some forms of investigation, by virtue of being objective, lead to better solutions than other, less objective approaches. This is the only orthodoxy to which all scientists are committed, and so, in the final analysis, being scientific means nothing more or less than an implicit refusal to admit knowledge aside from what can be observed and measured. For Science is an inherently empirical approach, with its prime directive, and perhaps also its élan vital, being this: in testing, we trust.

I could leave the question of science right there and move on to consider the question of religion, but before I do so, I would like to put one important matter straight. Whatever it is that science is and does, it also helps to understand that the majority of scientists rarely if ever consider this question.

As a physics undergraduate myself, I learnt quite literally nothing about the underlying philosophies of science (there was an additional module – a final year option – addressing this topic but unfortunately it was oversubscribed). Aside from this, I was never taught to analyse the empirical method in and of itself. I personally learnt absolutely nothing about hypotheses, let alone how to test them (and in case this should lead readers to think my university education was itself substandard, then let me also admit, at the risk of appearing an arrogant braggart, that I attended one of the best scientific academies in the country – Imperial College would no doubt say the best). Yet they did not teach us about hypotheses, for the simple reason that the vast majority of physicists rarely bother their heads about them. Instead, the scientists I’ve known (and again, I was a research student for three years) do what might be broadly termed “investigations”.

An investigation is just that, and it might involve any variety of techniques and approaches. During the most exciting stages of the work, the adept scientist may indeed rely as much on guesswork and intuition as on academic training and logical reasoning. Famously, for example, the chemist August Kekulé dreamt up the structure of benzene in his sleep. Proving the dream correct obviously required a bit more work.

The task set for every research scientist is to find answers. Typically then, scientists are inclined to look upon the world as if it were a puzzle (the best puzzle available), and as with any other puzzle, the point is just to find a satisfactory solution.

So why then did I begin with talk of scientific methods? Well because, as with most puzzles, some methods will prove more efficacious than others, but also because in this case there is no answer to be found at the bottom of page 30 – so we’d better be as sure as we can, that the answer we find is the best available one. Which in turn means applying the best (i.e., most appropriate and reliable) methods at hand, or else developing still better ones.

By ‘method’, I do not mean simply whatever approach the scientist employs to test his or her own guesses about the puzzle, but just as importantly, a system that can be used to prove this solution to the satisfaction of a wider scientific community. For methods too are accepted only once they have been tried and tested.

So when the philosopher Karl Popper claims that the scientific method depends upon “testable hypotheses” (or as my friend calls them “detestable hypotheses”) I would say fair enough… but let’s not mistake this definition for a description of what scientists actually do. We may accept that science must make statements that can be falsified – this is indeed a useful “line of demarcation”, as Popper puts it 4 – and we can call these statements “testable hypotheses” if we choose – but science is simply about broadening and refining our knowledge and understanding, and any approach that is scientifically accountable will really do just fine.

*

So what of religion? Well, that’s a pricklier issue again, of course, so let me swerve clear of any direct answer for the moment so as to draw a further comparison with science.

Where a religious person may say, I have faith in such and such because it is written so, a scientist, assuming she is honest, ought only to say that “given the evidence we have thus far collected and collated, our best explanation is the following…” As more evidence becomes available, our scientist, assuming she has integrity (at least as a scientist), may humbly (or not) concur that her previously accepted best explanation is no longer satisfactory. In short, the scientist is always required by virtue of their profession to keep an open mind; the truth of their discipline being something that’s forever unfolding and producing facts that are rarely final.

For the religious-minded, however, the very opposite may apply, and for all who know that the true shape of things is already revealed to them through faith, there must be absolute restrictions to further open-minded inquiry. (Not that all religions stress the importance of such unassailable beliefs – some do not.)

Where it is the duty of every scientist to accept all genuine challenges, and to allow (as Richard Feynman once put it) for Nature to come out “the way she is”, it is the duty of many religious believers (though not, as I say, of all who are religious) to maintain a more rigidly fixed view of the world. Here again, however, it ought to be stressed that the scientist’s constant and single-minded aim for objectivity is not necessarily dependent on his or her lack of beliefs or subjective opinion – scientists are, after all, only human. So virtually all scientists come to their puzzles with preconceived hunches, and, whether determined by the head or the heart, have a preference for one solution over another. But this doesn’t much matter, so long as they are rigorous in their science.

Indeed, many of the most brilliant scientific minds have also held strongly religious convictions (Newton and Darwin spring immediately to mind). In studying that great work called Nature, Newton was implicitly trying to understand the mind of God, and finally Newton’s discoveries did not shatter his belief in God, but instead confirmed for him that there must be an intelligent agency at large, or at least one that set things initially in motion. Darwin’s faith was more fundamentally rocked (as we shall see), yet he came to study Nature as another devout believer. But the art of the scientist in every case is to recognise such prejudices and put them to one side, and this is the original reason for developing such strict and rigorous methodologies. Ultimately, to reiterate, Science is no more or less powerful than its own methods for inquiry. Which is how it was that physicists and astronomers gradually put aside their reservations as the evidence grew in favour of Father Lemaître’s theory of creation.

So the lesson here is that whereas religion demands faith, science asks always for the allowance of doubt and uncertainty. And just as St Thomas asked to see the holes in Christ’s palms, so too every responsible scientist is called to do the same, day in and day out. Doubting Thomas should be a patron saint of all scientists.

*

I wish to change the subject. It is not my aim to pitch science against religion and pretend that science is somehow the victor, when in truth I regard this as a phoney war. On its own territory, which is within the bounds of what is observable and measurable, science must always win. This is inevitable. Those who still look for answers to scientific questions in the ancient writings of holy men are only deceiving themselves.

But science too has its boundaries, and, as the philosopher Ludwig Wittgenstein argued in his famous (if notoriously difficult) Tractatus Logico-Philosophicus – proceeding via an interwoven sequence of numbered and nested propositions and aphorisms to systematically unravel the complex relationship between language, thought and the world – rational inquiry, though our most promising guide for uncovering the facts of existence, can never be complete.

Just as the Universe apparently won’t allow us to capture every last drop of heat energy and make it do work for us, at least according to current thermodynamic theories, so Wittgenstein argued (to his own satisfaction and also to the exacting standards of Bertrand Russell) an analogous limitation applies to all systems of enquiry designed for capturing truth. Even the most elaborate engines in the world cannot be made 100% efficient, and likewise the most carefully constructed forms of philosophical investigation, even accepting Science as the most magnificent philosophical truth engine we shall ever devise (as Wittgenstein did 5), will inescapably be limited to that same extent – perfection in both cases being simply unattainable.
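To pin the thermodynamic half of this analogy down with a rough illustrative figure of my own (not one that Wittgenstein offers): any heat engine working between a hot reservoir at temperature T_H and a colder one at T_C can at best convert the Carnot fraction of the heat it draws into useful work, and that fraction always falls short of one so long as the cold reservoir sits above absolute zero:

\[
\eta_{\max} \;=\; 1 - \frac{T_{C}}{T_{H}} \;<\; 1 \qquad \text{whenever } T_{C} > 0.
\]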

Many have racked their brains to think up the most cunning of contraptions, but none have invented a perpetual motion machine, and the same, according to Wittgenstein, goes for anyone wishing to generate any comprehensive theory of everything, which is just another human fantasy. 6 Most significantly and most controversially, Wittgenstein says that no method can be devised for securing any certain truths regarding ethics, aesthetics, or metaphysics, and that consequently all attempts at pure and detached philosophical talk of these vital matters is mere sophistry.

Having revealed the ultimate limitations to reasoning, Wittgenstein then arrives at his seventh, and perhaps most famous proposition in this most famous and celebrated of works. A stand-alone declaration: it is the metaphorical equivalent of slamming the book shut!

“What we cannot speak of we must pass over in silence,” 7 he says, suddenly permitting himself the licence of a poet.

This was his first and also last hurrah as a philosopher (or so he thought 8), Wittgenstein taking the lead from his own writings – and what greater measure of integrity for a philosopher than to live according to their own espoused principles. Ditching his blossoming career at Cambridge, he set out in pursuit of a simpler life back in his Austrian homeland, first (and somewhat disastrously) as a primary school teacher, and then more humbly as a gardener at a monastery. (Although at length, of course, he did famously return to Cambridge to resume and extend his “philosophical investigations”).

But isn’t this all just a redressing of much earlier ideas of scepticism? Well, Wittgenstein is quick to distance himself from such negative doctrines, for he was certainly not denying truth in all regards (and never would). But faced by our insurmountable limitations to knowledge, Wittgenstein is instead asking those who discuss philosophies beyond the natural sciences to intellectually pipe-down. Perhaps he speaks too boldly (some would say too arrogantly). Maybe he’s just missing the point that others more talented would have grasped, then stomping off in a huff. After all, he eventually turned tail in 1929, picking up where he’d left off in Cambridge, returning in part to criticise his own stumbling first attempt. But then what in philosophy was ever perfectly watertight?

The one thing he was constantly at pains to point out: that all philosophy is an activity and not, as others had believed, the golden road to any lasting doctrinal end. 9 And it’s not that Wittgenstein was really stamping his feet and saying “impossible!”, but rather that he was attempting to draw some necessary and useful boundaries. Trying to stake out where claims to philosophic truth legitimately begin and end. An enterprise perhaps most relevant to the natural sciences, an arena of especially precise investigation, and one where Wittgenstein’s guiding principle – that whatever can be said at all can be said clearly, and what cannot be said clearly should not be said at all – can be held as a fair measure against all theories. Indeed, I believe this insistence upon clarity provides a litmus test for claims of “scientific objectivity” from every field.

Embedded above is a film by Christopher Nupen entitled “The Language Of The New Music”, about Ludwig Wittgenstein and Arnold Schoenberg; two men whose lives and ideas run parallel in the development of Viennese radicalism. Both men emerged from the turmoil of the Habsburg Empire in its closing days with the idea of analysing language and purging it with critical intent, believing that in the analysis and purification of language lies the greatest hope that we have.

*

Let me return to the question of religion itself, not to inquire further into “what it is” (since religion takes many and varied forms, the nature of which we may return to later), but rather to ask more pragmatically “whether or not we are better with or without it”, in whatever form. A great many thinkers past and present have toyed with this question; a considerable few finding grounds to answer with a very resounding “without”.

In current times there has been no more outspoken advocate of banishing all religion than the biologist Richard Dawkins. Dawkins, aside from being a scientist of unquestionable ability and achievement, is also an artful and lively writer; his books on neo-Darwinian evolutionary theory being just as clear and precise as they are wonderfully detailed and inspiring. He allows Nature to shine forth with her own brilliance, though never shirking descriptions of her darker ways. I’m very happy to say that I’ve learnt a great deal from reading Dawkins’ books and am grateful to him for that.

In his most famous (although by no means his best) book, The Selfish Gene, Dawkins set out to uncover the arena wherein the evolution of life is ultimately played out. After carefully considering a variety of hypotheses, including competition between species and rivalry within groups and between individuals, he concludes that in all cases the real drama takes place at a lower, altogether more foundational level. Evolution, he explains, after a great deal of scrupulous evidential analysis, is driven by competition between fragments of DNA called genes, and these blind molecules care not one jot about anything or anyone. This is why the eponymous gene is so selfish (and Dawkins might perhaps have chosen his title a little more carefully, since those who haven’t read beyond the cover may wrongly presume that scientists have discovered the gene for selfishness, which is most certainly not the case). But I would like to save any further discussion about theories of biological evolution, and of how these have shaped our understanding of what life is (and hence what we are), for later chapters. Here instead I want to briefly consider Dawkins’ idea not of genes but “memes”.

*

In human society, Dawkins says in the final chapter of The Selfish Gene, change is effected far more rapidly by shifts in ideas than by the steadier shifts in our biology. So in order to understand our later development, he presents the notion of a parallel evolution among primal idea-fragments, which he calls “memes”. Memes that are most successful (i.e., the most widely promulgated) will, says Dawkins, like genes, possess particular qualities that increase their chances of survival and reproduction. In this case, memes that say “I am true so tell others” or, more dangerously, “destroy any opposition to my essential truth” are likely to do especially well in the overall field of competition. Indeed, says Dawkins, these sorts of memes have already spread and infected us like viruses.

For Dawkins, religious beliefs are some of the best examples of these successful selfish memes, persisting not because of any inherent truth, but simply because they have become wonderfully adapted for survival and transmission. His idea (a meme itself presumably) certainly isn’t hard science – in fact it’s all rather hand-waving stuff – but as a vaguely hand-waving response I’d have to admit that he has a partial point. Ideas that encourage self-satisfied proselytising are often spread more virulently than similar ideas that do not. Yet ideas also spread because they are just frankly better ideas, so how can Dawkins’ theory of memes take this more positive reason into account? Can the same idea explain, for instance, why the ideas of science and liberal humanism have also spread so far and wide? Aren’t these merely other kinds of successful meme that have no special privilege above memes that encourage sun worship and blood sacrifice?
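To see just how much work differential transmission alone can do, here is a deliberately crude sketch of my own – a toy model, not anything Dawkins himself proposes, with made-up meme names and transmission rates – in which two rival memes differ only in how strongly they urge their carriers to pass them on, and that single difference is enough for the pushier meme to take over a small population:

```python
import random

random.seed(1)

# Chance that a carrier passes the meme on during a single chance encounter
# (the names and numbers are illustrative assumptions, nothing more)
TRANSMISSION_RATE = {"quietist": 0.05, "proselytising": 0.5}

# Start with a population in which the pushy meme is rare
population = ["quietist"] * 90 + ["proselytising"] * 10

for _ in range(2000):                              # 2000 chance encounters
    speaker_meme = random.choice(population)       # whose meme gets voiced
    listener = random.randrange(len(population))   # who happens to hear it
    if random.random() < TRANSMISSION_RATE[speaker_meme]:
        population[listener] = speaker_meme        # the listener adopts it

print(population.count("proselytising"), "of", len(population),
      "individuals now carry the proselytising meme")
```

Notice that nothing in the model cares whether either meme is true, useful or beautiful; spread is decided entirely by the urge to spread, which is precisely the limitation raised above.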

My feeling here is that Dawkins comes at this from the wrong direction. There is no rigorous theory for the evolution of memes, nor can there be, since there is no clearly discernible, let alone universal, mechanism behind the variation and selection of ideas. But then of course Dawkins knows this perfectly well and never attempts to make a serious case. So why does he mention memes at all?

Well, as an atheistic materialist, he obviously already knows the answer he wants. So this faux-theory of memes is just his damnedest attempt to secure the right result. Religion operating as a virus is an explanation that plainly satisfies him, and his route to that answer depends on altogether shaky methodology: he puts aside his otherwise impeccable scientific principles and, driven to prove what in truth he only feels, spins a theory backwards from a prejudice. What Dawkins and others have perhaps failed to recognise is that in the fullest sense, questions of religion – of why we are here, of why we suffer, of what makes a good life – will never be cracked by the sledgehammer of reason, for questions of value lie outside the bounds of scientific analysis. Or if he does recognise this, then the failing instead is a failure to understand that there are many, quite different in temperament, who will always need attempted answers to these profound questions.

*

I didn’t grow up in a particularly religious environment. My mother had attended Sunday school, and there she’d learnt to trust in the idea of heaven and the eternal hereafter. It wasn’t hell-fire stuff and she was perfectly happy to keep her faith private. My father was more agnostic. He would probably now tell you that he was always an atheist but, in truth, and like many good atheists, he was actually an agnostic. The test of this is simple enough: the fact that he quite often admitted how nice it would be to have faith in something, although his own belief was just that Jesus was a good bloke and the world would be much nicer if people tried to emulate him a bit. (Which is a Christian heresy, of course!)

I was lucky enough to attend a small primary school in a sleepy Shropshire village. Although it was a church school of sorts, religious instruction involved nothing more than the occasional edifying parable, various hymns, ancient and modern, and the Lord’s Prayer mumbled daily at the end of assembly. Not exactly what you’d call indoctrination. At secondary school, religious instruction became more formalised – one hour each week, presumably to satisfy state legislation. Then, as the years went by, our lessons in R.E. shifted from a purely Christian syllabus to one with more multicultural aspirations. So we learnt about Judaism, Islam, and even Sikhism, although thinking back I feel sure that our teacher must have delivered such alternative lessons through gritted teeth. I recall once how a classmate confused the creature on top of a Christmas tree with a fairy. Hark, you should have heard her!

Being rather devout, this same teacher – a young, highly-strung, and staunchly virginal spinster – also set up a Christian Union club that she ran during the lunch hour, and for some reason I joined up. Perhaps it had to do with a school-friend telling me about Pascal’s wager: that you might as well believe in God since you stand to gain so much for the price of so small a stake. In any case, for a few weeks or months I tried to believe, or at least tried to discover precisely what it was that I was supposed to be believing in, though I quickly gave up. Indeed, the whole process actually made me hostile to religion. So for a time I would actively curse the God in the sky – test him out a bit – which proves only that I was believing in something.

Well, to cut a long story short, whatever strain of religion I’d contracted, it was something that did affect me to a considerable extent in my late teens and early twenties. Of course, by then I regarded myself as a fervent atheist, having concluded that “the big man in the sky” was nothing more or less than an ugly cultural artefact, something alien, someone else’s figment planted in my own imagination… and yet still I found that I had this God twitch. Occasionally, and especially for some reason whilst on long journeys driving the car, I’d find myself ruminating on the possibility of his all-seeing eyes watching over me. So, by and by, I decided to make a totally conscious effort to free myself from this mind-patrolling spectre, snuffing out all thought of God whenever it arose. To pay no heed to it. And little by little the thought died off. God was dead, or at least a stupid idea of God, a graven image, and one I’d contracted in spite of such mild exposure to Christian teachings. A mind-shackle that was really no different from my many other contracted neuroses. Well, as I slowly expunged this chimera, I discovered another way to think about religion, although I hesitate to use such a grubby word – but what’s the choice?

Spirituality – yuck! It smacks of a cowardly cop-out to apply such a slippery alternative. A weasel word. A euphemism almost, to divert attention from mistakes of religions past and present. But are there any more tasteful alternatives? And likewise – though God is just such an unspeakably filthy word (especially when He bears an upper case G like a crown), what synonym can serve the same purpose? You see how difficult it is to talk of such things when much of the available vocabulary offends (and for some reason we encounter similar problems talking about death, defecation, sex and a hundred other things, though principally death, defecation and sex). So allow me to pass the baton to the greatly overlooked genius of William James, who had a far greater mastery over words than myself, and is a most elegant author on matters of the metaphysical.

*

“There is a notion in the air about us that religion is probably only an anachronism, a case of ‘survival’, an atavistic relapse into a mode of thought which humanity in its more enlightened examples has outgrown; and this notion our religious anthropologists at present do little to counteract. This view is so widespread at the present day that I must consider it with some explicitness before I pass to my own conclusions. Let me call it the ‘Survival theory’, for brevity’s sake.” 10

Here is James steadying himself before addressing his conclusions regarding The Varieties of Religious Experience. The twentieth century has just turned. Marx and Freud are beginning to call the tune; Science, more broadly, is in the ascendant. But I shall return to these themes later in the book, restricting myself here to James’ very cautiously considered inquiries into the nature of religion itself and why it can never be adequately replaced by scientific objectivity alone. He begins by comparing the religious outlook with the scientific outlook and considering the differences between the two:

The pivot round which the religious life, as we have traced it, revolves, is the interest of the individual in his private personal destiny. Religion, in short, is a monumental chapter in the history of human egotism… Science on the other hand, has ended by utterly repudiating the personal point of view. She catalogues her elements and records her laws indifferent as to what purpose may be shown by them, and constructs her theories quite careless of their bearing on human anxieties and fates… 11

This is such a significant disagreement, James argues, that it is easy to sympathise with the more objective approach guaranteed by the hard-edged precision of science, and to dismiss religious attitudes altogether:

You see how natural it is, from this point of view, to treat religion as mere survival, for religion does in fact perpetuate the traditions of the most primeval thought. To coerce the spiritual powers, or to square them and get them on our side, was, during enormous tracts of time, the one great object in our dealings with the natural world. For our ancestors, dreams, hallucinations, revelations, and cock-and-bull stories were inextricably mixed with facts… How indeed could it be otherwise? The extraordinary value, for explanation and prevision, of those mathematical and mechanical modes of conception which science uses, was a result that could not possibly have been expected in advance. Weight, movement, velocity, direction, position, what thin, pallid, uninteresting ideas! How could the richer animistic aspects of Nature, the peculiarities and oddities that make phenomena picturesquely striking or expressive, fail to have been singled out and followed by philosophy as the more promising avenue to the knowledge of Nature’s life. 12

As true heirs to the scientific enlightenment, we are asked to abandon such primeval imaginings and, by a process of deanthropomorphization (to use James’ own deliberately cumbersome term), which focuses only on the precisely defined properties of the phenomenal world so carefully delineated by science, sever the private from the cosmic. James argues, however, that such enlightenment comes at a cost:

So long as we deal with the cosmic and the general, we deal only with the symbols of reality, but as soon as we deal with private and personal phenomena as such, we deal with realities in the completest sense of the term. 13

Thus, to regard one’s life entirely through the pure and impersonal lens of scientific inquiry is to see through a glass, not so much too darkly, as too impartially. Being expected to leave out from our descriptions of the world “all the various feelings of the individual pinch of destiny, [and] all the various spiritual attitudes”, James compares to being offered “a printed bill of fare as the equivalent for a solid meal.” He expresses the point most succinctly, saying:

It is impossible, in the present temper of the scientific imagination, to find in the driftings of cosmic atoms, whether they work on the universal or on the particular scale, anything but aimless weather, doing and undoing, achieving no proper history, and leaving no result.

This is the heart of the matter, and the reason James surmises, quite correctly in my opinion:

… That religion, occupying herself with personal destinies and keeping thus in contact with the only absolute realities which we know, must necessarily play an eternal part in human history. 14

*

Mauro Bergonzi, Professor of Religion and Philosophy in Naples, speaks about the utter simplicity of what is:

*

“I gotta tell you the truth folks,” comedian George Carlin says at the start of his most famous and entertaining rant, “I gotta tell you the truth. When it comes to bullshit – big-time, major league bullshit! You have to stand in awe of the all-time champion of false promises and exaggerated claims: Religion! Think about it! Religion has actually convinced people that there’s an invisible man! – living in the sky! –  who watches everything you do, every minute of every day…”

And he’s right. It’s bonkers but it’s true, and Carlin is simply reporting what many millions of people very piously believe. Sure, plenty of Christians, Muslims and Jews hold a more nuanced faith in their one God, and yet for vast multitudes of believers, this same God is nothing but a bigger, more powerful humanoid. A father figure.

“Man created God in his own image,” is the way a friend once put it to me. And as a big man, this kind of a God inevitably has a big man’s needs.

Of course, the gods of most, if not all, traditions have been in the business of demanding offerings of one kind or another to be sacrificed before them, for what else are gods supposed to receive by way of remuneration for their services? It’s hardly surprising then that all three of the great Abrahamic faiths turn sacrifice into a central theme. But then what sacrifice can ever be enough for the one-and-only God who already has everything? Well, as George Carlin points out, God is generally on the lookout for cash:

“He’s all-powerful, all-perfect, all-knowing and all-wise, but somehow just can’t handle money!” But still, cash only goes so far. Greater sacrifices are also required, and, as the Old Testament story of Abraham and Isaac makes abundantly clear, on some occasions nothing less than human blood-sacrifice will do. 15 The implicit lesson of this story being that the love of our Lord God requires absolute obedience, nothing less. For ours is not to reason why…

“Oh, God you are so big!” the Monty Python prayer begins – bigness being reason enough to be awed into submission. But God also wants our devotion, and then more than this, he wants our love to be unconditional and undiluted. In short, he wants our immortal souls, even if for the meantime, he’ll settle for other lesser sacrifices in lieu.

As for the more caring Christian God (the OT God restyled), well here the idea of sacrifice is up-turned. The agonising death of his own son on Golgotha is apparently satisfying enough to spare the rest of us. It’s an interesting twist, even if the idea of a sacrificed king is far from novel; dividing his former wholeness and then sacrificing one part of himself to secure the eternal favour of his other half is a neat trick.

But still, why the requirement for such a bloody sacrifice at all? Well, is it not inevitable that every almighty Lord of Creation must sooner or later get mixed up with the God of Death? For what in nature is more unassailable than Death, the most fearsome destroyer who ultimately smites all? Somehow this God Almighty must have control over everything, and that obviously includes Death.

“The ‘omnipotent’ and ‘omniscient’ God of theology,” James once wrote in a letter, “I regard as a disease of the philosophy shop.” And here again I wholeheartedly agree with James. Why…? For all the reasons given above, and, perhaps more importantly, because any “one and only” infinitist belief cannot stand the test at all. Allow me to elucidate.

The world is full of evils; some of these are the evils of mankind, but certainly not all. So what sort of a God created amoebic dysentery, bowel cancer and the Ebola virus? And what God would allow the agonies of his floods, famines, earthquakes, fires and all his other wondrously conceived natural disasters? What God would design a universe of such suffering that it includes parasitic wasps which sting their caterpillar hosts to leave them paralysed, then lay their eggs inside so that their grubs can eat the living flesh?

The trouble is that any One True Lord, presuming this Lord is also of infinite goodness, needs, by necessity, a Devil to do his earthly bidding. This is unavoidable because without an evil counterpart such an infinite and omnipotent God, by virtue of holding absolute power over all creation, must thereby permit every evil in this world, whether man-made or entirely natural in origin. And though we may of course accept that human cruelties are a necessary part of the bargain for God’s gift of free will – which is a questionable point in itself – we are still left to account for such evils as exist beyond the limited control of our species.

Thus, to escape the problem of blaming such “acts of God” on God himself, we may choose to blame the Devil instead for all our woes, yet this leads inexorably to an insoluble dilemma. For if the Devil is a wholly distinct and self-sustaining force we have simply divided God into two opposing halves (when He must be One), whereas if we accept that this Devil is just another of the many works of the One God, then the problem never really went away in the first place. For why would any omnipotent God first create and then permit the Devil to go about in his own evil ways? It is perhaps Epicurus who puts this whole matter most succinctly:

Is God willing to prevent evil, but not able? Then he is not omnipotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Then whence cometh evil? Is he neither able nor willing? Then why call him God? 16

It is here that we enter the thorny theological “problem of evil”, although it might equally fittingly be called the “problem of pain”, for without pain, in all its various colourations, it is hard to imagine what actual form the evil itself could take.

So confronted by the Almighty One, we might very respectfully ask, “why pain?” Or if not why pain as such (for conceivably this God may retort that without pain we would not appreciate joy, just as we could not measure the glory of day without the darkness of night), we might still ask: why such excessive pain, and why so arbitrarily inflicted? For what level of ecstasy can ever justify all of Nature’s cruelties?

At this point, James unceremoniously severs the Gordian knot as follows: “… the only obvious escape from paradox here is to cut loose from the monistic assumption altogether, and allow the world to have existed from its origin in pluralistic form, as an aggregate or collection of higher and lower things and principles, rather than an absolutely unitary fact. For then evil would not need to be essential; it might be, and it may always have been, an independent portion that had no rational or absolute right to live with the rest, and which we might conceivably hope to see got rid of at last…”

*

There are many who have set out to find proof of God’s existence. Some have looked for evidence in archaeology – the sunken cities of Sodom and Gomorrah, the preserved remains of Noah’s Ark, and most famously, the carbon dating of the Shroud of Turin – but again and again the trails run cold. Others have turned inwards, searching for proof of God through reason. But this is surely the oldest mistake in the book. For whatever God could ever be proved by reason would undoubtedly shrivel up into a pointless kind of a God.

But there is also a comparable mistake to be made. It is repeated by all who, after so many attempts have failed, still try to refute God’s existence absolutely. For God, even the Judeo-Christian-Islamic God, can in some more elusive sense remain subtle enough to slip all the nets. He need not maintain the form of the big man in the sky, but can diffuse into an altogether more mysterious form of cosmic consciousness. In this more mystical form, with its emphasis on immediate apprehension, history also sinks into the background.

Dawkins and others who adhere to a strictly anti-religious view of the world are in the habit of disregarding these more subtle and tolerant religious attitudes. Fashioning arguments that whip up indignation in their largely irreligious audience, they focus on the rigid doctrines of fundamentalists. And obviously, they will never shake the pig-headed faith of such fundamentalists, but then neither will their appeals to scientific rationalism deflect many from holding more flexible and considered religious viewpoints. The reason for this is simple enough: that man (or, at least, most people) cannot live by bread alone. So, for the genuinely agnostic inquirer, strict atheism provides only an unsatisfactory existential escape hatch.

In the year 2000, the world-renowned theoretical physicist and mathematician Freeman Dyson won the Templeton Prize for Progress in Religion 17. In his acceptance speech he staked out the rightful position of religion as follows:

I am content to be one of the multitude of Christians who do not care much about the doctrine of the Trinity or the historical truth of the gospels. Both as a scientist and as a religious person, I am accustomed to living with uncertainty. Science is exciting because it is full of unsolved mysteries, and religion is exciting for the same reason. The greatest unsolved mysteries are the mysteries of our existence as conscious beings in a small corner of a vast universe. Why are we here? Does the universe have a purpose? Whence comes our knowledge of good and evil? These mysteries, and a hundred others like them, are beyond the reach of science. They lie on the other side of the border, within the jurisdiction of religion.

So the origins of science and religion are the same, he says, adding a little later:

Science and religion are two windows that people look through, trying to understand the big universe outside, trying to understand why we are here. The two windows give different views, but they look out at the same universe. Both views are one-sided; neither is complete. Both leave out essential features of the real world. And both are worthy of respect.

Trouble arises when either science or religion claims universal jurisdiction, when either religious dogma or scientific dogma claims to be infallible. Religious creationists and scientific materialists are equally dogmatic and insensitive. By their arrogance they bring both science and religion into disrepute. 18

By restoring mystery to its proper place at the centre of our lives, Dyson’s uncertainty might indeed offer the possibility for actual religious progress. It might achieve something that the purer atheism almost certainly never will. Hallelujah and amen!

*

Once upon a time I was an atheist too, only slowly coming to realise that being so sure-footed about the essential non-spirituality of existence requires an element of faith of its own. It requires a faith in the ultimate non-mystery of the material universe. That everything is, in principle at least, fathomable. Not that this means our atheistic scientific worldview must inevitably be duller, nor that it automatically considers life less wonderful. Not at all. Life and the rest of it may appear to be just as aimless as weather, to steal James’ choice metaphor, but this has a kind of beauty of its own, as many an atheist will affirm. And there’s security of a different, some would say higher, form in the acceptance and affirmation of perfectly aimless existence. It can feel like a weight lifted.

Yet, the rarely admitted truth is that the carriers of the scientific light of reason (of whom I remain very much one) are just as uncertain as the average Joe Churchgoer about what might loosely be termed the supernatural (or supranatural) – by which I mean both the ultimately unknowable, and also, whatever strange and various events still remain unexplained by our accepted laws of the natural world. All of which stands to reason: the inexplicable lying, by its very definition, outside the province of science, whilst, at the same time, a bristling realisation that the universe is inherently and intractably mysterious stirs unconsciously at the back of all our minds, even those of the most logical and rational of thinkers. For the stark truth is that existence itself is spooky! And consequently, scientists too are sometimes afraid of the dark.

Finally then, the practising scientist, putting aside all questions of ultimate meaning or purpose, for these concerns are beyond the scope of their professional inquiries, must admit that they sideline such matters only on the grounds of expedience. The only useful scientific questions being ones that can be meticulously framed. So whilst science is necessarily dispassionate and preoccupied with material facts, it does not follow that being scientific means to mistake the world as revealed by science for the scientific model that approximates it – any model of the universe being, at best, obviously a pale approximation to the true complexity of the original.

Scientists then are not the new high priests and priestesses of our times, because their role is cast quite differently. Gazing downwards rather than upwards, to earth rather than heaven, they pick away at the apparently lesser details in the hope of unravelling the bigger picture. Turning outwards instead of inwards, deliberately avoiding subjective interpretations in favour of tests and measurements, they seek to avoid opinion and to rise above prejudice. All of this requires a kind of modesty, or should.

But there is also a fake religion, one that dresses itself in the brilliant white of laboratory coats. It pleads that the only true way to understanding is a scientific one, disavowing all alternatives to its own rational authority. Of course such claims to absolute authority are no less fraudulent than claims of papal infallibility or the divine right of kings, but true devotees of the new religion are blind to such comparisons. More importantly, they fail to see that all claims to an exclusive understanding, whether resting on the doctrines of religion or on the microscopic scrutiny of science, aside from being false claims, necessarily involve a diminution of life itself. That at its most extreme, this new religion of scientific materialism leads unswervingly to what William Blake called “Newton’s sleep”: a mindfulness only to what can be measured and calculated. And truly this requires a tremendous sacrifice.

*

James Tunney, LLM, is an Irish barrister who has lectured on legal matters throughout the world. He is also a poet, visual artist, and author of “The Mystical Accord: Sutras to Suit Our Times, Lines for Spiritual Evolution”. In addition, he has written two dystopian novels – “Blue Lies September” and “Ireland I Don’t Recognize Who She Is”. Here he speaks with the host of “New Thinking Allowed”, Jeffrey Mishlove, about the ‘Perennial Philosophy’ tradition found in cultures throughout the world, whose essential core tenet is mysticism. What is meant by mysticism is discussed at length, and as Tunney explains, one important characteristic shared by all mystical traditions is the primary recognition of humans (and animals) as spiritual beings. Thus, scientism as a cultural force, by virtue of its absolutist materialist dogma, is necessarily antagonistic to all forms of mysticism:

*

So by degrees I’ve been converted back to agnosticism, for all its shamefulness. Agnosticism meaning “without knowledge”. I really have no idea whether or not a god of any useful description exists, nor even whether this is a reasonable question, yet I can still confidently rule out many of his supposed manifestations (especially those where his name is top-heavy with its illuminated capital G). But any detailed speculation on the nature of god or, if you prefer, the spiritual, is what William James calls “passing to the limit”, and in passing that limit we come to what James calls the “over-beliefs”.

Over-beliefs are the prime religious currency in which churches do the bulk of their business. They are what most distinguish the Lutherans from the Catholics; the Sunnis from the Shias; and more schismatically again, the Christians from the Muslims. All the carefully formulated dogma about the Holy Trinity, the Immaculate Conception, the virgin birth; the sacraments and the catechisms; the ways of invocation of the One True God; or, in more Easterly traditions, the karmic cycle and the various means and modes of reincarnation, and so on and so forth – all are over-beliefs, for they attempt to cross the threshold from “the sensible and merely understandable world” to “the hither side”. In his own conclusions, James suggested a more “pluralistic hypothesis” to square the varieties of religious experience:

Meanwhile the practical needs and experiences of religion seem to me sufficiently met by the belief that beyond each man and in a fashion continuous with him there exists a larger power which is friendly to him and to his ideals. All that the facts require is that the power should be other and larger than our conscious selves. Anything larger will do, if only it be large enough to trust for the next step. It need not be infinite, it need not be solitary. It might conceivably even be only a larger and more godlike self, of which the present self would then be but the mutilated expression, and the universe might conceivably be a collection of such selves, of different degrees of inclusiveness, with no absolute unity realized in it at all…

These are James’ over-beliefs and they broadly concur with my own. Though mine have also been tinted a little by Eastern hues. Intuitively I am drawn by the Taoist notion of the constant flux of eternal becoming. An unnameable current of creation with an effortless strength like the strength of water, which is subtle, flexible and unstoppable. Accordingly, my intuition respects the Taoist directive to flow effortlessly with this eternal current, for there is no sense in swimming against it. And this is a philosophy that complements well the mindfulness of Zen (or Ch’an), with its playful seriousness, its snapping fingers calling the wandering attention back to the here and now. I can easily empathise with the Zen student’s search for the raw nakedness of existence, with its requirement to strip away all veils of presumed understanding; focusing upon where the outer and inner worlds reflect, to achieve a spontaneous but ineffable awakening. I can see it as a potentiality, and it does not jar against the hard-won rationality of my scientific training. In contrast to so much of the declarative wiseacring of Western philosophy, mastery of both disciplines is all about knowing when to shut up. As mythologist Joseph Campbell, author of The Hero with a Thousand Faces, once said:

God is a thought, God is an idea, but its reference is to something that transcends all thinking. I mean, he’s beyond being, beyond the category of being or nonbeing. Is he or is he not? Neither is nor is not. Every god, every mythology, every religion, is true in this sense: it is true as metaphorical of the human and cosmic mystery. He who thinks he knows doesn’t know. He who knows that he doesn’t know, knows. 19

I am not, of course, a Taoist, nor a Buddhist of any kind. I am unaffiliated to any church. But I am drawn to Taoism and Zen Buddhism because of their appeals to objectivity, with emphasis on revelation above and beyond belief. For in neither Taoism nor Zen is any shape of God decreed or delineated: God being as much a zero as a one. And as a one-nothing, or a no-thing, this no-God requires no sacrifice, no high calls to blind obedience; for the Universe is as the Universe does. Yet something of the religious remains, beyond the purely philosophical, a something that strict atheism lacks: a personal role within the cosmic drama, which escapes the absurd chance and purposeless drifting of materialist scientism. 20

So it is that I choose to adopt them to an extent. To draw on their philosophies, and to marry these in turn with ideas found in strands of Western Existentialism, with aspects of liberal humanism and with the better parts of Christianity (distilled in the songs of Blake, for instance). But whilst it may be edifying to pick the best from traditions of both East and West, to satisfy my god-shaped hole, I see too that such a pick-and-mix approach is prone to make as many false turns as any traditional religious route – it is interesting to note here that the word “heretic” derives from the Greek hairetikos, meaning “able to choose”. For there are no actual boundaries here. So what of the many shamanic traditions and tribal gods of primitive society? What about our own pagan heritage? Isn’t it time to get out the crystals and stuff some candles in my ears? Mesmerised by a hotchpotch of half-comprehended ideas and beliefs, just where are the safeguards preventing any freewheeling religious adventurer from falling into a woolly-headed New Ageism?

Well, it’s not for me or anyone else to call the tune. Live and let live – everyone should be entitled to march to the beat of their own drums, always taking care not to trample the toes of others in the process. But this idea of the New Age is a funny business, and I wish to save my thoughts on that (perhaps for another book). Meanwhile, my sole defence against charges of constructing a pick-and-mix religion is this: if you’d lost your keys where would you look for them? In your pocket? Down at your feet? Only under the streetlights? Oh, you have your keys – well then, good for you! Now, please don’t expect everyone else to stop looking around for theirs, or to restrict themselves to searching only under the most immediate and convenient lamppost.

Having said all this, and rather shamefully spoken too much on matters that better deserve silence, it now behoves me to add that I am certainly careful when it comes to choosing between personal over-beliefs, adhering to one rule: that what is discredited by steadfast and rigorous scientific trial is guaranteed baloney. Miracles, of course, are quite out of the question, failing on account of their own self-defining impossibility. Equally I have no time for animalistic gods of any persuasion, whether or not they share a human face. But my deepest distrust is not of religions per se (since, to repeat, these are many and varied in form, and then good and bad in parts), but more specifically of the seemingly numberless religious organs we call creeds, sects, churches and so on.

To contend that religion is always about power is to miss the bigger picture, as I hope I’ve satisfactorily shown, and yet… It would be wise for the sheep to beware the shepherd. This much agreed, however, I feel sure that religion, in some wiser form, still has an important role to play in many of our individual lives and for the sake of all our futures. You may be surprised to learn that George Orwell thought similarly, and made his opinion felt in his essay Notes on the Way (an essay which, at intervals, I shall return to later):

… Marx’s famous saying that ‘religion is the opium of the people’ is habitually wrenched out of its context and given a meaning subtly but appreciably different from the one he gave it. Marx did not say, at any rate in that place, that religion is merely a dope handed out from above; he said that it is something the people create for themselves to supply a need that he recognized to be a real one. ‘Religion is the sigh of the soul in a soulless world. Religion is the opium of the people.’ What is he saying except that man does not live by bread alone, that hatred is not enough, that a world worth living in cannot be founded on ‘realism’ and machine-guns? If he had foreseen how great his intellectual influence would be, perhaps he would have said it more often and more loudly. 21

Next chapter…

*

Addendum: mind over matter

Physicists speak about a ‘quantum theory’, but when asked what the physical reality this ‘theory’ describes is truly like, they have no useful or consistent answers at all. It works, they say, and at a mathematical level it is the most precise ‘theory’ so far devised, so “shut up and calculate!” Or, if you prefer (with apologies to Shelley): look upon our quantum works and do not despair… certainly not about any gaps in our understanding of the true nature of reality that may or may not underlie it. This non-philosophical culture was the norm by the time I went to university; an attitude that was seldom if ever challenged and thus easily instilled.

Of course, quantum reality does come as a shock at first. I had genuinely felt an acute anxiety on first hearing of Schrödinger’s poor cat forever half-dead in her box. Not that we ever learnt about the famous thought experiment in class of course: no, physics abandoned Schrödinger’s cat to her interminable state of limbo long ago. Any underlying ontology was reading for pleasure only; a late-night topic for post-pub discussions.

But physics is mistaken in its beliefs. It has mixed up its modern ignorance with ultimate incomprehensibility. Schrödinger’s cat was actually meant to shock us all: most importantly, to wake up all those physicists who chose to interpret the abstraction as the world itself and decide without proof that nothing of reality exists beyond it. But instead we have incorporated the semi-corporeal cat into the mix of quantum oddities, as evidence of our unreal reality, when the whole point was that such quantum half-death is absurd.

Moreover, what physicists today describe as ‘quantum theory’ is not strictly a theory at all but actually just a powerful predictive recipe and an engineering tool, whereas a genuine theory is yet to be written: the true quest for it is disguised by language again, because this potential future theory is what physicists currently sideline under the label ‘interpretations’ – as if they don’t much matter.
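To make plain what “predictive recipe” means here, the following is a tiny illustration of my own (standard textbook bookkeeping only, and certainly not a stand-in for any interpretation): given a qubit prepared in a superposition, the recipe hands back the probabilities of the possible measurement outcomes and says nothing whatsoever about what is “really there” in between.

```python
import numpy as np

# A single qubit prepared in the superposition a|0> + b|1>
# (the amplitudes below are arbitrary example values with |a|^2 + |b|^2 = 1)
a, b = 0.6, 0.8j
psi = np.array([a, b])

# The recipe: squared magnitudes of the amplitudes give outcome probabilities
probabilities = np.abs(psi) ** 2
print("P(0) =", probabilities[0], "  P(1) =", probabilities[1])   # 0.36 and 0.64

# Repeated measurements simply sample from that distribution; the formalism
# itself is silent about any underlying ontology
rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probabilities))
```

That such a slim calculational rule is so spectacularly accurate, while leaving the ontological question untouched, is exactly the oddity the paragraph above is pointing at.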

Professor of Philosophy at NYU, Tim Maudlin, explains the problem with quantum theory today and how the foundations of quantum mechanics should be understood (please ignore the perturbing observable in the background!):

Although the notion that consciousness plays a key role in quantum mechanics was seriously considered by many of the scientific luminaries of the early twentieth century, including John von Neumann, who discussed its salient role in his treatise The Mathematical Foundations of Quantum Mechanics, such interpretations have since fallen mostly out of favour (certainly amongst physicists). More recent empirical findings are, however, just beginning to challenge this scientific orthodoxy and may indeed rock the assertion that there is an inherent distinction between what I above called “quantum choice” and our conscious choice. In fact, in contradiction to what I originally wrote, some of the latest studies are producing results that show an astonishingly high correlation between conscious intention and the so-called “collapse” of the wave function.

The last word (of this chapter – not the subject!) I shall leave to Freeman Dyson:

I cannot help but think that the awareness of our brains has something to do with the process that we call “observation” in atomic physics. That is to say, I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons.

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1    Not quite true actually. Apparently my father was one of a small number who decided not to bother watching the first men step onto the moon’s surface. He tells me that he was so sure they would make it, he couldn’t see the point. My mother watched, and apparently so did I, although still not two years old. I can’t say that I remember anything about the moment, and probably found it a lot less interesting than Bill and Ben The Flowerpot Men, but perhaps it affected me on some deeper level — could it be that seeing the first moon landing at such a tender age was part of the reason I ended up studying comets?

2    Radiation pressure is the consequence of light itself (photons) having momentum.

3    A process that releases energy to the surroundings in the form of work as opposed to endergonic, which means energy consuming. These terms are closely related to exothermic and endothermic, where energy release and absorption take the form of heat transfer.

4    Karl Popper’s precise “line of demarcation” was that, if any theory can be shown to be falsifiable, then it can usefully be described as scientific.

5

“The totality of true propositions is the whole of natural science (or the whole corpus of the natural sciences).”

— Wittgenstein, Tractatus Logico-Philosophicus, 4.11

6

“The whole modern conception of the world is founded on the illusion that the so-called laws of nature are the explanations of natural phenomena. Thus people today stop at the laws of nature, treating them as something inviolable, just as God and Fate were treated in past ages. And in fact both were right and both wrong; though the view of the ancients is clearer insofar as they have an acknowledged terminus, while the modern system tries to make it look as if everything were explained.” — Wittgenstein, Tractatus Logico-Philosophicus, 6.371-2.

7    In German: “Wovon man nicht sprechen kann, darüber muß man schweigen.”

8    “the truth of the thoughts that are here communicated seems to me unassailable and definitive.” Taken from the preface to the Tractatus Logico-Philosophicus.

9    In this first treatise of Wittgenstein (which was the only one he ever published – his later philosophy contained in “The Philosophical Investigations” being published posthumously), he begins with the totally unsupported and deeply contentious assertion that, in effect, all meaningful language involves a description, or more correctly a depiction, of fact. This follows because the use of all language involves a correlation between objects in the world and names for those objects. This is his so-called “picture theory of language” which requires, Wittgenstein claims, a one-to-one correspondence between names and objects. This given, he demonstrates that if any proposition is to be genuine it must have a definite sense, or to put it differently, for a statement to admit to any test of proof then it must at least be possible for that question to be set out absolutely clearly. For Wittgenstein this means that questions about ethics, aesthetics and theology fall outside the realm of philosophy; the reason being that they rely on words such as “goodness”, “beauty”, “truth” and “god” which have no clear one-to-one correspondence. Wittgenstein of course later changed his mind on some of this. Recognising that his picture theory was overly simplistic he returned to philosophy with a radically new idea. That the meaning of language is contained in its social usage, thereby reassigning the work of philosophers to the study of language within its natural social environment. The purpose of philosophy was now to untie the knots of these so-called “language games”. But it is easy to mistake him here – and many do – his notion being that science can properly be understood and appraised only by those who know its language, religion likewise, and so on. And not that all inquiry is merely a matter of “playing with words”.

10  The Varieties of Religious Experience: a Study in Human Nature by William James, Longmans, Green & co, 1902; from a lecture series.

11  Ibid.

12  Ibid.

13  Ibid. Italics maintained from the original source.

14  Ibid. James earlier says, “It is absurd for science to say that the egotistic elements of experience should be suppressed. The axis of reality runs solely through the egotistic places, – they are strung upon it like so many beads.”

15  Genesis Ch.22 tells how God commanded Abraham to go to the land of Moriah and to there offer up his own son Isaac as a sacrifice. The patriarch travels three days until finally he comes to the mountain, just as God had instructed, and there he tells his servant to remain until he and Isaac have ascended the mountain. Isaac, who is given the task of carrying the wood on which he will soon be sacrificed, repeatedly asks his father why there is no animal for the burnt offering. On each occasion, Abraham says that God will provide one. Finally, as Abraham draws his knife and prepares to slaughter his son, an angel stops him. Happily, a ram has been provided and it can now be sacrificed in place of Isaac.

16  This is sometimes called “the riddle of Epicurus” or “the Epicurean Paradox”, even though Epicurus did not in fact leave behind any written record of this statement. The first record of it appears some four hundred or more years later, in a work by the early Christian writer Lactantius, who is actually criticising the argument.

17  Freeman Dyson is undoubtedly one of the greatest scientists never to win the Nobel Prize. However, he was awarded the Lorentz Medal in 1966 and the Max Planck Medal in 1969. In March 2000 he was also awarded the Templeton Prize, created in 1972 by the investor Sir John Templeton in an attempt to remedy what he saw as an oversight by the Nobel Prizes, which do not honour the discipline of religion. Previous Templeton Prize recipients have included the Rev. Dr. Billy Graham, Aleksandr Solzhenitsyn, Charles Colson, Ian Barbour, Paul Davies, physicist Carl Friedrich von Weizsacker, and Mother Teresa.

18  Extracts from Freeman Dyson’s acceptance speech for the award of the Templeton Prize, delivered on May 16, 2000 at the Washington National Cathedral.

19 From an interview conducted in 1987 by American journalist Bill Moyers as part of a six-part series of conversations with Joseph Campbell entitled Joseph Campbell and the Power of Myth. The quote is taken from Episode 2, ‘The Message of the Myth’, broadcast on June 26, 1988. The full transcript is available here: https://billmoyers.com/content/ep-2-joseph-campbell-and-the-power-of-myth-the-message-of-the-myth/

20  It is even tempting to envisage some grand union of these two ancient Chinese philosophies, called Zow!-ism perhaps.

21  Extract taken from Notes on the Way by George Orwell, first published in Time and Tide. London, 1940.


the stuff of dreams

The following article is Chapter Two of a book entitled Finishing The Rat Race.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

Oats and beans and barley grow,

Oats and beans and barley grow,

Do you or I or anyone know,

How oats and beans and barley grow?

— Traditional children’s rhyme

*

One of my earliest memories at school was being told that rabbits had become quicker in order to escape foxes, and likewise, that foxes had become quicker in order to catch rabbits. This, the teacher said, is how one type of animal can slowly change into a new type through a process known as evolution. Well, I didn’t believe that for a minute. Such dramatic outcomes from such unremarkable causes. And why, I wondered, would something change simply because it had to – having to isn’t any reason.

Of course in many ways my teacher had missed the point (though in fairness, perhaps it was I who missed his point, off in a daydream, or curiously intent on the inconstant fluttering of a leaf against the window, or otherwise lost to the innocent pleasures of childhood reveries). Either way it doesn’t matter much. Importantly, my teacher had done his job – and done it well! He had planted a seed, which made this a most valuable lesson. But in his necessarily simplified account of evolution there was a flaw (and his version would of necessity have been a simple one, because however much I may have been distracted, the subtleties of evolution were beyond the grasp of our young minds). For what he had missed out was not why the rabbits became faster but how. The question being what “adaptive mechanism” could have driven any useful sequence of changes we might call ‘evolution’. And this is really the key point. Leaving out mention of any kind of adaptive mechanism, he was leaving open all sorts of possibilities. For instance, Lamarckism and Darwinism, though both theories of evolution, paint very different accounts of how life has developed, for they presume quite different adaptive mechanisms. I will try to explain the matter more carefully, and in terms of giraffes.

*

You might ask a great many questions about giraffes. For instance, how on earth could their extraordinary and striking markings ever provide useful camouflage? Though if you’re ever lucky enough to see one step almost invisibly out of dappled foliage into full light, you will know that the effect is near perfect. Alternatively, you might ask why it is that they walk with both legs on the same side moving together. A very elegant form of locomotion. However, by far and away the most frequently asked question about giraffes is this: why do they have such long necks?

Well, here’s what Lamarck would have said. Giraffes began as ordinary antelope. Some of the antelope preferred grass and others preferred leaves. The ones that preferred leaves had an advantage if they could reach higher. To achieve this they would stretch their necks a little longer. As a direct result of acquiring this new characteristic, the foals of those slightly longer-necked antelope would also be born with slightly longer necks. They too would stretch that little bit higher. Over generations some types of the antelope would develop extremely long necks, and the descendants of these would eventually develop into a new species called giraffes.

The basis for Lamarck’s reasoning lies in a perfectly rational misunderstanding about genetics. He assumes that the “acquired characteristics” (i.e., those characteristics developed or acquired during life) of the parents will somehow be passed through to their offspring. It turns out however that this isn’t actually the case. He might have guessed as much, I suppose. One of the oft-cited criticisms against Lamarck’s theory has been the case of Jewish boys. Why, his opponents would ask, do they ever grow foreskins in the first place?

Darwin offered an alternative hypothesis. Perhaps it goes like this, he thought: there are already differences within the population of antelope; some will have shorter necks than others to start with. Or in other words, there is already a “natural variation”. In times of plenty this may not be of significance, but in times of scarcity it could be that the antelope with longer necks have a slight advantage. This idea of course applies to any antelopes with other accidentally favourable characteristics, for example those that run faster, are better camouflaged, or have more efficient digestive systems; but let’s not go there – let’s stick to necks for a moment. The longer-necked adults can reach higher and so get to those few extra leaves that will help them to survive. Having a slightly higher chance of survival means (all other factors being equal) that they are more likely to pass on their characteristics. Within a few generations there will be an inevitable increase in the long-necked variety until eventually, quite plausibly, it evolves into a separate species.

What had Darwin achieved with this alternative explanation? Well, he had abolished any requirement for a theory of heredity that depended on the transmission of “acquired characteristics”. He’d not entirely proved Lamarck wrong but only shown that his ideas aren’t necessary. And although in actual fact Darwin never acknowledged Lamarck’s contribution, purely in terms of theories of heredity his own version was little better than Lamarck’s (basically, by introducing the equally flawed concept of pangenes he had finally got around the issue of Jewish foreskins). But it is not what Darwin had undermined, so much as what he had set up, that preserves his legacy. That the true driving force of evolution depends on variation and competition, in a dynamic relationship that he called “natural selection”.

According to Darwin’s new vision then, the evolution of species depends upon how individuals within that species interact with their environment. Those that are best adapted will survive longer and pass on their winning characteristics, and the rest will perish without reproducing. In short, it is “the survival of the fittest” that ensures evolutionary progress; though this catchy summary was not Darwin’s own, but one that he slowly adopted. (It was actually first coined by the philosopher Herbert Spencer, whose ideas I wish to return to later.)
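For what it’s worth, the difference between the two adaptive mechanisms can be caricatured in a few lines of code. The sketch below is a toy model of my own (the numbers for neck length, survival odds and herd size are all invented for illustration, and it is not anything Darwin wrote): neck lengths vary naturally, longer necks slightly improve the odds of surviving to breed, and offspring inherit only what their parents were born with, never what the parents acquired during life.

```python
import random

random.seed(1)

# Neck lengths (in cm) vary naturally within the founding herd
herd = [random.gauss(150, 10) for _ in range(200)]

for generation in range(100):
    # Longer necks give a slightly better chance of surviving to breed
    survivors = [neck for neck in herd if random.random() < min(1.0, neck / 200)]
    # Offspring inherit a parent's inborn neck length, plus a little fresh
    # variation; nothing acquired during the parent's lifetime is passed on
    herd = [random.gauss(random.choice(survivors), 2) for _ in range(200)]

print(f"average neck length after 100 generations: {sum(herd) / len(herd):.0f} cm")
```

Run it and the average neck length creeps steadily upward, even though no individual ever stretches anything: heritable variation plus differential survival is all the model contains, which is the whole of Darwin’s point against Lamarck.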

*

Darwin still attracts a lot of criticism, and much of it comes from religious quarters intent on promulgating the view that “it was God what done it all” – the Creationists who refuse to acknowledge any of the overwhelming evidence, whether from zoology, botany, geology, palaeontology, or embryology; rejecting reason in deference to “the word of God”. However, there are also more considered critiques.

Perhaps the most interesting of these is that Darwin’s evolutionary theory of natural selection is unscientific because it is founded on a tautology. It is after all self-evident that the fittest will survive, given that by fitness you must really mean “fitness for survival”. After all, it has to be admitted that sloths have survived, and in what sense can a sloth be said to “be fit” other than in its undoubted fitness to be a sloth? The assumption then is that Darwin’s idea of natural selection has added nothing that wasn’t already glaringly obvious. Yet this is an unfair dismissal.

Firstly, it is unfair because, as I have said above, “the survival of the fittest” is Spencer’s contribution – one that leads rapidly into dangerous waters – but it is also unfair because it misses the way in which Darwin’s hypothesis is not only predictive, but also (as Karl Popper was so keenly aware) testable. If Darwin’s theory were a mere tautology then nothing on earth could ever disprove his claims, and yet there is room here for evidence that might truly test his theory to destruction.

How? Well, Darwin, it must be understood, had put forward a theory of gradual adaptation, so there is no accounting for any sudden leaps within his slowly branching history of life. If, for instance, a complex new order of species suddenly arose in the fossil record without ancestry, then Darwin’s theory would need a radical rethink. Or let’s say some fossil were found with characteristics shared by no discovered ancestor. Here again Darwin’s theory would be seriously challenged. On the other hand, embryologists might discover discrepancies in the way eggs develop; and likewise, following the discovery of DNA and the advent of modern genetics, we might find sudden abrupt shifts in the patterns of genes between species instead of gradual changes. Each of these cases would provide powerful evidence to challenge Darwinian theory.

But instead of this (at least until now), these wide and varied disciplines have heaped up the supporting evidence. For example, people used to talk a lot about “the missing link”, by which they generally meant the missing link between humans and apes, whereas scientists have in fact discovered a whole host of “missing links” in the guise of close cousins, from the Neanderthals to the strange and more ancient australopithecines. For more exciting missing links, how about the fact that the jaw bone of reptiles exists in four parts, and that three of those bones have slowly evolved in mammals, ourselves included, to form parts of the middle ear? How do we know? Well, there is evidence in the development of mammalian and reptilian embryos and, more recently, in the discovery of an intermediate creature in which the bones were clearly used concomitantly for both chewing and listening. This is one of many discovered creatures that Darwin’s theory has predicted – whilst the most famous is surely the bird-lizard known as Archaeopteryx. Where, by way of comparison, are the remains of, say, Noah’s Ark?

But Darwin’s theory was not correct in all details. As I have already mentioned, his notion of pangenes was in some ways little better than Lamarck’s theory of acquired characteristics, and so it is perhaps still more remarkable that whilst he looked through a wonky glass, what he gleaned was broadly correct. Although, surprisingly perhaps, it took a monk (and one trained in physics more than in biology) to begin setting the glass properly straight. Enter Gregor Mendel.

Richard Dawkins shows how whales evolved from a cloven-hoofed ancestor, and reveals whales’ closest modern-day cousin:

*

If we think back to what people knew about the world (scientifically speaking) prior to the turn of the twentieth century, it seems astonishing what was about to be discovered within just a few decades. For instance, back in 1900 physicists were still in dispute about the existence of atoms, and astronomers were then unaware of the existence of independent galaxies beyond the Milky Way. But in 1905, Einstein suddenly published three extraordinary papers. In the least well known of these, he showed mathematically how the jiggling Brownian motion of pollen grains on water (observed by Robert Brown almost a hundred years earlier) was caused by collisions with water molecules; in doing so, he validated the concept of matter being formed out of discrete particles and thereby, by extension, the existence of atoms, finally settling a debate about the nature of matter that had begun more than two thousand years earlier in Greece.

Moreover, it wasn’t until the early 1920s that Edwin Hubble (now better known as the father of the idea of the expanding universe) succeeded in resolving the outer parts of other galaxies (until then called nebulae), detecting within them the light of billions of individual stars. At last we knew that there were other galaxies just like our own Milky Way.

So in just twenty years, our universe had simultaneously grown and shrunk by a great many orders of magnitude. Nowadays, of course, we know that atoms are themselves composed of smaller particles: electrons, protons and neutrons, which are in turn fashioned from quarks 1; while the galaxies above and beyond congregate within further clusters (the Milky Way belonging to the so-called Local Group, which is surely the most understated name for any known object in the whole of science).

The universe we have discovered is structured in multiple layers – though the boundaries between these layers are only boundaries of incomprehension. Looking upwards, we encounter objects inconceivably large that are in turn the building blocks of objects much larger again; whilst investigating the finest details of the particle world, we’ve learnt how little fleas have ever smaller fleas…

Our first stabs at understanding the origins of the trillions of galaxies in our visible universe, and at comprehending the nature of the matter and energy that comprise them, have led to speculations grounded in solid empirical findings that allow us to construct models of how the physical universe as a whole may have begun. Thus, via a joint collaboration between physicists searching on the macro- and micro-scales, we have finished up with cosmology: the rigorous scientific study of the cosmos, no less! (And to most physicists working at the turn of the twentieth century, the idea of a branch of physics solely devoted to the understanding of creation would surely have seemed like pure science fiction.) I hope my digression has helped to set the scene a little…

*

Around the turn of the twentieth century, there also remained a mystery surrounding the science of heredity and the origin of genes. It was of course common sense that children tended to have characteristics reminiscent of their parents, but in precisely what manner those parental characteristics were combined remained a matter of tremendous speculation. It was still widely believed that some kind of fluid-like mingling of genes occurred, little substantial scientific progress having been made on the older ideas about bloodlines.

But those early theories of blended inheritance, which imagined the two gene pools infusing together as two liquids might mix, were mistaken. If genes really behaved this way then surely the characteristics of people would also blend together. Just as we add hot water to cold to make it warm, so a white man and a black woman would surely together procreate medium brown infants, the line becoming darker or lighter over the generations depending on whether further black or white genes were added. Which is indeed true up to a point, but only up to a point. And if it really were so simple, then the range of human characteristics might (as some racial purists had feared) gradually blend to uniformity. But the real truth about inheritance, as Mendel was quietly discovering during the middle of the nineteenth century, is that genes have an altogether more intriguing method of combination.

*

Mendel was a monk, who aside from observing the everyday monastic duties also taught natural science, principally physics. The work that eventually made him world-renowned, however, involved studies on peas; this was Mendel’s hobby.

He spent many years cross-fertilising varieties and making detailed observations of the succeeding generations. He compared the height of plants. He compared the positioning of flowers and pods on the stem. And he noted subtle differences in shape and colour of seeds, pods and flowers. By comparing generations, Mendel found that offspring showed traits of their parents in predictable ratios. More surprisingly, he noticed that a trait lost in one generation might suddenly re-emerge in the next. So he devised a theory to explain his findings. Like a great many scientific theories, it was ingenious in its simplicity.

Within every organism, he said, genes for each inheritable trait must occur not individually but in pairs, and in such a way that each member of a “gene-pair” is either “dominant” or “recessive” to its partner. In this way, a gene could sometimes be expressed in the individual, whilst in different circumstances it might lie dormant for a generation. But please allow me a brief paragraph to explain this modern concept of inheritance more completely and coherently.

The usual way to explain Mendelian Inheritance is in terms of human eye colours. It goes like this: there is one gene for eye colour, but two gene types. These are called “alleles” (from the Greek for “one another”). In this case, one allele produces brown eyes (let’s call this Br), and the other produces blue eyes (Bl). You inherit one of these gene types from your mother and one from your father. So let’s say you get a brown allele from each. That means you have Br-Br and will have brown eyes. Alternatively you may get a blue allele from each, and then you’ll have Bl-Bl and so have blue eyes. So far so simple. But let’s say you get a brown from one parent and a blue from the other. What happens then? Well, Mendel says, they don’t blend to produce green eyes or something in between; rather, one of the alleles, the brown one as it happens, will be “dominant”, which means you will have brown eyes. But here’s the interesting bit, since although you have brown eyes you will nevertheless carry an allele for blue eyes – the “recessive” allele. Now let’s say you happen to meet a beautiful brown-eyed girl, who is also carrying the combined Br-Bl genes. What will your beautiful children look like? Well, all things being equal in terms of gene combination – assuming that you are each as likely to contribute a Bl allele as a Br allele (i.e., that this is a purely random event) – then there are only four equally likely possibilities: Br-Br, Br-Bl, Bl-Br, or Bl-Bl. The first three of these pairs will produce dominant brown, whilst the two recessive Bl alleles in the last pair produce blue. So if you happen to have four children, then statistically speaking, you are most likely to produce three with honey brown eyes, and one imbued with eyes like sapphires. And the milkman need never have been involved.
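
Since the arithmetic of those four allele pairings is easily lost in prose, here is a minimal sketch in Python – purely illustrative, and borrowing nothing from Mendel beyond the Br and Bl labels of the paragraph above – that draws one allele at random from each Br-Bl parent and tallies the resulting eye colours:

```python
# Illustrative sketch of the Br-Bl x Br-Bl cross described above.
# Assumes each parent passes on either of their two alleles with equal probability.
import random
from collections import Counter

def child_genotype():
    # One allele inherited from each Br-Bl parent, chosen at random.
    return tuple(sorted(random.choice(["Br", "Bl"]) for _ in range(2)))

def eye_colour(genotype):
    # Br is dominant: a single Br allele is enough for brown eyes.
    return "brown" if "Br" in genotype else "blue"

counts = Counter(eye_colour(child_genotype()) for _ in range(100_000))
print(counts)  # roughly three brown-eyed children for every blue-eyed one
```

Run over many simulated children, the brown-to-blue ratio settles close to 3:1, exactly as the four equally likely pairings (Br-Br, Br-Bl, Bl-Br, Bl-Bl) predict.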

Mendel had realised that instead of the old-fashioned “analogue” system, in which our genes added together in some kind of satisfactory proportions – like two voices forming a new harmony – genes actually mix in an altogether more “digital” fashion, where sometimes the gene type is on and sometimes it is off. Inevitably, the full truth is more complicated than this, with alleles for different genes sometimes combining in other ways, which will indeed lead to blending of some kinds of inherited traits. Yet even here, it is not the genes (in the form of the alleles) that are blended, but only the “expressed characteristics” of that pair of alleles – something called the phenotype. Thus, for generation after generation these gene types are merely shuffled and passed on. Indeed the genes themselves have a kind of immortality, constantly surviving, just as the bits and bytes in computer code pass unaltered from copy to copy. Of course, errors in their copying do eventually occur (and we now know that it is precisely such accidental “mutations” which, by adding increased variety to the gene pool, have served to accelerate the process of evolution). 2

Mendel’s inspired work was somehow lost to science for nearly half a century, and so although he was a contemporary of Darwin and knew of Darwin’s theory – indeed, Mendel owned a German translation of “On the Origin of Species”, in which he had underlined many passages – there is absolutely no reason to suppose that Darwin knew anything at all of Mendel’s ideas.

*

When Mendel’s papers were finally recovered in 1900, they helped set in motion a search for a molecular solution to the question of biological inheritance; a search that would eventually lead to Crick and Watson’s dawning realisation that the structure of DNA must take the form of an intertwined double-helix. Such an extraordinary molecule could peel apart and reform identical copies of itself. DNA, the immortal coil, the self-replicating molecule that lay behind all the reproductive processes of life, sent biologists (not least Crick and Watson) into whirls of excitement. It was 1953 and here was the biological equivalent to Rutherford’s momentous discovery of an inner structure to atoms, almost half a century earlier. Here was the founding of yet another new science. Whilst nuclear and particle physicists were finding more powerful ways to break matter apart, biologists would soon begin dissecting genes.

Aside from the direct consequences of current and future developments in biotechnology (a subject I touch on in the addendum below), the rapid developments in the field of genetics have led to another significant outcome, for biologists have also slowly been proving Darwin’s basic hypothesis. Genes really do adapt from one species to another – and we are beginning to see precisely how. Yet in complete disregard of the mounting evidence, evolutionary theory still comes under more ferocious attack than any other established theory in science. Why does Darwinism generate such furore amongst orthodox religious groups compared, say, to today’s equally challenging theories of modern geology? Why aren’t creationists so eager to find the fault with the field of Plate Tectonics? (Pardon the pun.) For here is a science in its comparative infancy – only formulated in the 1960s – that no less resolutely undermines the Biblical time-scale for creation, and yet it reaps no comparable pious fury. Rocks just aren’t that interesting apparently, whereas anyone with the temerity to suggest that human beings quite literally evolved from apes… boy, did that take some courage! 3

*

Now at last, I will get to my main point, which is this: given that the question of our true origins has now been formally settled, what are we to conclude and what are the consequences to be? Or put another way, what’s the significance of discovering that just a few million years ago – a heartbeat when gauged against the estimated four billion years of the full history of life on Earth – our own ancestors branched off to form a distinct new species of ape?

Well, first and foremost, I think we ought to be clear on the fact that being such relative terrestrial latecomers gives us no grounds for special pleading. We are not in fact perched atop the highest branch of some great evolutionary tree, or put differently, all creation was not somehow waiting on our tardy arrival. After all, if evolution is blind and not goal-orientated, as Darwinism proposes, then all avenues must be equally valid, even those that were never taken. So it follows that all creatures must be evolutionarily equal. Apes, dogs, cats, ants, beetles (which Darwin during his own Christian youth had noted God’s special fondness for, if judged only by their prodigious profusion), slugs, trees, lettuces, mushrooms, and even viruses: his theory shows no preference. All life has developed in parallel, and every species alive today has evolved from the same evolutionary roots and over the same duration, simply to reach the tips of different branches. The only hierarchy here is a hierarchy of succession – of the living over the dead.

In short then, Darwinism teaches that we are just part of the great nexus of life, and no more central or paramount than our planet is central to the universe. To claim otherwise is to be unscientific, and, as Richard Dawkins has pointed out, depends entirely upon anthropocentrism and the “conceit of hindsight”.

Darwin too quietly recognised that his theory provided no justification for any such pride in human supremacy. Likewise, he refused to draw any clear distinction between human races, correctly recognising all as a single species – an admission that says much for his intellectual courage and honesty, challenging as it did his otherwise deeply conservative beliefs. For Darwin was a Victorian Englishman, and although not a tremendously bigoted one, it must have been hard for him to accept that, amongst many other things, his own theory of evolution meant that all races of men were of equal birth.

*

But if we agree that humans are a specialised kind of ape, then we need to be fair in all respects. We have got into the habit of presuming that mankind, or Homo sapiens – “the wise man”, to apply our own vainglorious scientific denomination – of all the countless species on Earth, is the special one. Unique because, as it used often to be claimed, we alone developed the skill to use tools. Or because we have a unique capacity for complex communication. Or because we are unparalleled creators of wonderful music and poetry. Or because we are just supremely great thinkers – analytical to the point of seeking a meaning in the existence of existence itself. Or more simply, because we are self-aware, whereas most animals seem childishly oblivious even to their own reflected images. Or, most currently fashionable, because as a species we are uniquely sophisticated in an entirely cultural sense – that is, we pass on complex patterns of behaviour to one another like no other critters.

All of our uniqueness, we owe, so it goes, to the extraordinary grey matter between our ears, with everything boiling down eventually to this: we are special because we are such brainy creatures – the cleverest around. But think about it: how can we actually be sure even in this conviction? For what solid proof have we that no other creatures on Earth can match our intellectual prowess?

Well, we might think to look immediately to brain size, but there’s a catch: it turns out that bigger animals have bigger brain-needs merely to function. Breathing, regulating blood temperature, coping with sensory input, and so on, all require more neural processing the larger a creature becomes. So we must factor this into our equations, or else, to cite a singular example, we must concede that we are much dumber than elephants.

Okay then, let’s divide the weight of a brain by the weight of the animal it belongs to – or, better still, compare it against the brain weight we would expect for an animal of that size. We might even give the resulting ratio an impressive label such as “the encephalisation quotient”, or whatever. Right then, having recalibrated accordingly, we can repeat the measures and get somewhat better results this time round. Here goes: river dolphins have an EQ of 1.5; gorillas 1.76; chimpanzees 2.48; bottlenose dolphins 5.6; and humans an altogether more impressive 7.4. So proof at last that we’re streets ahead of the rest of life’s grazers. But hang on a minute, can we really trust such an arbitrary calculus? Take, for example, the case of fatter humans. Obviously they must have a lower average EQ than their thinner counterparts. So does this mean fatter people are stupider?
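
For readers who like to see the sums, here is a small Python sketch of how such a quotient is commonly calculated. The constant 0.12 and the two-thirds exponent follow Jerison’s well-known formulation of EQ; the animal masses are rough, illustrative figures of my own choosing rather than numbers taken from any study mentioned above, so the outputs will not exactly match the values quoted in the text:

```python
# Illustrative encephalisation quotient (EQ) calculation, after Jerison:
# EQ = brain mass / (0.12 * body mass^(2/3)), with both masses in grams.
def eq(brain_g: float, body_g: float) -> float:
    expected_brain = 0.12 * body_g ** (2 / 3)  # "expected" brain mass for this body size
    return brain_g / expected_brain

# Rough, approximate masses for illustration only (grams).
animals = {
    "human": (1350, 65_000),
    "chimpanzee": (400, 45_000),
    "bottlenose dolphin": (1600, 200_000),
    "elephant": (4800, 5_000_000),
}

for name, (brain, body) in animals.items():
    print(f"{name:>18}: EQ is about {eq(brain, body):.1f}")
```

The point, of course, is that the answer depends entirely on which masses you feed in and on the exponent you choose – which is rather the worry raised below.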

No; measurements of EQ might better be regarded as an altogether rougher indication of intelligence: a method to sort the sheep from the apes. But then, can you imagine for a minute that if, say, EQ gave higher results for dolphins than humans, we would ever have adopted it as a yardstick in the first place? Would we not more likely have concluded that there must be something else we’d overlooked besides body-mass? The fact that dolphins live in water and so don’t need to waste so much brain energy when standing still, or some such. For if we weren’t top of the class then we’d be sure to find that our method was flawed – and this becomes a problem when you’re trying to be rigorously scientific. So either we need more refinement in our tests for animal intelligence, with emphasis placed on being fully objective, or else we must concede that intelligence is too subtle a thing even to be usefully defined, let alone accurately scored.

However, a more bullish approach to our claims of greatness goes as follows: look around, do you see any other creatures that can manipulate their environment to such astonishing effects? None has developed the means to generate heat or refrigeration, to make medicines, or to adapt to survive in the most inhospitable of realms, or any of our other monumental achievements. Dolphins have no super-aqua equipment for exploring on land, let alone rockets to carry them to the Sea of Tranquility. Chimpanzees have never written sonnets or symphonies – and never will no matter how infinite the availability of typewriters. So the final proof of our superiority then is this, whether we call it intelligence or give it any other endorsement: technological achievement, artistic awareness, and imagination of every kind.

But what then of our very early ancestors, those living even before the rise of Cro-Magnon 4, and that first great renaissance which happened more than 40,000 years ago? Cro-Magnon people made tools, wore clothes, lived in huts, and painted the wonderful murals at Lascaux in France and at Altamira in Spain. They did things that are strikingly similar to the kinds of things that humans still do today. Homo sapiens of earlier times than these, however, left behind no comparable human artifacts, and yet, physiologically speaking, they were little different from you or me. Given their seeming lack of cultural development then, do we have justification for believing them intellectually inferior, or could it be that they simply exercised their wondrous imaginations in more ephemeral ways?

Or let’s take whales, as another example. Whales, once feared and loathed as little more than gigantic fish, are nowadays given a special privilege. Promoted to the ranks of the highly intelligent (after humans obviously), we have mostly stopped brutalising them. Some of us have gone further again, not merely recognising them as emotionally aware and uncommonly sensitive creatures, but ‘communing with them’. Swimming with dolphins is nowadays rated as one of the must-have life experiences along with white-water rafting and bungee jumping. So somehow, and in spite of the fact that whales have never mastered the ability to control or manipulate anything much – tool-use being a tricky business, of course, if you’re stuck with flippers – nevertheless, whales have joined an elite class: the “almost human”. We have managed to see beyond their unbridgeable lack of dexterity, because whales satisfy a great many of our other supposedly defining human abilities – ones that I outlined above.

Dolphins, we learn, can recognise their own reflections. And they use sounds, equivalent to names, as a way to distinguish one another – so do they gossip? How very anthropomorphic of me to ask! Also, and in common with many other species of cetaceans, they sing, or at least communicate by means of something we hear as song. Indeed, quite recent research based on information theory has been revealing: mathematical analysis of the song of the humpback whale indicates that it may be astonishingly rich in informational content – so presumably then they do gossip! And not only that, but humpback whales (and others of the larger whale species) share a special kind of neural cell with humans, called spindle cells. So might we gradually discover that humpback whales are equally as smart as humans? Oh come, come – let’s not get too carried away!

*

Do you remember a story about the little boy who fell into a zoo enclosure, whereupon he was rescued and nursed by one of the gorillas? It was all filmed, and not once but twice in fact – on different occasions and involving different gorillas, Jambo 5 and Binti Jua. 6 After these events, some in the scientific community sought to discount the evidence of their own eyes (even though others who’d worked closely with great apes saw nothing which surprised them at all). The gorillas in question, these experts asserted, evidently mistook the human child for a baby gorilla. Stupidity rather than empathy explained the whole thing. 7

Scientists are rightly cautious, of course, when attributing human motives and feelings to explain animal behaviour; however, a strict denial of parallels, one which precludes all recognition of motives and feelings aside from those of humans, becomes a reductio ad absurdum. Such an overemphasis on the avoidance of anthropomorphism is no measure of objectivity, and leads us just as assuredly to willful blindness as naïve sentimentality can. Indeed, to arrogantly presume that our closest evolutionary relatives, with whom we share the vast bulk of our DNA, are so utterly different that we must deny the most straightforward evidence of complex feelings and emotions reflects very badly upon us.

But then why stop with the apes? Dolphins are notoriously good at rescuing stranded swimmers, and if it wasn’t so terribly anthropomorphising I’d be tempted to say that they sometimes seem to go out of their way to help. Could it be that they find us intriguing, or perhaps laughable, or even pathetic (possibly in both senses)? – Adrift in the sea and barely able to flap around. “Why do humans decide to strand themselves?” they may legitimately wonder.

Dogs too display all the signs of liking us, or fearing us, and, at other times, of experiencing pleasure and pain, so here again what justification do those same scientists have to assume their expressions are mere simulacra? And do the birds really sing solely to attract potential mates and to guard their territory? Is the ecstatic trilling of the lark nothing more than a pre-programmed reflex? Here is what the eminent Dutch psychologist, primatologist and ethologist, Frans B.M. de Waal, has to say:

“I’ve argued that many of what philosophers call moral sentiments can be seen in other species. In chimpanzees and other animals, you see examples of sympathy, empathy, reciprocity, a willingness to follow social rules. Dogs are a good example of a species that have and obey social rules; that’s why we like them so much, even though they’re large carnivores.” 8

Here’s an entertaining youtube clip showing how goats too sometimes like to have a good time:

Rather than investigating the ample evidence of animal emotions, for too long the scientific view has been focused on the other end of the telescope. So we’ve had the behaviourists figuring that if dogs can be conditioned to salivate to the sound of bells then maybe children can be similarly trained, even to the extent of learning such unnecessary facts and skills (at least from a survival point of view) as history and algebra. More recently, with the behaviourists having exited the main stage (bells ringing loudly behind), a new wave of evolutionary psychologists has entered, and the research is ongoing: a search for genetic propensities for all traits from homosexuality and obesity to anger and delinquency. Yes, genes for even the most evidently social problems, such as criminality, are being earnestly sought after, so desperate is the need of some to prove we too are nothing more than complex reflex machines; dumb robots governed by our gene-creators, much as Davros operates the controls of the Daleks. In these ways we have demoted our own species to the same base level as the beasts we suppose to be automata.

Moreover, simply to regard every non-human animal as a being without sentience is scientifically unfounded. If anything it is indeed based on a ‘religious’ prejudice; one derived either directly from orthodox faith, or as a distorted refraction via our modern faith in humanism. But it is also a prejudice that leads inexorably into a philosophical pickle, inspiring us to draw equally dopey mechanical caricatures of ourselves.

*

So what is Darwin’s final legacy? Well, that of course remains unclear, and though it is established that his conjectured mechanism for the development and diversity of species is broadly correct, this is no reason to believe that the whole debate is completely done and dusted. And since Darwin’s theory of evolution has an in-built bearing on our relationship to the natural world, and by extension, to ourselves, we would be wise to recognise its limitations.

Darwinism offers satisfactory explanations to a great many questions. How animals became camouflaged. Why they took to mimicry. What causes peacocks to grow such fabulous tails – or at least why their fabulous tails grow so prodigiously large. It also helps us to understand a certain amount of animal behaviour. Why male fish look after the young more often than the males of other vertebrate groups. Why cuckoos lay their eggs in the nests of other birds. And why the creatures that produce the largest broods are most often the worst parents.

Darwinism also makes a good account of a wide range of complex and sophisticated human emotions. It copes admirably with nearly all of the seven deadly sins. Gluttony, wrath, avarice and lust present no problems at all. Sloth is a little trickier, though once we understand the benefits of conserving energy, it soon fits into place, whilst envy presumably encourages us to strive harder. Pride is perhaps the hardest to fathom, since it involves an object of affection that hardly needs inventing, at least from a Darwinian perspective. But I wish to leave aside questions of selfhood for later.

So much for the vices then, but what of the virtues? How, for example, are Darwinians able to account for the rise of more altruistic behaviour? For Darwinian purists, altruism arrives as a bit of a hot potato. Not that altruism is a problem in and of itself, for this is most assuredly not the case. Acts of altruism between related individuals are to be expected. Mothers that did not carry genes to make them devoted toward their own children would be less likely to successfully pass on their genes. The same may be said for natural fathers, and this approach can be intelligently elaborated and extended to include altruism within larger, and less gene-related, groups. It is a clever idea, one that can be usefully applied to understanding the organisation of various communities, including those of social insects such as bees, ants, termites and, of course, naked mole rats…! Yes, as strange as it may sound, one special species of subterranean rodent, the naked mole rat, has social structures closely related to those of the social insects, and the Darwinian approach explains this too, as Dawkins brilliantly elucidates in a chapter of his book The Selfish Gene. Yet there remains one puzzle that refuses such insightful treatment.

When I was seventeen I went off cycling with a friend. On the first day of our adventures into the wilderness that is North Wales, we hit a snag. Well, actually I hit a kerb, coming off my bike along a fast stretch of the A5 that drops steeply down into Betws-y-Coed – a route that my parents had expressly cautioned me not to take, but then as you know, boys will be boys. Anyway, as I came to a long sliding halt along the pavement (and not the road itself, as luck would have it), I noticed that a car on the opposite side had pulled up. Soon afterwards, I was being tended to by a very kindly lady. Improvising first aid using tissues from a convenient packet of wet-wipes, she gently stroked as much of the gravel from my wounds as she could. She calmed me, and she got me back on my feet, and without all her generous support we may not have got much further on our travels. I remain very grateful to this lady, a person who I am very unlikely to meet ever again. She helped me very directly, and she also helped me in another way, by teaching me one of those lessons of life that stick. For there are occasions when we all rely on the kindness of strangers, kindness that is, more often than not, as freely given as it is warmly received. Yet even such small acts of kindness pose a serious problem for Darwinian theory, at least, if it is to successfully explain all forms of animal and human behaviour. The question is simply this: when there is no reward for helping, why should anyone bother to stop?

Dawkins devotes an entire chapter of The Selfish Gene to precisely this subject. Taking an idea from “game theory” called “the prisoner’s dilemma”, he sets out to demonstrate that certain strategies of life that aim toward niceness are actually more likely to succeed than other more cunning and self-interested alternatives. His aim is to prove that, contrary to much popular opinion, “nice guys finish first”. But here is a computer game (and a relatively simple one at that), whereas life, as Dawkins knows full well, is neither simple nor a game. In consequence, Dawkins then grasps hold of another twig: pointing out how humans are a special case – as if we needed telling…
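
To give a flavour of the game Dawkins is drawing on, here is a minimal sketch of an iterated prisoner’s dilemma tournament in Python. It uses the standard textbook payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 when one player exploits the other) and three illustrative strategies of my own naming; it is not a reconstruction of Dawkins’ or Axelrod’s actual tournaments:

```python
# Minimal iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
# PAYOFF maps (my move, their move) to (my score, their score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def tit_for_tat(mine, theirs):
    # Cooperate first, then simply copy the opponent's previous move.
    return theirs[-1] if theirs else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"always defect": always_defect,
              "always cooperate": always_cooperate,
              "tit for tat": tit_for_tat}

# Round-robin: every strategy plays every strategy, including itself.
totals = {name: 0 for name in strategies}
for name_a, a in strategies.items():
    for name_b, b in strategies.items():
        totals[name_a] += play(a, b)[0]

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {total}")
```

With a pool this tiny the final ranking is sensitive to exactly which strategies are present; in the richer tournaments that Axelrod ran, and which Dawkins describes, it was the reciprocating, broadly “nice” strategies such as tit for tat that tended to come out on top.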

As a species, he says, we have the unique advantage of being able to disrespect the programming of our own selfish genes. For supporting evidence he cites the use of contraception, which is certainly not the sort of thing that genes would approve of. But then why are we apparently unique in having this ability to break free of our instinctual drives? Dawkins doesn’t say. There is no explanation other than that same old recourse to just how extraordinarily clever we are – yes, we know, we know! Yet the underlying intimation is really quite staggering: that human beings have evolved to be so very, very, very clever, that we have finally surpassed even ourselves.

As for such disinterested acts of altruism, the kind of instance exemplified by the Samaritanism of my accidental friend, these, according to strict Darwinians such as Dawkins, must be accidents of design. A happy by-product of evolution. A spillover. For this is the only explanation that evolutionary theory in its current form could ever permit.

Embedded below is one of a series of lectures given by distinguished geneticist and evolutionary biologist Richard Lewontin in 1990. The minutely detailed case he makes against the excesses of a Darwinian approach to human behaviour, as well as the latent ideology of socio-biology, is both lucid and persuasive:

*

Allow me now to drop a scientific clanger. My intention is to broaden the discussion and tackle the question of what Darwinism has to say about being human, and, no less importantly, about being animal or plant. To this end then, I now wish to re-evaluate the superficially religious notion of “souls”; for more or less everything I wish to say follows from consideration of this apparently archaic concept.

So let me begin by making the seemingly preposterous and overtly contentious statement that just as Darwin’s theory in no way counters a belief in the existence of God, or gods as such, likewise it does not entirely discredit the idea of souls. Instead, Darwin has eliminated the apparent need for belief in the existence of either souls or gods. But this is by no means the same as proving they do not exist.

Now, by taking a more Deistic view of Creation (as Darwin more or less maintained until late in his own life), one may accept the point about some kind of godly presence, for there is certainly room for God as an original creative force, and of some ultimately inscrutable kind, and yet it may still be contended that the idea of souls has altogether perished. For evolutionary theory establishes beyond all reasonable doubt that we are fundamentally no different from the other animals, or in essence from plants and bacteria. So isn’t it a bit rich then, clinging to an idea like human souls? Well, yes, if you put it that way, though we may choose to approach the same question differently.

My contention is that ordinary human relations already involve the notion of souls, only that we generally choose not to use the word soul in these contexts, presuming it to be outmoded and redundant. But perhaps, given the religious weight of the word, this will seem a scandalous contention, so allow me to elucidate. Everyday engagement between human beings (and no doubt other sentient animals), especially where one is suffering or in pain, automatically involves the feeling of empathy. So what then is the underlying cause of our feelings of empathy? – Only the most hard-nosed of behaviourists would dismiss it as a merely pre-programmed knee-jerk response.

Well, empathy, almost by definition, must mean that, in the other, we recognise a reflection of something found within ourselves. But then, what is it that we are seeing reflected? Do we have any name for it? And is not soul just as valid a word as any other? Or, to consider a more negative context, if someone commits an atrocity against others, then we are likely to regard this person as wicked. We might very probably wish to see this person punished. But how can anyone be wicked unless they had freedom to choose otherwise? So then, what part of this person was actually free? Was it the chemical interactions in their brain, or the electrical impulses between the neurons, or was it something altogether less tangible? And whatever the cause, we cannot punish the mass of molecular interactions that comprises their material being, because punishment involves suffering and molecules are not equipped to suffer. So ultimately we can only punish “the person within the body”, and what is “the person within the body” if not their soul?

But why is it, you may be wondering, that I want to rescue the idea of souls at all? For assuredly you may argue – and not without sound reason – that you have no want or need of any woolly notions such as soul or spirit to encourage you to become an empathetic and loving person. You might even add that many of the cruellest people in history believed in the existence of the human soul. And I cannot counter you on either charge.

But let’s suppose that finally we have banished all notions of soul or spirit completely and forever – what have we actually achieved? And how do we then give a fair account of that other quite extraordinary thing, which is ordinary sentience? For quite aside from the subtle complexity of our moods and our feelings of beauty, of sympathy, of love, we must first account for our senses. Those most primary sensory impressions that form the world we experience – the redness of red objects, the warmth of fire, the saltiness of tears – the inexpressible, immediate, and ever-present streaming experience of conscious awareness that philosophers have called qualia. If there are no souls then what is actually doing the experiencing? And we should remember that here “the mind” is really nothing more or less, given our current ignorance, than a quasi-scientific synonym for soul. It is another name for the unnailable spook.

Might we have developed no less successfully as dumb automata? There is nothing in Darwin or the rest of science that calls for any requirement of self-conscious awareness to ensure our survival and reproduction. Nothing to prevent us negotiating our environment purely with sensors connected to limbs, via programmed instructions vastly more complex yet inherently no different from the ones that control this word processor, optimised as super-machines that have no use for hesitant, stumbling, bumblingly incompetent consciousness. So what use are qualia in any case?

In purely evolutionary terms, I don’t need to experience the sensation of red to deal with red objects, any more than I need to see air in order to breathe. Given complex enough programs and a few cameras, future robots can (and presumably will) negotiate the world without need of actual sensations, let alone emotions. And how indeed could the blind mechanisms of dumb molecules have accidentally arranged into such elaborate forms to enable cognitive awareness at all? Darwin does not answer these questions – they fall beyond his remit. But then no one can answer these questions (and those who claim reasons to dismiss qualia on philosophical grounds, can in truth only dismiss the inevitably vague descriptions, rather than the ever-present phenomenon itself – or have they never experienced warmth, touched roughness nor seen red?).

And so the most ardent of today’s materialists wish to go further again. They want to rid the world of all speculation regarding the nature of mind. They say it isn’t a thing at all, but a process of the brain, which is conceivably true. (Although I’d add why stop at the brain?)

One fashionable idea goes that really we are “minding”, which is interesting enough given our accustomed error of construing the world in terms of objects rather than actions; nouns coming easier than verbs to most of us. But then, whether the mind might be best represented by a noun or a verb seems for now – given that we still know next to nothing in any neurological sense – to be purely a matter of taste.

The modern reductionism that reduces mind to brain, often throws up an additional claim. Such material processes, it claims, will one day be reproduced artificially in the form of some kind of highly advanced computer brain. Well, perhaps this will indeed happen, and perhaps one day we really will have “computers” that actually experience the world, rather than the sorts of machines today that simply respond to sensors in increasingly complex ways. I am speculating about machines with qualia: true artificial brains that are in essence just as aware as we are. But then how will we know?

Well, that’s a surprisingly tricky question and it’s one that certainly isn’t solved by the famous Turing Test, named after the father of modern computing, Alan Turing. For the Turing Test is merely a test of mimicry, claiming that if one day a computer is so cunningly programmed that it has become indistinguishable from a human intelligence then it is also equivalent. But that of course is nonsense. It is nonsense that reminds me of a very cunning mechanical duck someone once made: one that could walk like a duck, quack like a duck, and if rumours are to be believed, even to crap like a duck. A duck, however, it was not, and nor could it ever become one no matter how elaborate its clockwork innards. And as with ducks so with minds.

But let’s say we really will produce an artificial mind, and somehow we can be quite certain that we really have invented just such an incredible, epoch-changing machine. Does this mean that in the process of conceiving and manufacturing our newly conscious device, we must inevitably learn what sentience is of itself? This is not a ridiculous question. Think about it: do you need to understand the nature of light in order to manufacture a light bulb? No. The actual invention of light bulbs precedes the modern physical understanding. And do we yet have a full understanding of what light truly is, and is such a full understanding finally possible at all?

Yet there are a few scientists earnestly grappling with questions of precisely this kind, venturing dangerously near the forests and swamps of metaphysics, in search of answers that will require far better knowledge and understanding of the principles of mind. Maybe they’ll even uncover something like “the seat of the soul”, figuring out from whence consciousness springs. Though I trust that you will not misunderstand me here, for it is not that I advocate some new kind of reductionist search for the soul within, by means of dissection or the application of psychical centrifuges using high strength magnetic fields or some such. As late as the turn of the twentieth century, there was indeed a man called Dr. Duncan MacDougall who had embarked on just such a scheme: weighing people at the point of death, in experiments to determine the mass of the human soul. 9 A futile search, of course, for soul – or mind – is unlikely to be, at least in the usual sense, a substantial thing. And though contingent upon life, we have no established evidence for its survival into death.

My own feeling is that the soul is no less mortal than our brains and nervous systems, on which it seemingly depends. But whatsoever it turns out to be, it is quite likely to remain immeasurable – especially if we choose such rudimentary apparatus as a set of weighing scales for testing it. The truth is that we know nothing as yet, for the science of souls (or minds if you prefer) is still without its first principle. So the jury is out on whether or not science will ever explain what makes a human being a being at all, or whether it is another one of those features of existence that all philosophy is better served to “pass over in silence”.

Here is what respected cognitive scientist Steven Pinker has to say of sentience in his entertainingly presented and detailed overview of our present understanding of How the Mind Works:

“But saying that we have no scientific explanation of sentience is not the same as saying that sentience does not exist at all. I am as certain that I am sentient as I am certain of anything, and I bet you feel the same. Though I concede that my curiosity about sentience may never be satisfied, I refuse to believe that I am just confused when I think I am sentient at all! … And we cannot banish sentience from our discourse or reduce it to information access, because moral reasoning depends on it. The concept of sentience underlies our certainty that torture is wrong and that disabling a robot is the destruction of property but disabling a person is murder.” 10

*

There is a belief common to a camp of professional scientists less fastidious than Pinker, which holds, for simplicity’s sake, that consciousness, if it is attached at all, was supplied by Nature as a sort of optional add-on: every human experience being fully reducible to an interconnected array of sensory mechanisms and data-processing systems. Adherents to this view tend not to think too much about sentience, of course, and in rejecting their own central human experience they commit a curiously deliberate act of self-mutilation, one that leaves only zombies fit for ever more elaborate Skinner boxes 11 – even when, beyond their often clever rationalisations, we all share a profound realisation that there is far more to life than mere stimulus and response.

Orwell, wily as ever, was alert to such dangers in modern thinking, and reworking a personal anecdote into grim metaphor, he neatly presented our condition:

“… I thought of a rather cruel trick I once played on a wasp. He was sucking jam on my plate, and I cut him in half. He paid no attention, merely went on with his meal, while a tiny stream of jam trickled out of his severed œsophagus. Only when he tried to fly away did he grasp the dreadful thing that had happened to him. It is the same with modern man. The thing that has been cut away is his soul, and there was a period — twenty years, perhaps — during which he did not notice it.”

Whilst Orwell regards this loss as deeply regrettable, he also recognises that it was a very necessary evil. Given how nineteenth century religious belief was “…in essence a lie, a semi-conscious device for keeping the rich rich and the poor poor…”, he is nevertheless dismayed at how hastily we threw out the baby with the holy bathwater. Thus he continues:

“Consequently there was a long period during which nearly every thinking man was in some sense a rebel, and usually a quite irresponsible rebel. Literature was largely the literature of revolt or of disintegration. Gibbon, Voltaire, Rousseau, Shelley, Byron, Dickens, Stendhal, Samuel Butler, Ibsen, Zola, Flaubert, Shaw, Joyce — in one way or another they are all of them destroyers, wreckers, saboteurs. For two hundred years we had sawed and sawed and sawed at the branch we were sitting on. And in the end, much more suddenly than anyone had foreseen, our efforts were rewarded, and down we came. But unfortunately there had been a little mistake. The thing at the bottom was not a bed of roses after all, it was a cesspool full of barbed wire.” 12

On what purely materialistic grounds can we construct any system of agreed morality? Do we settle for hedonism, living our lives in the unswerving pursuit of personal pleasure; or else insist upon the rather more palatable, though hardly more edifying, alternative of eudaemonism, with its eternal pursuit of individual happiness? Our desires for pleasure and happiness are evolutionarily in-built, and it is probably fair to judge that most, if not all, find great need of both to proceed through life with any healthy kind of disposition. Pleasure and happiness are wonderful gifts, to be cherished when fortune blows them to our shore. Yet pleasure is more often short-lived, whilst happiness too is hard to maintain. So they hardly stand as rocks, providing little in the way of stability if we are to build solidly upon their foundations. Moreover, they are not, as we are accustomed to imagine, objects to be sought after at all. If we chase either one then it is perfectly likely that it will recede ever further from our reach. So it is better, I believe, to look upon these true gifts as we find them, or rather, as they find us: evanescent and only ever now. Our preferred expressions of the unfolding moment of life. To measure our existence solely against them is, however, to miss the far bigger picture of life, the universe and everything. 13

We might decide, of course, to raise the social above these more individualistic pursuits: settling on the Utilitarian calculus of increased happiness (or else reduced unhappiness) for the greatest number. But here is a rough calculation, and one that, however subtly conceived, never finally escapes from its own deep moral morass. For Utilitarianism, though seeking to secure the greatest collective good, is by construction blind to all evils as such, being concerned always and only with determining better or worse outcomes. The worst habit of Utilitarianism is to place ends always above means. Lacking moral principle, it grants licence for “necessary evils” of every prescription: all wrongs being weighed (somehow) against perceived benefits.

We have swallowed a great deal of this kind of poison, so much that we feel uncomfortable in these secular times to speak of “acts of evil” or of “wickedness”. As if these archaic terms might soon be properly expurgated from our language. Yet still we feel the prick of our own conscience. A hard-wired sense of what is most abhorrent, combined with an innate notion of justice that once caused the child to complain “but it isn’t fair… it isn’t fair!”  Meanwhile, the “sickness” in the minds of others makes us feel sick in turn.

On what grounds can the staunchest advocates of materialism finally challenge those who might turn and say: this baby with Down’s Syndrome, this infant with polio, this old woman with Parkinson’s Disease, this schizophrenic, these otherwise healthy but unwanted babies or young children – haven’t they already suffered enough? And if they justify a little cruelty now in order to stave off greater sufferings to come, or, more savagely still, claim that the greater good is served by the painless elimination of a less deserving few, what form should our prosecution take? By adopting a purely materialistic outlook then, we are collectively drawn, whether we wish it or not, toward the pit of nihilism. Even the existentialists, setting off determined to find meaning in the here and now, sooner or later recognised the need for some kind of transcendence, or else abandoned all hope.

*

Kurt Vonnegut was undoubtedly one of the most idiosyncratic of twentieth century writers. 14 During his lifetime, Vonnegut was often pigeonholed as a science fiction writer, no doubt because his settings are very frequently in some way futuristic, though as science fiction goes his stories are generally rather earth-bound. In general, Vonnegut seems more preoccupied with the unlikely interactions between his variety of freakish characters (many of whom reappear in different novels) than in using his stories as a vehicle to project his vision of the future itself. Deliberately straightforward, his writing is ungarnished and propelled by sharp, snappy sentences. He hated semi-colons, calling them grammatical hermaphrodites.

Vonnegut often used his talented imagination to tackle the gravest of subjects, clowning around with dangerous ideas, and employing the literary equivalent of slapstick comedy to puncture human vanity and to make fun of our grossest stupidities. He liked to sign off chapters with a hand-drawn asterisk, because he said it represented his own arsehole. As a satirist then, he treads a path that was pioneered by Swift and Voltaire; of saying the unsayable but disguising his contempt under the cover of phantasy. He has become a favourite author of mine.

In 1996, he was awarded the title of American Humanist of the Year. In his acceptance speech, he took the opportunity to connect together ideas that had contributed to his own understanding of what it meant to be a humanist; ideas that ranged over a characteristically shifting and diverse terrain. Here were his concluding remarks:

“When I was a little boy in Indianapolis, I used to be thankful that there were no longer torture chambers with iron maidens and racks and thumbscrews and Spanish boots and so on. But there may be more of them now than ever – not in this country but elsewhere, often in countries we call our friends. Ask the Human Rights Watch. Ask Amnesty International if this isn’t so. Don’t ask the U.S. State Department.

And the horrors of those torture chambers – their powers of persuasion – have been upgraded, like those of warfare, by applied science, by the domestication of electricity and the detailed understanding of the human nervous system, and so on. Napalm, incidentally, is a gift to civilization from the chemistry department of Harvard University.

So science is yet another human-made God to which I, unless in a satirical mood, an ironical mood, a lampooning mood, need not genuflect.” 15

*

René Descartes is now most famous for having declared, “cogito ergo sum”, which means of course “I think therefore I am”. It was a necessary first step, or so he felt, to escape from the paradox of absolute skepticism, which was the place from which he had chosen to set out at the beginning of his metaphysical meditations. What Descartes was basically saying was this: look here, I’ve been wondering whether I exist or not, but now, having caught myself in the act, I can be sure that I do – for even if I must remain unsure of everything else besides, I cannot doubt that I am doubting. It is important to realise here that Descartes’ proposition says more than perhaps first meets the eye. After all, he intends it as a stand-alone proof and thus to be logically self-consistent, and the key to understanding how lies in his use of the word “therefore”, which automatically implies his original act of thinking. If challenged, then, to say how he can be certain even that he is thinking, Descartes’ defence relies upon the very act of thinking (or doubting, as he later put it 16) described in the proposition. Thinking is undeniable, Descartes is saying, and my being depends on this. Yet this first step is already in error, and importantly, the consequences of this error resonate still throughout modern western thought.

René Descartes, a Christian brought up to believe that animals had no soul (as Christians are wont to do), readily persuaded himself that they therefore felt no pain. It was a belief that permitted him to routinely perform horrific experiments in vivisection (he was a pioneer in the field). I mention this because, strangely, and in spite of Darwin’s solid refutation of man’s pre-eminence over beasts, animal suffering is still regarded as entirely different in kind from human suffering, even in our post-Christian society. And I am sorry to say that scientists are hugely to blame for this double standard. Barbaric experimentation, most notoriously in the field of psychology, alongside unnecessary tests for new products and new weapons, is still performed on every species aside from ours; whilst in more terrible (and shamefully recent) times, when scientists were afforded licence to redraw the line above the species level, their demarcations drawn on grounds of fitness and race, the same cool-headed objectivity was applied to the handicapped, to prisoners of war, and to the Jews. It is better that we never forget how heinous atrocities have too often been committed in the name and pursuit of coldly rational science.

René Descartes still has a role to play in this. For by prioritising reason in order to persuade himself of his own existence, he encouraged us to follow him into error. To mix up our thinking with our being. To presume that existence is somehow predicated on reasoning, and not, at least not directly, on the fact that we feel, or that we sense, or most fundamentally, that we are. If it is rationality that sets us apart from the beasts, then – so the error runs – we exist in a fuller sense than the beasts ever can.

To be absolutely certain of the reality of a world beyond his mind, however, Descartes needed the help of God – of a living God of Truth and Love. For were it not for the certainty of God’s existence, Descartes argued, his mind – though irrefutably extant – might yet be prey to the illusions of some kind of a “deceitful daemon”; being nothing more than a brain in a tank, to give his idea a modern slant, plugged into what today would most probably be called The Matrix.

Having realised that everything he sensed and felt might conceivably be an elaborately constructed illusion, Descartes concluded that only his profound knowledge of a God of Truth – a God who made the world as true and honest as it appeared to be – could save his philosophy from descent into pure solipsism. But this primary dualism of mind and world is itself the division of mind and body – a division of self – while to regard Reason as the primary and most perfect attribute of being obviously establishes the mind above the body, and, more generally, spirit above matter. This is the lasting lesson Descartes taught, and it is a lesson we have committed so deeply to our Western consciousness that we have forgotten we ever learnt it in the first place.

The significant difference in today’s world of science, with God now entirely outside of the picture, is that Descartes’ hierarchy has been totally up-ended. Matter is the new boss, and mind, its servant. 17

*

But we might also turn this whole issue on its head. We might admit the obvious. Concede that although we don’t know what it is exactly, there is some decidedly strange and immaterial part to ourselves. That it is indeed the part we most identify with – the part we refer to so lovingly as “I”. And that it is this oh-so mysterious part of us which provides all our prima facie evidence for existence itself. Though in admitting this, the question simply alters. It becomes: how to account for the presence of such a ghost inside our machines? For what outlandish contrivance would we need to reconnect the matter of our brains with any such apparently in-dwelling spirit? And whereas Rene Descartes once proposed that mind and body might be conjoined within the mysterious apparatus of our pineal gland (presumably on the grounds that the pineal gland is an oddly singular organ), we know better and so must look for less localised solutions. In short then, we may finally need to make a re-evaluation of ourselves, not merely as creatures, but as manifestations of matter itself.

Yet, in truth, all of this is really a Judeo-Christian problem; a deep bisection where other traditions never made any first incision. For what is “matter” in any case? Saying it’s all atoms and energy doesn’t give a final and complete understanding. Perhaps our original error was to force such an irreconcilable divorce between nebulous soul (or mind) and hard matter, when they are so indivisibly and gloriously codependent; for though Science draws a marked distinction between the disciplines of physics and psychology, the division stands only for the sake of convenience – for the sake, indeed, of ignorance.

To begin then, let’s try to re-establish some sense of mystery regarding the nature of matter itself – such everyday stuff that we have long taken it for granted that, through careful measurement and mathematical projection, its behaviour can be understood and predicted. Here indeed, Freeman Dyson brings his own expertise in quantum theory, combined with his genius for speculation, to consider the fascinating subject of mind and its relationship to matter:

“Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind.”

Dyson is drawing upon his very deep understanding of quantum physics, and yet already he has really said too much. Quantum choice is not the same as human choice. Quantum choice depends on random chance, which is the reason Einstein famously asserted, “God does not play dice”. Indeed I’m not sure how quantum theory, as it is currently understood, could ever account for the existence of free will and volition, quite aside from the overriding mystery of sentience itself. So Dyson’s more important point is perhaps his last one: that the universe is “hospitable to the growth of mind”. This is too often overlooked. And for Dyson, it offers reason enough for religious contemplation:

“I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind.” 18

I share with Dyson the opinion that it is better to relish these mysteries than to retreat to the dry deception of material certainty. For, as Shakespeare summed up so marvellously in his final play The Tempest: “we are such stuff as dreams are made on…” 19 And perhaps this is still the best description we have of ourselves, even though we have no idea whatsoever how, as dream-machines, our dreams are woven.

A toast then! Feel free to join me in raising your glass… to your own mind, your psyche, your soul, call it what you will – a rose by any other name and all that. Three cheers! And to consciousness! To sentience! To uncanny awareness! That same stuff all our dreams are made on…

So with great appreciation and warm affection, here’s to that strangest of things: that thing I so very casually call my-self! But even more than this. To the actual stuff of our lives, to the brain, the entire central nervous system and far beyond. To the eyes and ears and fingertips; to the whole apparatus of our conscious awareness; and to the sentience of all our fellows, whether taking human or other forms! To the strangeness of the material world itself, from which all sentience has miraculously sparked! To the vast and incomprehensible Universe no less, whether manifestly inward or outward, for the distinction may be a finer one than we are in the habit of presuming! Here’s to wondering what we are… Drink up!

Next chapter…

*

John Searle is a philosopher who has closely studied the nature of consciousness and concludes that mind, though mysterious and unique amongst biological phenomena, is evidently a natural function of brain activity. In this lecture he summarises the many failures of the current “scientific” approach to questions of consciousness:

In the interview below Searle discusses why he rejects both the hard-line materialist dismissal of consciousness as an illusion (which is actually nonsensical) and dualist alternatives that rely upon a false division between mind and matter:

And finally, Searle outlines the main difficulties surrounding the unresolved philosophical paradox of free will. Put succinctly, he says that although it is impossible to prove that human beings have free will, and although any capacity for free will seems to defy physical causality, we are nonetheless compelled to experience conscious, rational decision-making on a daily basis:

*

Addendum: the return of Frankenstein!

The issues surrounding the use of genetically modified organisms (GMOs) are many and complex, but it is perfectly clear that new developments in genetics, like those in nuclear physics more than half a century ago, have automatically opened the door to some quite extraordinary possibilities. Possibilities that will most assuredly impact our future no less dramatically than the advent of atomic reactors and the hydrogen bomb impacted our very recent past – and still continue to affect us today.

The need for a proper debate is long overdue but, hardly surprisingly, the huge bio-tech corporations prefer to keep the debate closed down. Monsanto, for instance, which claims that it is perfectly safe to release its GMOs directly into our environment, was also in the habit of claiming that its herbicide Roundup is so harmless you can drink it! 20 But then why on earth would anyone (or at least anyone not in their pocket) trust such self-interested and deliberately compromised risk assessments? The short answer is that the precautionary principle has once again been overridden by money and influence.

What we really need, of course, is a proper debate about the use of genetic modification. A debate that is open and public: a forum for discussion amongst leading experts (and especially those not associated with the powerful bio-tech firms); scientists from other fields, who though ignorant of the specifics, might bring a detached expertise by virtue of familiarity with scientific procedures; alongside representatives from other interested parties such as ‘consumers’ (that’s the rest of us by the way – we all consume, and though I hate the word too, it at least offers a slightly better perspective on our role within the current system, since this is how the system itself defines us).

This great debate needs to be fully inclusive, welcoming intelligent opinion, whether concordant or dissenting. No reasoned objections from any quarters being summarily dismissed as unscientific or anti-scientific, as is so often the case, because we must never leave it for technicians alone to decide on issues that so directly affect our common future. Relying on highly specialised experts alone – even when those experts are fully independent (as they so rarely are these days) –  would be as unwise as it is anti-democratic.

Genetic manipulation is already upon us. It is already helping in the prevention and treatment of diseases, and in the production of medicines such as insulin (although even here serious questions are arising with regards to the potentially harmful side-effects of using a genetically modified product). More controversial again is the development of pest- and drought-resistant strains of crops; developments that are claimed by their producers to have alleviated a great deal of human suffering already, but which seem to have brought misery of new kinds – I will come back to this later.

And then we come to the development of Genetic Use Restriction Technology (GURT), better known as ‘suicide’ or ‘Terminator’ seeds, which are promoted by the industry as a ‘biosafety’ solution. Engineered sterility being a clever way of preventing their own genetically modified plants from causing unwanted genetic contamination – which we might think of as a new form of pollution. The argument being that if modified genes (whether pharmaceutical, herbicide-resistance or ‘Terminator’ genes) from a ‘Terminator’ crop get transferred to related plants via cross-pollination, the seed produced from such pollination will be sterile. End of problem.

But this is merely an excuse, of course, and if used in this way, the new technology will ultimately prevent over a billion of the poorest people in the world from continuing in their age-old practice of saving seeds for resowing, which will, as a consequence, make these same farmers totally dependent on a few multinational bio-tech companies. All of which serves as an excellent means for monopolising the world’s food supplies, and offers a satisfactory solution only for the owners of companies like Monsanto. 21

In any case, do we really wish to allow patents on specific genes, opening the door to the corporate ownership of the building blocks of life itself? The world-renowned physicist and futurist visionary Freeman Dyson draws a direct comparison to earlier forms of slavery:

“The institution of slavery was based on the legal right of slave-owners to buy and sell their property in a free market. Only in the nineteenth century did the abolitionist movement, with Quakers and other religious believers in the lead, succeed in establishing the principle that the free market does not extend to human bodies. The human body is God’s temple and not a commercial commodity. And now in the twenty-first century, for the sake of equity and human brotherhood, we must maintain the principle that the free market does not extend to human genes.” 22

Nor, I would quickly add, should it extend to the ownership of genes of other higher species of animal or plant life. Moreover, I personally have no wish whatsoever for apples, tomatoes, potatoes (or even tobacco) that provide the RDA for all my nutritional needs, or any other supposed improvement on the original designs – preferring to trust to apples, tomatoes and potatoes that evolved alongside my own human digestive system. And this ought not to be treated as merely a preference, but established as a human right, since we all have the right not to eat GMOs just as we have the right to be vegan (not that I’m a vegan, by the way).

Beyond this, we also need to consider the many perfectly serious and inescapable ethical issues that arise once you are tinkering with the primary source code of life itself. Take cloning as an interesting example.

Identical twins are essentially clones, having both developed from the same fertilised egg, and thus sharing the same DNA. But then nature sometimes goes one step further again:

A form of virgin birth has been found in wild vertebrates for the first time.

Researchers in the US caught pregnant females from two snake species and genetically analysed the litters.

That proved the North American pit vipers reproduced without a male, a phenomenon called facultative parthenogenesis that has previously been found only in captive species. 23

I have since learned that parthenogenesis (reproduction without fertilisation or “virgin birth”) is surprisingly common throughout the plant and animal kingdoms. Birds do it, bees do it… and even mammals have been induced to do it. So cloning is not inherently unnatural, and if carried out successfully (as it frequently is in nature), being a cloned individual may one day prove no more harmful nor fraught with latent dangers than being an individual produced by other forms of artificial reproduction. Furthermore, since we already know what human twins are like, we already know what human clones will be like. Yet many ethical questions still hang.

For instance, should anyone be allowed to clone themselves? Or more generally, who chooses which of us are to be cloned? Do we just leave it to the market to decide? And why would we ever want a world populated by identical (or rather, approximately identical – since no two twins are truly identical and there are sound biological reasons for believing clones will never be perfectly reproduced either) human beings? Such ethical questions are forced by the new biotechnologies. And there are many further reasons why ordinary, intelligent public opinion needs to be included in the debate.

Here is Freeman Dyson again, summarising his own cautious optimism as we enter the age of the new ‘green technologies’:

“I see two tremendous goods coming from biotechnology in the next century, first the alleviation of human misery through progress in medicine, and second the transformation of the global economy through green technology spreading wealth more equitably around the world. The two great evils to be avoided are the use of biological weapons and the corruption of human nature by buying and selling genes. I see no scientific reason why we should not achieve the good and avoid the evil.

The obstacles to achieving the good are political rather than technical. Unfortunately a large number of people in many countries are strongly opposed to green technology, for reasons having little to do with the real dangers. It is important to treat the opponents with respect, to pay attention to their fears, to go gently into the new world of green technology so that neither human dignity nor religious conviction is violated. If we can go gently, we have a good chance of achieving within a hundred years the goals of ecological sustainability and social justice that green technology brings within our reach.” 24

Dyson is no doubt being too optimistic, with many of the dangers of GMOs slowly coming to light more than two decades after he uttered these words as part of his acceptance speech for the award of the Templeton Prize in 2000.

Meanwhile in 2012, Greenpeace issued the following press release. It contains the summary of an open letter sent by nearly a hundred Indian scientists to the Supreme Court of India:

An official report submitted by the technical Expert committee set up by the Supreme Court of India comprising of India’s leading experts in molecular biology, toxicology and biodiversity – unanimously recommends a 10-year moratorium on all field trials of GM Bt [insecticide producing due to genes from Bacillus thuringiensis] food crops, due to serious safety concerns. The committee has also recommended a moratorium on field trials of herbicide tolerant crops until independent assessment of impact and suitability, and a ban on field trials of GM crops for which India is center of origin and diversity.

The report’s recommendations are expected put a stop to all field releases of GM food crops in India, including the controversial Bt eggplant, whose commercial release was put under an indefinite moratorium there last February 2010. Contrarily, the same Bt eggplant is currently being evaluated for approval in the Philippines.

“This official unanimous declaration on the risks of GMOs, by India’s leading biotech scientists is the latest nail on the coffin for GMOs around the world,” said Daniel M. Ocampo, Sustainable Agriculture Campaigner of Greenpeace Southeast Asia. “It is yet another proof that GMOs are bad for the health, bad for the environment, bad for farmers and bad for the economy.” 25

For though it would be foolish to fail to recognise the enormous potential benefits of some of the new ‘green technologies’, any underestimate of the hazards is sheer recklessness. And this is where my own opinion differs significantly from enthusiasts like Dyson. This science is just so brilliantly new, and so staggeringly complex. The dangers are real and very difficult to over-estimate and so public concern is fully justified whether over health and safety issues, over the politico-economic repercussions, or due to anxieties of a more purely ethical kind.

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

1 There is sound evidence for believing that protons and neutrons are made of quarks, whereas electrons it seems are a type of fundamental particle which has no component parts.

2 My use of the analogue/digital comparison is simplistic, of course, but then it is only intended as a loose analogy, nothing more.

3 Since writing this I have come upon a range of so-called Young Earth Theories of Geology that contradict my former opinion. Apparently there are indeed groups of Creationists intent on disproving ideas of a 4.5 billion year old planet in favour of a ten thousand year prehistory. Needless to say there is no supporting evidence for this contention.

4

“Cro-magnons are, in informal usage, a group among the late Ice Age peoples of Europe. The Cro-Magnons are identified with Homo sapiens sapiens of modern form, in the time range ca. 35,000-10,000 b.p. […] The term “Cro-Magnon” has no formal taxonomic status, since it refers neither to a species or subspecies nor to an archaeological phase or culture. The name is not commonly encountered in modern professional literature in English, since authors prefer to talk more generally of anatomically modern humans (AMH). They thus avoid a certain ambiguity in the label “Cro-Magnon”, which is sometimes used to refer to all early moderns in Europe (as opposed to the preceding Neanderthals), and sometimes to refer to a specific human group that can be distinguished from other Upper Paleolithic humans in the region. Nevertheless, the term “Cro-Magnon” is still very commonly used in popular texts because it makes an obvious distinction with the Neanderthals, and also refers directly to people rather than to the complicated succession of archaeological phases that make up the Upper Paleolithic. This evident practical value has prevented archaeologists and human paleontologists – especially in continental Europe – from dispensing entirely with the idea of Cro-Magnons.”

Taken from The Oxford Companion to Archaeology. Oxford, UK: Oxford University Press. p. 864.

5

“Jambo, Jersey Zoos world famous and much loved silverback gorilla had a truly remarkable life. He was born in Basel Zoo in Switzerland in 1961. He arrived at Jersey Zoo on the 27th April 1972. Jambo, Swahili for Hello, is perhaps better known to the public for the gentleness he displayed towards the little boy who fell into the gorilla enclosure at Jersey Zoo one afternoon in 1986. The dramatic event hit the headlines and helped dispel the myth of gorillas as fearsome and ferocious. It was a busy Sunday afternoon in August 1986 when an incredulous public witnessed Levan Merritt a small boy from Luton UK fall into the Gorilla enclosure at Jersey Zoo. “

Extract taken from “The Hero Jambo”, a tribute to Jambo written by the founder of Jersey Zoo, Gerald Durrell.

6

“LAST SUMMER, AN APE SAVED a three-year-old boy. The child, who had fallen 20 feet into the primate exhibit at Chicago’s Brookfield Zoo, was scooped up and carried to safety by Binti Jua, an eight-year-old western lowland female gorilla. The gorilla sat down on a log in a stream, cradling the boy in her lap and patting his back, and then carried him to one of the exhibit doorways before laying him down and continuing on her way.”

Extract taken from the article by F. B. M. de Waal (1997) entitled “Are we in anthropodenial?”, Discover 18 (7): 50-53.

7   

“Binti became a celebrity overnight, figuring in the speeches of leading politicians who held her up as an example of much-needed compassion. Some scientists were less lyrical, however. They cautioned that Binti’s motives might have been less noble than they appeared, pointing out that this gorilla had been raised by people and had been taught parental skills with a stuffed animal. The whole affair might have been one of a confused maternal instinct, they claimed.”

Ibid.

8 Quoted in an article entitled: “Confessions of a Lonely Atheist: At a time when religion pervades every aspect of public life, there’s something to be said for a revival of pagan peevishness”, written by Natalie Angier for The New York Times Magazine, from January 14, 2001.

9 In 1907, MacDougall weighed six patients who were in the process of dying (accounts of MacDougall’s experiments were published in the New York Times and the medical journal American Medicine). He used the results of his experiment to support the hypothesis that the soul had mass (21 grams to be precise), and that as the soul departed the body, so did its mass. He also measured fifteen dogs under similar conditions and reported the results as “uniformly negative”. He thus concluded that dogs did not have souls. MacDougall’s complaints about not being able to find dogs dying of natural causes have led at least one author to conjecture that he was in fact poisoning dogs to conduct these experiments.

10 Extract taken from Chapter 2, “Thinking Machines” of Steven Pinker’s How the Mind Works, published by Penguin Science, 1997, p 148. Italics in the original.

11 An operant conditioning chamber (sometimes known as a Skinner box) is a laboratory apparatus developed by BF Skinner, founding father of “Radical Behaviourism”, during his time as a graduate student at Harvard University. It is used to study animal behaviour and investigate the effects of psychological conditioning using programmes of punishment and reward.

12 Extract taken from Notes on the Way by George Orwell, first published in Time and Tide, London, 1940.

13  I received a very long and frank objection to this paragraph from one of my friends when they read through a draft version, which I think is worth including here by way of balance:

“I must explain that I’m a hedonist to a ridiculous degree, so much so that my “eudaemonism” (sounds dreadful –not like happiness-seeking at all!) is almost completely bound up with the pursuit of pleasure, as for me there is little difference between a life full of pleasures and a happy life.  Mind you, pleasure in my definition (as in most people’s, I guess) covers a wide array of things: from the gluttonous through to the sensuous, the aesthetic, the intellectual and even the spiritual; and I would also say that true pleasure is not a greedy piling up of things that please, but a judicious and even artistic selection of the very best, the most refined and the least likely to cause pain as a side effect  (I think this approach to pleasure is called “Epicureanism”).

Love, of course, is the biggest source of pleasure for most, and quite remarkably, it’s not only the receiving but the giving of it that makes one truly happy, even when some pain or sacrifice is involved.  This is how I explain acts of generosity like the one you describe, by the woman who helped you when you fell off your bike as a teenager: I think she must have done it because, despite the bother and the hassle of the moment, deep down it made her happy to help a fellow human being. We have all felt this way at some point or other, and as a result I believe that pleasure is not antithetical to morality, because in fact we can enjoy being kind and it makes us unhappy to see suffering around us. This doesn’t mean that we always act accordingly, and we certainly have the opposite tendency, too: there is a streak of cruelty in every human that means under some circumstances, we’ll enjoy hurting even those we love. But my point is, hedonism and a concern for others are not incompatible. The evolutionary reason for this must be that we are a social animal, so empathy is conducive to our survival as much as aggression and competitiveness may be in some environments. In our present environment, i.e. a crowded planet where survival doesn’t depend on killing lions but on getting on with each other, empathy should be promoted as the more useful of the two impulses. This isn’t going to happen, of course, but in my opinion empathy is the one more likely to make us happy in the long run.

Having attempted to clean up the name of pleasure a bit, I’ll try to address your other complaints against a life based on such principles: “Yet pleasure is more often short-lived, whilst happiness too is hard to maintain.” I agree, and this is indeed the Achilles heel of my position: I’m the most hypochondriac and anxiety-prone person I know, because as a pleasure-a-holic and happiness junkie I dread losing the things I enjoy most. The idea of ever losing [my partner], for example, is enough to give me nightmares, and I’m constantly terrified of illness as it might stop me having my fun. Death is the biggest bogie. I’m not blessed with a belief in the afterlife, or even in the cosmic harmony of all things. This is [my partner]’s belief as far as I can tell, and I’d like to share it, but I’ve always been an irrational atheist – I haven’t arrived at atheism after careful thinking, but quite the opposite, I’ve always been an atheist because I can’t feel the godliness of things, so it is more of a gut reaction with me. The closest thing to the divine for me is in beauty, the beauty of nature and art, but whether Beauty is Truth, I really don’t know, and in any case beauty, however cosmic, won’t make me immortal in any personal or individual sense. I’m horrified at the idea of ceasing to exist, and almost as much at the almost certain prospect of suffering while in the process of dying. This extreme fear is probably the consequence of my hedonist-epicurean-eudaemonism.

On the other hand, since everyone, including the most religious and ascetic people, is to some extent afraid of dying, is it really such a big disadvantage to base one’s life on the pursuit of pleasure and happiness? I guess not, although I must admit that I’d quite like to have faith in the Beyond. I suppose that I do have some of the agnostic’s openness to the mystery of the universe – as there are so many things that we don’t understand, and perhaps we aren’t even equipped to ever understand, it’s very possible that life and death have a meaning that escapes us. This is not enough to get rid of my fears, but it is a consolation at times.

Finally, I also disagree with you when you say that pleasure and happiness “are not, as we are accustomed to imagine, objects to be sought after at all. If we chase either one then it is perfectly likely that it will recede ever further from our reach.” There’s truth in this, but I think it’s also true that unless one turns these things into a priority, it is very difficult to ever achieve them. I for one find that more and more, many circumstances in my life conspire to stop me having any fun: there are painful duties to perform, ailments to cope with, bad news on a daily basis and many other kinds of difficulties, so if I didn’t insist on being happy at least a little every day, I’d soon forget how to do it. I’m rather militant about it, in fact. I’m always treating myself in some way, though to be fair to myself, a coffee and a croissant can be enough to reconcile me to a bad day at work, for example, so I’m not really very demanding. But a treat of some sort there has to be to keep me going. Otherwise, I don’t see the point.”

14  Kurt Vonnegut had originally trained to be a scientist, but says he wasn’t good enough. His older brother Bernard trained as a chemist and is credited with the discovery that silver iodide could be used to force precipitation through “cloud seeding”. If you ask for Vonnegut in a library, you’ll probably be directed toward the Science Fiction section, since many of his books are set in strangely twisted future worlds. However, his most famous and most widely acclaimed work draws on his experiences during the Second World War, and in particular on the Allied fire-bombing of Dresden. Vonnegut had personally survived the attack by virtue of being held as a prisoner of war in an underground meat locker, and the irony of this forms the title of the novel, Slaughterhouse-Five.

15  Extract taken from “Why My Dog Is Not a Humanist” by Kurt Vonnegut, published in Humanist, Nov 92, Vol. 52:6.5-6.

16 “We cannot doubt existence without existing while we doubt…” So begins Descartes’ seventh proposition from his 76 “Principles of Human Knowledge”, which form Part 1 of Principia philosophiae (Principles of Philosophy), published in Latin in 1644 and translated into French in 1647 – ten years after his groundbreaking treatise Discourse on the Method, in which “Je pense, donc je suis” (“I think, therefore I am”) had first appeared.

http://www.gutenberg.org/cache/epub/4391/pg4391.html

17 A more poetic version of Descartes’ proof had already been constructed centuries earlier by the early Islamic scholar Avicenna, who proposed a rather beautiful thought experiment in which we imagine ourselves falling or else suspended, and thus isolated and devoid of all sensory input, including any sense of our own body. The “floating man”, Avicenna says, in spite of the complete absence of any perceptions of a world beyond, would nevertheless possess self-awareness. That he can still say “I am” proves that he is self-aware and that the soul exists. In consequence, Avicenna also places soul above matter, although no priority is granted to reason above our other forms of cognition.

18  Further extracts from Freeman Dyson’s acceptance speech for the award of the Templeton Prize, delivered on May 16, 2000 at the Washington National Cathedral.

19  Prospero in Shakespeare’s The Tempest, Act IV, Scene 1.

20 In 1996, the New York Times reported that: “Dennis C. Vacco, the Attorney General of New York, ordered the company to pull ads that said Roundup was ‘safer than table salt’ and ‘practically nontoxic’ to mammals, birds and fish. The company withdrew the spots, but also said that the phrase in question was permissible under E.P.A. guidelines.”

Extract taken from wikipedia with original reference retained. http://en.wikipedia.org/wiki/Monsanto#False_advertising

21 For further arguments against “Terminator Technology”, I recommend the following website: http://www.banterminator.org/content/view/full/233

22 From Freeman Dyson’s acceptance speech for the award of the Templeton Prize, delivered on May 16, 2000 at the Washington  National Cathedral.

23  From an article entitled “Virgin births discovered in wild snakes” written by Jeremy Coles, published by BBC nature on September 12, 2012. http://www.bbc.co.uk/nature/19555550

24  Also from Freeman Dyson’s acceptance speech for the award of the Templeton Prize.

25 http://www.greenpeace.org/seasia/ph/press/releases/GMOs-declared-unsafe-in-India-Greenpeace-calls-on-PH-to-follow-suit/

This original link has since been removed but the same article can be read here:

https://web.archive.org/web/20130607155209/http://www.greenpeace.org/seasia/ph/press/releases/GMOs-declared-unsafe-in-India-Greenpeace-calls-on-PH-to-follow-suit/


apes of wrath?

The following article is Chapter Three of a book entitled Finishing The Rat Race.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

What a piece of work is a man!

— William Shakespeare 1

*

Almost a decade ago, as explosions lit up the night sky above Baghdad, I was at my parents’ home in Shropshire, sat on the sofa, and watching the rolling news coverage. After a few hours we were still watching the same news though for some reason the sound was now off and the music system on.

“It’s a funny thing,” I remarked, between sips of whisky, and not certain at all where my words were leading, “that humans can do this… and yet also… this.” I suppose that I was trying to firm up a feeling. A feeling that arose in response to the unsettling juxtaposition of images and music, and that involved my parents and myself in different ways, as detached spectators. But my father didn’t understand at first, and so I tried again.

“I mean how can it be,” I hesitated, “that on the one hand we are capable of making such beautiful things like music, and yet on the other, we are the engineers of such appalling acts of destruction?” Doubtless I could have gone on elaborating, but there was no need. My father understood my meaning, and the evidence of what I was trying to convey was starkly before us – human constructions of the sublime and the atrocious side-by-side.

In any case, the question, being as it is a question of unavoidable and immediate importance to all of us, sort of hangs in the air perpetually, although as a question it is usually considered and recast in alternative ways – something I shall return to – while mostly it remains not merely unanswered, but unspoken. We treat it instead like an embarrassing family secret, which is best forgotten. My own question, framed hesitantly but well enough for my father to reply, received a predictable answer: “that’s human nature”; which is the quick and easy response although it actually misses the point entirely – a common fallacy technically known as ignoratio elenchi. For ‘human nature’ in no way provides an answer but simply opens a new question. Just what is human nature? – this is the question.

The generous humanity of music and the indiscriminate but cleverly conceived cruelty of carpet bombing are just different manifestations of what human beings are capable of, and thus of human nature. If you point to both and say “this is human nature”, well yes – and obviously there’s a great deal else besides – whereas if you reserve the term only for occasions when you feel disapproval, revulsion or outright horror – as many do – then your condemnation is simply another feature of “human nature”. In fact, why do we judge ourselves at all?

So this chapter represents an extremely modest attempt to grapple with what is arguably the most complex and involved question of all questions. Easy answers are good when they cut to the bone of a difficult problem; however, to explain man’s inhumanity to man, as well as to his other fellow creatures, surely deserves a better and fuller account than that man is by nature inhumane – if for no other reason than that the very word ‘human’ owes its origins to the earlier form ‘humane’! Upon this etymological root is there really nothing else but vainglorious self-deception and wishful thinking? I trust that language is in truth less consciously contrived.

The real question then is surely this: When man becomes inhumane, why on this occasion or in this situation, but not on all occasions and under all circumstances? And how come we still use the term ‘inhumane’ at all, if being inhumane is so hard-wired into our human nature? The lessons to be learned by tackling such questions can hardly be overstated; lessons that might well prove crucial in securing the future survival of our societies, our species, and perhaps of the whole planet.

*

I        Monkey business

“There are one hundred and ninety-three living species of monkeys and apes. One hundred and ninety-two of them are covered with hair.”

— Desmond Morris 2

*

The scene: just before sunrise about one million years BC, a troop of hominids are waking up and about to discover a strange, rectangular, black monolith that has materialised from nowhere. As the initial excitement and fear of this strange new object wears off, the hominids move closer to investigate. Attracted perhaps by its remarkable geometry, its precise and unnatural blackness, they reach out tentatively to touch it and then begin to stroke it.

As a direct, though unexplained consequence of this communion, one of the ape-men has a dawning realisation. Sat amongst the skeletal remains of a dead animal, he picks up one of the sun-bleached thigh bones and begins to swing it about. Aimless at first, his flailing attempts simply scatter the other bones of the skeleton. In time, however, he gains control and his blows increase in ferocity, until at last, with one almighty thwack, he manages to shatter the skull to pieces. It is a literally epoch-making moment of discovery.

The following day, mingling beside a water-hole, a fight breaks out. His new weapon in hand, our hero deals a fatal blow against the alpha male of a rival troop. Previously at the mercy of predators and reliant on scavenging to find their food, the tribe can now be freed from fear and hunger too. Triumphant, he is the ape-man Prometheus, and in ecstatic celebration of this achievement, he tosses the bone high into the air, whereupon, spinning up and up, higher and higher into the sky, the scene cuts from spinning bone into an orbiting space-craft…

*

Stanley Kubrick’s 2001: A Space Odyssey is enigmatic and elusive. Told in a sequence of related if highly differentiated parts, it repeatedly confounds the viewer’s expectations – the scene sketched above is only the opening act of Kubrick’s seminal science-fiction epic.

Kubrick said: “you are free to speculate as you wish about the philosophical and allegorical meaning of the film.” 3 So taking Kubrick at his word, I shall do just that – although not for every aspect of the film, but specifically for his first scene, up to and including that most revered and celebrated ‘match cut’ in cinema history, and its relationship to Kubrick’s mesmerising and seemingly bewildering climax: moments of transformation, when reality per se is re-imagined. Although on one level, at least, all of the ideas conveyed in this opening as well as the more mysterious closing scenes (more below) are abundantly clear. For Kubrick’s exoteric message involves the familiar Darwinian interplay between the foxes and the rabbits and their perpetual battle for survival, which is the fundamental driving force behind the evolutionary development of natural species.

Not that Darwin’s conception should be misunderstood as war in the everyday sense, however, although this is a very popular interpretation; for one thing, the adversaries in these Darwinian arms races, most often predator and prey, in general remain wholly unaware of any escalation in armaments and armour. Snakes, for example, have never sought to strengthen their venom, any more than their potential victims – most spectacularly the opossums that evolved to prey on them – made any conscious attempt to hone their blood-clotting agents. Today’s snake-eating opossums have extraordinary immunity to the venom of their prey purely because natural selection strongly favoured opossums with heightened immunity.

Of course, the case is quite different when we come to humankind. For it is humans alone who deliberately escalate their methods of attack and response and do so by means of technology. To talk of an “arms race” between species is therefore a somewhat clumsy metaphor for what actually occurs in nature – although Darwin is accurately reporting what he finds.

And there is another crucial difference between the Darwinian ‘arms race’ and the human variant. Competition between species is not always as direct as between predator and prey, and frequently looks nothing like a war at all. Indeed, it is more often analogous to the competitiveness of two hungry adventurers lost in a forest. It may well be that both of our adventurers are completely unaware that somewhere in the midst of the forest there is a hamburger left on a picnic table. And though neither adventurer may be aware of the presence of the other, they are – at least in a strict Darwinian sense – in competition, since if either one stumbles accidentally upon the hamburger then, merely by process of elimination, the other has lost his chance of a meal. As competitors then, the faster walker, or the one with keener eyes, or the one with greater stamina, will gain a very slight but significant advantage over the other. Thus, perpetual competition between individuals need never amount to war, or even to battles, and this is how Darwin’s ideas are properly understood.

In any case, such contests of adaptation, whether between predators and prey, or sapling trees racing towards the sunlight, can never actually be won. The rabbits may get quicker but the foxes must get quicker too, since if either species fails to adapt then it will not survive long. So it’s actually a perpetual if dynamic stalemate, with species trapped like the Red Queen in Alice Through the Looking-Glass, always having to keep moving ahead just to hold their ground – a paradox that evolutionary biologists indeed refer to as “the red queen hypothesis” 4.

We might still judge that both sides are advancing, since there is, undeniably, a kind of evolutionary progress, with the foxes growing craftier as the rabbits get smarter too, and so we might conclude that such an evolutionary ‘arms race’ is the royal road to all natural progress – although Darwin noted that other evolutionary pressures, most notably sexual selection, have tremendous influence as well. We might even go further by extending the principle in order to admit our own steady technological empowerment, viewed objectively as being a by-product of our own rather more deliberate arms race. Progress thus assured by the constant and seemingly inexorable fight for survival against hunger and the elements, and no less significantly, by the constant squabbling of our warring tribes over land and resources.

Space Odyssey draws deep from the science of Darwinism, and spins a tale of our future. From bony proto-tool, slowly but inexorably, we come to the mastery of space travel. From terrestrial infants, to cosmically-free adults – this is the overarching story of 2001. But wait, there’s more to that first scene than immediately meets the eye. That space-craft which Kubrick cuts to; it isn’t just any old space-craft…

Look quite closely and you might see that it’s actually one of four space-craft, similar in design, which form the components of an orbiting nuclear missile base, and though in the film this is not as clear as in Arthur C. Clarke’s parallel version of the story (the novel and film were co-creations written side-by-side), the missiles are there if you peer hard enough.

So Space Odyssey is, at least on one level, the depiction of technological development, which, though superficially from first tool to more magnificent uber-tool (i.e., the spacecraft), is also – and explicitly in the novel – a development from the first weapon to what is, up to now, the ultimate weapon, and thus from the first hominid-cide to the potential annihilation of the entire human population. 5

Yet 2001, the year in the title, also magically heralds a new dawn for mankind: a dawn that, as with every other dawn, bursts from the darkest hours. The meaning therefore, as far as I judge it, is that we, as parts of nature, are born to be both creators and destroyers; agents of light and darkness. That our innate but unassailable evolutionary drive, dark as it can be, also has the potential to lead us to the film’s weirdly antiseptic yet quasi-mystical conclusion, and the inevitability of our grandest awakening – a cosmic renaissance as we follow our destiny towards the stars.

Asked in an interview whether he agreed with some critics who had described 2001 as a profoundly religious film, Kubrick replied:

“I will say that the God concept is at the heart of 2001—but not any traditional, anthropomorphic image of God. I don’t believe in any of Earth’s monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life-giving sun and that there are approximately 100 billion galaxies in just the visible universe.”

Continuing:

“When you think of the giant technological strides that man has made in a few millennia—less than a microsecond in the cosmology of the universe—can you imagine the evolutionary development that much older life forms have taken? They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans.”

When the interviewer pressed further, inquiring what this envisioned cosmic evolutionary path has to do with the nature of God, Kubrick added:

“Everything—because these beings would be gods to the billions of less advanced races in the universe, just as man would appear a god to an ant that somehow comprehended man’s existence. They would possess the twin attributes of all deities—omniscience and omnipotence… They would be incomprehensible to us except as gods; and if the tendrils of their consciousness ever brushed men’s minds, it is only the hand of God we could grasp as an explanation.” 6

Kubrick was an atheist, although unlike many atheists he acknowledged that the religious impulse is an instinctual drive no less irrepressible than our hungers to eat and to procreate. This is so because at the irreducible heart of religion lies pure transcendence: the climbing up and beyond ordinary states of being. This desire to transcend, whether by shamanic communion with the ancestors and animalistic spirits, monastic practices of meditation and devotion, or by brute technological means, is something common to all cultures.

Thus the overarching message in 2001 is firstly that human nature is nature, for good and ill, and secondly that our innate capacity for reason will inexorably propel us to transcendence of our terrestrial origins. In short, it is the theory of Darwinian evolution writ large. Darwinism appropriated and repackaged as an updated creation story – a new mythology and surrogate religion that lends an alternative meaning of life. We will cease to worship nature or humanity, which is nature, it says, and if we continue to worship anything at all, our new icons will be representative only of Progress (capital P). Thus, evolution usurps god! Of course, the symbolism of 2001 can be given esoteric meaning too – indeed, there can never be a final exhaustive analysis of 2001 because like all masterpieces the full meaning is open to an infinitude of interpretations – and this I leave entirely for others to speculate upon.

In 1997, Arthur C. Clarke was invited by the BBC to appear on a special edition of the documentary series ‘Seven Wonders of the World’ (Season 2):

*

I have returned to Darwin only because his vision of reality has become the accepted one. Acknowledging that human nature is just another natural outgrowth, it is tempting therefore to look to Darwin for answers. However, as I touched upon in the previous chapter, though Darwinism as biological mechanism is extremely well-established science, interpretations that follow from those evolutionary principles differ, and this is especially the case when we try to understand patterns of animal behaviour: how much stress to place on our own biological origins remains an even more hotly debated subject. And if we are to adjudicate fairly then one important consideration must be where Darwin’s own ideas originated.

In fact, as with all great scientific discoveries, we can trace a number of precursors including the nascent theory of his grandfather Erasmus, a founder member of the Lunar Society, who wrote lyrically in his seminal work Zoonomia:

“Would it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality, with the power of acquiring new parts, attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end!” 7

So doubtless Erasmus sowed the seeds for the Darwinian revolution, although his influence alone does not account for Charles Darwin’s central tenet that it is “the struggle for existence” which provides, as indeed it does, one plausible and vitally important mechanism in the process of natural selection, and thus, a key component in his complete explanation for the existence of such an abundant diversity of species. But again, what caused Charles Darwin to suspect that “the struggle for existence” necessarily involved such “a war of all against all” to begin with?

Well, it turns out that he had borrowed the first idea, “the struggle for existence” – a phrase he uses as the title of chapter three of The Origin of Species – directly from Thomas Malthus 8. Interestingly, Alfred Russel Wallace, the less remembered co-discoverer of evolutionary natural selection, who reached his own conclusions independently of Darwin’s work, had also been inspired in part by this same concept, which though ancient in origin was by then generally attributed to Malthus.

The notion of “a war of all against all”, however, traces back further, at least as far as the English Civil War, and to the writings of the highly influential political philosopher Thomas Hobbes. 9 So it is indirectly from the writings of these two redoubtable Thomases that much of our modern thinking about Nature, and therefore, by extension, about human nature, has itself evolved. It is instructive therefore to examine the original context in which Hobbes’ and Malthus’s own ideas formed and developed; contributions that have been crucial not only to the evolution of evolutionary thinking, but foundational to the development of our post-Enlightenment western civilisation. To avoid too much of a digression, I have decided to leave further discussion of Malthus and his continuing legacy for the addendum below, and here to focus attention on the thoughts and influence of Hobbes. But to get to Hobbes, who first devoted his attention to the study of the natural sciences, and optics in particular, I will provide a brief diversion by way of my own subject, Physics.

*

The title of Thomas Pynchon’s most celebrated novel, Gravity’s Rainbow, published in 1973, darkly alludes to the ballistic flight path of Germany’s V2 rockets, which fell over London during the last days of the Second World War. Pynchon was able to conjure up this provocative metaphor because by the late twentieth century everyone already knew very well, and seemingly from direct experience, that projectiles follow a symmetrical, parabolic arc. It is strange to think, therefore, that for well over a millennium people in the western world, including the most scholarly among them, had believed that motion followed a set of quite different laws, presuming that the trajectory of a thrown object, rather than following any sweeping arc, must instead be understood as comprising two quite distinct phases.

Firstly, impelled by a force, the object was presumed to enter a stage of “unnatural motion” as it climbed away from the earth’s surface – its natural resting place – until, having eventually run out of steam, it abruptly fell back to earth under “natural motion”. This is indeed our most common-sense view of motion – a view any child would instantly recognise and immediately comprehend – although, as with many common-sense views of the physical world, it is absolutely wrong.
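For readers who like to see the mathematics behind Galileo’s arc, here is a minimal sketch under the standard textbook idealisation – constant gravity and no air resistance (assumptions, note, that the balloon in the anecdote below cheerfully violates). For a projectile launched with speed v0 at angle θ:

\[ x(t) = v_0 \cos\theta \, t, \qquad y(t) = v_0 \sin\theta \, t - \tfrac{1}{2} g t^2 \]

and eliminating t gives

\[ y = x \tan\theta \;-\; \frac{g}{2 v_0^2 \cos^2\theta}\, x^2 \]

which is a single, symmetrical parabola – one smooth curve, not two distinct phases of motion.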

This rather striking illustration of scientific progress was first brought to my attention by a university professor who worked it into an unforgettable demonstration at the beginning of a lecture on error analysis. On the blackboard he first sketched out the two competing hypotheses: a beautifully smooth arc captioned ‘Galileo’, and before it a pair of arrows, up and then down, labelled ‘Aristotle’. Obviously Galileo was about to win, but then came the punchline: he pulled out a balloon, slapped it at an angle of roughly forty-five degrees, and we all watched it drift back to earth just as Aristotle had predicted. With tremendous glee he finally drew a huge chalk cross through Galileo, and declared the message (in case we hadn’t understood) that, above and beyond all other considerations, it is essential you first design your experiment and carry out your observations with due care! 10

*

Legend tells us that Newton was sitting under an apple tree in his garden, unable to fathom what force could maintain the earth in its orbit around the sun, when all of a sudden an apple fell and hit him on the head. And if this is a faithful account of Newton’s Eureka moment, then the symbolism is surely remarkable. We might even say that it was this fall of Newton’s apple that redeemed humanity after the original Fall; snapping Newton and by extension all humanity instantaneously from darkness into an Age of Reason. For if expulsion from Eden involved eating an apple, Newton’s apple paved the way for a new golden age. As poet Alexander Pope wrote so exuberantly: “Nature and Nature’s laws lay hid in night: God said, Let Newton be! and all was light.” 11

Of course Newton’s journey into light had not been a solo venture, and as he said himself, “if I have seen further, it is by standing on the shoulders of giants.” 12 The predecessors and contemporaries Newton pays homage to include Descartes, Huygens, and Kepler, although the name that stands tallest today is once again Galileo. For it was Galileo’s observations and insights that led more or less directly to what we describe today as Newton’s Laws, and in particular Newton’s First Law, which states (in various formulations) that objects remain in uniform motion or at rest unless acted upon by a force.

This deceptively simple law has many surprising consequences. It means that when we see an object moving faster and faster or slower and slower or – and this is an important point – changing its direction of motion, then there must be a force impelling it. Thus it follows that there is a requirement for a force to arc the path of the earth about the sun, and, likewise, one causing the moon to revolve about the earth; hence gravity. Conversely, if an object is at rest (or moving in a straight line at constant speed – the law makes no distinction) then we know the forces acting on it must be balanced in such a way as to cancel to zero. Thus, we can tell purely from any object’s motion whether the forces acting on it are ‘in equilibrium’ or not.
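To make the orbital example concrete, here is a minimal back-of-envelope sketch, treating the earth’s orbit as circular (which it very nearly is). An object moving in a circle at constant speed is continually changing direction, so by the First Law a force must be acting on it, directed towards the centre; for the earth, that force is the sun’s gravity:

\[ \frac{G M_{\odot} m}{r^2} \;=\; \frac{m v^2}{r} \quad\Longrightarrow\quad v = \sqrt{\frac{G M_{\odot}}{r}} \]

where M⊙ is the mass of the sun, m the mass of the earth, r the orbital radius and v the orbital speed. Notice that m cancels: the speed of the orbit does not depend on the mass of the orbiting body at all, which is one way of glimpsing Galileo’s insight that gravity treats all falling objects alike.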

An alternative way of thinking about Newton’s First Law requires the introduction of a related idea called ‘inertia’. Inertia is the reluctance of any object to change its motion, and it turns out that the more massive the object, the greater its inertia – here I am paraphrasing Newton’s Second Law. What this means in practice is that if you set up a situation in which there are no resistive forces, then an object will travel on indefinitely with unchanging velocity. This completely counterintuitive discovery was arguably Galileo’s finest achievement, and it is the same principle that proposed hyperloop systems hope to exploit – maglev vehicles running through evacuated tubes so as to all but eliminate friction and air resistance. It also permitted Galileo to understand how the earth could revolve indefinitely around the sun without us ever noticing.

While others falsely presumed that birds would be left behind if the earth were in motion, Galileo saw that the moving earth is no different in principle from a moving ship, and that, just as on board a ship, nothing gets left behind as it travels forward. This is easier to envisage if you think about sitting in a train or car and recall how it feels at constant speed – how you sometimes cannot even tell whether it is your own train or the one at the other platform that is moving.

Of course, when Galileo insisted on a heliocentric reality, he was directly challenging Papal authority, and he paid the inevitable price for his impertinence. Moreover, when he implored his opponents merely to look through his own telescope and see for themselves, they quickly declined the invitation. This is the nature of fundamentalism – not just its religious variants but all forms. It is also in our own nature – this confirmation bias – to have little or no desire to learn that we might be wrong about matters of central concern. So the Inquisition in Rome tried him instead, and naturally found him guilty, sentencing Galileo to lifelong house arrest with a strict ban on ever publishing again. Given the age, this was comparatively lenient; a generation earlier the Dominican friar and philosopher Giordano Bruno, who amongst other blasphemies had dared to suggest that the universe had no centre and that the stars were just other suns surrounded by planets of their own, had been burned at the stake.

Today, our temptation is to regard the Vatican’s hostility to Galileo’s new science as a straightforward attempt to deny a reality that devalued the Biblical story, which places not just the earth but the holy city of Jerusalem at the centre of the cosmos. However, Galileo’s heresy actually strikes a more fundamental blow, one that challenges not only papal infallibility and the centuries-long Scholastic tradition – that tripartite synthesis of Aristotle, Neoplatonism and Christianity – but, by extension, the entire hierarchical establishment of the late medieval period and much else besides.

Prior to Galileo, as my professor illustrated so expertly with his hilarious balloon demonstration, the view had endured that all objects obeyed laws according to their inherent nature. Thus, rocks fell to earth because they were by nature ‘earthly’, whereas the sun and moon remained high above because they were made of more heavenly stuff. In short, things knew their place. By contrast, Galileo’s explanation is startlingly egalitarian. On this new view, not only do all objects follow common laws – laws that apply even to celestial bodies like the planets and moon – but they are forced to do so because they are inherently inert: not impelled by inner drives – a living essence – but compelled always and absolutely by external forces. At a stroke the universe is reduced to mechanics, its inner workings akin to those of a most elaborate mechanism. At a stroke, it is reasonable to say, Galileo killed the cosmos.

Now if Newton’s apple is a reworking of the Fall of Man as humanity’s redemption through scientific progress, then the best-known fable about Galileo (the tale itself again being wholly apocryphal) is how he had cannonballs of differing sizes dropped from the Leaning Tower of Pisa in order to test how objects fall to earth, observing that they landed at the same moment. The experiment was famously recreated on the moon’s surface by an Apollo astronaut, where, without the hindrance of an atmosphere, it was found that even objects as shockingly different as a hammer and a feather do indeed accelerate at the same rate, landing in the dust at precisely the same instant. In fact, I have repeated the experiment myself, standing on a desk in class with smaller objects and surrounded by bemused students who, unfamiliar with the principle, are reliably astonished; since intuitively we all believe that heavier weights fall faster.
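Why the heavier object does not fall faster can be put in a single line – an anachronistic sketch, admittedly, since it borrows Newton’s Second Law rather than anything Galileo himself wrote. The earth pulls on a mass $m$ with weight $W = mg$, and the acceleration this force produces is

$$a = \frac{W}{m} = \frac{mg}{m} = g$$

with the mass cancelling out entirely: hammer and feather alike accelerate at $g$. On earth the feather flutters down only because air resistance is comparable to its tiny weight; remove the air, as on the moon, and the cancellation is laid bare.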

But my real point is this: Galileo’s thought experiment invokes a different Biblical reference. It is also a parable of sorts, reminding us all not to jump to unscientific assumptions and instead always “to do the maths”. But, in common with Newton’s apple it retells another myth from Genesis; in this case recalling the Tower of Babel, an architectural endeavour conceived when the people of the world had been united and hoped to build a short-cut to heaven. Afterwards, God decided to punish us all (as He likes to do) with a divide and conquer strategy; our divided nations further confused by a multiplicity of languages. But then along came Galileo to unite us again with his own gift, the application of a new universal language called mathematics. As he wrote:

Philosophy is written in this grand book, which stands continually open before our eyes (I mean the Universe), but it cannot be understood unless one first learns to comprehend the language and recognise the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth. 13

*

Thomas Hobbes was well versed in the works of Galileo, and on his travels around Europe in the mid-1630s he may very well have visited the great man in Florence. 14 In any case, Hobbes fully adopts Galileo’s mechanistic conception of the universe and draws what he sees as its logical conclusion, extrapolating from what is true of external nature and determining that the same must also be true of human nature.

All human actions, Hobbes says, whether voluntary or involuntary, are the direct outcomes of physical bodily processes occurring inside our organs and muscles. 15 As for the precise mechanisms, he traces their origins to “insensible” actions that he calls “endeavours” – something he leaves for the physiologists to study and comprehend. 16

Fleshing out this bio-mechanical model, Hobbes next explains how all human motivations – which he calls ‘passions’, and which must likewise operate on the basis of these material processes – are reducible to forces of attraction and repulsion: in his own terms, “appetites” and “aversions”. 17 Just like elaborate machines, Hobbes says, humans operate according to responses that entail either the automatic avoidance of pain or the pursuit of pleasure; the will being merely the passion that prevails over all these lesser appetites.

Thus, having presented this strikingly modern conception of life as a whole and of human nature in particular – a nature he has determined is inherently ‘selfish’, since concerned only with improving its own situation – Hobbes turns to what he calls “the natural condition of mankind” (or ‘state of nature’), and to the question of why, in that condition, “there is always war of everyone against everyone”:

Whatsoever therefore is consequent to a time of War, where every man is Enemy to every man; the same is consequent to the time, wherein men live without other security, than what their own strength, and their own invention shall furnish them withall. In such condition, there is no place for Industry; because the fruit thereof is uncertain; and consequently no Culture of the Earth; no Navigation, nor use of the commodities that may be imported by Sea; no commodious Building; no Instruments of moving, and removing such things as require much force; no Knowledge of the face of the Earth; no account of Time; no Arts; no Letters; no Society; and which is worst of all, continual fear, and danger of violent death; And the life of man, solitary, poor, nasty, brutish, and short. 18

According to Hobbes, this ‘state of nature’ becomes inevitable whenever our laws and social conventions cease to function and no longer protect us from our otherwise fundamentally rapacious selves. Once civilisation gives way to anarchy, then anarchy, according to Hobbes, is hell, because our automatic drive to improve our own situation comes into direct conflict with that of every other individual. And to validate his claim, Hobbes reminds us of the fastidious countermeasures everyone takes against their fellows:

It may seem strange to some man, that has not well weighed these things; that Nature should thus dissociate, and render men apt to invade, and destroy one another: and he may therefore, not trusting to this Inference, made from the Passions, desire perhaps to have the same confirmed by Experience. Let him therefore consider with himself, when taking a journey, he arms himself, and seeks to go well accompanied; when going to sleep, he locks his doors; when even in his house he locks his chests; and this when he knows there be Laws, and public Officers, armed, to revenge all injuries shall be done him; what opinion he has of his fellow subjects, when he rides armed; of his fellow Citizens, when he locks his doors; and of his children, and servants, when he locks his chests. Does he not there as much accuse mankind by his actions, as I do by my words? 19

Not that Hobbes is making a moral judgment: since he regards all of nature, human nature included, as compelled by the self-same ‘passions’, he sees this ongoing war of all against all as value-neutral. As he continues:

But neither of us accuse mans nature in it. The Desires, and other Passions of man, are in themselves no Sin. No more are the Actions, that proceed from those Passions, till they know a Law that forbids them; which till Laws be made they cannot know: nor can any Law be made, till they have agreed upon the Person that shall make it. 20

All’s fair in love and war because fairness isn’t the point. According to Hobbes, what matters are the consequences of actions, and this again is a strikingly modern stance. Finally, Hobbes wishes only to ameliorate the flaws he perceives in human nature, selfishness in particular, by constraining behaviour in accordance with what he deduces to be ‘laws of nature’: precepts and general rules found out by reason. This, says Hobbes, is the only way to overcome man’s otherwise sorry state of existence, in which a perpetual war of all against all ensures that everyone’s life is “nasty, brutish and short”. Thus, to save us from the ‘state of nature’, he demands that we conform to his more reasoned ‘laws of nature’.

In short, not only does Hobbes’ prognosis speak to the urgency of securing a social contract, but his whole thesis heralds our modern bio-mechanical conception of life and of its evolution. Indeed, following the tremendous successes of the physical sciences, Hobbes’ radical faith in materialism, which would then have seemed shocking to many, has slowly come to seem quite commonsensical; so much so that it led philosopher Karl Popper to coin the phrase “promissory materialism”: adherents of the physicalist view treating every gap in understanding as just another problem to be worked out in future – precisely as Hobbes does, of course, when he delegates the task of comprehending human actions and “endeavours” to the physiologists.

*

But is it really the case, as Hobbes declares, that individuals are restrained only by laws and social contracts? If so, then we might immediately wonder why acts of indiscriminate murder and rape are comparatively rare crimes, given that they are the hardest of all crimes to foil or to solve. In fact most people, most of the time, appear to prefer not to commit everyday atrocities, and it would be odd to suppose that they refrain purely because they fear arrest and punishment. Everyday experience tells us instead that most people simply have very little inclination towards violence or other serious crime.

Moreover, if we look for evidence supporting Hobbes’ conjecture, we actually find an abundance that refutes it. We know, for instance, that the appalling loss of life in the trenches would have been far greater still had it not been for a very deliberate lack of aim amongst the combatants. And this lack of zeal for killing, even in the heat of battle, turns out to be the norm, as US General S. L. A. Marshall learned from firsthand accounts gathered at the end of the Second World War, when he was tasked with debriefing thousands of returning GIs about their combat experiences. 21 What he discovered was almost incredible: not only had three-quarters of combatants never fired at the enemy even when under direct fire themselves, but amongst those who did fire, only around two per cent actually shot to kill.

Nor is this a modern phenomenon. At the end of the Battle of Gettysburg during the American Civil War, the Union Army collected up tens of thousands of discarded weapons and discovered that the vast majority were still loaded, and that around half of those held multiple loads – one had 23 charges packed all the way up the barrel. 22 Many of these soldiers had never actually pulled the trigger, the majority preferring, quite literally, to go through the motions of combat rather than fire their shots.

Indeed it transpires that, contrary to depictions of battle in Hollywood movies, the great majority of men take no pleasure at all in killing one another. Modern military training from Vietnam onwards has developed methods to compensate for this natural lack of bloodlust: heads are shaven, identities stripped, and conscripts otherwise desensitised – turning men into better machines for war. But then, if there is one day in history more glorious than any other, surely it has to be the Christmas Truce of 1914: the bloodied and muddied troops huddling for warmth in no-man’s land, sharing food, singing carols together, and playing the most beautiful game of football ever played – such an outpouring of sanity in the face of lunacy that no screenplay could invent the scene without it appearing impossibly sentimental and clichéd.

*

In his autobiography Hobbes famously relates that his mother’s shock at hearing news of the Spanish Armada brought on his premature birth: “my mother gave birth to twins: myself and fear.” Doing his best to avoid getting caught up in the English Civil War, Hobbes certainly lived through exceptionally fearful times, which helps to account for why his entire political theory is a response to fear, with a corresponding tolerance for tyranny. For Hobbes understood clearly that the power to protect derives from the power to terrify – indeed, to kill. In fact, Hobbes conceives of a system of government whose authority is sanctified by terrifying its own subjects into consenting to their own subjugation. On the same basis, when a highwayman demands “your money or your life?”, your agreement constitutes a Hobbesian contract! This is government by protection racket; his keenness for an overarching, unassailable yet benign dictator is perhaps best captured by the absolute power he grants the State, right down to the foundational level of determining what is moral:

I observe the Diseases of a Common-wealth, that proceed from the poison of seditious doctrines; whereof one is, “That every private man is Judge of Good and Evil actions.” This is true in the condition of mere Nature, where there are no Civil Laws; and also under Civil Government, in such cases as are not determined by the Law. But otherwise, it is manifest, that the measure of Good and Evil actions, is the Civil Law… 23

Remember that for Hobbes every action proceeds from a mechanistic cause, and so the very concept of ‘freedom’ struck him as a logical fallacy – and coming from someone who once had a bitter mathematical dispute with the Oxford professor John Wallis after erroneously claiming to be able to square the circle 24, his dismissal of ‘freedom’ is certainly fitting:

[W]ords whereby we conceive nothing but the sound, are those we call Absurd, insignificant, and Non-sense. And therefore if a man should talk to me of a Round Quadrangle; or Accidents Of Bread In Cheese; or Immaterial Substances; or of A Free Subject; A Free Will; or any Free, but free from being hindred by opposition, I should not say he were in an Error; but that his words were without meaning; that is to say, Absurd. 25

According to Hobbes, then, freedom reduces to an absurdity – a round quadrangle! – which immediately opens the door to totalitarian rule: and no thinker was ever so willing as Hobbes to sacrifice freedom for the sake of security.

But Hobbes is mistaken once again, as one now famous experiment first carried out by psychologist Stanley Milgram – and since repeated many times – amply illustrates. For those unfamiliar with Milgram’s experiment, here is the set-up:

Volunteers are invited to what they are told is a scientific trial investigating the effects of punishment on learning. They are paired off and assigned the role of either teacher or learner. The learner is then strapped into a chair and fitted with electrodes, before, in an adjacent room, the teacher is given control of an apparatus that enables him or her to deliver electric shocks. In advance of this, the teachers are given a low-voltage sample shock, just to give them a taste of the punishment they are about to inflict.

The experiment then proceeds with the teacher administering electric shocks, increasing the voltage step by step to punish each wrong answer. As the scale on the generator approaches 400V, a marker reads “Danger: Severe Shock”, and beneath the final switches there is simply “XXX”. Proceeding beyond this level evidently runs the risk of delivering a fatal shock, yet participants are encouraged to proceed nonetheless.

How, you may reasonably wonder, could such an experiment ever have been ethically sanctioned? Well, it’s a deception. All of the learners are actors, and their increasingly desperate pleading is scripted, as are their screams. Importantly, however, the true participants (who are always assigned the role of ‘teacher’) are led to believe that the experiment and the shocks are real.

The results – repeatable ones, as I say – are extremely alarming: two-thirds of the subjects go on to deliver what they are told are potentially fatal shocks. In fact, the experiment continues until the teacher has administered three shocks at the 450V level, by which time the actor playing the learner has already stopped screaming and must therefore be presumed either unconscious or dead. “The chief finding of the study and the fact most urgently demanding explanation”, Milgram wrote later, is that:

Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority. 26

Milgram’s experiment has sometimes been presented as proof of our innate human capacity for cruelty and for doing evil. But this was neither the object of his study nor the conclusion Milgram drew. The evidence instead led him to conclude that the vast majority take no pleasure in inflicting suffering or harm on others, but that they will do so when placed under a certain kind of duress, and especially when an authority figure instructs them to:

Many of the people were in some sense against what they did to the learner, and many protested even while they obeyed. Some were totally convinced of the wrongness of their actions but could not bring themselves to make an open break with authority. They often derived satisfaction from their thoughts and felt that – within themselves, at least – they had been on the side of the angels. They tried to reduce strain by obeying the experimenter but “only slightly,” encouraging the learner, touching the generator switches gingerly. When interviewed, such a subject would stress that he “asserted my humanity” by administering the briefest shock possible. Handling the conflict in this manner was easier than defiance. 27

Milgram thought that it is this observed tendency towards compliance amongst ordinary people that enabled the Nazis’ crimes and led to the Holocaust. His study also helps to account for why those WWI soldiers, even after sharing food and songs with the enemy, returned ready to fight on in the hours, days, weeks and years that followed the Christmas Truce. Disobedience was often severely punished, of course, with the ignominy of a court martial followed by execution by firing squad, but authority alone is generally enough to ensure compliance. Most people will just follow orders.

In short, what Milgram’s study shows is that Hobbes’ solution is, at best, deeply misguided, because it is authoritarianism (his remedy) that leads ordinary humans to commit some of the greatest atrocities. Milgram thus offers us a way of considering Hobbes from a top-down perspective: addressing the question of how obedience to authority shapes human behaviour.

But what about the bottom-up view? After all, this was Hobbes’ favoured approach, since he very firmly believed (albeit incorrectly) that his own philosophy was solidly underpinned by mathematics – his grandest ambition had been to derive a social philosophy that followed logically and directly from the theorems of Euclid. Thus, according to Hobbes’ promissory materialism, which sees Nature as wholly mechanistic and reduces actions to impulses, all animal behaviours – including human ones – are fully accounted for and ultimately determined by, to apply a modern phrase, ‘basic instincts’. Is this true? What does biology have to say on the matter and, most specifically, what are the findings of those who have closely studied animal behaviour?

*

This chapter is concerned with words rather than birds…

So writes British ornithologist David Lack, who devoted much of his life to the study of bird behaviour. For four years, while teaching at Dartington Hall School in Devon, he spent his spare time observing the local robin population, and his findings were delightfully written up in a seminal work titled, straightforwardly, The Life of the Robin. The passage I am about to quote follows on from the start of chapter fifteen, in which he presents a thoughtful aside under the heading “A digression upon instinct”. It goes on:

A friend asked me how swallows found their way to Africa, to which I answered, ‘Oh, by instinct,’ and he departed satisfied. Yet the most that my statement could mean was that the direction finding of migratory birds is part of the inherited make-up of the species and is not the result of intelligence. It says nothing about the direction-finding process, which remains a mystery. But man, being always uneasy in the presence of the unknown, has to explain it, so when scientists abolish the gods of the earth, of lightning, and of love, they create instead gravity, electricity and instinct. Deification is replaced by reification, which is only a little less dangerous and far less picturesque.

Frustrated by the kinds of misunderstanding generated and perpetuated by misuse of the term ‘instinct’, Lack then ventures at length into the ambiguities and mistakes that accompany it in both casual conversation and academic contexts; considerations that lead him to a striking conclusion:

The term instinct should be abandoned… Bird behaviour can be described and analysed without reference to instinct, and not only is the word unnecessary, but it is dangerous because it is confusing and misleading. Animal psychology is filled with terms which, like instinct, are meaningless, because so many different meanings have been attached to them, or because they refer to unobservables or because, starting as analogies, they have grown into entities. 28

When I first read Lack’s book I quickly fell under the spell of his lucid and nimble prose, and found his love for his subject infectious. As ordinary as they may seem to us, robins live surprisingly complicated lives, and all of this is richly told; but what stood out most was Lack’s view on instinct: if its pervasive stink throws us off the scent in our attempts to study bird behaviour, then how much more alert must we be to its bearing on perceived truths about human psychology? Lack ends his brief digression with a germane quote from philosopher Francis Bacon that neatly addresses both:

“It is strange how men, like owls, see sharply in the darkness of their own notions, but in the daylight of experience wink and are blinded.” 29

*

The wolves of childhood were creatures of nightmares. One tale told of a big, bad wolf blowing your house down to eat you! Another reported a wolf sneakily dressing up as an elderly relative and climbing into bed – just close enough to eat you! Still less fortunate was the poor duck in Prokofiev’s enchanting children’s suite Peter and the Wolf, swallowed alive and heard, in a climactic diminuendo, quacking from inside the wolf’s belly. When I grew a little older, I also came to hear stories of werewolves that sent still icier dread coursing down my spine…

I could go on and on with similar examples, because wolves are invariably portrayed as rapacious and villainous throughout folkloric traditions across the civilised world of Eurasia, which is actually quite curious when you stop to think about it. Curious because wolves are not especially threatening to humans, and wolf attacks are comparatively rare occurrences, while other large animals – bears, all of the big cats, sharks, crocodiles, and even large herbivores like elephants and hippos – pose a far greater threat to us. To draw an obvious comparison, polar bears habitually stalk humans, and yet rather than finding them terrifying we are taught to see them as cuddly. Evidently, our attitudes towards the wolf have been shaped by factors other than the observed behaviour of wolves themselves.

So now let us consider the rather extraordinary relationship our species actually has with another large carnivore: man’s best friend and cousin of the wolf, the dog – and incidentally, dogs kill (and likely have always killed) a lot more people than wolves.

The close association between humans and dogs is incredibly ancient. Dogs are very possibly the first animal humans ever domesticated, becoming so ubiquitous that no society on earth exists that hasn’t adopted them. This adoption took place so long ago in prehistory that conceivably it may have played a direct role in the evolutionary development of our species; and since frankly we will never know the answers here, I feel free to speculate a little. So here is my own brief tale about the wolf…

One night a tribe sat around the campsite finishing off the last of their meal as a hungry wolf watched on in secret. A lone wolf, and being a lone wolf, she was barely able to survive. Enduring hardship and eking out a precarious existence, this wolf was also longing for company. Drawn by the smell of the food and the warmth of the fire, she tentatively entered the encampment and, for once, wasn’t beaten back with sticks or chased away. Instead one of the elders at the gathering tossed her a bone to chew on. The next night the wolf returned, and the next, and the next, until soon she was welcomed permanently as one of the tribe: the wolf at the door finding a new home as the wolf by the hearth.

As a story it sounds plausible enough that something like it may have happened countless times and in many locations. Having enjoyed the company of the wolf, the people of the tribe perhaps later adopted her cubs (or perhaps it all began with cubs). In any case, as the wolves became domesticated they changed, and over generations of selective breeding they were gradually transformed into dogs.

The rest of the story is more or less obvious too. With dogs, our ancestors enjoyed better protection and could hunt more efficiently. Dogs run faster, have far greater endurance, and keener hearing and smell. Soon they became our fetchers and carriers too; our dogsbodies. Speculating a little further, our symbiotic relationship might also have opened the way for evolutionary change at a physiological level. Like cave creatures that lose pigmentation and whose eyesight atrophies in favour of greater tactile sense or sonar 30, we likewise might have lost acuity in the senses we needed less, as the dogs compensated for the loss, freeing our brains for other tasks. Did losses in our faculties of smell and hearing enable more advanced dexterity and language skills? Did we perhaps also lose our own snarls and replace them with smiles?

I shan’t say much more about wolves, except that we know from our close bond with dogs that they are affectionate and loyal creatures. So why did we vilify them as the “big, bad wolf”? My hunch is that they represent, symbolically, something we have lost – or perhaps more pertinently, something we have repressed in the process of our own domestication. In a deeper sense, this psychological severance involved our alienation from all of nature. It has caused us to believe, like Hobbes, that nature is nothing but rapacious appetite, red in tooth and claw, and that morality must therefore be imposed upon it by something other; that other being human rationality.

Our scientific understanding of wolf behaviour has meanwhile been radically overturned. The once-accepted belief that wolves compete for dominance to become ‘alpha’ males or females turns out to be largely untrue – or at least, it holds only when unrelated wolves are kept together in captivity. Wherever wolves are studied in their natural environment, the so-called ‘alpha’ wolves are simply the parents. In other words, wolves form families, just like we do.

*

One school views morality as a cultural innovation achieved by our species alone. This school does not see moral tendencies as part and parcel of human nature. Our ancestors, it claims, became moral by choice. The second school, in contrast, views morality as growing out of the social instincts that we share with many other animals. In this view, morality is neither unique to us nor a conscious decision taken at a specific point in time: it is the product of gradual social evolution. The first standpoint assumes that deep down we are not truly moral. It views morality as a cultural overlay, a thin veneer hiding an otherwise selfish and brutish nature. Perfectibility is what we should strive for. Until recently, this was the dominant view within evolutionary biology as well as among science writers popularizing this field. 31

These are the words of Dutch primatologist Frans de Waal, who became one of the world’s leading experts on chimpanzee behaviour. Based on his studies, de Waal applied the term “Machiavellian intelligence” to describe the variety of cunning and deceptive social strategies used by chimps. A few years later, however, de Waal encountered bonobos – pygmy cousins to both the chimpanzees and ourselves – held captive at a zoo in Holland, and he says they had an immediate effect on him:

“[T]hey’re totally different. The sense you get looking them in the eyes is that they’re more sensitive, more sensual, not necessarily more intelligent, but there’s a high emotional awareness, so to speak, of each other and also of people who look at them.” 32

Sharing a common ancestor with both bonobos and chimps, humans are in fact equally closely related to the two species, and interestingly, when de Waal was asked whether he thinks we are more like the bonobo or the chimp, he replied:

“I would say there are people in this world who like hierarchies, they like to keep people in their place, they like law enforcement, and they probably have a lot in common, let’s say, with the chimpanzee. And then you have other people in this world who root for the underdog, they give to the poor, they feel the need to be good, and they maybe have more of this kinder bonobo side to them. Our societies are constructed around the interface between those two, so we need both actually.” 33

De Waal and others who have studied primates are often astonished by their kinship with our own species. When we look deep into the eyes of chimps, gorillas, or even those of our dogs, we find ourselves reflected in so many ways. It is not hard to fathom where morality came from, and the ‘veneer theory’ of Hobbes reeks of a certain kind of religiosity, infused with a deep insecurity born of the hardship and terrors of civil strife.

*

Newer scientific studies show that primates, elephants, and other mammals including dogs also display empathy, cooperation, fairness and reciprocity – that morality is an aspect of nature. In the talk embedded here, Frans de Waal shares some surprising videos of behavioural tests that show how many of these moral traits all of us share.

*

II       Between two worlds

I was of three minds,

Like a tree

In which there are three blackbirds

— Wallace Stevens 35

*

Of all the creatures on earth, apart from a few curiosities like the kangaroo and the giant pangolin, or some species of long-extinct dinosaur, only the birds share our bipedality. The adaptive advantage of flight is so self-evident that there’s no need to ponder why the forelimbs of birds morphed into wings, but the human case is more curious. Why, around four million years ago, a branch of hominids came to stand on two legs rather than four – enabling them to move quite differently from our closest living relatives (bonobos and chimps), with all the physiological modifications this involved – still remains a mystery. But what is abundantly clear and beyond all speculation is that this single evolutionary change freed our hands from their formative locomotor demands, and that, having liberated our hands, not only did we become supreme manipulators of tools, but this sparked a parallel growth in intelligence, making us supreme manipulators per se – the word deriving from the Latin manus, meaning ‘hand’, of course.

With our evolution as manual apes, humans also became constructors, and curiously here is another trait that we have in common with many species of birds. That birds are able to build elaborate structures to live in is a remarkable fact, and that they achieve this by organising and arranging their materials using only their beaks is surely more remarkable again. Storks with their ungainly bills somehow manage to arrange large piles of twigs so carefully that their nests often overhang impossibly small platforms like the tips of telegraph poles. House martins construct wonderfully symmetrical domes just by patiently gluing together globules of mud. Weaver birds, a range of species similar to finches, build the most elaborate nests of all, quite literally weaving their homes from blades of grass. How they acquired this ability remains another mystery, for though recent studies have found a degree of learning in the styles and manner of construction, the general ability of birds to construct nests is an innate one. According to that throwaway term, they do it ‘by instinct’. By contrast, in one way or another, all human builders must be trained. As with so much about us, our constructions are therefore cultural artefacts.

*

With very few exceptions, owls have yellow eyes. Cormorants instead have green eyes. Moorhens and coots have red eyes. The otherwise unspectacular satin bowerbird has violet eyes. Jackdaws sometimes have blue eyes. Blackbirds have extremely dark eyes – darker even than their feathers – jet black pearls set within a slim orange annulus which neatly matches their strikingly orange beaks. While eye colour is broadly uniform within each bird species, the case is clearly different amongst humans, where eye colour is one of a multitude of variable physical characteristics, including natural hair and skin colour, facial features, and height. Nonetheless, as with birds and other animals where there is significant uniformity, most of these colourings and other identifying features are physical expressions of the individual’s genetic make-up or genotype; an outward expression of genetic inheritance known technically as the phenotype.

Interestingly, for a wide diversity of species, there is an inheritance not only of morphology and physiology but also of behaviour. Some of these behavioural traits may act in turn to shape the creature’s immediate environment – so the full phenotypic expression is often observed to operate outside and far beyond the body of the creature itself. These ‘extended phenotypes’, as Dawkins calls them, are discovered within such wondrous but everyday structures as spiders’ webs, the delicate tube-like homes formed by caddis fly larvae, the larger-scale constructions of beavers’ dams and, of course, birds’ nests. It is reasonable therefore to speculate on whether the same evolutionary principle applies to our human world.

What, for instance, of our own houses, cars, roads, bridges, dams, fortresses, cathedrals, systems of knowledge, economies, music and other works of art, languages…? Once we have correctly located our species as just one amongst many, existing at a different tip of an otherwise unremarkable branch of the evolutionary tree of life, why wouldn’t we judge our own designs as similarly latent expressions of human genes interacting with their environment? Indeed, Dawkins addresses this point directly and points out that, tempting as it may be, such a broadening of the concept of phenotype stretches his idea too far, since, to offer his own example, a genetic justification would then have to be sought for the differences between the architects of different styles of buildings! 36

In fact, the distinction here is clear: artefacts of human conception – which can be as wildly diverse as Japanese Noh theatre, Neil Armstrong’s footprints on the moon, Dadaist poetry, recipes for Christmas pudding and TV footage of Geoff Hurst scoring a World Cup hat-trick, as mundane as flush toilets, or as rarefied as Einstein’s thought experiments – are all categorically different from such animal artefacts as spiders’ webs and beavers’ dams. They are patterns of culture, not nature. Likewise, all human behaviour, right down to the most ephemeral gestures, articulations and tics, is profoundly patterned by culture and not shaped solely by pre-existing and underlying patterns within our human genotypes.

Vocabulary – another human artefact – makes this plain. We all know that eggs are ‘natural’ whereas Easter eggs are ‘artificial’, and that the eye is ‘natural’ while cameras are ‘technological’, both of our antonyms having roots in words for ‘art’. And while ‘nature’ is a strangely slippery noun that in English points to a whole host of interrelated objects and ideas, equivalent words nonetheless exist throughout other languages to distinguish our manufactured worlds – of arts and artifice – from the surrounding physical world of animals, plants and landscapes. This same word-concept is reinvented again and again for a simple yet important reason: the difference it labels is inescapable.

*

As a species we anthropomorphise incorrigibly, constantly imbuing the world with our own attributes and mores. Which brings up a related point: what animal besides the human is capable of reimagining things in order to make them conform to a preconceived notion of any kind? Dogs may mistake us for other dogs – although I doubt this – but still we are their partners within surrogate packs, and thus, in a sense, surrogate dogs. Yet from what I know of dogs, their world is altogether more direct. Put simply it is… stick chasing… crap taking… sleep sleeping… or (best of all) going for a walk, which again is more straightforwardly being present on an outdoor exploration! In short, dogs live close to the passing moment because they have nowhere else to live. Humans mostly cannot. Instead we drift in and out of our past or in anticipation of our future. Recollections and goals fill our thoughts repeatedly, and it is exceedingly difficult to attend fully to the present.

Moreover, for us the world is nothing much without other humans. Without culture, any world worthy of the name is barely conceivable at all, since humans are primarily creatures of culture. Yes, there would still be the wondrous works of nature, but no art besides, and no music except the occasional birdsong and the wind in the trees: nothing whatsoever beyond the things-in-themselves that surround us, and, without other humans, no need to communicate our feelings about any of it. In fact, there could be no means to communicate at all, since no language could ever form in such isolation. Instead, we would float through a wordless existence, which might be blissful or grindingly dull, but either way our sense impressions and emotions would remain unnamed.

So it is extremely hard to imagine any kind of world without words, although such a world quite certainly exists. It exists for animals, and in exceptional circumstances it exists for humans too. The abandoned children who have been nurtured by wild animals (very often wolves) provide an uneasy insight into this world beyond words. So too, for different reasons, do a few of the profoundly and congenitally deaf. On very rare occasions, such children have gone on to learn how to communicate, and when this happens, what they tell us is just how important language is.

*

In his book Seeing Voices, the neurologist Oliver Sacks describes the awakening of a number of such remarkable individuals. One was Jean Massieu. Almost without language until the age of fourteen, Massieu became a pupil at Roch-Ambroise Cucurron Sicard’s pioneering school for the deaf and, astonishingly, eventually became eloquent in both sign language and written French.

Drawing on Sicard’s own description, Sacks considers Massieu’s steep learning curve and sees how similar it is to his own experience with a deaf child. By attaching names to objects in pictures that Massieu had drawn, Sicard managed to open the young man’s eyes. The labels at first left his pupil “utterly mystified”, but then suddenly Massieu “got it”: he understood not merely the abstract connection between the pencil lines of his own drawing and the seemingly incongruous additional strokes of his tutor’s labels, but, just as immediately, he recognised the value of such a tool: “… from that moment on, the drawing was banished, we replaced it with writing.”

The most magical part of Sacks’s retelling comes in the description of Massieu and Sicard’s walks together through the woods. “He didn’t have enough tablets and pencils for all the names with which I filled his dictionary, and his soul seemed to expand and grow with these innumerable denominations…” Sicard later wrote.

Massieu’s epiphany brings to mind the story of Adam with his naming of all the animals in Eden, and Sacks tells us:

“With the acquisition of names, of words for everything, Sicard felt, there was a radical change in Massieu’s relation to the world – he had become like Adam: ‘This newcomer to earth was a stranger on his own estates, which were restored to him as he learned their names.’” 37

It is this gift for language that most obviously sets us apart from other creatures. Not that humans invented language from scratch, of course, since it grew up both with us and within us: one part phenotype and one part culture. It evolved within other species too, but for reasons unclear we excelled, and as a consequence became adapted to live in two worlds – or, as Aldous Huxley preferred to put it, we have become “amphibian”, in that we simultaneously occupy “the given and the home-made, the world of matter, life and consciousness and the world of symbols.” 38

Using words helps us to relate the present to the past. We reconstruct it or perhaps reinvent it. Likewise with words we envisage a future. This moves us outside Time. It makes us feel at home, helps us to heal past wounds and to prepare for future events. Correspondingly, it also detaches us from the present.

For whereas many living organisms exist entirely within their immediate physical reality, human beings occupy a parallel ideational space in which we are almost wholly embedded in language. Now think about that for a moment… no, really do!

Stop reading this.

Completely ignore this page of letters, and silence your mind.

Okay, close your eyes and turn your attention to absolutely anything you like and then continue reading…

So here’s my question: when you were engaged in your thoughts, whatever you thought about, did you use words at all? Very likely you literally “heard” them: your inner voice filling the silence in its busy, if generally unobtrusive and familiar way. Pause again and now contemplate the everyday noise of being oneself.

Notice how exceedingly difficult it is to exist if only for a moment without any recourse to language.

Perhaps what Descartes should have said is: I am therefore I think!

For as the ‘monkey mind’ goes wandering off, the words instantly creep back into our heads, and with our words comes this detachment from the present. Every spiritual teacher knows this, of course, recognising that we cannot be wholly present to the here and now while the mind darts off to visit memories, wishes, opinions, descriptions, concepts and plans: the same memories, wishes, opinions, descriptions, concepts and plans that gave us an evolutionary advantage over our fellow creatures. Such a teacher also understands that the real art of meditation cannot involve any direct attempt to silence our excitable thoughts, but only to ignore them.

It is evident therefore how in this essential way we are indeed oddly akin to amphibious beings since we occupy and move between two distinct habitats. Put differently, our sensuous, tangible outside world of thinginess (philosophers sometimes call this ‘sense data’) is totally immersed within the inner realms of language and symbolism. So when we observe a blob with eight thin appendages we probably see something spider-like. If we hate spiders then we are very likely to recoil from it. If we have a stronger aversion then we will recoil even after we are completely sure that it’s just a picture of a spider or, in extreme cases, a tomato stalk. On such occasions, our feelings of fear or disgust arise not as the result of failing to distinguish the likeness of a spider from a real spider, but from the power of our own imagination: we literally jump at the thought of a spider.

Moreover, words are sticky. They collect together in streams of association, and these mould our future ideas. Religion = goodness. Religion = stupidity. If we hold the first opinion, then crosses and pictures of saints will automatically generate a different affect than if we hold the latter. Or try replacing the word ‘religion’ with, say, ‘patriotism’: obviously our perception of the world alters in a different way. In fact, just as pheromones in the animal kingdom transmit behavioural effects directly between members of a species, so the language secreted by humans is capable of directly influencing the behaviour of others.

It has become our modern tendency automatically to suppose that the arrow connecting these strikingly different domains points unerringly in one direction: that language primarily describes the world, whereas the world as such is relatively unmoved by our descriptions of it. This is, broadly, the presumed scientific arrangement. By contrast, any kind of magical reinterpretation of reality involves a deliberate reversal of the arrow, such that symbols and language are treated as potent agents that might actively cause change within the material realm. Scientific opinion holds that this is false, and yet, on a deeply personal level, language and symbolism not only constitute our lived world but quite literally shape and transform it. As Aldous Huxley writes:

“Without language we should merely be hairless chimpanzees. Indeed, we should be something much worse. Possessed of a high IQ but no language, we should be like the Yahoos of Gulliver’s Travels—creatures too clever to be guided by instinct, too self-centred to live in a state of animal grace, and therefore condemned to remain forever, frustrated and malignant, between contented apehood and aspiring humanity. It was language that made possible the accumulation of knowledge and the broadcasting of information. It was language that permitted the expression of religious insight, the formulation of ethical ideals, the codification of laws. It was language, in a word, that turned us into human beings and gave birth to civilization.” 39

*

As I look outside my window I see a blackbird sitting on the TV aerial of a neighbouring rooftop. This is what I see, but what does the blackbird see? Obviously I cannot know for certain, though merely in terms of what he senses, we know that his world is remarkably different from ours. For one thing, birds have four types of cone cell in their retinas while we have only three. Our cone cells collect photons centred on red, green and blue frequencies, and their combined responses generate a range of colours that can be mapped graphically as a continuously varying two-dimensional plane; add another colour receptor, however, and the same mapping requires an additional axis extending above that plane. For this reason we might justifiably say that a bird sees colours in ways that differ not merely in the extent of the detectable range of frequencies, but that its vision involves a range of colour combinations of a literally higher dimension.
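To put that “higher dimension” a little more precisely – this is a standard way of modelling colour vision, not anything peculiar to blackbirds – a trichromat like us encodes each patch of light as a triple of cone responses $(L, M, S) \in \mathbb{R}^3$; discount overall brightness, say by normalising so that $L + M + S = 1$, and what remains is a two-dimensional plane of hues and saturations. A tetrachromat bird adds a fourth, violet- or ultraviolet-sensitive response, $(U, L, M, S) \in \mathbb{R}^4$, whose normalised colour space is a three-dimensional volume: quite literally an extra axis of distinguishable colour.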

Beyond these immediate differences in sense data, there is another way in which a bird’s perceptions – or more strictly speaking its apperceptions – are utterly different from our own, for though the blackbird evidently sees the aerial, it does not recognise it as such. Presumably it sees nothing beyond a convenient metal branch to perch upon, decked with unusually regular twigs. This is not to disparage the blackbird for its lesser intelligence, but to respect the fact that even the most intelligent of all blackbirds is incapable of knowing more, since this is all any bird can ever understand about the aerial.

No species besides our own is capable of discovering why the aerial was actually put there, or how it is connected to an elaborate apparatus that turns the invisible signals it captures into pictures and patterns of sound – let alone acquiring the knowledge of how metal can be manufactured by smelting rock, or the still more abstruse science of electromagnetism.

My point here is not to disparage the blackbird’s inferior intellect, since it very possibly understands things that we cannot; but to stress how we are unknowingly constrained in ways we very likely share with the bird. As Hamlet cheeks his friend: “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”

Some of these things – and especially the non-things! – may slip us by forever as unknown unknowns purely by virtue of their inherently undetectable nature. Others may be right under our nose and yet, just like the oblivious bird perched on its metal branch who can never consider reasons for why it is there, we too may lack any capacity even to understand that there is any puzzle at all.

*

I opened the chapter with a familiar Darwinian account of human beings as apex predators struggling for survival on an ecological battlefield; perpetually fighting over scraps, and otherwise competing over a meagre share of strictly limited resources. It is a vision of reality solidly founded upon an overarching belief in scientific materialism, and although a rather depressing vision, it is certainly the prevailing orthodoxy – the Weltanschauung of our times – albeit seldom expressed so antiseptically as it might be.

Indeed, to boil this down further, as doctrinaire materialist hardliners really ought to insist, we might best comprehend ourselves as biological robots. Why robots? Because according to the doctrine we are genetically coded not for experiences, or even merely for survival, but solely for reproductive success – and evolved to function only for such time as it takes to fulfil this primary objective. Our death is then as inconsequential as it is inevitable.

Propagation of every species goes on blindly until such time as the species as a whole inevitably becomes extinct. Even if this process were extended by technological means beyond the death of the earth and the solar system, it would end when the entire universe succumbs to its own overarching and insignificant end. No amount of space colonisation will save us here.

More nakedly told, it is not merely that, as Nietzsche famously lamented, “God is dead” – which has some upsides – but that, however richly animated it appears, there is nothing going on whatsoever besides machine process, anywhere in this universe or the next. This reduction of the cosmos to machine process is Hobbes’ vision in a nutshell too.

In common with the old religions, the claims of this new mechanistic belief system are boundless and absolute, and thereby encompass whatever remnants of any god or gods we might try to salvage. No place remains for any god within it, nor even the apparatus for exercising free will. Virtue, compassion and love are all epiphenomenal illusions. Redemption comes only in the form of a compensatory genetic subroutine compelling us to carry on regardless of the painful irrelevance of our human situation.

Unsurprisingly, we seldom reflect on the deep existential ramifications of our given materialist mythos, which is, for the most part, unconsciously inculcated; and almost no-one lives a life in strict nihilistic accord. Instead, we mostly bump along trying to be good people (a religious hangover presumably), with an outlook that approximates the one most succinctly expressed by Morty Smith: “Nobody exists on purpose, nobody belongs anywhere, everybody’s gonna die. Come watch TV.” 39a

*

III      Blinded by history

“All history is nothing but a continuous transformation of human nature”

— Karl Marx 40

*

History, someone once joked, is just one damn thing after another! A neat one-liner, disassembling history, as it does, into its component and frequently terrible events, which then follow in sequence with little more intent than the random footsteps of the drunkard. Progress may be admitted in both cases, of course, for in spite of deficiencies in our sense of direction we generally make it home.

However, to view history in such a wholly disjointed way is also to desiccate it, although this vulgar reductio ad absurdum is, of course, precisely why the joke is amusing. For why bother studying history at all when it makes so little sense? History, thus reduced, is surely bunk, and yet history at school has traditionally been taught very much like this: as just one thing after another…

Real historians make their living by joining up the dots instead, attempting to put flesh back on the bones by reconstructing the past much as palaeontologists reconstruct dinosaurs. But here again there are dangers. After all, when you’ve only got bones you’ve got to add the muscle and skin to your tyrannosaurus rex, and these have to be supplied on the basis of what you know about living, or at least less extinct, creatures. So when I was still a child, I learnt about an enormously long, herbivorous monster called the brontosaurus, whereas, as it later transpired, no such creature ever walked the Earth… at least not quite such a creature. Its discoverer, Othniel Charles Marsh, in his rush to establish a new species, had got the bones muddled: having excavated an almost complete skeleton, though one lacking a skull, he had creatively added a composite head constructed from finds at different locations. The brontosaurus he thought he had discovered was, as it turned out, an adult specimen of a genus he had already classified, the apatosaurus.

What applies to reconstructions in palaeontology also applies, at least in general terms, to reconstructions of human history: the difference being that whereas palaeontologists rely on fossil records, historians piece together surviving records of a different kind: books, documents, diaries, and, in more recent times, photographs and audio-visual recordings. When detailing and interpreting events beyond living memory (which is rather short), the historian has to rely solely on documentary sources, since there is literally nothing else. This magnifies the difficulty faced by the historian, since, unlike bones and rocks, human records can frequently distort the truth (whether wilfully or by accidents of memory).

How, then, does a scrupulous historian know which records to trust, especially when they are in direct contradiction? How to ascribe greater reliability to some records over others? Or to determine whether a newly unearthed record is reliable, unreliable, authentic or just a hoax? Well, here the historian must become a detective, and just as a police detective relies upon cross-examination to check facts and corroborate the evidence of witnesses, so the diligent historian makes thorough cross-checks against alternative sources. There is, of course, an ineluctable circularity to all this.

In 1983, when the Hitler Diaries turned up out of the blue, they were quickly authenticated by three expert historians: Hugh Trevor-Roper, Eberhard Jäckel and Gerhard Weinberg. Shortly afterwards, however, the diaries were proven to be forgeries and totally discredited by means of actual forensic analysis. Handwriting turned out to be the biggest give-away. But then Hitler had been dead less than four decades, well within living memory, and so there were ample handwritten documents to compare against. Such unassailable forensic evidence is obviously the exception rather than the rule for the greatest tracts of history.

Historians have their work cut out, since getting the basic facts straight is just the start of the process. If History is to be a living subject then they must try not to leave out too much of the warm, moist uncertainty of the real lives that made it, even though the greater part of most past lives is inevitably lost in history’s creases. Any History told as just one damn thing after another is History shrivelled up to the driest of husks. Indeed, as archaeologist and historian John Romer once elegantly put it: “History is only myth: stories trying to make sense of reality.” 41

*

Two decades ago, I embarked on an adventure to the USA. I was travelling with Neil, a friend and post-graduate colleague, to the International Conference on Asteroids, Comets and Meteors in Flagstaff, Arizona. We were wined and dined and given tours of the Grand Canyon and Meteor Crater. It was to be a most splendid jolly!

After the conference, we also took a tour a little further into the great continent. We hired a car and headed west on Route 66, only reaching our final destination, San Francisco, after a solid week of driving. Along the way, we stopped to admire the great Hoover Dam, Las Vegas, Death Valley, Los Angeles, the giant redwoods and the towering rocks of Monument Valley, which form such a spectacular backdrop to so many Westerns. En route we also encountered the occasional roadside stall where Native Americans who sold trinkets to get by would try to entice passing trade with off-road signs and promises of dinosaur footprints.

On one of our excursions we visited that most famous of petrified forests, with its fossilised trees strewn like ancient bronze-casts, and then nearby, we wandered the ruined remains of human settlements. The ruins had signs too, ones that told us the houses were believed to have been built about six hundred years ago, or, as the notes put it: “prehistoric”. Being Europeans we laughed, but of course we shouldn’t have. The idea that something a mere six hundred years old could be designated “prehistoric” was not another fine example of dumb-ass American thinking, but a straightforward fact: history, as I said above, being a discipline that arises from documentation. Automatically, therefore, we, meaning all modern people, have, to put matters mildly, an historical bias.

Let’s be clear: Christopher Columbus did not discover America. For one thing, there were millions of people already living there. But Columbus wasn’t even the first European to sail to the ‘New World’. That honour more likely goes to Erik Thorvaldsson, better known as Erik the Red, the Viking explorer credited in the Icelandic sagas with founding the first settlement in Greenland. Nor was Columbus the first European ever to set foot on continental American soil. The plaudits here should go instead to Thorvaldsson’s son, Leif Erikson, who according to the sagas established a Norse settlement in Vinland, now identified with Newfoundland. This all took place a full five centuries before the voyage of the Genoese pretender Columbus.

What then did Columbus bring to our story, if not discovery? Well, the answer can be read in the lines of his captain’s log. This is what he writes about his first encounter with the Arawak Indians who inhabited the archipelago known today as the Bahamas:

They go as naked as when their mothers bore them, and so do the women, although I did not see more than one young girl. All I saw were youths, none more than thirty years of age. They are very well made, with very handsome bodies, and very good countenances… They neither carry nor know anything of arms, for I showed them swords, and they took them by the blade and cut themselves through ignorance… They should be good servants and intelligent, for I observed that they quickly took in what was said to them, and I believe they would easily be made Christians, as it appeared to me that they had no religion.

On the next day, Columbus then writes:

I was attentive, and took trouble to ascertain if there was gold. I saw that some of them had a small piece fastened in a hole they have in the nose, and by signs I was able to make out that to the south, or going from an island to the south, there was a king who had great cups full, and who possessed a great quantity.

The following day, a Sunday, Columbus decided to explore the other side of the island, and once again was welcomed by the villagers. He writes:

I saw a piece of land which appeared like an island, although it is not one, and on it there were six houses. It might be converted into an island in two days, though I do not see that it would be necessary, for these people are very simple as regards the use of arms, as your Highnesses will see from the seven that I caused to be taken, to bring home and learn our language and return; unless your Highnesses should order them all to be brought to Castile, or to be kept as captives on the same island; for with fifty men they can all be subjugated and made to do what is required of them. 42

Having failed in his quest for gold, Columbus’s subsequent expeditions sought out a different cargo to bring back to Spain. In 1495, they corralled 1,500 Arawak men, women and children in pens and selected the fittest five hundred specimens for transportation. Two hundred died onboard the ships and the survivors were all sold into slavery. Unfortunately for Columbus, however, and by turns for the native people of the Caribbean, this trade in humans was insufficiently profitable to pay back his investors, and so Columbus adopted a different strategy and intensified his search for gold again.

In Haiti, where he believed the precious metal lay in greatest abundance, Columbus soon demanded that everyone over the age of fourteen deliver a quarterly tribute of gold in exchange for a copper token. Failure to comply was severely punished with the amputation of limbs, the victims left to bleed to death, while those who out of desperation tried to escape were hunted down with dogs and summarily executed.

Bartolome de las Casas, a young priest who had arrived to participate in the conquest and was indeed for a time a plantation owner, afterwards became an outspoken critic and reported on the many atrocities he witnessed. 43 In his own three-volume chronicle, History of the Indies, las Casas later wrote:

The Indians were totally deprived of their freedom and were put into the harshest, fiercest, most horrible servitude and captivity which no one who has not seen it can understand. Even beasts enjoy more freedom when they are allowed to graze in the field. 44

*

Napoleon is credited with the utterance that “History is written by the winners”, or alternatively, “What is history but a fable agreed upon?” 45, and for one with such a prodigious record both of winning and “making history”, who doubts that he knew whereof he spoke. Strange, therefore, how little attention is paid to Napoleon’s straight-talking maxim; how instead we eagerly absorb the authorised versions of our histories, trusting that by virtue of scholastic diligence and impartiality, these reconstructions of the past represent a near facsimile of the actual events. But then, with regard to the centuries-long fractious infighting between the European monarchies, we are privy to the accounts of both adversaries. So here we generally have – at the very least – two sides to every tale of every conflict, scandal and criminal act. In stark contrast, of course, when the British and the other European powers first sailed to those unconquered lands soon afterwards to be collectively known as “the colonies”, only one side of the story remains extant.

For during the last five hundred years or so, the period when western records have been most replete, a world once teeming with a diversity of alternative cultures has been slowly wiped away: the people of these forgotten worlds either annihilated or wholly assimilated by the great European powers. Our rather homogeneous culture, by the terror of cannons and on other occasions by the softer coercions of missionary sermons, has thereby steadily erased and replaced this heterogeneous confusion, sometimes as swiftly as it was encountered. Defeated cultures, if not entire indigenous populations, were not just swept aside, but utterly and irreversibly deleted.

Oral traditions leave little if anything in the way of an historical trace, and so back in the fifteenth century, America was indeed “prehistoric”; its history only began when the alien invaders (as the first Europeans must have appeared to the wide eyes of the native peoples they were about to overwhelm – as creatures from another world) stepped ashore. As in the Americas, so too in Australia and the other so-called “new worlds”, where, of the novelties we brought, perhaps the most significant was History itself.

When relying upon evidence from History, it is important, therefore, to bear continually in mind that throughout most regions of the world, throughout almost all of human time, people didn’t actually have any. All of History begins only with writing, which was a largely Eurasian preoccupation. Thus History in most parts of the world only began with our arrival: its origins, an indirect consequence of conquest, oppression, exploitation and enslavement.

Pulitzer Prize-winning journalist, author and activist Chris Hedges discusses the teaching of history as a form of indoctrination with Professor James W. Loewen, author of ‘Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong’.

*

I could at this juncture attempt to list all the barbarisms of history, although to do the subject justice I would need at least to double the length of the current chapter. Just a few examples will more than serve to illustrate the point…

From the North came the longboats of the Vikings intent on rape and pillage; from the East, the marauding Mongol horde, and the butchery of tyrants such as Vlad the Impaler; in the Mediterranean South, we were once entertained by the sadistic spectaculars of the Roman circuses, and afterwards the more ideologically entrenched atrocities of the Spanish Inquisition. When the first Europeans explored the lands of the West, the ruthless conquistadors came face to face with the blood-curdling savagery of the Aztec and Mayan empires. Which was the more dreadful?

In former times, the Christians marched thousands of miles to slaughter innocents in the name of the Prince of Peace, and, in astonishingly recent times, other Christians dispatched heathens and heretics by drowning, burning and lynching, especially at the height of the witch craze that swept Europe and America well into the Enlightenment period.

Muslims, by comparison, have generally preferred to kill in the name of Jihad and Fatwa, or else to inflict judicial cruelties by means of stoning, flagellation, amputation and decapitation, all in strict accordance with their holy Sharia Law. But then the irreligious are no less diabolical, whether we consider Hitler and the Nazi death camps, or the Soviet gulags, or the killing fields of Cambodia, and Mao Tse-tung’s “Cultural Revolution” in China. Given how little time has passed since the decline of religion, the sheer number of victims tortured and murdered by these surrogate atheistic (or perhaps neo-pagan in the case of the Nazis) regimes is as gut-wrenching as it is perplexing.

Few have spoken with more forceful eloquence or erudition on the evils of religion than the ardent atheist Christopher Hitchens. Sadly it was this same hatred of religion that in the end led Hitchens to join in the chorus calling for the neo-imperialist ‘war on terror’ and finally to argue the case for the ‘shock and awe’ bombing and subsequent invasion of Iraq – at the cost of more than a million innocent lives – in a 2003 collection of essays entitled A Long Short War: The Postponed Liberation of Iraq. One of Hitchens’ prime examples of religious authority making good people behave in morally repugnant ways is the barbarous practice of infant genital mutilation.

Britain itself witnessed centuries of religious intolerance, brutal repression and outright thuggery. Henry VIII, one of the most celebrated monsters in history, is chiefly remembered for his penchant for uxoricide, not to mention the land-grabbing and bloodletting of the English Reformation that followed from the convenience of his divorce from Catherine of Aragon. And like father, like daughter: this radical transformation of the sectarian landscape under Henry was partially undone by Bloody Mary’s reign of terror and her ultimately failed restoration of Catholicism (had she been successful, it is doubtful she would now be remembered as “Bloody Mary”).

Meantime, the sudden rise and spread of the British and other European empires meant that such commonplace domestic atrocities could, during the next four hundred years, be committed as far afield as Africa, North and South America, India, China, and Australia. All of this was facilitated by, and in turn facilitated and encouraged, the international trade in human slaves. Of course, the European place in world history has been a repeatedly shameful one, but then man’s inhumanity to man has also been legitimised and justified for a hundred other reasons beneath dozens of alternative flags. According to historical records then, human nature is infernally bad, and incurably so.

Cruel, bellicose, sneaky, and selfish; we must plead guilty on all counts. We are perhaps the worst, though differing only by degree from our fellow creatures. And here is something genuinely unique: many of us feel disgraced by our own diabolical behaviour. Somehow, we know that there is a better way to use our special talents. But then, what other creature could take such a detached position? Could actually aspire to be kinder, more peaceful, and more selfless?

*

The French writer Voltaire is nowadays best remembered for his marvelous satire, Candide (1759), which he subtitled with characteristic irony: “or the Optimist”. A savage critique of the unenlightened politics and obscurantist metaphysics of his time, Candide is an historical fantasy, with many episodes in the book cleverly interwoven with factual events of the period. It is rightly celebrated, and I reference its central theme in the addendum below. A decade earlier, however, Voltaire had road-tested similar ideas, choosing not an historical backdrop, but one that we would today describe as science fiction. A forgotten classic, Voltaire’s Micromegas (1750) is a story about the adventures of two philosophical aliens. Here is a brief synopsis.

Micromegas, the eponymous hero, is a gigantic inhabitant of the star Sirius, who ventures to Earth, stopping off at Saturn along the way. Being many miles tall, he finds that the Saturnians, who are themselves as tall as small hills, nevertheless appear to him as pigmies, and so his initial response is to deride them: “accustomed as he was at the sight of novelties, he could not for his life repress that supercilious and conceited smile which often escapes the wisest philosopher, when he [first] perceived the smallness of that globe, and the diminutive size of the inhabitants”. Eventually, however, and once the Saturnians ceased to be amazed by his gigantic presence, he befriends the secretary of the Academy of Saturn. Having discussed the comparative differences between their two worlds, Micromegas and the Saturnian resolve to set off on a grand tour of the Solar System. Shortly afterwards they arrive on Earth.

Upon landing, they decide to search around for evidence of intelligence but discover no signs of life at all except, eventually, for a whale, which the Saturnian catches between his fingers and shows to Micromegas, “who laughed heartily at the excessive smallness peculiar to the inhabitants of our globe”. As luck would have it, however, a ship of philosophers happens to be returning from a polar expedition, and aboard this ship the aliens soon encounter “a creature very different from the whale”.

Having established contact with the “intelligent atoms” aboard the ship, the alien philosophers are curious to learn about a life so “unencumbered with matter, and, to all appearance, little else than soul” conjecturing that such tiny earthlings must spend their lives “in the delights of love and reflection, which are the true enjoyments of the perfect spirit”. Of course, they are very quickly disabused of such idealist illusions by those on-board:

“We have matter enough,” said [one of the philosophers], “to do abundance of mischief, if mischief comes of matter; and too much understanding, if evil flows from understanding. You must know, for example, that at this very moment, while I am speaking, there are one hundred thousand animals of our own species, covered in hats, slaying an equal number of fellow-creatures who wear turbans; or else are slain by them; and this hath been nearly the case all over the earth from time immemorial…”

“The dispute is about a mud-heap, no bigger than your heel,” continued the philosopher. “It is not that any one of those millions who cut one another’s throats pretends to have the least claim to that clod; the question is to know, whether it shall belong to a certain person who is known by the name of Sultan, or to another whom (for what reason I know not) they dignify with the appellation Caesar. Neither the one nor the other has ever seen, or ever will see, the pitiful corner in question; and scarcely one of those wretches who slay one another hath ever beheld the animal on whose account they are mutually slain!”

Sadly, little has changed since Voltaire wrote his story more than two hundred and fifty years ago. 46

*

But now a related question: why did Europe become such a dominant force in the first place? This, arguably, is the greatest and most important question in all of our History, though one that until contemporary times was met with the most hubristic of lame answers:

The white race is the most versatile, has the most initiative, a greater facility for organization, and a more practical outlook in life. This has led to its mastery of the material side of living, urged it to invention and discovery, and to the development of industry, commerce and science.

So begins an explication outlined under the horrifically racist heading “why is the white race dominant?”, as quoted from a pre-war children’s ‘book of facts’ entitled How Much do You Know?, a copy of which I happen to own. The author’s deep-seated yet unconscious white supremacist mindset presumes such an excruciating air of colonial haughtiness that immediately afterwards the book summarises the other “races” as follows:

The black race, enervated by the heat of the tropics, has never shown great capacity for sustained or combined effort. The brown race, also found in hot climates, has produced the world’s main religions, and is excelled in artistic handicrafts. The yellow race is said still to have a slave mentality: the individual matters nothing, the community all. 47

When I showed this passage to my father he was rightly outraged. Those opinions were outdated and unacceptable when I was at school, he told me. But then my father went to school a full decade after the book’s publication. A world war had since been and gone. Perceptions and attitudes had evidently changed – greatly for the better.

And yet, if we hold our nose to the overwhelming stench of casual racism, there is, within the same passage, one idea that might – if expressed more sensitively – resonate with a somewhat permissible and rather commonly held opinion that still abounds today:

It [the white race – Europeans] has had the advantage also of living for the most part in temperate climates, where the struggle for existence has been neither too difficult nor too easy.

In a sense, it was this very assumption that Jared Diamond attempted not so much to dispel as to correct in his best-selling book, Guns, Germs and Steel. In pursuit of that end, he dedicated thirty years of his life on the road, trying to understand precisely why Europe did come to dominate the world, and he makes the intriguing and largely convincing case that the roots of present global inequality were basically an outcome of freak circumstances and happenstance. Not simply “the advantage also of living for the most part in temperate climates” – although, according to Diamond at least, climate has had a vital part to play in the ascent of the West – but also other advantages conferred by location and historical timing.

His book begins by reminding us how the very origins of human civilisation in the Fertile Crescent of the Middle East depended upon the accidental occurrence of arable crops and animals suitable for domestication. These two factors opened the way to a land of plenty. For given that the rise of agriculture was inevitable, Diamond says, and that its origins happened to occupy a central geographical location in the Eurasian landmass – a super-continent with the fortuitous orientation of spreading out east and west, thereby providing similar lengths of day, of seasons and of climates – it was comparatively easy for these new modes of agriculture to propagate as the people slowly migrated. A led to B led to C, if only because the rise of A, B and C was so perfectly compatible.

Thanks to the development of agriculture, the population enjoyed a surplus, and this in turn brought about the rise of trade, and no less importantly, of free time. So the people in the new settlements could spend extended periods preoccupied with otherwise unproductive activities, such as making stylistic improvements to their houses and other amenities, rather than, as in former times, gathering nuts or trapping pigs. This new freedom resulted in the rise of new technologies which, with time to spare, could also then be refined – undoubtedly the most significant of which was the production of metals and the development of metal-working skills. Ploughshares that were later beaten into swords.

Trade routes led to the transmission of new ideas, and once the discovery of gunpowder in China reached the shores of the Middle East, its military use was quickly perfected. And it was thanks to the early invention of writing – which arose on only a very few occasions worldwide, and just once outside the super-continent of Eurasia, with the development of Mayan script in Mexico – that this steady transmission of ideas and innovations thereafter accelerated.

As a consequence, the Eurasian civilisations had everything in place to begin their takeover, plus a secret weapon in reserve which they weren’t even aware of – germs. Our 10,000 years of domestication of so many species had inadvertently equipped these Eurasian invaders with an arsenal of biological agents: diseases to which they themselves had considerable immunity – smallpox, measles and influenza, all traceable to our domesticated animals, to name but three examples. In North and South America, by contrast, most people did not live in such close proximity to domesticated animals, and so had neither immunity nor exotic infections of their own to spread. Conquests by war were thus very often followed by pandemics more devastating than even our swords and cannons – although more recently, once the genocidal effect of disease had been better understood, the contamination of Native Americans became chillingly deliberate. The rest is history… our history.

Following on the vanguard of conquerors and explorers, a variety of enterprising European settlers made land grabs for King and Country, and as the empires grew, so a few European superpowers came to dominance. According to Diamond’s version then, it was by virtue of the happenstance of circumstance, the stars very firmly in our favour, that these new kingdoms of the West were first won and then overrun.

The rise of agriculture was a fluke; the inventions of the printing press and the gun were lucky but likely consequences. Diamond presents us with a timeline of evidence to show how European dominance had nothing to do with superior intelligence, or even, that less racist presupposition, superior ideology. We would have won with or without the Protestant work-ethic, and with or without the self-righteous and assertive arrogance that often comes with worship of a One True God; a god who permits unlimited belligerence for holy ends.

In reaching this conclusion, however, Diamond is surely being too much the professor of geography, the scientist and the archaeologist, and not sufficiently the historian, because even his own evidence doesn’t entirely lend support to such an overarching claim. For when it came to Europe’s seizure of Africa, the tables were to some extent turned: the European settlers were now highly susceptible to the ravages of tropical disease, and our advantages, including, of course, the superiority of our weaponry, were more than ever buttressed by an unshakeable ideology: that pseudo-religio-scientific notion of racial superiority so imprinted on the minds of the colonisers. It is the European mindset that finally re-tilts the balance. For the natives needed “civilising”, and despite the ever-present dangers of famine and disease, more than enough Europeans were driven by the profit motive and a deep-seated belief in the virtue of “carrying the white man’s burden”.

*

Bruce Parry is an indigenous rights advocate, author, explorer and filmmaker. He has lived with some of the most isolated tribes in the world, learning from how they interact with each other and the planet. After much exploration, one of the things that has truly inspired Bruce is the idea of egalitarian living. In August 2019, Ross Ashcroft, host of RT’s ‘Renegade Inc.’, caught up with him to hear his ideas on how we can rethink our leadership structures and muster the courage to look within so that we are able to change the modern western narrative.

*

All of the stories we tell fall within two broad categories. First there are our quotidian tales of the everyday: what happened, when, and to whom. Loosely we might say that all of these are our ‘histories’, whether biographical, personal, anecdotal, or the traditional histories that define nations; and it may be noted that the words ‘story’ and ‘history’ are synonymous in many languages. 48 But there are also stories of a second, more fundamental kind: those of fairytale, myth and allegory that sometimes arise as if spontaneously, and though deviating from the strict if mundane ‘truth of accountants’, are able to penetrate and bring to light otherwise occluded insights and wisdom.

Stories of the second kind have sprung forth in all cultures, often sharing common themes and characters. These include stories of creation; of apocalypse; of the wantonness of gods; of murder and revenge; of cosmic love and of battles between superheroes. Interestingly, the songlines of Australian aboriginals map their own stories of origin directly to the land. Less fantastical and wondrous, in the civilised world too, there are nationalistic versions of what might also be more loosely considered ‘songlines’. In England, for instance, we might trace the nation’s genealogy via Stonehenge, Runnymede, Sherwood Forest, Hastings, Agincourt, the white cliffs of Dover and Avalon (today called Glastonbury). Accordingly, Stonehenge tells us we are an ancient people; Runnymede that we are not slaves; Sherwood Forest that we are rebellious and cheer for the underdog; Hastings, Agincourt and the white cliffs of Dover that we are a warrior nation seldom defeated, in part because our isle is all but impregnable; while Avalon, to steal from Shakespeare, makes ours a “blessed plot”:

This royal throne of kings, this sceptred isle,
This earth of Majesty, this seat of Mars,
This other Eden, demi-paradise;
This fortress built by Nature for herself,
Against infection and the hand of war,
This happy breed of men, this little world,
This precious stone set in the silver sea,
Which serves it in the office of a wall,
Or as a moat defensive to a house,
Against the envy of less happier lands;
This blessed plot, this earth, this realm, this England… 49

So here we find history and myth entwined as unavoidably as if they were stories of a single kind. But then what is the past when it is not fully-fleshed and retold in stories? Unlike the rest of the extinct world, it cannot be preserved in jars of formaldehyde and afterwards pinned out on a dissecting table. To paraphrase George Orwell, the stories of our past are not just informed by the present, they are in part reconstituted from it, and thereafter those same stories ineluctably propel us into the future. Not that there is some future already fixed and inescapable, since we have no reason to presume it is, but that what unfolds is already prefigured in our stories, which then guide it like strange attractors, just as today’s world was prefigured by stories told yesterday. If things were otherwise, history would indeed be bunk – nothing more or less than a quaint curiosity. Instead it is an active creator, and all the more dangerous for that. 50

In 1971, Monty Python appeared in an hour-long May Day special showcasing the best of European TV variety. Python’s contribution was a six-minute piece describing traditional May Day celebrations in England, including the magnificent Lowestoft fish-slapping dance [at 2:30 mins]. It also featured as part of BBC2’s “Python Night” broadcast in 1999.

*

IV      Mostly Harmless

“Human nature is not of itself vicious”

— Thomas Paine 51

*

In the eyes of many today, it follows that since our evil acts far exceed our good deeds, and indisputably so given the innumerable massacres, pogroms, genocides and other atrocities that make up so much of our collective history, the verdict on ‘human nature’ is clear and unequivocal. With the evidence piled so precipitously against us as a species, we ought to plead guilty in the hope of leniency. However, and even though at first glance the case does indeed appear an open-and-shut one, this is not a full account of human nature. There is also the better half to being human, although our virtues are undoubtedly harder to appraise than our faults.

Firstly, we must deal with what might be called ‘the calculus of goodness’. I’ve already hinted at this but let me now be more explicit. Whenever a person is kind and considerate, the problem with ‘the calculus’ is how those acts of kindness are to be weighed against prior acts of indifference or malevolence. Or to broaden this: how is any number of saints to make up for the actions of so many devils? Can the accumulation of lesser acts of everyday kindness, in aggregation, ever fully compensate for a single instance of rape, torture or cold-blooded murder? Or, to raise the same issue on the larger stage again, how do the smallpox and polio vaccines, which undoubtedly saved a great deal of suffering and the lives of millions, compensate for the bombings of Guernica, Coventry, Dresden, Hiroshima and Nagasaki? For aside from the moral dubiousness of all such utilitarian calculations, the reality is that inflicting harm and causing misery is on the whole so much easier than manufacturing any equivalence of good.

And this imbalance is partly an unfortunate fact of life; a fact that new technologies can and will only exacerbate. So here is a terrible problem that the universe has foisted upon us. For destruction is, as a rule, always a much more likely outcome than creation. It happens all of the time, as things erode, decay, go wonky and simply give up the ghost. If you drop a vase onto a hard floor, your vase will reliably shatter into a hundred shards, and yet, if you toss those same hundred shards back into the air they will never reform into a vase again. Or, as Creationists like to point out (entirely missing the bigger point that evolution is not a purely random process), no hurricane could ever blow the parts of a scrapyard together to form a Jumbo Jet. Destruction then – i.e., the turning of order into chaos – turns out to be the way our universe prefers to unwind. And it’s tough to fight against this.

The random forces of extreme weather, earthquakes, and fires, are inherently destructive, just because they are erratic and haphazard. So if destruction is our wish, the universe bends rather easily to our will; and this is the diabolical asymmetry underlying the human condition.

In short, it will always be far easier to kill a man than to raise a child to become a man. Killing requires nothing more than the sudden slash of a blade, or the momentary pull on a trigger; the sheer randomness of the bullet’s tumbling wound being more than enough to destroy life. As technology advances, the push of a button increases that same potentiality and enables us to flatten entire cities, nations, civilisations. Today we enjoy the means for mega-destruction, and what was unimaginable in Voltaire’s day becomes another option forever “on the table”, in part, as I say, because destruction is an easy option, comparatively speaking – comparative to creation, that is.

Nevertheless, our modern weapons of mass destruction have all been wilfully conceived, and at great expense in terms both of time and resources, when we might instead have chosen to put such time and resources to wholly profitable use: protecting ourselves from the hazards of nature, or thoroughly ridding the world of hunger and disease, or more generally helping to redress the natural though diabolical asymmetry of life. 52

Here then is a partial explanation for the malevolent excesses of human behaviour, although I concede it is ultimately an unsatisfactory one. For however easy the world makes it to harm others – soft-bodied as we are, living amid sharp objects and less visible perils – we nevertheless have the freedom to choose not to do so. To live and let live, and to commit ourselves to the Golden Rule that we “do unto others as we would have others do unto us”. So my principal objection to any wholesale condemnation of our species has little to do with the estranging and intractable universal laws of nature, however harshly those laws may punish our human condition; instead, it entails a defence founded on anthropocentric considerations.

For if human nature is indeed so fundamentally rotten, then what ought we to make of our indisputable virtues? Of friendship and love; to select a pair of shining examples. And what of the great social reformers and the peacemakers like Gandhi and Martin Luther King? What too of our most beautiful constructions in poetry, art and music? Just what are we to make of this better half to our human nature? And why did human beings formulate the Golden Rule in the first instance?

Of course, even apparent acts of generosity and kindness can, and frequently do have, unspoken selfish motivations, so the most cynical adherents of the ‘dark soul hypothesis’ go further again, reaching the conclusion that all human action is either directly or indirectly self-serving. That friendship, love, poetry and music, along with every act of philanthropy (which literally means “love of man”), are all in one way or another products of the same innate selfishness. According to such surprisingly widespread opinion, even at our finest and most gallant the underlying motivation is always reducible to “you scratch my back…”

Needless to say, all of human behaviour really can, if we choose, be costed in such one-dimensional utilitarian terms: every action evaluated on the basis of outcomes and measured in terms of personal gain, whether actual or perceived. Indeed, given the mountains of irrefutable evidence that people are all-too-often greedy, shallow, petty-minded and cruel, it is not irrational to believe that humans are invariably and unalterably out for themselves. It follows that kindness is only ever selfishness dressed up in mischievous disguise, and challenging such cynicism is far from easy and can feel like shouting over a gale. The abrupt answer here is that not all personal gain ought to be judged equivalently. For even if our every whim were, in some ultimate sense, inseparable from, contingent upon, and determined by self-interest, then who is this “self” in which our interests are so heavily vested?

Does the interest of the self include the wants and needs of our family and friends, or even, in special circumstances, the needs of complete strangers, and if so, then do we still call it ‘selfish’? If we love only because it means we receive love in return, or for the love of God (whatever this means), or simply for the pleasure of loving, and if in every case this is deemed selfish, then by definition all acts have become selfish. The meaning of selfishness is thus reduced to nothing more than “done for the self”, which misses the point entirely that selfishness implies a deficiency in the consideration of others. Thus, if we claim that all human action is born of selfishness, as some do, we basically redefine and reduce the meaning of ‘selfish’.

Having said this, I certainly do not wish, however tempting it may be, to paint a false smile where the mouth is secretly snarling. There is nothing to be usefully gained by naivety or sentimentality when it comes to gauging estimates of human nature. Nonetheless, there is an important reason to make a case in defence of our species, even if our defence must be limited to a few special cases. For if there is nothing at all defensible about ‘human nature’, it is hard to see past a paradox, which goes as follows: if human beings are innately and thus irredeemably bad (in accordance with our own estimation, obviously), then how can our societies, with structures that are unavoidably and unalterably human, be in any way superior to the ‘human nature’ that designs them, rather than inherently and unalterably bad also? After all, ex nihilo nihil fit – nothing comes from nothing. This is, if you like, the Hobbesian Paradox. (And I shall return to it shortly.)

*

There have been many occasions when writing this book has felt to me a little like feeling around in the dark. Just what is it that I am so urgently trying to say? That feeling has never been more pronounced than when working on this chapter and the one ensuing. For human nature is a subject that leads into ever more divergent avenues and into deeper and finer complexities. What does it even mean to delve into questions about ‘human nature’? Already this presumes some general innate propensity that exists and provides a common explanation for all human behaviour. But immediately, this apparently simple issue brings forth a shifting maze of complications.

Firstly, there is the vital but unresolved debate over free will as opposed to determinism, which at one level is the oldest and most impenetrable of all philosophical problems. All attempts to address this must already presuppose sound concepts of the nature of Nature and of being. However, once we step down to the next level, as we must, we find no certain answers are provided by our physical sciences, which basically posit determinism from the outset in order to proceed.

Then there is the related issue of whether, as biological organisms, humans are predominantly shaped by ‘nature or nurture’. In fact, it has become increasingly clear that the question itself is subtly altering, as it becomes evident that the dichotomy is a false one. What can be said with certainty is that inherited traits are encouraged, amplified, altered and sometimes prohibited by our environment, due to processes occurring at both biological and social levels. Beyond this, nature and nurture cannot be so easily disentangled.

The tree grows and develops in accordance not merely with the biochemical instructions encoded within its seed but in response to the place where that seed germinates: whether under full sunlight or deep shade, whether its roots penetrate rich or impoverished soil, and in accordance with temporal variations in wind and rainfall. We too are shaped not only by the flukes of genealogy, but by adapting moment by moment to environmental changes from the very instant our father’s sperm penetrated and merged with our mother’s egg. We are no more reducible to Dawkins’ ‘lumbering robots’, those vehicles “blindly programmed to preserve the selfish molecules known as genes” 53 that bloodlessly echo Hobbes, than we are to the ‘tabula rasa’ of Aristotle, Locke, Rousseau and Sartre. Yet somehow this argument lurches on, at least in the public consciousness, always demanding some kind of binary answer as though this remains a possibility.

As for the question of free will or determinism at a cosmic level, my personal belief is the one already presented in the book’s introduction, although to make matters absolutely unequivocal, allow me to proffer my equivalent to Pascal’s famous wager: one ought to live without hesitation as though free will exists, because if you are right, you gain everything, whereas if you are wrong, you lose nothing. Moreover, the view that we are without agency and altogether incapable of shaping our future involves a shallow pretence that also seeks to deny personal responsibility; it robs us of our dignity and self-respect, and disowns the god that dwells within.

As for proof of this faculty, I have none, and the best supporting evidence I can offer is that on the occasions when I have most compellingly perceived myself as a thoroughly free agent in the world, there has spontaneously arisen a corresponding anxiety: the sense that possession of such an extravagant gift involves acknowledging the sheer enormity of one’s responsibility. An overwhelming feeling that freedom comes with an excessively heavy price attached.

Indeed, my preferred interpretation of the myth of Eve’s temptation in the Garden of Eden follows from this: that the eating of “the apple” – i.e., the fruit of the tree of the knowledge of good and evil – miraculously and instantly gave birth to free will and conscience as one, with each sustaining the other (like the other snake, Ouroboros, perpetually eating its own tail). It follows that The Fall is nothing besides our human awakening to the contradistinction of good and evil actions, and thus interpreted, this apprehension of morality is simply the contingent upshot of becoming free in a fully conscious sense. 54

Indeed, we might justifiably wonder upon what grounds the most dismal critiques of human nature are founded, if not for the prior existence of a full awareness of moral failings that is itself another component and expression of that same nature. Or, as the French writer La Rochefoucauld put it in one of his most famous and eloquent maxims: “Hypocrisy is the homage which vice renders to virtue.” 55 That is, whenever the hypocrite says one thing and then does another, he does so because he recognises his own iniquity but then feigns a moral conscience to hide his shame. Less succinctly, it might be restated that acting with good conscience is hard-wired, and for most people (sociopaths presumably excluded) doing otherwise automatically involves us in compensatory acts of dissemblance, denial and self-delusion.

We have no reason to say humans are wholly exceptional in possessing a conscience, of course, although it seems that we are uncommonly sensitive when it comes to detecting injustice, and the reason is perhaps (admittedly, this is a hunch) that we are uniquely gifted empathisers. Unfortunately, such prodigious talent for getting into the minds of others is one that also makes our species uniquely dangerous.

James Hillman was an American psychologist who studied at, and later guided studies for, the C.G. Jung Institute in Zurich. In the following interview he speaks about how we have lost our connection to the cosmos and, consequently, our feelings for the beauty in the world, and with it our love for life.

*

The Enlightenment struck many blows, one of which effectively killed God (or at least certain kinds of Theism). In the process, it also, more inadvertently, toppled the pedestal upon which humanity had earlier placed itself, as Darwinism slowly but inevitably brought us all back down to earth with a bump. We are no longer the lords of creation, and yet the shibboleth of anthropocentrism is much harder to shake.

Hobbes convinced us that ‘human nature’ is dangerous because it is Nature. Rousseau then took the opposing view arguing that our real problems actually stem from not behaving naturally enough. His famous declaration that “Man is born free, and everywhere he is in chains” forms the opening sentence of his seminal work The Social Contract; the spark that had helped to ignite revolutions across Europe. 56 Less than a century later, Marx and Engels concluded The Communist Manifesto, echoing Rousseau with the no less famous imperative often paraphrased: “Workers of the world unite! You have nothing to lose but your chains” 57

In the place of freedom and perhaps out of a desperate sense of loss, we soon recreated ourselves as gods instead and then set about constructing new pedestals based on fascist and Soviet designs. But finally, the truth was out. Humans make terrible gods. And as we tore down the past, remembering in horror the death camps and the gulags, we also invented new stories about ourselves.

In the process, the post-Hobbesian myth of ‘human nature’ took another stride. Rather than being on a level with the rest of creation and mechanically compelled to lust for power and material sustenance like all animals, our species was recast once again as sui generis, though in a different way. Beyond the ability to wield tools, and to manipulate the world through language and indeed through culture more generally, we came to the conclusion that the one truly exceptional feature of humans – the really big thing that differentiates ‘human nature’ from the whole of the rest of nature – was our species’ outstanding tendency to be rapacious and cruel. Thanks to our peculiar desire for self-aggrandisement, this has become the latest way we flatter ourselves.

It is sometimes said, for instance, that humans are the only creatures that take amusement in cruelty. At first glance this sounds like a perfectly fair accusation, but just a little consideration finds it to be false. Take the example of the well-fed cat stalking a bird: does it not find amusement of a feline kind in its hunt? When it toys with a cornered mouse, meting out a slow death from the multiple blows of its retractable claws, is it not enjoying itself? And what other reason can explain why killer whales will often toss a baby seal from mouth to mouth – shouldn’t they just put it out of its misery?

Ah yes, comes the rejoinder, but still we are the only creatures to engage in full-scale warfare. Well, again, yes and no. The social insects go to war too. Chemical weapons are deployed as one colony defends itself from the raids of an aggressor. When this is granted, here’s the next comeback: ah, but we bring malice aforethought. The social insects are merely acting in response to chemical stimuli. They have pheromones for war, but no savage intent.

This brings us a little closer to home – too close perhaps – since it is well documented that chimpanzees gang up to fight against a rival neighbouring troop. How is this to be differentiated from our own outbreaks of tribal and sectarian violence?

That chimpanzees are capable of malice aforethought has long been known too. Indeed, they have been observed on occasion to bring a weapon to the scene of an attack. But then, you might expect our immediate evolutionary cousins to share a few of our vices! However, in the 1970s, primatologist Jane Goodall was still more dismayed when she saw how the wild chimps she was studying descended into a kind of civil war: systematically killing a group of ‘separatists’ one by one and apparently planning their campaign in advance. 57a So yes, without any doubt, humans are of all creatures the best able to act with malice aforethought, yet even in this we are apparently not alone.

Okay then… and here is the current fashion in humanity’s self-abasement… we are the only creatures that deliberately destroy their own environment. But again, what does this really mean? When rabbits first landed in Australia (admittedly introduced by humans), did they settle down for a fair share of what was available? When domestic cats first appeared in New Zealand (and sorry to pick on cats again), did they negotiate terms with the flightless birds? And what of the crown-of-thorns starfish that devours the coral reefs, or of the voracious Humboldt squid swarming in some parts of our oceans and consuming every living thing in sight? Or consider this: when the continents of North and South America first collided and a land bridge allowed the Old World creatures of the North to encounter the New World creatures of the South, the migration of the former caused mass extinction of the latter. The Old World creatures, being better adapted to the new circumstances, simply ate the competition. There was not a man in sight.

In short, Nature’s balance is not maintained thanks to the generosity and co-operation between species: this is a human conceit. Her ways are all-too often cruel. Foxes eat rabbits and in consequence their populations grow and shrink reciprocally. Where there is an abundance of prey the predators thrive, but once numbers reach a critical point that feast becomes a famine, which restores the original balance. This is how ‘Nature’s balance’ is usually maintained – just as Malthus correctly describes (more below). But modern humans have escaped this desperate battle for survival, and by means of clever artificial methods, enable our own populations to avoid both predation and famine; an unprecedented situation that really does finally set us apart from all of our fellow species.

*

When Donald, son of the psychologists Winthrop and Luella Kellogg, turned ten months old, his parents took the extraordinary decision to adopt Gua, a seven-and-a-half-month-old female chimp, and bring her up in their home as a surrogate sibling. It was the 1930s and this would be a pioneering experiment in primate behaviour; a comparative study that caused a good deal of dismay in academia and amongst the public. But irrespective of questions of ethics and oblivious to charges of sensationalism, the Kelloggs proceeded, and Donald and Gua finally lived together for nine months.

They soon developed a close bond. Although younger, Gua was actually more mature than Donald both intellectually and emotionally. Being protective, she would often hug him to cheer him up. Her development was remarkably swift, and she quickly learned how to eat with a spoon and to drink from a glass. She also learned to walk and to skip – obviously not natural behaviours for a chimp – as well as to comprehend basic words; all of this before Donald had caught up.

This comparative developmental study had to be cut short, however, because by the age of two, Donald’s behaviour was becoming disconcertingly apelike. For one thing, he was regressing back to crawling. He had also learned to carry things in his mouth, picking up crumbs with his lips and one day chewing up a shoe, and far more than ordinary toddlers, he took delight in climbing the furniture and trees. Worse still, his language skills were seriously delayed: by eighteen months he knew just three words, so that instead of talking he would frequently just grunt or make chimp-like gesticulations. The story ends tragically, of course, as all of the concerns over ethics were confirmed. Gua died of pneumonia less than a year after the study was curtailed and she had been abandoned by the Kellogg family. Donald committed suicide later in life, at the age of 43.

This is a sad story and by retelling it I am in no way endorsing the treatment of Donald and Gua. No such experiment should ever have been conducted, but it was, and the results are absolutely startling nonetheless. Instead of “humanizing the ape”, as the Kelloggs hoped to achieve, the reverse had been occurring. What they had proved inadvertently is that humans are simply more malleable than chimps, or for that matter any other creature on earth. It is humans that learn best by aping and not the other way around.

*

However much we may try to refine our search for answers, it is actually difficult to get beyond the most rudimentary formulation, which ponders whether ‘human nature’ is for the most part good or bad. Rephrased, as it often is, this same inquiry generally receives one of four responses, which can be summarised as follows:

i) that human nature is mostly good but corruptible;

ii) that human nature is mostly bad but can be corrected;

iii) that human nature is mostly bad but with flaws that can be ameliorated – rather than made good; or,

iv) most misanthropically, that human nature is atrocious, and irredeemably so, but that’s life.

The first is the Romanticism of Rousseau, whereas the third and fourth hinge around the cynicism of Hobbes. Whereas Hobbes had regarded the ‘state of nature’ as the ultimate threat, Rousseau implores us instead to return to a primitive state of authentic innocence. And it is these extremes of Hobbes and Rousseau that still prevail, informing the nuclear-armed policy of Mutual Assured Destruction on the one hand, and the counterculture of The New Age on the other. Curiously, both peer back distantly to Eden and reassess The Fall from different vantages too. Although deeply unreligious, Hobbes holds the more strictly Christian orthodox view. As undertaker and poet Thomas Lynch laid it out:

[T]he facts of the matter of human nature – we want, we hurt and hunger, we thirst and crave, we weep and laugh, dance and desire more and more and more. We only do these things because we die. We only die because we do these things. The fruit of the tree in the middle of Eden, being forbidden, is sexy and tempting, tasty and fatal.

The fall of Man and Free Market Capitalism, no less the doctrines of Redemptive Suffering and Supply and Demand are based on the notion that enough is never enough… A world of carnal bounty and commercial indifference, where men and women have no private parts, nor shame nor guilt nor fear of death, would never evolve into a place that Darwin and Bill Gates and the Dalai Lama could be proud of. They bit the apple and were banished from it. 58

Forever in the grip of the passions, our ‘appetites’ and ‘aversions’, these conjoined and irrepressible Hobbesian forces of attraction and repulsion continually incite us. In our desperation to escape we flee blindly from our fears, yet remaining hopeful always of entirely satisfying our desires. It’s pain and pleasure all the way: sex and death! And I imagine if you had asked Hobbes whether without the apple “we’d still be blissfully wandering about naked in paradise”, as Dudley Moore put it to Peter Cook’s Devil in the marvelous Faustian spoof Bedazzled, you’d very likely get a similar reply to the one Cook gave him: “they [Adam and Eve] were pig ignorant!” 59

However, the Genesis myth, although a short story, in fact unfolds in two quite distinct acts: only the first is concerned with temptation, while the denouement centres on shame. So let’s consider shame for a moment, because shame appears to be unique as an emotion, and though we habitually confuse it with guilt – since both are involved in reactions of conscience – shame has an inescapable social quality. To summarise: guilt involves what you do, while shame is intrinsically bound up with your sense of self. So guilt leads us to make apologies, a healthy response to wrongdoing, whereas you cannot apologise for being bad.


Detail from ‘The Expulsion from the Garden of Eden’ (Italian: Cacciata dei progenitori dall’Eden), a fresco by the Italian Early Renaissance artist Masaccio, ca. 1427. Based on image from Wikimedia Commons.

The American academic Brené Brown describes shame as “the intensely painful feeling or experience of believing that we are flawed and therefore unworthy of love and belonging” 60, and asks us to imagine being in a room with all the people we most love, only to hear, upon walking out, the worst things imaginable being said about us; things so bad that we don’t think we’ll ever be able to walk back into the room and face everyone again.

In fact, shame is ultimately tied up with fears of being unworthy, unloveable and abandoned, fears we learn to feel as infants, when isolation and rejection are actual existential threats. It triggers instinctual responses that humans probably evolved in order to avoid being rejected and ostracised by the group, back when exclusion again meant a genuine threat to survival. Shame is an overwhelming feeling accompanied by a host of physiological sensations: blushing, a tightening of the chest, the sense of not being able to breathe, and a horrible dread that sinks to the pit of your stomach. It is really no exaggeration to say that shame feels like death.

Moreover, and unlike our other emotions, shame can be a response to just about anything: our appearance; our own attention-seeking, when we get too boisterous, too over-excited, or talk too much (especially about ourselves); or when we retreat into isolation, feeling shy and avoidant; or feeling inauthentic, fake; or being taken advantage of; or conversely being unable to drop our armour, becoming judgmental and quick to anger; or just a lack of ability, skill or creativity; our failure to communicate properly, including being unable to speak up or speak honestly; or when we are lazy, or weak, with low energy or lacking motivation, perhaps sexually; or finally – not that my list is in any way exhaustive – shame can be triggered by anxiety, nervousness and defensiveness, as when we betray our weakness by blushing or showing other visible signs of nervousness or shame. Note the circularity.

Strangely, we can even feel shame without recognising the symptoms, and this may again generate escalating confusion and a terrifying sense of spiralling: a fear that we won’t survive the feeling itself. In fact, shame and fear have such a co-existent relationship that we can alternate between the two, and both may leave terrible psychological scars; some parts of us becoming repressed, others forming a mask – conscious and unconscious aspects (a topic I return to in the next chapter).

Interestingly, Jean-Paul Sartre is often quoted as saying “hell is other people”, a line widely misinterpreted to mean that our relationships with others are invariably poisoned. In fact, what Sartre meant is closer to the idea that hell is the judgment of our own existence in the eyes of other people; so perhaps what he finally intended to say is “hell is our sense of rejection in the eyes of others”. If so, then surely he was right. 61

Seen in this way, the Rousseauian standpoint becomes intriguing. Is it possible that the root cause of all human depravity is finally shame? And if we could get beyond our shame, would this return to innocence throw open the gates to paradise once more?

In this chapter I have already tried to expose the chinks in the rather well-worn armour of Hobbesianism, because, for the reasons expounded above, it has been collectively weighing us down. Hobbes’s adamancy that human nature is rotten to the core, with its corollary that there is little to be done about it, is actually rather difficult to refute; the measure of human cruelty vastly exceeding all real or apparent acts of generosity and kindness. But Hobbes’s account is lacking, and what it lacks in abundance is any kind of empathy. Our capacity for empathy is, Brené Brown points out, obstructed primarily by shame. Why? Because empathy can only flourish where there is vulnerability, and this is precisely what shame crushes.

So yes, we must concede that the little boy who pulls the legs off flies greatly amuses himself. There can be a thrill to malice, if of a rather shallow and sordid kind. But more happiness is frequently to be found in acts of creation than in destruction; more fulfilment in helping than in hindering; and there is far more comfort in loving than in hating. Even Hobbes, though ‘twinned with fear’, deep down must have known this.

Brené Brown has spent many years researching shame, which she believes is an unspoken epidemic and the secret behind many forms of disruptive behaviour. An earlier TED talk of hers on vulnerability became a viral hit. Here she explores what can happen when people confront their shame head-on:

*

On the whole, we are not much into the essence of things these days. Essentialism is out and various forms of relativism are greatly in vogue. That goes for all things except perhaps our ‘human nature’, for which such an essence is very commonly presumed. Yet it seems to me that the closer one peers, the blurrier any picture of our human nature becomes; and the harder one tries to grasp its essence, the less tangible it is. In any case, each of the various philosophies that inform our modern ideas of ‘human nature’ is intrinsically tainted by prior and generally hidden assumptions, which arise from vestigial religious and/or political dogma.

For instance, if we take our cue from Science (most especially from Natural History and Biology) by seeking answers in the light of Darwin’s discoveries, then we automatically inherit a view of human nature sketched out by Malthus and Hobbes: Malthus, who proceeded directly from (his own version of) God at the outset; and Hobbes, who, in desperately trying to circumvent the divine, finished up constructing an entire political philosophy based on a notion barely distinguishable from Augustine’s doctrine of Original Sin. Meanwhile, almost all of the histories that commonly inform our opinions about human nature are those written about and for the battle-hardened conquerors of empires.

But why suppose that there really is anything deserving the title ‘human nature’ in the first place, especially given what is most assuredly known about our odd species: that we are supremely adaptable, and very much more malleable and less instinctive than all our fellow creatures? Indeed the composite words strike me as rather curious, once I step back a little. After all, ‘human’ and ‘nature’ are not in general very comfortable bedfellows: ‘human’ meaning ‘artificial’ and ‘nature’ meaning, well… ‘natural’… and bursting with wholesome goodness. Or else, alternatively, ‘human’ translating as humane and civilised, leaving ‘nature’ to supply synonyms for wild, primitive and untamed… and, by virtue of this, red in tooth and claw.

In short, the very term ‘human nature’ is surely an oxymoron, doubly so as we see above. The falsehood of ‘human nature’ conceals the more fascinating if unsettling truth that in so many respects humans conjure up their nature in accordance with how we believe ourselves to be, which rests in turn on what limits are set by our family, our acquaintances and the wider culture. Human nature and human culture are inextricable, giving birth to one another like the paradoxical chicken and egg. As Huxley writes:

‘Existence is prior to essence.’ Unlike most metaphysical propositions, this slogan of the existentialists can actually be verified. ‘Wolf children,’ adopted by animal mothers and brought up in animal surroundings, have the form of human beings, but are not human. The essence of humanity, it is evident, is not something we are born with; it is something we make or grow into. We learn to speak, we accumulate conceptualized knowledge and pseudo-knowledge, we imitate our elders, we build up fixed patterns of thought and feeling and behaviour, and in the process we become human, we turn into persons. 62

Alternatively, we might give a nod to Aristotle, who famously declared that “man is by nature a political animal”, an assessment seemingly bound up in contradictions and yet abundantly true, which he expounds upon as follows:

“And why man is a political animal in a greater measure than any bee or any gregarious animal is clear. For nature, as we declare, does nothing without purpose; and man alone of the animals possesses speech. The mere voice, it is true, can indicate pain and pleasure, and therefore is possessed by the other animals as well (for their nature has been developed so far as to have sensations of what is painful and pleasant and to indicate those sensations to one another), but speech is designed to indicate the advantageous and the harmful, and therefore also the right and the wrong; for it is the special property of man in distinction from the other animals that he alone has perception of good and bad and right and wrong and the other moral qualities, and it is partnership in these things that makes a household and a city-state.” 63

To end, therefore, I propose a secular update to Pascal’s wager, which goes as follows: if, in direct contradiction to Hobbes, we trust in our ‘human nature’ and promote its more virtuous side, then we stand to gain amply should we be right to do so, and at little cost should we be wrong; for if ‘human nature’ really is intrinsically rotten to our bestial cores, our lot as a species is inescapably dreadful whatever we wish to achieve. In the long run, as new technologies supply ever more creative potential for cruelty and destruction (including self-annihilation), what chance do we have of surviving at all if we are unwilling to place just a little trust in ourselves to do a whole lot better?
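Set out as a rough payoff table (my own illustrative summary of the wager, not anything taken from Pascal or Hobbes), the logic looks like this:

\[
\begin{array}{l|c|c}
 & \text{human nature trustworthy} & \text{human nature rotten} \\
\hline
\text{we extend a little trust} & \text{ample gain, at little cost} & \text{our lot was dreadful anyway} \\
\text{we withhold trust} & \text{possible gain forfeited} & \text{our lot was dreadful anyway}
\end{array}
\]

Read this way, withholding trust never improves the outcome, while extending it is the only move that can; which is the whole point of restating the wager in secular form.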

Next chapter…

*

Addendum: the Malthusian population bomb scare

Thomas Malthus was a man of many talents. A student of Cambridge University, where he had excelled in English, Latin, Greek and Mathematics, he later became a Professor of History and Political Economy and a Fellow of the Royal Society. There is, however, chiefly one subject above all others that Malthus remains closely associated with, and that is the subject of demography – human populations – a rather single-minded preoccupation that during his tenure as professor is supposed to have earned him the nickname “Pop” Malthus.

Malthus’s big idea was precisely this: that whereas human population increases geometrically, food production, upon which the growing population inevitably depends, can only increase in an arithmetic fashion. He outlines his position as follows:

I think I may fairly make two postulata. First, That food is necessary to the existence of man. Secondly, That the passion between the sexes is necessary and will remain nearly in its present state. These two laws, ever since we have had any knowledge of mankind, appear to have been fixed laws of our nature, and, as we have not hitherto seen any alteration in them, we have no right to conclude that they will ever cease to be what they now are… 64

Given that populations always grow exponentially whereas food production must inevitably be arithmetically limited, Malthus concludes that the depressing but unassailable consequence is a final limit not simply to human population but to human progress and “the perfectibility of the mass of mankind”:

This natural inequality of the two powers of population and of production in the earth, and that great law of our nature which must constantly keep their effects equal, form the great difficulty that to me appears insurmountable in the way to the perfectibility of society. All other arguments are of slight and subordinate consideration in comparison of this. I see no way by which man can escape from the weight of this law which pervades all animated nature. No fancied equality, no agrarian regulations in their utmost extent, could remove the pressure of it even for a single century. And it appears, therefore, to be decisive against the possible existence of a society, all the members of which should live in ease, happiness, and comparative leisure; and feel no anxiety about providing the means of subsistence for themselves and families. 65
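To make the arithmetic behind this explicit, here is a minimal sketch of the two progressions Malthus has in mind, using illustrative symbols of my own rather than any figures he gives:

\[ P_n = P_0 \cdot 2^{\,n} \qquad \text{(population, doubling each generation)} \]

\[ F_n = F_0 + n\Delta \qquad \text{(food supply, growing by a fixed increment } \Delta \text{ each generation)} \]

\[ \frac{F_n}{P_n} = \frac{F_0 + n\Delta}{P_0 \cdot 2^{\,n}} \longrightarrow 0 \quad \text{as } n \text{ grows} \]

However generous the increment Δ, the exponential term in the denominator eventually swamps it and food per head collapses: this is the whole force of what Malthus calls his “insurmountable” difficulty.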

It’s a truly grim message, although in fairness to Malthus, the gloom is delivered in a lively and frequently entertaining style. That said, however, Malthus was wrong. Terribly wrong.

Firstly, he was wrong in terms of specifics, since he wildly over-estimated the rate of population growth 66, thereby exaggerating the number of future mouths needing to be fed and, by extension, the amount of food needed to fill them. Obviously what Malthus lacked here were actual available statistics, and it is perhaps not surprising, therefore, that he later became one of the founder members of the Statistical Society in London 67: the first organisation in Britain dedicated to the collection and collation of national statistics. Charles Babbage, who is nowadays best remembered as the inventor of early calculating machines known as “difference engines” – machines that helped lead the way to modern computing – was another founder member of the group, and obviously took statistics very seriously indeed. He even once corrected the poet Alfred Tennyson in a letter as follows:

In your otherwise beautiful poem, one verse reads, ‘Every moment dies a man,/ Every moment one is born’: I need hardly point out to you that this calculation would tend to keep the sum total of the world’s population in a state of perpetual equipoise whereas it is a well-known fact that the said sum total is constantly on the increase. I would therefore take the liberty of suggesting that in the next edition of your excellent poem the erroneous calculation to which I refer should be corrected as follows: ‘Every moment dies a man / And one and a sixteenth is born.’ I may add that the exact figures are 1.167, but something must, of course, be conceded to the laws of metre. 68

It may be noted, then, that such a rate of increase (presumably based on real statistics), although still exponential, is far below the rates of growth presumed in Malthus’s essay. But then Malthus’s estimate may be fairly excused, his famous essay having first been published about four decades before any such statistics would have been available. Malthus was, however, also more fundamentally wrong in his thesis; for such catastrophic oscillations as he envisaged, through cycles of overpopulation and famine, are not the order of our times, and less so now than even during his own times of relatively small populations. In fact, contrary to Malthus’s prophecies of doom, we have a great plenty of food to go around (lacking merely the political and economic will to distribute it fairly) 69, with official UN estimates indicating that we shall continue to have such abundance for the foreseeable future. 70

*

I can still recall when, as a sixth-former, I first heard about Malthus’s theory of population, and how it had sounded like altogether the daftest, most simplistic theory I’d ever come across – an opinion that held for at least a few months, until I heard about Abraham Maslow’s “hierarchy of needs”, which I then considered dafter and more simplistic again. In both cases, it was clear to me that supposition and conjecture were being presented as quasi-scientific fact. In Maslow’s case, with his hierarchical stacking of physical and psychological needs, it was also self-evident that no such ascending pyramid really existed anywhere outside of Maslow’s own imaginings; that you might just as well construct a dodecahedron of pleasures, or a chocolate cheesecake of motivational aspirations, as make up any kind of pyramid of human needs.

I was judging his ideas unfairly, however, and in hindsight I see that I was prejudiced by my scientific training. As a student of Physics, Chemistry and Mathematics, I had become accustomed to rigorously grounded theories in which predictions can and must be made and tested against actual data. But Maslow’s theory is not a theory of this kind. It is inherently nonrigorous, and yet it may still be valuable in another way. As a psychologist he had diverged from the contemporary practice of expanding the field purely on the basis of neuroses and complexes, and sought instead a more humanistic approach to analysing what he thought constituted healthy-mindedness. His main concern was how people might achieve “self-actualization”. So his ‘theory’ is better understood and judged within this context, and the same goes for other nonrigorous formulations. 71

With Malthus, however, my irritation was coloured differently. His theory may have been simply an educated and carefully considered hunch, but it did at least present us with outcomes that could be scientifically reviewed. Plainly, however, all the available facts confounded his case absolutely.

After all, it had been two centuries since Malthus first conjectured on the imminence of food shortages, yet here we were, hurtling towards the end of the twentieth century, still putting too many leftovers in our bins. And though people living in the third world (as it was then called) were desperately poor and undernourished – as remains the case – this was already the consequence of our adopted modes of distribution rather than any consequence of insufficient production of food as such. Indeed, as a member of the EEC, the United Kingdom was responsible for its part in the storage of vast quantities of food and drink that would never be consumed: the enormous ‘mountains of cheese’ and the ‘lakes of milk and wine’ being such prominent features of the politico-economic landscape of my adolescence.

So where precisely did Malthus go wrong? In fact, both of his purportedly axiomatic postulates are unfounded. Regarding food production being limited to an arithmetic progression, he completely failed to factor in the staggering ingenuity of human beings. He seems curiously oblivious to how, even at the turn of the nineteenth century when his essay was written, food production was already undergoing dramatic technological shifts, including methods of selective breeding and the advent of mechanised farming equipment. The more recent developments of artificial fertilisers and pesticides have enabled cultivation of far greater acreage, with crop yields boosted far in excess of any arithmetic restriction. With the latest “green technologies” permitting genetic manipulation, the amounts of food we are able to produce might be vastly increased again, if this is what we should choose to do – and I do not say that we should automatically resort to such radical and potentially hazardous new technologies, only that there are potential options to forestall our supposed Malthusian fate.

Meanwhile, on the other side of Malthus’s inequality, we see that his estimates of rates of population growth were wrong for different but perhaps related reasons. Again, he underestimates our adaptive capability as a species, but here the error is born out of an underlying presumption; one that brings me right back to the question of ‘human nature’.

*

Perhaps the most interesting and intriguing part of Malthus’s famous essay is not the accounts of his discredited formulas illustrating the mismatch between population growth and food production, but the concluding pages. These are chapters not about geometric and arithmetic progressions, nor of selected histories to convince us of the reality of our predicament, nor even of the various criticisms of the progressive thinkers he is at pains to challenge – no, by far the most interesting part (in my humble opinion) comprises the final chapters, where he enters into discussion of his real specialism, which was theology. For Reverend Malthus was first and foremost a man of the cloth, and it turns out that his supposedly axiomatic propositions actually arose from his thoughts about the nature of God, of Man, of the Mind, and of Matter and Spirit. 72, 73

In short, Malthus argues here that God fills us with needs and wants in order to stimulate action and develop our minds; necessity being such a constant and reliable mother of invention. And Malthus draws support from the enlightenment philosophy of empiricist and humanist John Locke:

“If Locke’s idea be just, and there is great reason to think that it is, evil seems to be necessary to create exertion, and exertion seems evidently necessary to create mind.” This given, it must follow, Malthus says, that the hardships of labour required for survival are “necessary to the enjoyment and blessings of life, in order to rouse man into action, and form his mind to reason.” 74

Whilst adding further that:

The sorrows and distresses of life form another class of excitements, which seem to be necessary, by a peculiar train of impressions, to soften and humanize the heart, to awaken social sympathy, to generate all the Christian virtues, and to afford scope for the ample exertion of benevolence.

The perennial theological “problem of evil” is thus surmountable, Malthus says, if one accepts “the infinite variety of forms and operations of nature”, since “evil exists in the world not to create despair, but activity.” In other words, these things are sent to try us, or rather, because Malthus is very keen to distance himself from more traditional Christian notions of reward and punishment, “not for the trial, but for the creation and formation of mind”. Without pain and distress there would be no pricks to kick against, and thus no cause to perfect ourselves. This, at least, is Malthus’ contention.

In this he echoes a theodicy already well developed by one of the true Enlightenment geniuses, Gottfried Wilhelm Leibniz. Best remembered now as the independent discoverer of calculus, unaware of Newton’s parallel development, Leibniz also left us an astonishing intellectual legacy with published articles on almost every subject including politics, law, history and philosophy. In a collection of essays from 1710, and in making his own case for the goodness of God, it was Leibniz who first described our world as “the best of all possible worlds”. 75

Famously, Voltaire stole Leibniz’s aphorism and, by reworking it into the central motif of his marvellous satire Candide (written in 1759), invested it with characteristically biting irony. In Candide’s adventures, Voltaire turns the phrase into the favourite maxim and motto of the hero’s learned companion and teacher Dr Pangloss. The Panglossian faith is an unimpeachable acceptance of divine and cosmic beneficence, to be maintained in spite of every horror and irrespective of all the disasters the pair witness and that befall them. Shipwrecks, summary executions, even torture at the hands of the Inquisition: all is justifiable in this best of all possible worlds. For Malthus, although writing almost half a century after Voltaire’s no-nonsense lampooning, an underpinning belief in a world that was indeed “the best of all possible worlds” remained central to his thesis; Malthus even declaring with Panglossian optimism that:

… we have every reason to think that there is no more evil in the world than what is absolutely necessary as one of the ingredients in the mighty process [of Life]. 76

So what does all of this mean for Malthus’s God? Well, God is mysterious and ultimately unfathomable, because “infinite power is so vast and incomprehensible an idea that the mind of man must necessarily be bewildered in the contemplation of it.” This accepted, Malthus then argues that we do have clues, however, for understanding God through objective analysis of his handiwork, by “reason[ing] from nature up to nature’s God and not presum[ing] to reason from God to nature.”

Yes, says Malthus, we might fancy up “myriads and myriads of existences, all free from pain and imperfection, all eminent in goodness and wisdom, all capable of the highest enjoyments, and unnumbered as the points throughout infinite space”, but these are “crude and puerile conceptions” born of the inevitable and unassailable ignorance and bewilderment we have before God. Far better then, to:

“… turn our eyes to the book of nature, where alone we can read God as he is, [to] see a constant succession of sentient beings, rising apparently from so many specks of matter, going through a long and sometimes painful process in this world, but many of them attaining, ere the termination of it, such high qualities and powers as seem to indicate their fitness for some superior state. Ought we not then to correct our crude and puerile ideas of infinite Power from the contemplation of what we actually see existing? Can we judge of the Creator but from his creation?”

So God, at least according to Rev. Malthus, is to be understood directly through Nature – an idea bordering on the heretical. But what of the Principle of Population? How does this actually follow from the Malthusian “God of nature”? 77

Here we must remind ourselves again that what nowadays are sometimes called our instinctual drives, and what Malthus describes as “those stimulants to exertion which arise from the wants of the body”, are to Malthus but necessary evils. They are evils but with a divine purpose, and this purpose alone justifies their existence. In particular, those wants of the body which Malthus coyly refers to as “the passion between the sexes” are, in this scheme, the necessary means for the human race to perpetuate itself. With sex directly equated to procreation.

On the face of it then, Malthus must have been entirely ignorant of the sorts of sexual practices that can never issue progeny. (To rework a line from Henry Ford) sex might be any flavour you like, so long as it is vanilla! More likely, however, he dismissed any such ‘contraceptive’ options not because of ignorance but on the grounds of his deep-seated Christian morality. Rum and the lash, in moderation possibly, but sodomy… we are British!

If Malthus could be brought forward to see the western world today, what he’d find would doubtless be a tremendous shock in many ways. Most surprising of all, however, he would discover a culture where ‘the passions’ are endlessly titillated and aroused, and where “the wants of the body” are very easily gratified. Quite aside from the full-frontal culture shock, Malthus would surely be even more astonished to hear that our libidinous western societies have solved his supposedly insoluble population problem; our demographics flattening off, and our numbers in a slow but steady annual decline.

Malthus had argued very strongly against the poor laws, calling for their eventual abolition. He firmly believed that all kinds of direct intervention only encouraged a lack of moral restraint, which he saw as the underlying root of all the problems, and that it would be better to let nature take care of these kinds of social disease. Yet we can now see that one solution to his population problem has been the very thing he was fighting against: that the populations of our modern societies have stabilised precisely because of our universal social welfare and pension systems, safety nets that freed us all from total reliance upon the support of our children in old age.

We also see that as child mortality has markedly decreased, parents have had little reason to raise such large families in the first instance. And once more people – women especially – won access to a basic education, the personal freedom this afforded gave them further opportunity and better reason to plan ahead and settle for smaller families. It is thanks to all of these social changes, combined with the development of the contraceptive pill, that “the passion between the sexes” has been more or less surgically detached from population growth.

Making life tougher, Malthus reasoned, would be the bluntest tool for keeping down the numbers, especially of the lower classes. Yet if he landed on Earth today, he would discover irrefutable proof that the exact opposite is the case. That where nations are poorest, populations are rising fastest. There is much that Malthus presumed to be common sense but that, in fact, turns out to be false. 78

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 From Prince Hamlet’s monologue to Rosencrantz and Guildenstern in Hamlet Act II, Scene 2. In fuller context:

What a piece of work is a man! How noble in reason, how infinite in faculty! In form and moving how express and admirable! In action how like an angel, in apprehension how like a god! The beauty of the world. The paragon of animals. And yet, to me, what is this quintessence of dust? Man delights not me. No, nor woman neither, though by your smiling you seem to say so.

2  Quote taken from the Introduction to The Naked Ape written by Desmond Morris, published in 1967; Republished in: “The Naked Ape by Desmond Morris,” LIFE, Vol. 63, Nr. 25 (22 Dec. 1967), p. 95.

3 Stanley Kubrick speaking in an interview with Eric Norden for Playboy (September 1968)

4 “It takes all the running you can do, to keep in the same place.”

5 The original script for 2001 also had an accompanying narration, which reads:

“By the year 2001, overpopulation has replaced the problem of starvation but this is ominously offset by the absolute and utter perfection of the weapon.”

“Hundreds of giant bombs had been placed in perpetual orbit above the Earth. They were capable of incinerating the entire earth’s surface from an altitude of 100 miles.”

“Matters were further complicated by the presence of twenty-seven nations in the nuclear club.”

6 From the Stanley Kubrick interview with Playboy magazine (1968). http://dpk.io/kubrick

7 From the chapter on “Generation” from Zoonomia; or the Laws of Organic Life (1794) written by Erasmus Darwin http://www.gutenberg.org/files/15707/15707-h/15707-h.htm#sect_XXXIX

8

In October 1838, that is, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus On Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The results of this would be the formation of a new species. Here, then I had at last got a theory by which to work; but I was so anxious to avoid prejudice, that I determined not for some time to write even the briefest sketch of it.

From Charles Darwin’s autobiography (1876), pp. 34–35

9 Bellum omnium contra omnes, a Latin phrase meaning “the war of all against all”, is the description that Thomas Hobbes gives to human existence in “the state of nature”, which he describes first in De Cive (1642) and later in Leviathan (1651). The Latin phrase occurs in De Cive:

“… ostendo primo conditionem hominum extra societatem civilem, quam conditionem appellare liceat statum naturæ, aliam non esse quam bellum omnium contra omnes; atque in eo bello jus esse omnibus in omnia.”

“I demonstrate, in the first place, that the state of men without civil society (which state we may properly call the state of nature) is nothing else but a mere war of all against all; and in that war all men have equal right unto all things.”

In chapter XIII of Leviathan, Hobbes more famously expresses the same concept in these words:

Hereby it is manifest that during the time men live without a common Power to keep them all in awe, they are in that condition which is called War; and such a war as is of every man against every man.[…] In such condition there is no place for Industry, because the fruit thereof is uncertain: and consequently no Culture of the Earth; no Navigation, nor use of the commodities that may be imported by Sea; no commodious Building; no Instruments of moving and removing such things as require much force; no Knowledge of the face of the Earth; no account of Time; no Arts; no Letters; no Society; and which is worst of all, continual Fear, and danger of violent death; And the life of man solitary, poor, nasty, brutish, and short.

10 The glee with which my old professor had jokingly dismissed Galileo was undisguised, and he was quick to add that he regarded Galileo’s reputation as greatly inflated. What other physicist, he inquired of us, is remembered only by their first name? With hindsight, I can’t help wondering what he was alluding to. It is mostly kings and saints (and the convergent category of popes) whom we find on first-name historical terms. The implication seems to be that Galileo has been canonised as our first secular saint (after Leonardo presumably). Interestingly, and in support of this contention, Galileo’s thumb and middle fingers plus a tooth and a vertebra (removed from his corpse by admirers during the 18th century) have recently been put on display as relics in the Galileo Museum in Florence.

11 Alexander Pope (1688–1744): ‘Epitaph: Intended for Sir Isaac Newton’ (1730)

12 The famous quote comes from a letter Newton sent to fellow scientist Robert Hooke, in which, about two-thirds of the way down the first page, he says “if I have seen further, it is by standing on the shoulders of giants.” It has been suggested that this remark was actually intended as a snide dig at Hooke, a rival with whom Newton was continually in dispute and who was known for being rather short in physical stature.

13 From Il Saggiatore (1623) by Galileo Galilei. In the original Italian the same passage reads:

La filosofia è scritta in questo grandissimo libro, che continuamente ci sta aperto innanzi agli occhi (io dico l’Universo), ma non si può intendere, se prima non s’impara a intender la lingua, e conoscer i caratteri ne’ quali è scritto. Egli è scritto in lingua matematica, e i caratteri son triangoli, cerchi ed altre figure geometriche, senza i quali mezzi è impossibile intenderne umanamente parola; senza questi è un aggirarsi vanamente per un oscuro labirinto

14

Hobbes and the earl of Devonshire journeyed to Italy late in 1635, remaining in Italy until the spring of 1636 when they made their way back to Paris. During this tour of Italy Hobbes met Galileo, although the dates and details of the meeting are not altogether clear. In a letter to Fulgenzio Micanzio from 1 December, 1635, Galileo reports that “I have had many visits by persons from beyond the alps in the last few days, among them an English Lord who tells me that my unfortunate Dialogue is to be translated into that language, something that can only be considered to my advantage.” The “English Lord” is almost certainly Devonshire, and the projected English translation of the Dialogue is presumably the work of Dr. Joseph Webb mentioned in Hobbes’s February, 1634 letter to Newcastle. It is therefore likely that Hobbes met Galileo in December of 1635, although Hobbes was not otherwise known to be in Florence until April of 1636. Aubrey reports that while in Florence Hobbes “contracted a friendship with the famous Galileo Galileo, whom he extremely venerated and magnified; and not only as he was a prodigious witt, but for his sweetness of nature and manners”. Legend even has it that a conversation with Galileo in 1635 or 36 inspired Hobbes to pursue the goal of presenting moral and political philosophy in a rigorously geometrical method, although the evidence here is hardly compelling.

From a paper entitled Galileo, Hobbes, and the Book of Nature by Douglas M. Jesseph, published in Perspectives on Science (2004), vol. 12, no. 2 by The Massachusetts Institute of Technology. It is footnoted with the following disqualifier:

The evidence, such as it is, comes from the eighteenth century historian of mathematics Abraham Kästner, who reported “John Albert de Soria, former teacher at the university in Pisa, assures us it is known through oral tradition that when they walked together at the grand-ducal summer palace Poggio Imperiale, Galileo gave Hobbes the first idea of bringing moral philosophy to mathematical certainty by treating it according to the geometrical method”. Schumann dismisses the tale as “certainly false,” basing this judgment on a variety of evidence, including the fact that Soria himself expressed skepticism about the story.


15

There be in Animals, two sorts of Motions peculiar to them: One called Vital; begun in generation, and continued without interruption through their whole life; such as are the Course of the Blood, the Pulse, the Breathing, the Concoctions, Nutrition, Excretion, &c; to which Motions there needs no help of Imagination: The other is Animal Motion, otherwise called Voluntary Motion; as to Go, to Speak, to Move any of our limbs, in such manner as is first fancied in our minds. That Sense is Motion in the organs and interior parts of man’s body, caused by the action of the things we See, Hear, &c

Quote from, Leviathan (1651), The First Part, Chapter 6, by Thomas Hobbes (with italics and punctuation as in the original but modern spelling). https://www.gutenberg.org/files/3207/3207-h/3207-h.htm#link2H_PART1

16

[A]lthough unstudied men, do not conceive any motion at all to be there, where the thing moved is invisible; or the space it is moved in, is (for the shortness of it) insensible; yet that doth not hinder, but that such Motions are. For let a space be never so little, that which is moved over a greater space, whereof that little one is part, must first be moved over that. These small beginnings of Motion, within the body of Man, before they appear in walking, speaking, striking, and other visible actions, are commonly called ENDEAVOUR.

Ibid.

17

This Endeavour, when it is toward something which causes it, is called APPETITE, or DESIRE; the later, being the general name; and the other, oftentimes restrained to signify the Desire of Food, namely Hunger and Thirst. And when the Endeavour is fromward [i.e., distant from] something, it is generally called AVERSION. These words Appetite, and Aversion we have from the Latin; and they both of them signify the motions, one of approaching, the other of retiring. […]

Of Appetites, and Aversions, some are born with men; as Appetite of food, Appetite of excretion, and exoneration, (which may also and more properly be called Aversions, from somewhat they feel in their Bodies;) and some other Appetites, not many. The rest, which are Appetites of particular things, proceed from Experience, and trial of their effects upon themselves, or other men. For of things we know not at all, or believe not to be, we can have no further Desire, than to taste and try. But Aversion we have for things, not only which we know have hurt us; but also that we do not know whether they will hurt us, or not.

Ibid.

18 Quote from, Leviathan (1651), The First Part, Chapter 8, by Thomas Hobbes (with italics and punctuation as in the original but modern spelling).

19 Ibid.

20 Ibid.

21 S. L. A. Marshall’s findings were compiled in a seminal work titled Men Against Fire (1947).

22

In the aftermath of the Battle of Gettysburg, the Confederate Army was in full retreat, forced to abandon all of its dead and most of its wounded. The Union Army and citizens of Gettysburg had an ugly cleanup task ahead of them. Along with the numerous corpses littered about the battlefield, at least 27,574 rifles (I’ve also seen 37,574 listed) were recovered. Of the recovered weapons, a staggering 24,000 were found to be loaded, either 87% or 63%, depending on which number you accept for the total number of rifles. Of the loaded rifles, 12,000 were loaded more than once and half of these (6,000 total) had been loaded between three and ten times. One poor guy had reloaded his weapon twenty-three times without firing a single shot.

From On Killing: The Psychological Cost of Learning to Kill in War and Society (1996) by Dave Grossman

23 The same passage concludes:

Another doctrine repugnant to Civil Society, is, that “Whatsoever a man does against his Conscience, is Sin;” and it dependeth on the presumption of making himself judge of Good and Evil. For a man’s Conscience, and his Judgement is the same thing; and as the Judgement, so also the Conscience may be erroneous. Therefore, though he that is subject to no Civil Law, sinneth in all he does against his Conscience, because he has no other rule to follow but his own reason; yet it is not so with him that lives in a Common-wealth; because the Law is the public Conscience, by which he hath already undertaken to be guided.

Quote from, Leviathan (1651), The Second Part, Chapter 29, by Thomas Hobbes (with italics and punctuation as in the original but modern spelling).

24 Hobbes had actually tried to found his entire philosophy on mathematics but, in characteristically contrarian fashion, was also determined to prove that mathematics itself was reducible to materialistic principles. This meant rejecting an entire tradition that began with Euclid and continues today, which recognises that the foundations of geometry lie in abstractions such as points, lines and surfaces. In response to Hobbes, John Wallis, Oxford University’s Savilian Professor of Geometry and a founding member of the Royal Society, publicly engaged with the “pseudo-geometer” in a dispute that raged from 1655 until Hobbes’s death in 1679. To illustrate the problem with Hobbes’s various “proofs” of unsolved problems, including squaring the circle (all of which were demonstrably incorrect), Wallis asked rhetorically: “Who ever, before you, defined a point to be a body? Who ever seriously asserted that points have any magnitude?”

You can read more about this debate in a paper published by The Royal Society titled Geometry, religion and politics: context and consequences of the Hobbes–Wallis dispute written by Douglas Jesseph, published October 10, 2018. https://doi.org/10.1098/rsnr.2018.0026

25 Quote from, Leviathan (1651), The First Part, Chapter 5, by Thomas Hobbes (with italics and punctuation as in the original but modern spelling).

26 From The Perils of Obedience  (1974) by Stanley Milgram, published in Harper’s Magazine. Archived from the original on December 16, 2010. Abridged and adapted from Obedience to Authority.

27 Ibid.

28 From The Life of the Robin, Fourth Edition (1965), Chapter 15 “A Digression on Instinct” written by David Lack.

29 From Historia Vitae et Mortis by Sir Francis Bacon (‘History of Life and Death’, 1623).

30 Morphological changes such as albinism and loss of sight are common to all cave-dwelling species including invertebrates, fish and also birds. It is presumed that these changes have come about because they save energy and thus confer an evolutionary advantage although biologists find it difficult to explain loss of pigmentation since there seems to be very little energy saved in this way.

31 From a Tanner Lecture on Human Values entitled Morality and the Social Instincts: Continuity with the Other Primates delivered by Frans B. M. de Waal at Princeton University on November 19–20, 2003.

The abstract begins:

The Homo homini lupus [“Man is wolf to man.”] view of our species is recognizable in an influential school of biology, founded by Thomas Henry Huxley, which holds that we are born nasty and selfish. According to this school, it is only with the greatest effort that we can hope to become moral. This view of human nature is discussed here as “Veneer Theory,” meaning that it sees morality as a thin layer barely disguising less noble tendencies. Veneer Theory is contrasted with the idea of Charles Darwin that morality is a natural outgrowth of the social instincts, hence continuous with the sociality of other animals. Veneer Theory is criticized at two levels. First, it suffers from major unanswered theoretical questions. If true, we would need to explain why humans, and humans alone, have broken with their own biology, how such a feat is at all possible, and what motivates humans all over the world to do so. The Darwinian view, in contrast, has seen a steady stream of theoretical advances since the 1960s, developed out of the theories of kin selection and reciprocal altruism, but now reaching into fairness principles, reputation building, and punishment strategies. Second, Veneer Theory remains unsupported by empirical evidence.

https://tannerlectures.utah.edu/_documents/a-to-z/d/deWaal_2005.pdf

32 Quote from a NOVA interview, “The Bonobo in All of Us”, PBS, January 1, 2007.

33 Ibid.

35 The second stanza of Wallace Stevens’s poem Thirteen Ways of Looking at a Blackbird.

36 As he explained in an interview published in the Royal Society of Biology journal The Biologist, Vol. 60(1), pp. 16–20. https://www.rsb.org.uk/biologist-interviews/richard-dawkins

37 Extracts taken from Chapter 2, pp. 45–48, of “Seeing Voices” by Oliver Sacks, first published 1989, Picador.

38 Aldous Huxley in the Foreword of ‘The First and Last Freedom’ by Jiddu Krishnamurti.

In his collection of essays Adonis and the Alphabet (1956), the first chapter titled “The Education of an Amphibian” begins as follows:

Every human being is an amphibian— or, to be more accurate, every human being is five or six amphibians rolled into one. Simultaneously or alternately, we inhabit many different and even incommensurable universes. To begin with, man is an embodied spirit. As such, he finds himself infesting this particular planet, while being free at the same time to explore the whole spaceless, timeless world of universal Mind. This is bad enough; but it is only the beginning of our troubles. For, besides being an embodied spirit, each of us is also a highly self-conscious and self-centred member of a sociable species. We live in and for ourselves; but at the same time we live in and, somewhat reluctantly, for the social group surrounding us. Again, we are both the products of evolution and a race of self-made men. In other words, we are simultaneously the subjects of Nature and the citizens of a strictly human republic, which may be anything from what St Paul called ‘no mean city’ to the most squalid of material and moral slums.

39 Also from the first chapter titled “The Education of an Amphibian” of Aldous Huxley’s collection of essays Adonis and the Alphabet (1956).

39a Quote taken from “Rixty Minutes”, Episode 8, Season 1, of adult cartoon Rick and Morty first broadcast by the Cartoon Network on March 17, 2014.

40 The quote is directly addressed to political philosopher and anarchist Pierre-Joseph Proudhon in Chapter 2: “The Metaphysics of Political Economy”; Part 3: “Competition and Monopoly” of Karl Marx’s The Poverty of Philosophy, a critique of the economic and philosophical doctrine of Proudhon, first published in 1847. In full the quote reads:

“M. Proudhon does not know that all history is nothing but a continuous transformation of human nature.”

https://www.marxists.org/archive/marx/works/1847/poverty-philosophy/

41 Quote taken from Episode 3 of Romer’s Egypt first broadcast on BBC TV in 1982.

42 From Christopher Columbus’s log for Friday, Saturday and Sunday October 12–14, 1492. https://www.americanjourneys.org/pdf/AJ-062.pdf

43 The following are separate entries:

“With my own eyes I saw Spaniards cut off the nose and ears of Indians, male and female, without provocation, merely because it pleased them to do it. …Likewise, I saw how they summoned the caciques and the chief rulers to come, assuring them safety, and when they peacefully came, they were taken captive and burned.”

“They laid bets as to who, with one stroke of the sword, could split a man in two or could cut off his head or spill out his entrails with a single stroke of the pike.”

“They took infants from their mothers’ breasts, snatching them by the legs and pitching them headfirst against the crags or snatched them by the arms and threw them into the rivers, roaring with laughter and saying as the babies fell into the water, ‘Boil there, you offspring of the devil!’”

“They attacked the towns and spared neither the children nor the aged nor pregnant women nor women in childbed, not only stabbing them and dismembering them but cutting them to pieces as if dealing with sheep in the slaughter house.”

“They made some low wide gallows on which the hanged victim’s feet almost touched the ground, stringing up their victims in lots of thirteen, in memory of Our Redeemer and His twelve Apostles, then set burning wood at their feet and thus burned them alive.”

From the History of the Indies (1561) by Bartolome de las Casas.

44 Ibid.

45 As with many of the best known quotes, the first appears to be misattributed and the second is very possibly the reworking of an utterance by Voltaire. While it is true that Napoleon is reported as once saying in conversation: “What then is, generally speaking, the truth of history? A fable agreed upon,” the phrase certainly predates him. The first quote “History is written by the winners” can however be traced to the pen of George Orwell, from one of a series of articles published in the Tribune under the title “As I please”, in which he wrote:

During part of 1941 and 1942, when the Luftwaffe was busy in Russia, the German radio regaled its home audience with stories of devastating air raids on London. Now, we are aware that those raids did not happen. But what use would our knowledge be if the Germans conquered Britain?  For the purpose of a future historian, did those raids happen, or didn’t they? The answer is: If Hitler survives, they happened, and if he falls they didn’t happen. So with innumerable other events of the past ten or twenty years. Is the Protocols of the Elders of Zion a genuine document? Did Trotsky plot with the Nazis? How many German aeroplanes were shot down in the Battle of Britain? Does Europe welcome the New Order? In no case do you get one answer which is universally accepted because it is true: in each case you get a number of totally incompatible answers, one of which is finally adopted as the result of a physical struggle. History is written by the winners. [bold emphasis added]

46 All excerpts taken from Candide and Other Tales written by Voltaire, translated by T. Smollett, revised by James Thornton, published by J. M. Dent & Sons Ltd, London , first published 1937. Incidentally, my own personal copy of this book was saved from the flames of my parent’s wood-burning stove after I discovered it hidden amongst hundreds of old textbooks and destined to become fuel for their central heating system.

47 All excerpts taken from How Much do You Know? (p. 215), published by Odhams Press Limited, Long Acre, London WC2. Date of publication unknown, but definitely pre-WWII on the basis of, for example, the question “what territory did Germany lose after the World War?” (on p. 164).

48 For instance, in German, Geschichte, in Russian история, and in French histoire.

49 Quote from William Shakespeare’s The Tragedy of King Richard the Second, Act II, Scene 1, spoken by John of Gaunt.

50 In their book Trump and the Puritans (published in 2020), authors James Roberts and Martyn Whittock point to the remarkable coincidence that the 2020 election falls almost precisely on the 400th anniversary of the landing of the Mayflower at Plymouth Rock, and argue that if Donald Trump is re-elected it will be thanks not only to his strong base amongst the Christian Right but also to a more pervasive and enduring belief in Manifest Destiny, American exceptionalism, and the making of the New Jerusalem and “the city on the hill”, which can be traced all the way back to the Pilgrim Fathers.

Speaking with host Afshin Rattansi on RT’s Going Underground, Martyn Whittock outlined this thesis, which offers a convincing account of why so many American Christians support Trump despite his non-religious character traits, and also of why there is greater support for Israel amongst Christian evangelicals than amongst American Jews:

51 The quote is taken from Chapter 4: “Of Constitutions”; Part 2 of Thomas Paine’s Rights of Man, a defence of the French Revolution against charges made by Edmund Burke in his Reflections on the Revolution in France (1790). Rights of Man was first published in two parts in 1791 and 1792 respectively.

In fuller context, Paine writes:

Man will not be brought up with the savage idea of considering his species as his enemy, because the accident of birth gave the individuals existence in countries distinguished by different names; and as constitutions have always some relation to external as well as to domestic circumstances, the means of benefitting by every change, foreign or domestic, should be a part of every constitution. We already see an alteration in the national disposition of England and France towards each other, which, when we look back to only a few years, is itself a Revolution. Who could have foreseen, or who could have believed, that a French National Assembly would ever have been a popular toast in England, or that a friendly alliance of the two nations should become the wish of either? It shows that man, were he not corrupted by governments, is naturally the friend of man, and that human nature is not of itself vicious.

http://www.gutenberg.org/files/3742/3742-h/3742-h.htm

52 The Second Law of Thermodynamics can be stated in a variety of different ways but is probably best known as follows: that the total entropy of any isolated macroscopic system can never decrease. Here entropy is the precise measure of something that can be loosely described as the total microscopic disorder within the system.

The Second Law has many implications. Firstly, it insists upon a direction whenever any system changes, with order giving way increasingly to disorder. This in turn implies an irreversibility to events and suggests a propelling “arrow of time”. The Second Law also prohibits any kind of perpetual motion, which, by extension, sets a limit to the duration of the universe as a whole, since the universe can itself be considered an isolated thermodynamic system and is therefore, as a whole, subject to the Second Law. For this reason the universe is now expected to end in a cosmic whimper, known in Physics as “the heat death of the universe”, with all parts having reached a very chilly thermodynamic equilibrium.

It almost seems, then, that the Second Law of Thermodynamics might be the physical axis about which the diabolical asymmetry of destruction over creation is strung. Just how any universe of intricate complexity could ever have formed in the first instance is mysterious enough, and though the Second Law does not prohibit all orderly formation, so long as pockets of order are counterbalanced by regions of increasing chaos, the law does maintain that the overall tendency is always towards disorder. Form it did, of course, which perhaps implies the existence of an as yet undiscovered but profoundly forceful creative principle – something that may prove to be nothing more or less than another law of thermodynamics.
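Stated compactly, in the standard textbook form (my own gloss rather than a quotation), the law reads:

\[ \Delta S \geq 0 \quad \text{for any isolated system} \]

with equality holding only for idealised, perfectly reversible processes; every real change leaves the total entropy of the isolated system higher than it was before.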

Here is physicist Richard Feynman wondering about the physical cause of irreversibility and what it tells us about the past:

53

We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes. This is a truth which still fills me with astonishment.

From The Selfish Gene by Richard Dawkins.

54 This variant on the myth, with its rather Buddhist overtones, does at least account for God’s rage and instant reaction. For according to Genesis, God thereafter says, to no-one in particular: “… the man is become as one of us [sic], to know good and evil.” Our expulsion from the Garden of Eden is not simply His punishment for our disobedience (which is, of course, the doctrine the church authorities are keen to play up), but a safeguard to protect and secure His own divine monopoly. God fearing that left alone in paradise we might now, as the same passage goes on to elucidate, “take also of the tree of life, and eat, and live for ever.”

Extracts taken from Genesis 3:22. The full verse is as follows: “And the Lord God said, Behold, the man is become as one of us, to know good and evil: and now lest he put forth his hand, and take also of the tree of life, and eat, and live for ever:”

55 “L’hypocrisie est un hommage que le vice rend à la vertu.” – François de La Rochefoucauld, Maximes (1665–1678), 218.

Alternative translation: “Hypocrisy is a tribute vice pays to virtue.”

56

L’homme est né libre, et partout il est dans les fers. Tel se croit le maître des autres, qui ne laisse pas d’être plus esclave qu’eux.

Translated by G. D. H. Cole (1913) as: “Man is born free; and everywhere he is in chains. One thinks himself the master of others, and still remains a greater slave than they.”

From Part I, Chapter 1 of Du contrat social ou Principes du droit politique [trans: Of The Social Contract, Or Principles of Political Right] (1762) by Jean-Jacques Rousseau, a book in which Rousseau theorised about the best way to establish a political community.

57 Translated by Samuel Moore in cooperation with Frederick Engels (1888):

The proletarians have nothing to lose but their chains. They have a world to win. Working Men of All Countries, Unite!

From Section 4, paragraph 11 of Das Manifest der Kommunistischen Partei [trans: The Communist Manifesto] (1848) by Karl Marx and Friedrich Engels

57a This was first documented by primatologist Jane Goodall, who observed what happened after the splintering of a community of chimpanzees in Gombe Stream National Park in Tanzania. Over the next four years the adult males of the separatist group were systematically killed one by one by members of the remaining original community. Jane Goodall was profoundly disturbed by this revelation and wrote in her memoir Through a Window: My Thirty Years with the Chimpanzees of Gombe:

For several years I struggled to come to terms with this new knowledge. Often when I woke in the night, horrific pictures sprang unbidden to my mind—Satan [one of the apes], cupping his hand below Sniff’s chin to drink the blood that welled from a great wound on his face; old Rodolf, usually so benign, standing upright to hurl a four-pound rock at Godi’s prostrate body; Jomeo tearing a strip of skin from Dé’s thigh; Figan, charging and hitting, again and again, the stricken, quivering body of Goliath, one of his childhood heroes.

58 From “Bible Studies” published in Thomas Lynch’s collection of essays titled Bodies in Motion and At Rest (2011).

59

Stanley Moon [Dudley Moore]: If it hadn’t been for you… we’d still be blissfully wandering about naked in paradise.

George Spiggott aka The Devil [Peter Cook]: You’re welcome, mate. The Garden of Eden was a boggy swamp just south of Croydon. You can see it over there.

Stanley Moon: Adam and Eve were happy enough.

The Devil: I’ll tell you why… they were pig ignorant.

From the 1967 British comedy Bedazzled, directed and produced by Stanley Donen, screenplay by Peter Cook.

Transcript is available here: https://www.scripts.com/script.php?id=bedazzled_3792&p=11

60 From an article titled “shame v. guilt” by Brené Brown, published on her own website on January 14, 2013. https://brenebrown.com/blog/2013/01/14/shame-v-guilt/

61 The quote comes from Sartre’s play No Exit [French: Huis clos], first performed in 1944. Three characters find themselves trapped and forever waiting in a mysterious room which depicts the afterlife. The famous phrase “L’enfer, c’est les autres” or “Hell is other people” is a reference to Sartre’s idea that being apprehended by, and thus becoming the object of, another person’s conscious awareness involves a perpetual ontological struggle.

It seems that Sartre offered his own clarification, saying:

“Hell is other people” has always been misunderstood. It has been thought that what I meant by that was that our relations with other people are always poisoned, that they are invariably hellish relations. But what I really mean is something totally different. I mean that if relations with someone else are twisted, vitiated, then that other person can only be hell. Why? Because … when we think about ourselves, when we try to know ourselves … we use the knowledge of us which other people already have. We judge ourselves with the means other people have and have given us for judging ourselves.

The quote above is from a talk that preceded a recording of the play issued in 1965. http://rickontheater.blogspot.com/2010/07/most-famous-thing-jean-paul-sartre.html

62 Quote from Aldous Huxley’s collection of essays Adonis and the Alphabet (1956), Chapter 2, titled “Knowledge and Understanding”.

63 Aristotle, Politics, Book 1, section 1253a

64 From “An Essay on the Principle of Population: as it affects the future improvement of society with remarks on the speculations of Mr. Godwin, M. Condorcet, and other writers” by Thomas Robert Malthus (1798), chapter 1.

65 Ibid.

66 “Taking the population of the world at any number, a thousand millions, for instance, the human species would increase in the ratio of — 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, etc. and subsistence as — 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, etc. In two centuries and a quarter, the population would be to the means of subsistence as 512 to 10: in three centuries as 4096 to 13, and in two thousand years the difference would be almost incalculable, though the produce in that time would have increased to an immense extent.” This prediction is taken from chapter 2 of “An Essay on the Principle of Population…” by T. Malthus (1798). Okay then, here’s the maths: Malthus assumes a population doubling exponentially every 25 years (every generation). Two and a quarter centuries allows 9 generations, so an increase of 2 to the power of 9, which is the 512-fold increase he correctly claims. Well, what actually happened? In Malthus’ own time Britain conducted its first census, recording in 1801 a population of 8,308,000 (thought likely to have been an under-estimate), while the world population is estimated to have just reached around 1 billion (precisely as Malthus assumes). So according to Malthus’ calculations, the population of Britain should now be more than 4 billion (close to the current global population), and, taking the same approach, the population of the world should have exploded past half a trillion! That figure sits towards the extreme upper end of estimates for the Earth’s carrying capacity: “The estimates of the Earth’s carrying capacity range from under 1 billion to more than 1,000 billion persons. Not only is there an enormous range of values, but there is no tendency of the values to converge over time; indeed, the estimates made since 1950 exhibit greater variability than those made earlier.” from UN World Population Report 2001, p.30.
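As a quick sanity check of the arithmetic above, here is a short back-of-envelope sketch in Python (the 25-year doubling period, the 1801 census figure of 8,308,000 and the 1 billion world estimate are all taken from the footnote itself; the function name and the 225-year interval bringing us roughly to the present day are my own):

# Back-of-envelope check of Malthus' geometric ("doubling every generation") projection.

def malthusian_projection(initial_population, years, doubling_period=25):
    """Project a population forward, assuming it doubles every doubling_period years."""
    doublings = years / doubling_period
    return initial_population * 2 ** doublings

# Nine doublings over two and a quarter centuries: 2**9 = 512, as Malthus claims.
print(2 ** 9)                                                  # -> 512

# Britain, 1801 census: 8,308,000 people, projected 225 years forward.
print(f"{malthusian_projection(8_308_000, 225):,.0f}")         # -> 4,253,696,000 (more than 4 billion)

# World population circa 1800: about 1 billion, projected the same way.
print(f"{malthusian_projection(1_000_000_000, 225):,.0f}")     # -> 512,000,000,000 (past half a trillion)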

67 Now known as The Royal Statistical Society (after receiving its Royal Charter in 1887).

68 Letter sent to Tennyson in response to his poem “The Vision of Sin”, published in 1842. The exact details of this letter seem to vary according to the source. In another version he signs off saying, “Strictly speaking, the actual figure is so long I cannot get it into a line, but I believe the figure 1 1/16 will be sufficiently accurate for poetry.”

69

After 30 years of rapid growth in agricultural production, the world can produce enough food to provide every person with more than 2 700 Calories per day, a level which is normally sufficient to ensure that all have access to adequate food, provided distribution is not too unequal.

From the report of the FAO World Food Summit (Rome, 13–17 November 1996), entitled Food for All.

70

“[However,] the slowdown [of worldwide agricultural production] has occurred not because of shortages of land or water but rather because demand for agricultural products has also slowed. This is mainly because world population growth rates have been declining since the late 1960s, and fairly high levels of food consumption per person are now being reached in many countries, beyond which further rises will be limited.” – “This study suggests that world agricultural production can grow in line with demand, provided that the necessary national and international policies to promote agriculture are put in place. Global shortages are unlikely, but serious problems already exist at national and local levels and may worsen unless focused efforts are made.” – “Agricultural production could probably meet expected demand over the period to 2030 even without major advances in modern biotechnology.”

Extracts from the Executive Summary of the FAO summary report World agriculture: towards 2015/2030, published in 2002.

71 Maslow’s ideas have fallen by the wayside, which is a pity because his study of human need was a worthwhile project. Maslow’s reductionism is wrong, but perhaps by considering a more intricate and dynamic interconnectedness between human needs, his theory can be usefully revised. The trouble with Maslow is his insistence on hierarchy, something that other academics, especially those working in the social sciences, are inclined to mistake for a kind of verified truth. Just calling an idea ‘a theory’ doesn’t make it so, certainly not in any rigorous sense, but those not trained in the hard sciences are often inclined to treat speculative formulations as though they were fully-fledged theories. This grave and recurring error infuriates many people, myself included, and especially those who have received specialist scientific training.

72 All subsequent passages and quotations in this chapter are also taken from “An Essay on the Principle of Population: as it affects the future improvement of society with remarks on the speculations of Mr. Godwin, M. Condorcet, and other writers” by Thomas Robert Malthus (1798), chapters 18 and 19.

73 His ideas on these daunting topics are rather cleverly-conceived, unusual if not wholly original, and tread a line that is unorthodox and close to being heretical. So it’s really in these closing chapters that Malthus is most engaging and most at ease. Here, for example, is the Malthusian take on mind and matter:

It could answer no good purpose to enter into the question whether mind be a distinct substance from matter, or only a finer form of it. The question is, perhaps, after all, a question merely of words. Mind is as essentially mind, whether formed from matter or any other substance. We know from experience that soul and body are most intimately united, and every appearance seems to indicate that they grow from infancy together… As we shall all be disposed to agree that God is the creator of mind as well as of body, and as they both seem to be forming and unfolding themselves at the same time, it cannot appear inconsistent either with reason or revelation, if it appear to be consistent with phenomena of nature, to suppose that God is constantly occupied in forming mind out of matter and that the various impressions that man receives through life is the process for that purpose. The employment is surely worthy of the highest attributes of the Deity.

Having safely negotiated the potential minefield of Cartesian dualism, Malthus now applies himself to the tricky problem of evil, and its relationship to “the wants of the body”:

The first great awakeners of the mind seem to be the wants of the body… The savage would slumber for ever under his tree unless he were roused from his torpor by the cravings of hunger or the pinchings of cold, and the exertions that he makes to avoid these evils, by procuring food, and building himself a covering, are the exercises which form and keep in motion his faculties, which otherwise would sink into listless inactivity. From all that experience has taught us concerning the structure of the human mind, if those stimulants to exertion which arise from the wants of the body were removed from the mass of mankind, we have much more reason to think that they would be sunk to the level of brutes, from a deficiency of excitements, than that they would be raised to the rank of philosophers by the possession of leisure.

74 Malthus, aware of the dangers of over-generalisation, adds a little later that:

There are undoubtedly many minds, and there ought to be many, according to the chances out of so great a mass, that, having been vivified early by a peculiar course of excitements, would not need the constant action of narrow motives to continue them in activity.” Saying later again that: “Leisure is, without doubt, highly valuable to man, but taking man as he is, the probability seems to be that in the greater number of instances it will produce evil rather than good.

75 “Essais de Théodicée sur la bonté de Dieu, la liberté de l’homme et l’origine du mal” (more simply known as Théodicée), which translates from French as “Essays of theodicy on the goodness of God, the freedom of man and the origin of evil”.

76 Malthus also offers us reasons to be cheerful and indeed grateful for our world of apparent imperfection:

Uniform, undiversified perfection could not possess the same awakening powers. When we endeavour then to contemplate the system of the universe, when we think of the stars as the suns of other systems scattered throughout infinite space, when we reflect that we do not probably see a millionth part of those bright orbs that are beaming light and life to unnumbered worlds, when our minds, unable to grasp the immeasurable conception, sink, lost and confounded, in admiration at the mighty incomprehensible power of the Creator, let us not querulously complain that all climates are not equally genial, that perpetual spring does not reign throughout the year, that all God’s creatures do not possess the same advantages, that clouds and tempests sometimes darken the natural world and vice and misery the moral world, and that all the works of the creation are not formed with equal perfection. Both reason and experience seem to indicate to us that the infinite variety of nature (and variety cannot exist without inferior parts, or apparent blemishes) is admirably adapted to further the high purpose of the creation and to produce the greatest possible quantity of good.

77

This view of the state of man on earth will not seem to be unattended with probability, if, judging from the little experience we have of the nature of mind, it shall appear upon investigation that the phenomena around us, and the various events of human life, seem peculiarly calculated to promote this great end, and especially if, upon this supposition, we can account, even to our own narrow understandings, for many of those roughnesses and inequalities in life which querulous man too frequently makes the subject of his complaint against the God of nature.

Taken from Chapter 18. Ibid.

78 There are of course modern reinventions of the Malthusian message, which still play a significant role in our current political debate. These depend on extending Malthus’ idea into considerations of resource shortages of other kinds, such as energy (and after all, food is the primary form of energy for human beings) and water. This however is an area that I wish to save for future writing.


the life lepidopteran

The following article is an Interlude between Parts I and II of a book entitled Finishing The Rat Race

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

“Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man.”

— Chuang Tzu 1

*

Before proceeding further, I’d like to tell a joke:

A man walks into a doctor’s.

“Doctor, Doctor, I keep thinking I’m a moth,” the man says.

The doctor gives him a serious look. “Sorry, but I am not strictly qualified to help you,” he replies, rubbing his chin earnestly before adding after a momentary pause, “You really need to see a psychiatrist.”

“Yes,” says the man, “but your light was on.”

*

There can be no doubting that each of us acts to a considerable extent in accordance with mental processes that lie far beyond, and are often alien to, our immediate conscious awareness and understanding. For instance, in general we draw breath without the least consideration, or raise an arm, perhaps to scratch ourselves, with scarcely a thought and zero comprehension of how we actually moved our hand and fingers to accomplish the act. And this everyday fact becomes more startling once we consider how even complex movements and sophisticated patterns of behaviour seem to originate without full conscious direction or awareness.

Consider walking for instance. After admittedly painstaking practice as infants, we soon become able to walk without ever thinking to swing our legs. Likewise, if we have learnt to drive, eventually we are able to manoeuvre a large vehicle with hardly more conscious effort than we apply to walking. The same is true for most daily tasks, which are performed no less thoughtlessly and which, in spite of their intricacies, we often find boring and mundane. For instance, those who have been smokers may be able to perform the rather complicated art of rolling a cigarette without pausing from conversation. Indeed, deep contemplation will probably leave us more bewildered than anything by the mysterious coordinated manipulation of all eight fingers and opposing thumbs.

Stranger still is that our ordinary conversational speech proceeds before we have formed the fully conscious intent to utter our actual words! When I first heard this claim, it struck me as so unsettling that I automatically rejected it outright in what ought perhaps to be called a tongue-jerk reaction. (Not long afterwards I was drunk enough to stop worrying about the latent implications!) For considered dispassionately, it is self-evident that there isn’t remotely sufficient time to construct each and every utterance consciously and in advance of the act of speaking; so our vocal ejaculations (as they once were unashamedly called) are just that – they are thrown out! Still further proof is provided by instances when gestures or words emerge in direct conflict with our expressed beliefs and ideas. Those embarrassing occasions when we blurt out what we know must never be spoken we call Freudian slips (and more on Freud below).

More positively, and especially when we enter ‘the zone’, each of us is able to accomplish complex physical acts – for instance throwing, catching, or kicking a ball – and again before any conscious thought arises to do so. Those who have played a sport long enough can probably recall many joyous moments when they have marvelled not only at their own impossible spontaneity, but the accompanying accuracy, deftness, nimbleness, and on very rare occasions even of enhanced physical strength. Likewise, urges, feelings, fears and sometimes the most profound insights will suddenly spring forth into “the back of our minds”, as if from nowhere. And as a consequence, this apparent nowhere acquired a name: coming to be known as “the preconscious”, “the subconscious” and more latterly, “the unconscious”.

What this means, of course, is that “I” am not what I ordinarily think I am, but in actuality a lesser aspect of a greater being who enjoys remarkable talents and abilities beyond what are ordinarily thought “my own” since they lie outside “my” immediate grasp. In this way, we all have hidden depths that can and do give rise to astonishment, although for peculiar reasons of pride, we tend in general to feign ignorance of this everyday fact.

*

The person most popularly associated with the study of the human unconscious is Sigmund Freud, a pioneer in the field but by no means its discoverer. In fact the philosopher and all-round genius Gottfried Leibniz has a prior claim to the discovery, having suggested that our conscious awareness may be influenced by “insensible stimuli” that he called petites perceptions 1. Another giant of German philosophy, Immanuel Kant, subsequently proposed the existence of lurking ideas of which we are not fully aware, while admitting the apparent contradiction inherent in such a conjecture:

“To have ideas, and yet not be conscious of them, — there seems to be a contradiction in that; for how can we know that we have them, if we are not conscious of them? Nevertheless, we may become aware indirectly that we have an idea, although we be not directly cognizant of the same.” 2

Nor is it the case that Freud was first in attempting any kind of formal analysis of the make-up and workings of the human psyche as an entity. Already in 1890, William James had published his own ground-breaking work Principles of Psychology, and though James was keen to explore and outline his principles for human psychology by “the description and explanation of states of consciousness”, rather than to plunge more deeply into the unknown, he remained fully aware of the potentiality of unconscious forces and made clear that any “‘explanation’ [of consciousness] must of course include the study of their causes, conditions and immediate consequences, so far as these can be ascertained.” 3

*

William James’ own story is both interesting and instructive. As a young man he had been at somewhat of a loss to decide what to do with himself. Having briefly trained as an artist, he quickly realised that he’d never be good enough and became disillusioned with the idea, declaring that “there is nothing on earth more deplorable than a bad artist”. He afterwards retrained in chemistry, enrolling at Harvard in 1861 (a few months after the outbreak of the American Civil War), but restless again, twelve months or so later, transferred to biology. Still only twenty-one, James soon felt that he was running out of options, writing in a letter to his cousin:

“I have four alternatives: Natural History, Medicine, Printing, Beggary. Much may be said in favour of each. I have named them in the ascending order of their pecuniary invitingness. After all, the great problem of life seems to be how to keep body and soul together, and I have to consider lucre. To study natural science, I know I should like, but the prospect of supporting a family on $600 a year is not one of those rosy dreams of the future with which the young are said to be haunted. Medicine would pay, and I should still be dealing with subjects which interest me – but how much drudgery and of what an unpleasant kind is there!”

Three years on, James entered the Harvard Medical School, where he quickly became disillusioned. Certain that he no longer wished to become a practicing doctor, and being more interested in psychology and natural history than medicine, he seized a fresh opportunity and soon set sail for the Amazon in hopes of becoming a naturalist. However, the expedition didn’t work out well either. Fed up with collecting bugs and bored with the company of his fellow explorers, to cap everything he fell quite ill. Although desperate to return home, he was obliged to continue, and slowly regained his strength, deciding that in spite of everything it had been a worthwhile diversion; no doubt heartened too by the prospect of finally returning home.

It was 1866 when James next resumed his medical studies at Harvard, although the Amazon adventure had left him physically and (very probably) psychologically weakened; this continuing sickness forced him to break off from his studies yet again. Seeking rest and recuperation, for the next two years James sojourned in Europe, where, to judge from his own accounts, he again experienced a great deal of isolation, loneliness and boredom. Returning to America at the end of 1868 – now approaching twenty-seven years old – he picked up his studies at Harvard for the last time, successfully passing his degree to become William James M.D. in 1869.

Too weak to find work anyway, James stayed resolute in his unwillingness to become a practicing doctor. So for a prolonged period he did nothing at all, or next to nothing. Three years passed in which, besides the occasional publication of articles and reviews, he devoted himself solely to reading books or thinking thoughts, and often quite gloomy ones. Then one day he had a semi-miraculous revelation: a very dark revelation that made him exceedingly aware not only of his own mental fragility, but of the likely prognosis:

“Whilst in this state of philosophic pessimism and general depression of spirits about my prospects, I went one evening into the dressing room in the twilight… when suddenly there fell upon me without any warning, just as if it came out of the darkness, a horrible fear of my own existence. Simultaneously there arose in my mind the image of an epileptic patient whom I had seen in the asylum, a black-haired youth with greenish skin, entirely idiotic, who used to sit all day on one of the benches, or rather shelves, against the wall, with his knees drawn up against his chin, and the coarse gray undershirt, which was his only garment, drawn over them, inclosing his entire figure. He sat there like a sort of sculptured Egyptian cat or Peruvian mummy, moving nothing but his black eyes and looking absolutely non-human. This image and my fear entered into a species of combination with each other. That shape am I, I felt, potentially. Nothing that I possess can defend me against that fate, if the hour for it should strike for me as it struck for him. There was such a horror of him, and such a perception of my own merely momentary discrepancy from him, that it was as if something hitherto solid within my breast gave way entirely, and I became a mass of quivering fear. After this the universe was changed for me altogether. I awoke morning after morning with a horrible dread at the pit of my stomach, and with a sense of the insecurity of life that I never knew before, and that I have never felt since. It was like a revelation; and although the immediate feelings passed away, the experience has made me sympathetic with the morbid feelings of others ever since.” 4

Having suffered what today would very likely be called ‘a nervous breakdown’, James was forced to reflect on the current theories of the mind. Previously, he had accepted the materialist ‘automaton theory’ – that our ability to act upon the world depends not upon conscious states as such, but upon the brain-states that underpin and produce them – but now he felt that if true this meant he was personally trapped forever in a depression that could only be cured by the administering of some kind of physical remedy. However, no such remedy was obtainable, and so he was forced instead to tackle his disorder by means of further introspection and self-analysis.

James read more and thought more since there was nothing else he could do. Three more desperately unhappy years would pass before he had sufficiently recuperated to rejoin the ordinary world, accepting an offer to become lecturer in physiology at Harvard. But as luck would have it, teaching suited James. He enjoyed the subject of physiology itself, and found the activity of teaching “very interesting and stimulating”. James had, for once, landed on his feet, and his fortunes were also beginning to improve in other ways.

Enjoying the benefits of a steady income for the first time in his life, he was soon to meet Alice Gibbons, the future “Mrs W.J.” They married two years later in 1878. She was a perfect companion – intelligent, perceptive, encouraging, and perhaps most importantly for James, an organising force in his life. He had also just been offered a publishing contract to write a book on his main specialism, which was by now – and in spite of such diversity of training – most definitely psychology. With everything now in place, James set to work on what would be his magnum opus. Wasting absolutely no time whatsoever, he drafted the opening chapters while the couple were still on honeymoon together.

“What is this mythological and poetical talk about psychology and Psyche and keeping back a manuscript composed during honeymoon?” he wrote in jest to the taunts of a friend, “The only psyche now recognized by science is a decapitated frog whose writhings express deeper truths than your weak-minded poets ever dreamed. She (not Psyche but the bride) loves all these doctrines which are quite novel to her mind, hitherto accustomed to all sorts of mysticisms and superstitions. She swears entirely by reflex action now, and believes in universal Nothwendigkeit. [determinism]” 5

It would take James more than a decade to complete what quickly became the definitive university textbook on the subject, ample time for such ingrained materialist leanings to have softened. For the most part sticking to what was directly and consciously known to him, his attempts to dissect the psyche involved much painstaking introspection of what he famously came to describe as his (and our) “stream of consciousness”. Such close analysis of the subjective experience of consciousness itself had suggested to James the need to distinguish between “the Me and the I” as separate component parts of what in completeness he called “the self”. 6 In one way or another, this division of self into selves, whether these be consciously apprehensible or not, has remained a theoretical basis of all later methods of psychoanalysis.

There is a joke that Henry James was a philosopher who wrote novels, whereas his brother William was a novelist who wrote philosophy. But this does WJ a disservice. James’s philosophy, known as pragmatism, was a later diversion. Unlike his writings on psychology, which became the standard academic texts as well as popular best-sellers (and what better tribute to James’s fluid prose), his ideas on pragmatism were rather poorly received (though they have gained more favour over time). But then James was a lesser expert in philosophy, a situation not helped by his distaste for logical reasoning; and he is better remembered for his writings on psychology, a subject in which he excelled. Freud’s claim to originality is nothing like as foundational.

James was at the vanguard during the period when psychology irreparably pulled apart from the grip philosophy had held on it (which explains why James was notionally Professor of Philosophy at the time he was writing), and was grafted back to form a subdiscipline of biology. For this reason – and regardless of the fact that James remained highly critical of the developing field of experimental psychology, as he was too of the deductive reasoners on both sides of the English Channel: the British empiricists Locke and Hume, and the continental giants Leibniz, Kant and Hegel – to some of his contemporaries James’s view appeared all too dangerously materialistic. If only they could have seen how areas of psychology were to develop so ruinously, they would have appreciated that James was, as always, a moderate.

*

While James remained an academic throughout his life, Freud, after briefly studying zoology at the University of Vienna – with one month spent unsuccessfully searching for the gonads of the male eel 7 – and another spell in neurology, decided to return to medicine and open his own practice. He had also received expert training in the new-fangled techniques of hypnosis.

‘Hypnosis’ comes from the Greek hupnos and means, in effect, “artificial sleep”. To induce hypnosis, the patient’s conscious mind needs to be distracted briefly, and achieving this opens up regions of the mind beyond the usual conscious states. The terms “sub-conscious” and “unconscious” were already in circulation prior to the theories of Freud or James. And whether named or not, mysterious evidence of the unconscious had always been known. Dreams, after all, though we consciously experience them, are neither consciously conceived nor willed. They just pop out from nowhere – or from “the unconscious”.

From his clinical experiences, Freud soon discovered what he believed to be better routes to the unconscious than hypnosis. For instance, he found that it was just as effective to listen to his patients, or, if their conscious mind was unwilling to give up some of its defences – as it commonly was – to encourage their free association of words and ideas. He also looked for unconscious connections within his patients’ dreams, gradually uncovering what he came to believe were the deeply repressed animalistic drives that govern the patient’s fears, attitudes and behaviour. Having found the unconscious root of their problems, the patient could finally begin to grapple with these repressed issues at an increasingly conscious level. It was a technique that apparently worked, with many of Freud’s patients recovering from the worst effects of their neuroses and hysteria, and so “the talking cure” became a lasting part of Freud’s legacy. You lay on the couch, and just out of sight, Freud listened and interpreted.

But Freud also left a bigger mark, by helping to shape the way we see ourselves. The types of unconscious repression he discovered in his own patients, he believed were universally present, and by drawing directly on his experiences as a doctor, he slowly excavated, as he found it, the entire human unconscious piece by piece. Two of these aspects he labelled the ‘id’ and the ‘superego’: the one a seat of primal desires, the other a chastising moral guide – reminiscent of the squabbling devil-angel duo that pop up in cartoons, jostling for attention on opposite shoulders of the character whenever he’s plunged into a moral quandary. 8

In a reboot of philosopher Arthur Schopenhauer’s concept of blind and insatiable ‘will’, Freud proposed the existence of the libido: a primary, sexual drive that ceaselessly operates beneath our conscious awareness, prompting desires for pleasure and avoidance of pain irrespective of consequence and regardless of whether these desires conflict with ordinary social conventions. In concert with all of this, Freud discerned a natural process of psychological development 9 and came to believe that whenever this development is arrested or, more generally, whenever normal appetites are consciously repressed, then, lurking deep within the unconscious, such repressed but instinctual desires will inevitably and automatically resurface in more morbid forms. This, he determined, was the common root cause of all his patients’ various symptoms and illnesses.

Had Freud stopped there, his contribution to psychology would have been fully commendable, for there is tremendous insight in these ideas. He says too much no doubt (especially when it comes to the specifics of human development), but he also says something that needed to be said very urgently: that if you force people to behave against their natures you will make them sick. So it seems a pity that Freud carried some of the ideas a little too far.

Let’s take the ‘Oedipus complex’, which, of the many Freudian features of our supposed psychological nether regions, is without doubt the one of greatest notoriety. The myth of Oedipus is enthralling; the eponymous hero compelled to deal with fate, misfortune and prophecy. 10 Freud finds in this tale a revelation of deep and universal unconscious repression, and though plausible and intriguing, his interpretation basically narrows its far grander scope:

“[Oedipus’s] destiny moves us only because it might have been ours – because the Oracle laid the same curse upon us before our birth as upon him. It is the fate of all of us, perhaps, to direct our first sexual impulse towards our mother and our first hatred and our first murderous wish against our father. Our dreams convince us that this is so.”11

Freud generally studied those with minor psychological problems (and did not deal with cases of psychosis), determining, on the basis of an unhappy few, what he presumed to be true for healthier individuals too, and this is perhaps a failure of all psychoanalytic theories. For though it may seem odd that he came to believe in the universality of the Oedipus Complex, who can doubt that his clients suffered from something like it? Who can doubt that Freud himself harboured the same dark desires? Perhaps he also felt a ‘castration anxiety’ as a result of the Oedipal rivalry he’d had with his own father. Maybe he actually experienced ‘penis envy’, if not of the same intensity as he claimed to detect in his female patients, then of a compensatory masculine kind! After all, such unconscious ‘transference’ of attitudes and feelings from one person to another – from patient onto doctor, or vice versa in this relevant example – is another concept that Freud was first to identify and label.

*

Given the strait-laced age in which Freud fleshed out his ideas, the swiftness with which these theories received widespread acceptance and acclaim seems surprising, although there are surely two good reasons why Freudianism took hold. The first is straightforward: society had been very badly in need of a dose of Freud, or something very like Freud. After such excessive prudishness, the pendulum was bound to swing the other way. But arguably the more important reason – indeed the reason his theories have remained influential – is that Freud picked up the baton directly from where Darwin left off. Because he restricted his explanations to biological instincts and drives, Freudianism wears the mantle of scientific legitimacy, and this is a vital determining factor that helped to secure its prominent position within the modern epistemological canon.

Following his precedent, students of Freud, most notably Carl Jung and Alfred Adler, also drew on clinical experiences with their own patients, but gradually came to the conclusion, for different reasons, that Freud’s approach was too reductionist, and that there is considerably more to a patient’s mental well-being than healthy appetites and desires, and thus more to the psychological underworld than solely matters of sex and death.

Where Freud was a materialist and an atheist, Jung went on to incorporate aspects of the spiritual into his extended theory of the unconscious, though he remained respectful of biology and keen to anchor his own theories upon an evolutionary bedrock. Jung nevertheless speculates in a philosophical tradition that owes much to Immanuel Kant, while also drawing heavily on personal experience, and comes to posit the existence of psychical structures he calls ‘archetypes’, operating again at the deepest levels within a collective unconscious; a shared characteristic due to our common ancestry.

Thus he envisions ‘the ego’ – the aspect of our psyche we identify as “I” – as existing in relation to an unknown and finally unknowable sea inhabited by autonomous entities which have their own life. Jung actually suggests that Freud’s Oedipus complex is just one of these archetypes, while he finds himself drawn by the bigger fish of the unconscious beginning with ‘The Shadow’ – what is hidden and rejected by the ego – and what he determines are the communicating figures of ‘Animus/Anima’ (or simply ‘The Syzygy’) – a compensatory masculine/feminine unconscious presence within, respectively, the female and male psyche – that prepare us for incremental and never-ending revelations of our all-encompassing ‘Self’.

This lifelong psychical development, or ‘individuation’, was seen by Jung as an inherently religious quest and he is unapologetic in proclaiming so; the religious impulse being a product too of human evolutionary development along with opposable thumbs and upright posture. More than a mere vestigial hangover, religion is, Jung says, fundamental to the deep nature of our species.

Unlike Freud, Jung was also invested in understanding how the human psyche varies greatly from person to person, and to this end introduced new ideas about character types, adding ‘introvert’ and ‘extrovert’ to the psychological lexicon to draw a division between individuals characterised by primarily subjective or primarily objective orientations to life – an introvert himself, Jung was well placed to observe the distinction. Meanwhile, greatly influenced by Friedrich Nietzsche’s “will to power”, Adler switched attention to issues of social identity and specifically to why people felt – in very many cases quite irrationally – inferior or superior amongst their peers. These efforts culminated in the development of his theory of the ‘inferiority complex’ – which might also be thought of as an aspect of the Jungian ‘Shadow’.

These different schools of psychoanalysis are not irreconcilable. They are indeed rather complementary in many ways: Freud tackling the animal craving and want of pleasure; Jung looking for expression above and beyond what William Blake once referred to as “this vegetable world”; and Adler delving most directly into the mud of human relations, the pervasive urge to dominate and/or be submissive, and the consequences of personal trauma associated with interpersonal and societal inequalities.

Freud presumes that since we are biological products of Darwinian evolution, our minds have been evolutionarily pre-programmed. Turning the same inquiry outward, Jung goes in search of common symbolic threads within mythological and folkloric traditions, enlisting these as evidence for the psychological archetypes buried deep within us all. And though Jung held no orthodox religious views of his own, he felt comfortable drawing upon religious (including overtly Christian) symbolism. In one of his most contemplative passages, he wrote:

Perhaps this sounds very simple, but simple things are always the most difficult. In actual life it requires the greatest art to be simple, and so acceptance of oneself is the essence of the moral problem and the acid test of one’s whole outlook on life. That I feed the beggar, that I forgive an insult, that I love my enemy in the name of Christ—all these are undoubtedly great virtues. What I do unto the least of my brethren, that I do unto Christ.

But what if I should discover that the least amongst them all, the poorest of all beggars, the most impudent of all offenders, yea the very fiend himself—that these are within me, and that I myself stand in need of the alms of my own kindness, that I myself am the enemy who must be loved—what then? Then, as a rule, the whole truth of Christianity is reversed: there is then no more talk of love and long-suffering; we say to the brother within us “Raca,” and condemn and rage against ourselves. We hide him from the world, we deny ever having met this least among the lowly in ourselves, and had it been God himself who drew near to us in this despicable form, we should have denied him a thousand times before a single cock had crowed. 12

Of course, “the very fiend himself” is the Jungian ‘Shadow’, the contents of which, without recognition and acceptance, inevitably remain repressed, causing these unapproachable and rejected aspects of our own psyche to be projected out on to the world. ‘Shadow projection’ onto others fills the world with enemies of our own imagining; and this, Jung believed, was the root of nearly all evil. Alternatively, by taking Jung’s advice and accepting “that I myself am the enemy who must be loved”, we come back to ourselves in wholeness. It is only then that the omnipresent threat of the Other diminishes, as the veil of illusion forever separating the ego and reality is thinned. And Jung’s psychological reunification also grants access to previously concealed strengths (the parts of the unconscious discussed at the top), further enabling us to reach our fullest potential. 13

Today there are millions doing “shadow work” as it is now popularly known: self-help exercises often combined with traditional practices of yoga, meditation or the ritual use of entheogens: so here is a new meeting place – a modern mash-up – of religion and psychotherapy. Quietly and individually, a shapeless movement has arisen almost spontaneously as a reaction to the peculiar rigours of western civilisation. Will it change the world? For better or worse, it already has.

Alan Watts, who is best known for his Western interpretations of Eastern spiritual traditions, in particular Zen Buddhism and Daoism, here reads this same influential passage from one of Jung’s lectures, in which Jung speaks of ending “the inner civil war”:

*

Now what about my joke at the top? What’s that all about? Indeed, and in all seriousness, what makes it a joke at all? Well, not wishing to delve deeply into theories of comedy, there is one structure that arises repeatedly and nearly universally: that the punch line to every joke relies on some kind of unexpected twist on the set up.

To illustrate the point, let’s turn to the most hackneyed joke of all: “Why did the chicken cross the road?” Here we find an inherent ambiguity that lies within the use of the word ‘why’, and this is what sets up the twist. However, in the case of the joke about the psychiatrist and the man who thinks he’s a moth, the site of ambiguity isn’t so obvious. Here, I think, the humour comes down to alternative and finally conflicting notions of ‘belief’.

A brief digression then: What is belief? To offer a salient example, when someone tells you “I believe in God”, what are they intending to communicate? No less importantly, what would you take them to mean? Put differently, atheists will very often say “I don’t believe in anything” – so again, what are they (literally) trying to convey here? And what would a listener take them to mean? In all these instances the same word is used to describe similar but distinct attitudinal relationships to reality, and it is all too easy to presume that everyone is using the word in precisely the same way. But first, we must acknowledge that the word ‘belief’ actually carries two quite distinct meanings.

According to the first definition, it is “a mental conviction of the truth of an idea or some aspect of reality”. Belief in UFOs fits this criterion, as does a belief in gravity and that the sun will rise again tomorrow. How about belief in God? When late in life Jung was asked if he believed in God, he replied straightforwardly “I know”. 14 Others reply with the same degree of conviction if asked about angels, fairies, spirit guides, ghosts or the power of healing and crystals. As a physicist, I believe in the existence of atoms, electrons and quarks – although I’ve never “seen one”, like Jung I know!

So belief in this sense is more often than not grounded in a person’s direct experience, which obviously does nothing to validate the objective truth of the belief itself. He saw a ghost. She was healed by the touch of a holy man. We ran experiments to measure the charge on an electron. Again, in this sense I have never personally known of anyone who did not believe in the physical reality of a world of solid objects – for who doesn’t believe in the existence of tables and chairs? In this important sense everyone has many convictions about the truth of reality, and we surely all believe in something – this applies even in the case of the most hardline of atheists!

But there is also a second kind of belief: “of an idea that is believed to be true or valid without positive knowledge.” The emphasis here is on the lack of knowledge or indeed of direct experience. So this belief involves an effort of willing on the part of the believer. In many ways, this is to believe in make-believe, or we might just say “to make-believe”; to pretend or wish that something is real: the suspension of disbelief. I believe in unicorns…

As a child, I found all religion utterly mystifying, since what was self-evidently make-believe – for instance a “holy ghost” and the virgin birth! – was, for reasons I was unable to fathom, held by others as sacrosanct. Based on my casual encounters with Christians, it also seemed evident that the harder you tried to make-believe in this maddening mystification of being, the better a person it made you! So here’s the point: when someone tells you they believe in God, is this all they actually mean? That they are trying with tremendous exertion, although little conviction, to make-believe in impossibility?

Indeed, is this striving alone mistaken not only as virtuous but as actual believing in the first sense? Yes, quite possibly – and not only for religious types. Alternatively, it may be that someone truly believes in God – or whatever synonym they choose to approximate to ‘cosmic higher consciousness’ – with the same conviction that all physicists believe in gravity and atoms. They may come to know ‘God’, as Jung did.

Now back to the joke, and apologies for killing it: The man complains that he feels like a moth, and this is so silly that we automatically presume his condition is entirely one of make-believe. But then comes the twist, when we learn that his actions correspond to his belief, which means, of course, that he has true belief of the first kind. Finally, then, here’s my hunch for why we find this funny: it spontaneously reminds us of how true beliefs – rather than make-believe – both inform reality as we perceive it and fundamentally direct our behaviour. Yet we are always in the process of forgetting altogether that this is how we live too, until abruptly the joke reminds us again – and in our moment of recollecting, spontaneously we laugh.

Which also raises a question: To what extent do beliefs of the second ‘make-believe’ kind determine our behaviour too? Especially when the twin definitions show just how easy it can be to get confused over beliefs. Because as Kurt Vonnegut wrote in the introduction to his cautionary novel Mother Night: “This is the only story of mine whose moral I know”, continuing: “We are what we pretend to be, so we must be careful about what we pretend to be.” 15

*

I would like to return now to an idea I earlier disparaged, Dawkins’s concept ‘memes’: ideas, stories, and other cultural fragments, the development and transmission of which can be considered similar to the mutation and survival of genes. In evoking this concept of memes, Dawkins had hoped to wrest human behaviour apart from the rest of biology in order to present an account of how it came to be that our species alone is capable of surpassing the hardwired instructions encoded in our genes. For Dawkins this entailed some fleeting speculation upon the origins of human culture set out in the final pages of his popular science book, The Selfish Gene. Others later picked up on his idea and have reworked it into a pseudo-scientific discipline known as memetics; something I have already criticised.

In fact, the notion of some kind of evolutionary force actively driving human culture occurred to authors before Dawkins. In The Human Situation, for example, Aldous Huxley outlined his own thoughts on the matter, while already making the significant point that such kinds of “social heredity” must be along Lamarckian rather than Darwinian lines:

“While it is clear that the Lamarckian conception of the inheritance of acquired characteristics is completely unacceptable, and untrue biologically, it is perfectly true on the social, psychological and linguistic level: language does provide us means for taking advantage of the fruits of past experience. There is such a thing as social heredity. The acquisitions of our ancestors are handed down to us through written and spoken language, and we do therefore enjoy the possibility of inheriting acquired characteristics, not through germ plasm but through tradition.”

Like Dawkins, Huxley recognised that culture was the singular feature distinguishing our species from others. Culture on top of nature, dictated by education, religious upbringing, class status, and so forth, establishes the social paradigms according to which individuals in general behave. However, in Huxley’s version, as in Dawkins’, this is only metaphorically an evolutionary process, while both evidently regard the process of cultural development as most similar to evolution in one key respect: that it is haphazard.

Indeed, Dawkins and Huxley are similarly keen to stress that human culture is therefore a powerful but ultimately ambiguous force that brings about good and ill alike. As Huxley continues:

“Unfortunately, tradition can hand on bad as well as good items. It can hand on prejudices and superstitions just as effectively as it can hand on science and decent ethical codes. Here again we see the strange ambivalence of this extraordinary gift.” 16

We might also carry these ideas a little further by adding a very important determinant of individual human behaviour which such notions of ‘memetics’ have tended to overlook. For memes are basically ideas, and ideas are, by definition, a product and manifestation of conscious thought and transmission; whereas people, on the other hand, as I have discussed above, often behave in ways that conflict with their conscious beliefs and desires, which means that to some extent we act according to mental processes that are beyond or even alien to our immediate understanding.

Acknowledging the influence of the unconscious on our thoughts and behaviours, my contention here is straightforward enough and, I think, hard to dispute: that just as our conscious minds are moulded and differentiated by local customs and conventions, our unconscious minds are presumably likewise formed and diversified. To offer a more concrete example, the Chinese unconscious, shaped and informed by almost three millennia of Daoism, Buddhism and Confucianism, is likely to be markedly different from the unconscious mind of any of us raised within the European tradition. Besides the variations due to religio-philosophical upbringing, divergence is likely to be further compounded by the wide disparities between our languages, with dissimilarities in everything from vocabulary, syntax and morphology down to the use of characters rather than letters.

Native tongue (or mother tongue) is a very direct and primary filter that not only channels what we are able to articulate, but governs what we are able to fully conceptualise or even to think at all. 17 It is perfectly conceivable therefore that anyone who learned to communicate first in Mandarin or Cantonese will be unconsciously differentiated from someone who learnt to speak English, Spanish or Arabic instead. 18 Indeed, to a lesser degree perhaps, all who speak English as a first language may have an alternative, if more subtly differentiated, unconscious relationship to the world from those whose mother tongue is, say, French or German. 19

So now I come back to the idea of memes in an attempt to resurrect it in an altered form. Like Dawkins’ original proposal, my idea is not rigorous or scientific; it’s another hunch: a way of referencing perhaps slight but characteristic differences in the collective unconscious between nations, tribes and also classes of society. Differences that then manifest perhaps as neuroses and complexes which are planted entirely within specific cultural identities – a British complex, for instance (and certainly we talk of having “an island mentality”). We might say therefore that alongside the transmission of memes, we also need to include the transmission of ‘dremes’ – cultural fragments from our direct social environment that are unconsciously given and received.

*

If this is accepted, then my further contention is that one such dreme has become predominant all around the world, and here I am alluding to what might be christened the ‘American Dreme’. And no, not the “American Dream”, which is different. The American Dream is in fact an excellent example of what Dawkins labelled a meme: a cultural notion that on this occasion encapsulates a collection of ideas about how life can and ought to be. It says that life should be better, richer and fuller for everyone. Indeed, it is written indelibly into the Declaration of Independence in the wonderful phrase: “Life, Liberty and the pursuit of Happiness.” The American Dream is inspiring and has no doubt been a tremendous liberation for many; engendering technological progress and motivating millions with hopes that anyone living in “The Land of Opportunity” “can make it” “from rags to riches” – all subordinate memes encapsulating different aspects of the fuller American Dream.

E pluribus unum – “Out of many, one” – is the motto inscribed on the scroll held so firmly in the beak of the bald eagle on the Seal of the United States. 20 Again, it is another sub-meme at the heart of the American Dream meme: an emblematic call for an unbound union between the individual and the collective; inspiring a loose harmony poetically compared to the relationship of flowers in a bouquet – thus, not a melting-pot, but a richer mosaic that maintains the original diversity.

Underlying this American Dream, a related sub-meme cherishes “rugged individualism”: the aspiration of individuals, not always pulling together, nor necessarily in one direction, but constantly striving upwards – pulling themselves up by their own bootstraps! Why? Because according to the dream at least, if you try hard enough, then you must succeed. And though this figurative pulling yourself up by your own bootstraps involves a physical impossibility that contravenes Newton’s laws, even this does not detract from the idea. Believers in the American Dream apparently don’t notice any contradiction, despite the fantastical image of their central metaphor. The dream is buoyed so high on hope, even when deep down most know it’s actually a fairy tale.

So finally there is a desperation and a sickliness about the American Dream. A harsh reality in which “The Land of Opportunity” turns out to be a steep-sided pyramid spanned by labyrinthine avenues that mostly run to dead-ends. A promised land, but one riven by chasms as vast as the Grand Canyon; disparities that grew out of historical failures: insurmountable gulfs in wealth and real opportunity across a population always beset by class and racial inequalities. Indeed, the underclass of modern America is no less stuck within societal ruts than the underclass of the least developed regions on earth, and in relative terms many are worse off. 21 “It’s called the American Dream”, said the late, great satirist George Carlin, “because you have to be asleep to believe it”.

In short, to keep dreaming the American Dream involves an unresting commitment. Its most fervent acolytes live in a perpetually suspended state of ignorance or outright denial; denial of the everyday miseries and cruelties that ordinary Americans suffer: the ‘American Reality’.

Graphic from page 56 of Jean Kilbourne’s book Can’t Buy My Love: How Advertising Changes the Way We Think and Feel (originally published in hardcover in 1999 as Deadly Persuasion: Why Women and Girls Must Fight the Addictive Power of Advertising). It was an ad for a German marketing firm, contained within a decades-old issue of the trade journal ‘Advertising Age’:


But just suppose for a moment that the American Dream actually did come true. That America somehow escaped from this lingering malaise and blossomed into the land of real freedom and opportunity for all it always promised to be. Yet still an insurmountable problem remains. For as with every ascent, the higher you reach the more precarious your position becomes: as apes we have never entirely forgotten that branches are thinner and fewer at the top of the tree.

Moreover, built into the American Dream is its emphasis on material enrichment: to rise towards the heavens therefore means riding up and up, always on a mountain of stuff. And, as you rise, others must, in relative terms, fall. Not necessarily because there isn’t enough stuff to go around, but because success depends upon holding ownership of the greatest share. Which means that as the American Reality draws closer to the American Dream (and it could hardly get much further away), creating optimal social mobility and realisable opportunities for all, then even in this best of all circumstances, the rise of some at the expense of others will cultivate anxious winners and a disadvantaged underclass, for whom the relative material gain of the winners comes at the cost of bearing the stigma of comparative failure.

Why am I not nearer the top of the tree? In the greatest land on earth, why do I remain subservient to the gilded elites? Worries that nowadays plague the insomniac hours of many a hopeful loser; of those who ended up, to a large extent by accidental circumstance, in the all-too-fixed trailer parks of “The Land of the Free” (yet another sub-meme – ironically linked to the country with the highest incarceration rate on earth).

But worse, there is an inevitable shadow cast by the American Dream: a growing spectre of alienation and narcissism that arises from such excessive emphasis on individual achievement: feelings of inferiority for those who missed the boat, and of superiority for those who caught the gravy train. Manipulation is celebrated. Machiavellianism, narcissism and psychopathy come to reign. This shadow is part of what we might call the ‘American Dreme’; an unconscious offspring that contains within it a truly abysmal contrast to the American Dream which bore it. A dreme that, carried upon the coat-tails of the Dream, was spread far and wide by Hollywood and by Disney, radiated out in radio and television transmissions, and in consequence is now becoming the ‘Global Dreme’.

Being unconscious of it, however, we are mostly unaware of any affliction whatsoever; the dreme being insidious, and thus very much more dangerous than the meme. We might even mistake it for something else – it having become so pandemic, we might easily misdiagnose it as a normal part of ‘human nature’.

*

Here is Chris Hedges again with his own analysis of modern day consumerism, totalitarian corporate power and living in a culture dominated by pervasive illusion:

“Working for the American Dream”, first broadcast by the BBC in July 2018 and embedded below, is American comedian Rich Hall’s affectionate though characteristically sardonic portrait of the nation’s foundational and persistent myth:

*

And the joke was hilarious wasn’t it? No, you didn’t like it…? Well, if beauty is in the eye of the beholder, comedy surely lies in the marrow of the funny bone! Which brings me to ask why there is comedy at all. More broadly, why is there laughter – surely the most curious human reflex of all – or its very closely related reflex cousin, crying? In fact, the emission of tears from the lacrimal glands other than in response to irritation of our ocular structures, purely for reasons of joy or sorrow, is a very nearly uniquely human secretomotor phenomenon. (Excuse my Latin!) 22

The jury is still out on the evolutionary function of laughing and crying, but when considered in strictly Darwinian terms (as current Science insists), it is hard to fathom why these dangerously debilitating and potentially life-threatening responses ever developed in any species. It is acknowledged indeed that a handful of unlucky (perhaps lucky?) people have literally died from laughter. So why do we laugh? Why do we love laughter, whether ours or others’, so much? Your guess is as good as mine, and, more importantly, as good as Darwin’s:

Many curious discussions have been written on the causes of laughter with grown-up persons. The subject is extremely complex. Something incongruous or unaccountable, exciting surprise and some sense of superiority in the laugher, who must be in a happy frame of mind, seems to be the commonest cause. 23

Less generously, Thomas Hobbes, who explained all human behaviour in terms of gaining social advantage, wrote that:

Joy, arising from imagination of a man’s own power and ability, is that exultation of the mind which is called glorying… Sudden Glory, is the passion which maketh those grimaces called LAUGHTER; and is caused either by some sudden act of their own, that pleaseth them; or by the apprehension of some deformed thing in another, by comparison whereof they suddenly applaud themselves. 24

And indeed, it is true that a great deal of laughter comes at the expense of some butt of our joking; however, not all mockery involves an injured party, and there is a great deal more to humour and laughter than mere ridicule and contempt. So Hobbes’ account is at best a very desiccated postulation of why humans laugh, let alone of what constitutes joy.

Indeed, Hobbes’ reductionism is evidently mistaken and misinformed not only by his deep-seated misanthropy, but also by a seeming lack of common insight which leads one to suspect that when it came to sharing any jokes, he just didn’t get it. But precisely what didn’t he get?

Well, apparently he didn’t get how laughter can be a straightforward expression of joie de vivre. Too French, I imagine! Or that when we apprehend anything, this momentarily snaps us out of a prior state of inattention, and on finding amusement in an abrupt, often fleeting, but totally fresh understanding, the revelation itself may elicit laughter (as I outlined above). Or that it is simply impossible to laugh authentically or infectiously unless you not only understand the joke, but fully acknowledge it. In this way, humour, if confessional, can be liberating at a deeply personal level, or if satirical, liberating at a penetrating societal level. Lastly (in my necessarily limited rundown), humour serves as a wonderfully efficient and entertaining springboard for communicating insight and understanding, especially when the truths are dry, difficult to grasp or otherwise unpalatable. Here is a rhetorical economy that Hobbes might actually have approved of, were it not for his somewhat curmudgeonly disposition.

And why tell a joke here? Just to make you laugh and take your mind off the gravity of the topics covered and still more grave ones to come? To an extent, yes, but also to broaden out our discussion, letting it drift off into related philosophical avenues. For existence is seemingly absurd, is it not? Considered squarely, full-frontal, what’s it all about…? And jokes – especially ones that work beyond rational understanding – offer a playful recognition of the nonsensicalness of existence and of our species’ farcical determination to comprehend it and ourselves fully. What gives us the gall to ever speculate on the meaning of life, the universe and everything?

Meanwhile, we are free to choose: do we laugh or do we cry at our weird predicament? Both responses are surely sounder than cool insouciance, since both are flushed with blood. And were we madder, we might scream instead, of course, whether in joy or terror. As Theseus says in Shakespeare’s A Midsummer Night’s Dream:

Lovers and madmen have such seething brains,
Such shaping fantasies, that apprehend
More than cool reason ever comprehends.
The lunatic, the lover, and the poet
Are of imagination all compact.

*

French existentialist Albert Camus famously made the claim: “There is but one truly serious philosophical problem and that is suicide.” 25 Camus was not an advocate of suicide, of course; far from it. In fact, he saw it as a perfectly vain attempt to flee from the inescapable absurdity of life, something he believed we ought to embrace in order to live authentically. Indeed, Camus regarded every attempt to deny the ultimate meaninglessness of life in a universe indifferent to our suffering as a surrogate form of philosophical suicide.

But rather than staring blankly into the abyss, Camus urges us to rebel against it: to face its absurdity without flinching and, through rebellion – by virtue of which we individually reconstruct the meaning of our lives afresh – to come, albeit paradoxically, face to face with an extreme rationality. Although perhaps he goes too far, and reaches a point so extreme that few can follow: such a Sisyphean outlook being too desolate for most of us, and his exhortation to authenticity so impassioned that it seems almost infinitely taxing. 26 Kierkegaard’s “leap of faith” is arguably more forgiving of the human condition – but enough of philosophy. 27

This pause is meant for introspection. I have therefore presented an opportunity to reconsider how this interlude set out: not only by telling a joke – and hopefully one that made you smile if not laugh out loud – but also by reflecting upon the beautiful wisdom encapsulated in Chuang Tzu’s dream of becoming a butterfly; mystical enlightenment from fourth-century BC China that clashes intentionally with the plain silliness of a doctor-doctor joke about a moth-man: a surreal quip about clinical diagnosis and psychiatry (something I shall come to consider next).

However, the running theme here is one of transformation, and at the risk of also killing Chuang Tzu’s message by dissection, I will simply add (unnecessarily, from the Daoist perspective) that existence does appear to be cyclically transformative on personal, collective and altogether cosmic levels; the conscious and unconscious spiralling outwards – whether upward into light or downward into darkness – each perpetually giving rise to the other, just like the everblooming of yang and yin. As the maverick clinical psychiatrist R. D. Laing once wrote:

“Most people most of the time experience themselves and others in one way or another that I… call egoic. That is, centrally or peripherally, they experience the world and themselves in terms of a consistent identity, a me-here over against you-there, within a framework of certain ground structures of space and time shared with other members of their society… All religious and all existential philosophies have agreed that such egoic experience is a preliminary illusion, a veil, a film of maya—a dream to Heraclitus, and to Lao Tzu, the fundamental illusion of all Buddhism, a state of sleep, of death, of socially accepted madness, a womb state to which one has to die, from which one has to be born.” 28

Returning from the shadowlands of alienation to contemplate the glinting iridescent radiance of Chuang Tzu’s butterfly’s wings is an invitation to scrape away the dross of habituated semi-consciousness that veils the playful mystery of our minds. On a different occasion, Chuang Tzu wrote:

One who dreams of drinking wine may in the morning weep; one who dreams weeping may in the morning go out to hunt. During our dreams we do not know we are dreaming. We may even dream of interpreting a dream. Only on waking do we know it was a dream. Only after the great awakening will we realize that this is the great dream. And yet fools think they are awake, presuming to know that they are rulers or herdsmen. How dense! You and Confucius are both dreaming, and I who say you are a dream am also a dream. Such is my tale. It will probably be called preposterous, but after ten thousand generations there may be a great sage who will be able to explain it, a trivial interval equivalent to the passage from morning to night. 29

Thus the world about us is scarcely less a construct of our imagination than our dreams are, deconstructed by the senses then seamlessly reconstructed in its entirety. And not just reconfigured via inputs from the celebrated five gateways of vision, sound, touch, taste and smell, but via all portals, including those of memory, intuition, and even reason. After all, it is curious how we speak of having ‘a sense’ of reason, just as we do ‘a sense’ of humour. Well, do we have… a sense of reason and a sense of humour? If you have followed this far then I sense you may share my own.

Next chapter…

*

Richard Rohr is a Franciscan priest, author and teacher, who says that his calling has been “to retrieve and re-teach the wisdom that has been lost, ignored or misunderstood in the Judeo-Christian tradition.” Rohr is the founder of the Center for Action and Contemplation and academic dean of the CAC’s Living School, where he practises incarnational mysticism, non-dual consciousness, and contemplation, with a particular emphasis on how these affect the social justice issues of our time. Recently he shared his inspirational standpoint in an hour-long chat with ‘Buddha at the Gas Pump’s Rick Archer:

*

Addendum: anyone with half a brain

“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”

— attributed to Albert Einstein 30

*

The development of split-brain operations for the treatment of severe epilepsy, which involve severing the corpus callosum, the thick band of nerve fibres that allows communication between the two hemispheres, first drew attention to the fact that the left and right hemispheres have quite different attributes. Unfortunately, the early studies in this field produced erroneous, because superficial, notions about left and right brain function, which were in turn vulgarised and popularised as they percolated down into pop psychology and management theory. The left brain was said to generate language and logic, while only the right brain supposedly dealt with feelings and was the creative centre. In reality, both hemispheres are involved in all aspects of cognition, and as a consequence the study of what is technically called the lateralisation of brain function fell to some extent into academic disrepute.

In fact, important differences do occur between the specialisms of the left and right hemispheres, although, as psychiatrist Iain McGilchrist proposes in his book The Master and His Emissary (titles he takes to describe the proper roles of the right and left hemispheres respectively) 31, it is often better to understand the distinctions in terms of where conscious awareness is placed. In summary, the left hemisphere attends to and focuses narrowly but precisely on what is immediately in front of you, allowing you to strike the nail with the hammer, thread the eye of the needle, or sort the wheat from the chaff (whatever activity you are actively engaged with), while the right hemisphere remains highly vigilant and attentive to the surroundings. Thus, the left brain operates tools and usefully sizes up situations, while the right brain’s immediate relationship to the environment and to our bodies makes it the mediator of social activity and of a far broader conscious awareness. However, according to McGilchrist, the left brain is also convinced of its own primacy, whereas the right is incapable of comprehending such hierarchies; this, he argues, is the root of a problem we all face, since it repeatedly leads humans to construct societal arrangements and norms in accordance with left-brain dominance, to the inevitable detriment of the less restricted right-brain awareness.

Supported by many decades of research, this has become McGilchrist’s informed view, and if his overarching thesis has merit – note that the basic distinctions between left- and right-brain awareness are uncontroversial and well understood in psychology, whereas his account of their socio-historical repercussions is more speculative – then it raises the lateralisation of brain function as a major underlying issue that needs to be incorporated into any final appraisal of ‘human nature’, the implications of which McGilchrist propounds at length in his own writing. In the preface to the new expanded edition of The Master and His Emissary (first published in 2009), he writes:

I don’t want it to be possible, after reading this book, for any intelligent person ever again to see the right hemisphere as the ‘minor’ hemisphere, as it used to be called – still worse the flighty, impetuous, fantastical one, the unreliable but perhaps fluffy and cuddly one – and the left hemisphere as the solid, dependable, down-to-earth hemisphere, the one that does all the heavy lifting and is alone the intelligent source of our understanding. I might still be to some extent swimming against the current, but there are signs that the current may be changing direction.

*

Embedded below is a lecture given to the Royal Society of Arts (RSA) in 2010, in which he offers a concise overview of how, according to our current understanding, the ‘divided brain’ has profoundly altered human behaviour, culture and society:

To hear these ideas contextualised within an evolutionary account of brain laterality, I also recommend a lecture given to The Evolutionary Psychiatry Special Interest Group of the Royal College of Psychiatrists in London (EPSIG UK) in 2018:

For more from Iain McGilchrist I also recommend this extended interview with physicist and filmmaker Curt Jaimungal, host of Theories of Everything, which premiered on March 29th:

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 “insensible perceptions are as important to [the science of minds, souls, and soul-like substances] as insensible corpuscles are to natural science, and it is just as unreasonable to reject the one as the other on the pretext that they are beyond the reach of our senses.” From the Preface to New Essays concerning Human Understanding by Gottfried Leibniz, completed around 1704 and published posthumously; translation courtesy of the Stanford Encyclopedia of Philosophy.

2 From Anthropology from a Pragmatic Point of View by Immanuel Kant, first published in 1798.

3 “The definition of Psychology may be best given… as the description and explanation of states of consciousness as such. By states of consciousness are meant such things as sensations, desires, emotions, cognitions, reasonings, decisions, volitions, and the like. Their ‘explanation’ must of course include the study of their causes, conditions, and immediate consequences, so far as these can be ascertained.” from opening paragraph of “Introduction: Body and Mind” from The Principles of Psychology, by William James, first published in 1892.

4 Extract taken from The Varieties of Religious Experience, from chapter on “The Sick Soul”.

5 Letter to his friend, Francis Child.

6 According to James, the first division of “the self” that can be discriminated is between “the self as known”, the me, and “the self as knower”, the I, or “pure ego”. The me he then suggests might be sub-divided into a constituent hierarchy: “the material me” at the lowest level, then “the social me”, and top-most “the spiritual me”. It was not until very much later, in the 1920s, that Freud fully developed his own tripartite division of the psyche into id, ego and super-ego; a division that surely owes much to James.

7

In the spring of 1876, a young man of nineteen arrived in the seaside city of Trieste and set about a curious task. Every morning, as the fishermen brought in their catch, he went to meet them at the port, where he bought eels by the dozens and then the hundreds. He carried them home, to a dissection table in a corner of his room, and—from eight until noon, when he broke for lunch, and then again from one until six, when he quit for the day and went to ogle the women of Trieste on the street—he diligently slashed away, in search of gonads.

“My hands are stained by the white and red blood of the sea creatures,” he wrote to a friend. “All I see when I close my eyes is the shimmering dead tissue, which haunts my dreams, and all I can think about are the big questions, the ones that go hand in hand with testicles and ovaries—the universal, pivotal questions.”

The young man, whose name was Sigmund Freud, eventually followed his evolving questions in other directions. But in Trieste, elbow-deep in slime, he hoped to be the first person to find what men of science had been seeking for thousands of years: the testicles of an eel. To see them would be to begin to solve a profound mystery, one that had stumped Aristotle and countless successors throughout the history of natural science: Where do eels come from?

From an article entitled “Where Do Eels Come From?” written by Brooke Jarvis, published in New Yorker magazine on May 18, 2020. https://www.newyorker.com/magazine/2020/05/25/where-do-eels-come-from

8 In “Confidence and Paranoia”, an episode from Series 1 of the BBC TV sci-fi comedy Red Dwarf, the eponymous characters form an alternative superego-id partnership, existing as physical manifestations which appear onboard as symptoms of Lister’s illness.

9 Fixing on specific erogenous zones of the body, Freud believed that libidinous desire shaped our psychological development in a very specific fashion, naturally progressing, if permitted, through early stages from oral, to anal, and, then reaching adulthood, to genital.

10 Jocasta, the queen of Thebes, is barren, and so she and her husband, the king Laius, decide to consult the Oracle of Delphi. The Oracle tells them that if Jocasta bears a son, then the son will kill his father and marry her. Later, when Jocasta does indeed have a son, Laius demands that a servant take the baby to a mountain to be abandoned, his ankles pinned together just in case. But Oracles are rarely mistaken, fate is hard to avoid, and so as it happens the servant spares the infant, giving him to a shepherd instead. Eventually, as fortune will have it, the infant is adopted by the king and queen of Corinth, and named Oedipus because of the swellings on his feet. Years pass. Then, one day Oedipus learns that the king and queen are not his parents, but when he asks them, they deny the truth. So Oedipus decides to put the question to the Oracle of Delphi instead, who, being an enigmatic type, refuses to identify his true parents, but foretells his future instead, saying that he is destined to kill his father and marry his mother. Determined to avoid this fate, Oedipus resolves not to return home to Corinth, heading to, you guessed it, Thebes instead. He comes to an intersection of three roads and meets Laius driving a chariot. They argue about who has the right of way and then, in an early example of road rage, the quarrel spills into a fight and thus Oedipus unwittingly kills his real father. Next up, he meets the sphinx, who asks its famous riddle. This is a question of life and death, all who have answered incorrectly having been killed and eaten, but Oedipus gets the answer right and so, obligingly, the sphinx kills itself instead. Having freed the people of Thebes from the sphinx, Oedipus receives the hand of the recently widowed Jocasta in marriage. All is well for a while, but then it comes to pass that Jocasta learns who Oedipus really is, and hangs herself. Then, later again, Oedipus discovers that he was the murderer of his own father, and gouges his own eyes out.

11 Sigmund Freud, The Interpretation of Dreams, chapter V, “The Material and Sources of Dreams”

12 From an essay by C.G. Jung published in CW XI, Para 520. The word ‘Raca’ is an insult translated as ‘worthless’ or ‘empty’ taken from a passage in the Sermon on the Mount from Matthew 5:22.

13 Jung described the shadow in a key passage as “that hidden, repressed, for the most part inferior and guilt-laden personality whose ultimate ramifications reach back into the realm of our animal ancestors…If it has been believed hitherto that the human shadow was the source of evil, it can now be ascertained on closer investigation that the unconscious man, that is his shadow does not consist only of morally reprehensible tendencies, but also displays a number of good qualities, such as normal instincts, appropriate reactions, realistic insights, creative impulses etc”

From Jung’s Collected Works, 9, part 2, paragraph 422–3.

14 In response to a question in an interview conducted by John Freeman just two years before Jung’s death and broadcast as part of the BBC Face to Face TV series in 1959. Having asked about Jung’s childhood and whether he had to attend church, Freeman then asked: “Do you now believe in God?” Jung replied: “Now? Difficult to answer… I know. I don’t need to believe. I know.”

15 The quote in full reads: “This is the only story of mine whose moral I know. I don’t think it’s a marvelous moral, I just happen to know what it is: We are what we pretend to be, so we must be careful about what we pretend to be.” From Mother Night (1962) by Kurt Vonnegut.

16 The Human Situation is a collection of lectures first delivered by Aldous Huxley at the University of California in 1959. These were edited by Piero Ferrucci and first published in 1978 by Chatto & Windus, London. Both extracts here were taken from his lecture on “Language”, p 172.

17 This is the premise behind Orwell’s ‘Newspeak’ used in his dystopian novel Nineteen Eighty-Four. In Chapter 5, Syme, a language specialist and one of Winston Smith’s colleagues at the Ministry of Truth, explains enthusiastically to Winston:

“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten.”

18 I should note that the idea proposed here is not altogether original: the concept of ‘linguistic relativity’ is jointly credited to linguists Edward Sapir and Benjamin Whorf who, whilst working independently, came to the parallel conclusion that (in the strong form) language determines thought or (in the weak form) language and its usage influences thought. Whorf also inadvertently created the urban myth that Eskimos have a hundred words for snow after he wrote in a popular article: “We [English speakers] have the same word for falling snow, snow on the ground, snow hard packed like ice, slushy snow, wind-driven snow – whatever the situation may be. To an Eskimo, this all-inclusive word would be almost unthinkable…” The so-called “Sapir-Whorf hypothesis” continues to inspire research in psychology, anthropology and philosophy.

19 After writing this, I then read Richard Dawkins’ The Ancestor’s Tale. Aside from being a most wonderful account of what Dawkins poetically describes as his ‘pilgrimage to the dawn of life’, here Dawkins also returns to many earlier themes of other books, occasionally moderating or further elucidating previous thoughts and ideas. In the chapter entitled ‘the peacock’s tale’ [pp 278–280], he returns to speculate more about the role memes may have had in human development. In doing so he presents an idea put forward by his friend, the philosopher Daniel Dennett, in his book Consciousness Explained, which is that local variation of memes is inevitable:

“The haven all memes depend on reaching is the human mind, but the human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds.” And is it not also implicit here that the unconscious brain will likewise be differently ‘restructured’ due to different environmental influences?

20 Barack Obama, whose own election was acclaimed by some and seen by many as proof of the American Dream, in a 2010 speech compared E pluribus unum to the Indonesian motto Bhinneka Tunggal Ika — unity in diversity.

“But I believe that the history of both America and Indonesia should give us hope. It is a story written into our national mottos. In the United States, our motto is E pluribus unum — out of many, one. Bhinneka Tunggal Ika — unity in diversity. (Applause.) We are two nations, which have traveled different paths. Yet our nations show that hundreds of millions who hold different beliefs can be united in freedom under one flag.” Press release (unedited) from The White House, posted November 10th, 2010: “remarks by the President at the University of Indonesia in Jakarta, Indonesia”

21 Summary of statistical analysis by the Center for American Progress, “Understanding Mobility in America”, by Tom Hertz, American University, published April 26th, 2006. Amongst the key findings was a discovery that “Children from low-income families have only a 1 percent chance of reaching the top 5 percent of the income distribution, versus children of the rich who have about a 22 percent chance [of remaining rich].” and that “By international standards, the United States has an unusually low level of intergenerational mobility: our parents’ income is highly predictive of our income as adults.” The report adds that “Intergenerational mobility in the United States is lower than in France, Germany, Sweden, Canada, Finland, Norway and Denmark. Among high-income countries for which comparable estimates are available, only the United Kingdom had a lower rate of mobility than the United States.”

Reproduced from an article entitled “Advertising vs. Democracy: An Interview with Jean Kilbourne” written by Hugh Iglarsh, published in Counterpunch magazine on October 23rd 2020. https://www.counterpunch.org/2020/10/23/advertising-vs-democracy-an-interview-with-jean-kilbourne/ 

22 In his follow-up to the more famous On the Origin of Species (1859) and The Descent of Man (1871), Charles Darwin reported in Chapter VI, entitled “Special Expressions of Man: Suffering and Weeping”, of his third major work The Expression of the Emotions in Man and Animals (1872), that:

I was anxious to ascertain whether there existed in any of the lower animals a similar relation between the contraction of the orbicular muscles during violent expiration and the secretion of tears; but there are very few animals which contract these muscles in a prolonged manner, or which shed tears. The Macacus maurus, which formerly wept so copiously in the Zoological Gardens, would have been a fine case for observation; but the two monkeys now there, and which are believed to belong to the same species, do not weep. Nevertheless they were carefully observed by Mr. Bartlett and myself, whilst screaming loudly, and they seemed to contract these muscles; but they moved about their cages so rapidly, that it was difficult to observe with certainty. No other monkey, as far as I have been able to ascertain, contracts its orbicular muscles whilst screaming.

The Indian elephant is known sometimes to weep. Sir E. Tennent, in describing these which he saw captured and bound in Ceylon, says, some “lay motionless on the ground, with no other indication of suffering than the tears which suffused their eyes and flowed incessantly.” Speaking of another elephant he says, “When overpowered and made fast, his grief was most affecting; his violence sank to utter prostration, and he lay on the ground, uttering choking cries, with tears trickling down his cheeks.” In the Zoological Gardens the keeper of the Indian elephants positively asserts that he has several times seen tears rolling down the face of the old female, when distressed by the removal of the young one. Hence I was extremely anxious to ascertain, as an extension of the relation between the contraction of the orbicular muscles and the shedding of tears in man, whether elephants when screaming or trumpeting loudly contract these muscles. At Mr. Bartlett’s desire the keeper ordered the old and the young elephant to trumpet; and we repeatedly saw in both animals that, just as the trumpeting began, the orbicular muscles, especially the lower ones, were distinctly contracted. On a subsequent occasion the keeper made the old elephant trumpet much more loudly, and invariably both the upper and lower orbicular muscles were strongly contracted, and now in an equal degree. It is a singular fact that the African elephant, which, however, is so different from the Indian species that it is placed by some naturalists in a distinct sub-genus, when made on two occasions to trumpet loudly, exhibited no trace of the contraction of the orbicular muscles.

The full text is uploaded here: https://www.gutenberg.org/files/1227/1227-h/1227-h.htm#link2HCH0006

23 Quote from The Expression of the Emotions in Man and Animals (1872), Chapter VIII “Joy, High Spirits, Love, Tender Feelings, Devotion” by Charles Darwin. He continues:

The circumstances must not be of a momentous nature: no poor man would laugh or smile on suddenly hearing that a large fortune had been bequeathed to him. If the mind is strongly excited by pleasurable feelings, and any little unexpected event or thought occurs, then, as Mr. Herbert Spencer remarks, “a large amount of nervous energy, instead of being allowed to expend itself in producing an equivalent amount of the new thoughts and emotion which were nascent, is suddenly checked in its flow.” . . . “The excess must discharge itself in some other direction, and there results an efflux through the motor nerves to various classes of the muscles, producing the half-convulsive actions we term laughter.” An observation, bearing on this point, was made by a correspondent during the recent siege of Paris, namely, that the German soldiers, after strong excitement from exposure to extreme danger, were particularly apt to burst out into loud laughter at the smallest joke. So again when young children are just beginning to cry, an unexpected event will sometimes suddenly turn their crying into laughter, which apparently serves equally well to expend their superfluous nervous energy.

The imagination is sometimes said to be tickled by a ludicrous idea; and this so-called tickling of the mind is curiously analogous with that of the body. Every one knows how immoderately children laugh, and how their whole bodies are convulsed when they are tickled. The anthropoid apes, as we have seen, likewise utter a reiterated sound, corresponding with our laughter, when they are tickled, especially under the armpits… Yet laughter from a ludicrous idea, though involuntary, cannot be called a strictly reflex action. In this case, and in that of laughter from being tickled, the mind must be in a pleasurable condition; a young child, if tickled by a strange man, would scream from fear…. From the fact that a child can hardly tickle itself, or in a much less degree than when tickled by another  person, it seems that the precise point to be touched must not be known; so with the mind, something unexpected – a novel or incongruous idea which breaks through an habitual train of thought – appears to be a strong element in the ludicrous.

24 Quote from Leviathan (1651), The First Part, Chapter 6, by Thomas Hobbes (with italics and spelling as original). Hobbes continues:

And it is incident most to them, that are conscious of the fewest abilities in themselves; who are forced to keep themselves in their own favour, by observing the imperfections of other men. And therefore much Laughter at the defects of others is a signe of Pusillanimity. For of great minds, one of the proper workes is, to help and free others from scorn; and compare themselves onely with the most able.

Interestingly, Hobbes then immediately offers his account of weeping as follows:

On the contrary, Sudden Dejection is the passion that causeth WEEPING; and is caused by such accidents, as suddenly take away some vehement hope, or some prop of their power: and they are most subject to it, that rely principally on helps externall, such as are Women, and Children. Therefore, some Weep for the loss of Friends; Others for their unkindnesse; others for the sudden stop made to their thoughts of revenge, by Reconciliation. But in all cases, both Laughter and Weeping, are sudden motions; Custome taking them both away. For no man Laughs at old jests; or Weeps for an old calamity.

https://www.gutenberg.org/files/3207/3207-h/3207-h.htm#link2H_PART1

25 “Il n’y a qu’un problème philosophique vraiment sérieux : c’est le suicide.” Quote taken from The Myth of Sisyphus (1942) by Albert Camus, translated by Justin O’Brien.

26 In Greek mythology Sisyphus was punished in hell by being forced to roll a huge boulder up a hill only for it to roll down every time, repeating his action for eternity. In his philosophical essay The Myth of Sisyphus (1942) Camus compares this unremitting and unrewarding task of Sisyphus to the lives of ordinary people in the modern world, writing:

“The workman of today works every day in his life at the same tasks, and this fate is no less absurd. But it is tragic only at the rare moments when it becomes conscious.”

In sympathy he also muses on Sisyphus’ thoughts especially as he trudges in despair back down the mountain to collect the rock again. He writes:

“You have already grasped that Sisyphus is the absurd hero. He is, as much through his passions as through his torture. His scorn of the gods, his hatred of death, and his passion for life won him that unspeakable penalty in which the whole being is exerted toward accomplishing nothing. This is the price that must be paid for the passions of this earth. Nothing is told us about Sisyphus in the underworld. Myths are made for the imagination to breathe life into them.”

Continuing:

“It is during that return, that pause, that Sisyphus interests me. A face that toils so close to stones is already stone itself! I see that man going back down with a heavy yet measured step toward the torment of which he will never know the end. That hour like a breathing-space which returns as surely as his suffering, that is the hour of consciousness. At each of those moments when he leaves the heights and gradually sinks toward the lairs of the gods, he is superior to his fate. He is stronger than his rock.

“If this myth is tragic, that is because its hero is conscious. Where would his torture be, indeed, if at every step the hope of succeeding upheld him? The workman of today works everyday in his life at the same tasks, and his fate is no less absurd. But it is tragic only at the rare moments when it becomes conscious. Sisyphus, proletarian of the gods, powerless and rebellious, knows the whole extent of his wretched condition: it is what he thinks of during his descent. The lucidity that was to constitute his torture at the same time crowns his victory. There is no fate that can not be surmounted by scorn.”

You can read the extended passage here: http://dbanach.com/sisyphus.htm

27 Søren Kierkegaard never actually coined the term “leap of faith” although he did use the more general notion of “leap” to describe situations whenever a person is faced with a choice that cannot be fully justified rationally. Moreover, in this instance the “leap” is perhaps better described as a leap “towards” or “into” faith that finally overcomes what Kierkegaard saw as an inherent paradoxical contradiction between the ethical and the religious. However, Kierkegaard never advocates “blind faith”, but instead recognises that faith ultimately calls for action in the face of absurdity.

In Part Two, “The Subjective Issue”, of his 1846 work and impassioned attack against Hegelianism, Concluding Unscientific Postscript to the Philosophical Fragments (Danish: Afsluttende uvidenskabelig Efterskrift til de philosophiske Smuler), which is known for its dictum, “Subjectivity is Truth”, Kierkegaard wrote:

“When someone is to leap he must certainly do it alone and also be alone in properly understanding that it is an impossibility… the leap is the decision… I am charging the individual in question with not willing to stop the infinity of [self-]reflection. Am I requiring something of him, then? But on the other hand, in a genuinely speculative way, I assume that reflection stops of its own accord. Why, then, do I require something of him? And what do I require of him? I require a resolution.”

28 R. D. Laing, The Politics of Experience  (Ballantine Books, N.Y., 1967)

29 Quoted from the book known as Zhuangzi (also transliterated as Chuang Tzu or Chuang Chou). Translation by Lin Yutang

30 Although in all likelihood a reworking of a passage from a book entitled The Metaphoric Mind: A Celebration of Creative Consciousness written by Bob Samples and published in 1976 in which the fuller passage reads [with emphasis added]:

“The metaphoric mind is a maverick. It is as wild and unruly as a child. It follows us doggedly and plagues us with its presence as we wander the contrived corridors of rationality. It is a metaphoric link with the unknown called religion that causes us to build cathedrals — and the very cathedrals are built with rational, logical plans. When some personal crisis or the bewildering chaos of everyday life closes in on us, we often rush to worship the rationally-planned cathedral and ignore the religion. Albert Einstein called the intuitive or metaphoric mind a sacred gift. He added that the rational mind was a faithful servant. It is paradoxical that in the context of modern life we have begun to worship the servant and defile the divine.”

31 The book is subtitled The Divided Brain and the Making of the Western World


keep taking the tablets

The following article is Chapter Four of a book entitled Finishing The Rat Race.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a table of contents and a preface on why I started writing it.

*

“Psychiatry could be, or some psychiatrists are, on the side of transcendence, of genuine freedom, and of true human growth. But psychiatry can so easily be a technique of brainwashing, of inducing behaviour that is adjusted, by (preferably) non-injurious torture. In the best places, where straitjackets are abolished, doors are unlocked, leucotomies largely forgone, these can be replaced by more subtle lobotomies and tranquillizers that place the bars of Bedlam and the locked doors inside the patient.”  

— R. D. Laing in a later preface to The Divided Self. 1

*

A few notes of caution before proceeding:

From this point onwards I shall use the words ‘madness’ and ‘insanity’ interchangeably, to denote mental illness of different kinds in an entirely general and overarching way. Beyond the shorthand, I have adopted this approach for two principal reasons. Given the nature of the field and on the basis of historical precedent, technical labels tend to be transitory and soon superseded, so traditional and non-technical language avoids the need to grapple with the elaborate definitions found in the diagnostic manuals of psychiatry (more on these later), while also keeping clear of the euphemism treadmill. Moreover, the older terms have a simplicity which, if used with sensitivity, bestows weight on the day-to-day misery of mental illness and dignifies its suffering. R. D. Laing, who spent a lifetime treating patients with the most severe schizophrenia, talked unflinchingly about ‘madness’. A flawed genius, Laing is someone I return to in the final section of the chapter.

The second point I wish to highlight is that illnesses associated with the workings of the mind will sadly, in all likelihood, remain a cause of social prejudice and discrimination. In part this is due to the detrimental effect mental illness has on interpersonal relationships. And since ‘the person’ – whatever this entity can be said to fully represent – is presupposed to exist in a kind of one-to-one equivalence with the mind, it is basically taken for granted not only that someone’s behaviour correlates with unseen mental activity, but that it is an expression of that person’s character. Indeed, person, mind and behaviour are usually apprehended as a sort of coessential three-in-one.

All forms of suffering are difficult to face, of course, for loved ones as for the patient; our degree of separation, however, becomes heightened once someone’s personality is significantly altered through illness. I contend that beyond these often practical concerns, there are further barriers that lie in the way of our full acceptance of mental illness, ones automatically instilled by everyday attitudes and opinions that may cause us to register a greater shock when faced with the sufferings of an unsound mind; some features of the disease not just directly clashing with expectations of acceptable human behaviour, but on occasion threatening fundamental notions of what it means to be human.

For these reasons mental illness tends to isolate its victims. Those who in all likelihood are suffering profound existential detachment become further detached from ordinary human contact. In extreme circumstances, mental illness makes its victims appear as monstrosities – the freaks whom ordinary folk once visited asylums simply to gawp at, when it only cost a shilling to see “the beasts” rave at Bedlam, as London’s Bethlem Royal Hospital was once known. 2 Whom the gods would destroy they first make mad, the ancient saying goes 3, and it is difficult indeed to conjure up any worse fate than this.

*

Before returning to the main issues around mental illness, I wish to briefly consider changing societal attitudes toward behaviour in general. The ongoing trend for many decades has been for society to become more tolerant of alternative modes of thinking and acting. Indeed, a plethora of interpersonal norms have either lapsed altogether or are now regarded as old-fashioned and outmoded, with others already in the process of slow abandonment. For successive generations, the youth has looked upon itself as more liberated than its parents’ generation, which it then regards, rightly or wrongly, as repressive and rigid.

To cite a rather obvious example, from the 1950s onwards sex has been gradually and almost totally unhitched from marriage, and commensurate with this detachment there is more and more permission – indeed encouragement – to be sexually experimental: yesterday’s magnolia has been touched up to include a range of fifty thousand shades of grey! 4 But the zone of the bedroom is perhaps the exception rather than the rule, and outside its liberally sanctioned walls much that was seen as transgressive remains so, and in fact continues to be either prohibited by law or else proscribed by custom or just ‘plain common sense’ – thus we are constrained by restrictions sometimes imposed for perfectly sound reasons and by others that lack clear ethical or rational justification.

Arguably indeed, there are as many taboos today as yesterday informing our oftentimes odd and incoherent relationships to our own bodies and minds. As another illustrative example, most of us have probably heard how the Victorians were so prudish that they would conceal the nakedness of their piano legs behind little skirts of modesty (in fact an urban myth), when it is surely more scandalous (at least by today’s standards) that over the counter at the local apothecary, drugs including laudanum (tincture of opium) were freely available to all.

It seems indeed that just as we loosened restraints on sexuality, new anxieties began to spring up concerning our relationship with our bodies as such. Suddenly perhaps we had more to measure up to, especially once all the bright young (and rather scantily-clad) things began to parade themselves indecorously if alluringly throughout our daily lives: ubiquitous in movies, on TV, billboards, and in magazines and newspapers. The most intriguing aspect of this hypersexualisation, however, is that modern society has simultaneously remained prudish in many other regards, most curiously in the case of public nudity; an ‘indecency’ that goes completely unrecognised within so-called primitive societies.

In parallel with these changes, our own culture, which increasingly fixates on youthfulness, has simultaneously fallen into the habit of marginalising old age and death. Not that death, as often presumed, now represents our final unuttered taboo, because arguably more shunned even than death is madness, and presumably because its spectre remains so uniquely terrifying to us.

The overarching point is that no society, however permissive, is ever well-disposed toward individuals who fail to measure up to established norms. The rule is perfectly straightforward in fact: in every society and throughout historical times, social deviants are prone to be ostracised. And as a rule, this applies whether one’s behavioural aberrance is a matter of personal choice or not.

I conjecture, moreover, that our abhorrence of madness is actually informed by the very biological classification of our species and sub-species: Homo sapiens sapiens. The wise, wise man! By which we discreetly imply (in our determinedly positivist account) the rational, rational man! Thus, to “lose your mind”, as we often say colloquially, involves the loss of the singular vital faculty – dare I say our ‘essential’ faculty? – the very thing that taxonomically defines us.

Of course, we are trespassing on hugely controversial territory and into areas I am (by profession) totally unqualified to enter. This must be conceded; nevertheless, I do have privileged access when it comes to entering and exploring the field, as do you. For we all have insider knowledge and a deeply vested interest when it comes to comprehending the intricate activities of human consciousness, while no-one has the superhuman immunity that ensures perfect mental health – indeed, most of us quietly experience episodes, whether passing or more prolonged, when our minds go a little wonky.

Lastly then, my real purpose is not to dwell on what madness may be, but, arguably more importantly, to consider the consequences of being treated as mad; and in both senses of ‘treated’. So let’s just slip into these white coats. Ready…? Now to begin some informal examination of this rather delicate matter that is of such immediate and absolutely central importance.

*

I        Sorting the sheep from the goats

“Not all who rave are divinely inspired” – Morris Raphael Cohen

*

“The sole difference between myself and a madman is the fact that I am not mad!” said Salvador Dalí. 5 Dalí, with his dangerous flair for showmanship, was keen to impress upon his audience the exceptionally deranged quality of his genius, yet this well-known quip appeals in part because genius and madness are already romantically entwined, especially in the popular imagination.

Genius equates to madness presumably because both elude ordinary forms of thinking, and thus, so a rather banal accountancy goes, genius appears as madness when it is anything but. Alternatively, however, and as Dalí intimates, genius truly is a form of madness, at least for some. The artistic visionary in particular draws inspiration, if not from literal hallucinatory visions – as the poet William Blake did – then from the upwelling of deep and uncertain psychological forces within.

Fascinated by the half-light and the liminal, impelled upon occasion to peer into the abyss, the genius, in extreme cases, will indeed tread close to the verge of madness. Yet most geniuses have not gone mad, nor does genius seem especially vulnerable or susceptible to such self-destructive forces. Even amongst the greatest artists, the exceptions prove the rule – the manic depression of Vincent van Gogh, the profound melancholia of Robert Schumann, the self-destructive alcoholism of Jackson Pollock (and it is noteworthy that van Gogh had a taste for the more deadly alcoholic beverage absinthe), the severe neurosis of Edvard Munch (another excessive drinker), and the depression and tragic suicide of Sylvia Plath. There is nothing, however, to suggest that Shakespeare or Bach were anything other than entirely sane, or that Mozart, Goethe and Beethoven suffered from frailties or maladies of any lasting psychological kind. The same goes for such modern masters as Picasso, Matisse, Stravinsky and Mahler – though Mahler did consult Sigmund Freud once for advice on a marital crisis shortly before he died. I could go on and on listing countless sane individuals who excelled in the field of the arts or in other disciplines – indeed Salvador Dalí was another: madness for Dalí being primarily an affectation, as cultured and considered as his trademark moustache, rather than a debilitating affliction.

The problem with all romanticised notions of insanity, especially those that uphold insanity as the more honest and thus more valid response to an insane world, is twofold. Not only does it detract from the terrible suffering of those victims most truly lost to the world, but also, and vitally, it mistakes madness for freedom. And there is still a further step. Since madness appears to be a natural manifestation, the most extreme of romanticists have fervently contended that, far from being delusional, such alternative awareness is no less valid, indeed more valid, than more normalised and thus artificial states of domesticated consciousness. This is a wonderfully tempting fancy for all of us who’ve ever had concerns over a loosening “grip on reality”. Consider, for instance, the following syllogistic fallacy: all geniuses are mad, I’m mad ergo…
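Spelled out in the standard form logicians use (a minimal sketch only, with G standing for the class of geniuses and M for the mad), this is the old fallacy of the undistributed middle:

\[
\text{All } G \text{ are } M;\qquad \text{I am } M;\qquad \therefore\ \text{I am a } G.
\]

Both premises can be granted and the conclusion still fails, since the class M is far larger than the class G.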

But this again is a very lazy method for cancelling madness, in which unpleasant reality is cheaply dismissed out of sheer arithmetic convenience, and the two negatives – the horrors of the world and the terrors of the mind – are simply deemed to sum to zero. It isn’t good enough to say that madness doesn’t exist, or that madness does exist but is natural and thus wholesome, or even that madness is really just sanity in disguise. That said, and albeit in a more inspirational way, Dalí is speaking for most of us. For the greatest barrier keeping many of us outside the padded cell is that, like him, “we are not mad”.

*

If sanity and insanity exist, how shall we know them? The question is neither capricious nor itself insane.

So begins a paper published in the journal Science in January 1973 and written by David L. Rosenhan, a Professor of Psychology at Stanford University. The “Rosenhan experiment”, as it is now known, in fact involved two related studies, the first of which was certainly one of the most daring ever conducted in the social sciences.

Rosenhan would send seven mentally healthy volunteers, with himself making eight, on a mission to be admitted as patients within the American psychiatric system. These eight courageous ‘pseudopatients’ soon afterwards arrived at the doors of selected hospitals with instructions to say only that they were hearing a voice which pronounced these three words: “empty”, “hollow” and, most memorably, “thud”. If admitted, the volunteers were then further instructed to act completely normally and to say that they had had no recurrence of those original symptoms. 6

What transpired came as a surprise, not least to Rosenhan himself. Firstly, although none of the volunteers had any prior history of mental illness and none were exhibiting behaviour that could be deemed seriously pathological in any way – Rosenhan having ensured that “[t]he choice of these symptoms was also determined by the absence of a single report of existential psychoses in the literature” – every one of his ‘pseudopatients’ was admitted and so became a real patient. More alarmingly, and as each quickly realised, they had landed themselves in a seemingly intractable catch-22 situation: for how does anyone prove their sanity, once certified insane?

If you say that you are fine, then who is to decide whether your expressed feelings of wellness are themselves delusional? It was certainly not lost on Rosenhan that this is a position all psychiatric patients inevitably find themselves in. In the event, it would take the eight ‘pseudopatients’ almost three weeks on average (19 days to be precise, and in one instance 52 days) to convince the doctors that they were sane enough to be discharged. But it didn’t end there, because all but one were finally discharged with a diagnosis of schizophrenia “in remission”, and as Rosenhan notes:

The label “in remission” should in no way be dismissed as a formality, for at no time during any hospitalization had any question been raised about any pseudopatient’s simulation. Nor are there any indications in the hospital records that the pseudopatient’s status was suspect. Rather, the evidence is strong that, once labeled schizophrenic, the pseudopatient was stuck with that label. If the pseudopatient was to be discharged, he must naturally be “in remission”; but he was not sane, nor, in the institution’s view, had he ever been sane. 7

For a second experiment, Rosenhan then cleverly turned the tables. With results from his first test released, he now challenged a different research and teaching hospital, whose staff fervently denied that they would have made comparable errors, telling them that over a period of three months he would send an undisclosed number of new ‘pseudopatients’ and that it was up to them to determine which patients were the imposters. Instead Rosenhan sent no one:

Judgments were obtained on 193 patients who were admitted for psychiatric treatment. All staff who had had sustained contact with or primary responsibility for the patient – attendants, nurses, psychiatrists, physicians, and psychologists – were asked to make judgments. Forty-one patients were alleged, with high confidence, to be pseudopatients by at least one member of the staff. Twenty-three were considered suspect by at least one psychiatrist. Nineteen were suspected by one psychiatrist and one other staff member. Actually, no genuine pseudopatient (at least from my group) presented himself during this period. 8

Rosenhan provocatively, although accurately, titled his paper “On being sane in insane places”. The results of his study not only undermined the credibility of the entire psychiatric establishment, but his main conclusion, that “we cannot distinguish the sane from the insane in psychiatric hospitals”, touched on a far bigger issue. For aside from challenging existing methods of diagnosis, and calling into question the treatment and stigmatisation of mental illness – in view of what he described in the paper as “the stickiness of psychodiagnostic labels” 9 – the results of his study more fundamentally (and thus controversially) cast doubt on whether psychological ‘normality’ can ever be differentiated decisively from ‘abnormality’ in all instances. Buried within his paper, Rosenhan posits:

… there is enormous overlap in the behaviors of the sane and the insane. The sane are not “sane” all of the time. We lose our tempers “for no good reason.” We are occasionally depressed or anxious, again for no good reason. And we may find it difficult to get along with one or another person –  again for no reason that we can specify. Similarly, the insane are not always insane.

So the ‘sane’ are not always ‘sane’ and the ‘insane’ are not always ‘insane’, although Rosenhan never leaps to the erroneous conclusion (as others have and do) that there is no essential difference between sanity and insanity. He simply responds to the uncomfortable facts as revealed by his studies and implores other professionals who are involved in the care and treatment of psychiatric patients to be extra vigilant. Indeed, he opens his paper as follows:

To raise questions regarding normality and abnormality is in no way to question the fact that some behaviors are deviant or odd. Murder is deviant. So, too, are hallucinations. Nor does raising such questions deny the existence of the personal anguish that is often associated with “mental illness.” Anxiety and depression exist. Psychological suffering exists. But normality and abnormality, sanity and insanity, and the diagnoses that flow from them may be less substantive than many believe them to be.

So although his admittedly small experiment had objectively undermined the credibility of both the academic discipline and the clinical practice of psychiatry, his conclusions remained circumspect (no doubt he wished to tread carefully), with the closing remarks to his paper as follows:

I and the other pseudopatients in the psychiatric setting had distinctly negative reactions. We do not pretend to describe the subjective experiences of true patients. Theirs may be different from ours, particularly with the passage of time and the necessary process of adaptation to one’s environment. But we can and do speak to the relatively more objective indices of treatment within the hospital. It could be a mistake, and a very unfortunate one, to consider that what happened to us derived from malice or stupidity on the part of the staff. Quite the contrary, our overwhelming impression of them was of people who really cared, who were committed and who were uncommonly intelligent. Where they failed, as they sometimes did painfully, it would be more accurate to attribute those failures to the environment in which they, too, found themselves than to personal callousness. Their perceptions and behaviors were controlled by the situation, rather than being motivated by a malicious disposition. In a more benign environment, one that was less attached to global diagnosis, their behaviors and judgments might have been more benign and effective. 10

*

Before pursuing this matter by delving into deeper complexities, I would like to reframe the central concept almost algebraically. In this regard I am taking the approach of the stereotypical physicist in the joke, who, when asked how milk production on a dairy farm might be optimised, sets out his solution to the problem as follows: “Okay – so let’s consider a spherical cow…” 11

By applying this spherical cow approach to psychiatry, I have produced the following three crude equivalences, which are listed below (each accompanied by brief explanatory notes).

#1. Insanity = abnormality

Normality, a social construct [from etymological root ‘right-angled’], implies conventionality, conformity and being in good relation to the orthodoxy [from orthos ‘straight or right’] such that a person is adjudged sane when they appear to be well-balanced, rational, and functional.

#2. Insanity = unhealthiness

Health, a medical consideration [from root ‘whole’] indicates a lack of pathology and in this case emphasises something akin to good mental hygiene. ‘Health’ in the sense of mental health will correspond to low levels of stress and anxiety; high self-awareness and self-assuredness; to happiness and well-being.

And lastly,

#3. Insanity = psychological maladjustment to reality [from late Latin realis ‘relating to things’], with emphasis here placed on authenticity and realism as opposed to fantasy and delusion.

There is, of course, a good measure of crossover between these three pseudo-identities. For instance, if you are ‘normal’ (i.e., adjusted to society) then you have a greater likelihood of being ‘happy’ than if you are at variance. Moreover, if you’re well-adjusted socially, society as a whole will likely attest to you being ‘well adjusted’ in a broader psychological sense, because ‘reality’ is always to some extent socially construed. Imagine, for instance, being suddenly transported to the caste-ossified and demon-haunted worlds of the Late Middle Ages; would people deemed sane today be thought sane as they disembarked from our imagined time machine, and would they stay sane for long? 12

I have included this rather crude and uncertain section in order to highlight how appearances of ‘madness’ and ‘sanity’ can often be coloured by alternative societal interpretations. As we venture forward, keep this in mind too: societal influences that shape and inform the prevailing notions of ‘normality’, ‘reality’ and even ‘happiness’ are more often latent than manifest.

“Happiness”: the story of a rodent’s unrelenting quest for happiness and fulfilment by Steve Cutts.

*

Did you ever stride warily over the cracks in the pavement? Have you crossed your fingers, or counted magpies, or stepped around a ladder, or perhaps ‘touched wood’ to ward off some inadvertently tempted fate? Most of us have. Are we mad? Not really, just a little delusional perhaps. Though does superstition itself contain the kernel of madness?

What if that compulsion to step across the cracks becomes so tremendous that the pavement exists as a seething patchwork of uncertain hazards? Or if we really, really feel the urge to touch the wooden object over and over until our contact is quite perfect and precise. When the itch is so irresistible and the desire to scratch quite unbearable, this otherwise silly superstition embroils the sufferer (today diagnosed with Obsessive Compulsive Disorder or OCD) in extended rituals that must be fastidiously completed; a debilitating affliction in which everyday routine becomes a torment as life grinds nearly to a halt, the paralysed victim reduced to going round and round interminably in the completely pointless loops of their own devising: life reduced to a barmy and infuriating assault course that is nearly impossible to complete.

As a child, to entertain yourself, did you ever look out for familiar shapes within the amorphous vapour of clouds or the random folds of a curtain? Doubtless you looked up into the night sky to admire the ‘Man in the Moon’, or if you are Chinese, then to spot the rabbit. Both are wrong, and right – connecting the dots being a marvellous human capacity that allows us to be creators extraordinaire. Yet the same aptitude holds the capacity to drive us literally crazy. How about those monsters at the back of your wardrobe or lurking in wait under the bed… and did the devil live around the U-bend of the toilet ready to leap out and catch you if you failed to escape before the flush had ended? It is fun to indulge in such fantasies. Everyone loves a ghost story.

Not that reconstructing faces or other solid forms where none exist involves hallucinating in the truest sense. However, these games, or harmless tics of pattern recognition – which psychologists call pareidolia – do involve our latent faculty for hallucination; a faculty that is more fully expressed in dreams, or in the images that appear just as we are falling asleep and just as we are waking: technically described as hypnagogic and hypnopompic respectively. Some of us also hear imaginary things: and not only “things that go bump in the night”, but occasionally things that go bang upon waking (or on the brink of sleeping). This highly disconcerting experience even has the technical name “exploding head syndrome” – just to let you know, in case you ever suffer from it. Alongside still more frightening and otherworldly apparitions (the worst ones are usually associated with sleep paralysis), auditory hallucinations happen to billions of perfectly sober and otherwise sane individuals.

In fact, it is now known that about one percent of people with no diagnosed mental health problem hear voices on a regular basis – approximately equivalent to the proportion of people who are diagnosed with schizophrenia (and it is important to note here that not all schizophrenics hear voices, nor is schizophrenia the only mental illness in which hearing voices is a symptom). Within the general population, still more of us have fleeting episodes of hearing voices, while very nearly everyone will at some time experience the auditory hallucination of voices on the brink of sleep and waking.

Of course in a different though related sense, we all hear voices: the familiar inner voice that speaks softly as we think, as we read and perhaps as we console ourselves. And how many of us articulate that voice by talking to ourselves from time to time? As young children between the ages of two and eight we all would have done so. Then sometimes, as we literally speak our minds, we also find ourselves listening attentively to what we ourselves have just said aloud in these unaccompanied chinwags; although catching yourself fully in the act as an adult can often come as a bit of a shock – but a shock to whom exactly? So are we mad to talk to ourselves… or, as the joke would have it, just seeking a more intelligent conversation!

In talking to ourselves we immediately stumble upon a remarkable and unexpected division in consciousness too. One—self becomes two selves. The ‘I’ as subjective knower abruptly perceiving a ‘me’ as a separate entity – perhaps this known ‘me’ perceived by the knower ‘I’ is deemed worthy of respect (but perhaps not, the knower can decide!) Curiously this is not just a mind becoming vividly aware of its existence as a manifestation (modern science would say ‘epiphenomenon’, as if this is an adequate explanation) of the brain-body (and such consciousness of the material self is strange enough), but the mind becoming literally self-aware and this self-awareness having endlessly self-reflecting origins, since if ‘I’ begin to think about ‘me’ then there can now exist a further ‘I’ which is suddenly aware of both the original knower and the already known. Fuller contemplation of this expanding hall of mirrors where the self also dwells is very possibly a road to madness: yet this habit of divorcing ‘I’ from ‘me’ is a remarkably familiar one. As usual, our language also gives us away: we “catch ourselves” in the act, afterwards commenting “I can’t believe I did it!” But what if our apprehension of the one—self becomes more broken still, and our sense of being can only be perceived as if refracted through shattered glass: the splintered fragments of the anticipated ‘me’ (whatever this is) appearing horrifically other?

Perhaps we’ve even had intimations of a feeling that we are entirely disconnected from every other part of the universe, and have then felt profoundly and existentially cast adrift with no recall of who we are. Such altered states of detachment are known in psychology as ‘dissociation’ and are not uncommon, especially to those with any appetite for ‘recreational substances’. Even alcohol is known to sometimes elicit temporary dissociation. And if these are representative of some of our everyday brushes with madness, then what of our more extended nocturnal lapses into full-blown irrationality: the hallucinations we call dreams and nightmares, and those altogether more febrile deliriums that occasionally take hold when we are physically ill?

These are the reflections of Charles Dickens, after one of his night walks brought on by insomnia led him to nocturnal contemplation of Bethlehem Hospital:

Are not the sane and the insane equal at night as the sane lie a dreaming? Are not all of us outside this hospital, who dream, more or less in the condition of those inside it, every night of our lives? Are we not nightly persuaded, as they daily are, that we associate preposterously with kings and queens, emperors and empresses, and notabilities of all sorts? Do we not nightly jumble events and personages and times and places, as these do daily? Are we not sometimes troubled by our own sleeping inconsistencies, and do we not vexedly try to account for them or excuse them, just as these do sometimes in respect of their waking delusions? Said an afflicted man to me, when I was last in a hospital like this, “Sir, I can frequently fly.” I was half ashamed to reflect that so could I by night. Said a woman to me on the same occasion, “Queen Victoria frequently comes to dine with me, and her Majesty and I dine off peaches and maccaroni in our night-gowns, and his Royal Highness the Prince Consort does us the honour to make a third on horseback in a Field-Marshal’s uniform.” Could I refrain from reddening with consciousness when I remembered the amazing royal parties I myself had given (at night), the unaccountable viands I had put on table, and my extraordinary manner of conducting myself on those distinguished occasions? I wonder that the great master who knew everything, when he called Sleep the death of each day’s life, did not call Dreams the insanity of each day’s sanity. 13

Meanwhile, obsessing over trifling matters is a regular human compulsion. The cap is off the toothpaste. The sink is full of dishes. That’s another tin gone mouldy in the fridge… during times when our moods are most fraught, seething with dull anger and impatient to explode at the slightest provocation, it is the fridge, the sink and the toothpaste that fill our heads with troubles. Presumably again there is a limit beyond which such everyday obsessing becomes pathological. Indeed, I dare to suggest that obsessing over mundanities may be a kind of displacement activity: another distraction from the greatest unknown we all face – our certain endpoint with its dread finality. For we may, not without justification, dread our entire future; and with it the whole world outside our door: just as we may with due reason, based on past experiences, panic at the prospect of every encounter.

But whereas normal levels of fear act as a helpful defence mechanism and a necessary hindrance, the overbearing anxiety of the neurotic comes to stand in full opposition to life. Likewise, although indignation can be righteous and rage too is warranted on occasions, a constantly seething ill temper that seldom settles is corrosive to all concerned. In short, once acute anxiety and intense irritability worsen in severity and manifest as part of a chronic condition, life is irredeemably spoiled; at still greater severity, anxiety and anger will likely be treated as symptoms of a psychiatric condition. The threshold to mental illness is once again crossed, but whereabouts was the crossing point?

Each of us has doubtless succumbed to moments of madness, and not just momentary lapses of reason, but perhaps entered into more extended periods when we have been caught up in obsessive and incoherent patterns of thought and behaviour. Loops of loopiness. Moreover, the majority of us will have had occasions of suicidal ideation, which again remain unspoken in part because they signal a psychological frailty that may point to a deeper pathology, or be mistaken as such. Because madness is not really such a faraway and foreign country, and even the sanest amongst us (so far as this can be judged) are from time to time permitted entry at its gates.

*

II       Conspiracies against the laity

“That a dictator could, if he so desired, make use of these drugs for political purposes is obvious. He could ensure himself against political unrest by changing the chemistry of his subjects’ brains and so making them content with their servile condition. He could use tranquillizers to calm the excited, stimulants to arouse enthusiasm in the indifferent, hallucinants to distract the attention of the wretched from their miseries. But how, it may be asked, will the dictator get his subjects to take the pills that will make them think, feel and behave in the ways he finds desirable? In all probability it will be enough merely to make the pills available.”

— Aldous Huxley 14

*

In earlier chapters I have discussed how science is soon out of its depth when it comes to understanding the mind and states of consciousness, because the province of science is restricted to phenomena that not only can be observed and unambiguously categorised, but thereafter measured with known precision and modelled to an extent that is reliably predictive. Of course, hidden within that statement is an awful lot of maths; however, the use of maths is not the issue here, measurement is.

For measurement becomes scientifically applicable once and only once there is a clear demarcation between the quantities we wish to measure. Length and breadth are easy to separate; time and space, likewise. The same applies to many physical properties – to all of the quantities that physicists and chemists take for granted, in fact.

When we come to psychology and psychiatry we are likewise constrained. Brain-states are measurable, and so we investigate these and then attempt to map our findings back onto sense-impressions, memories and moods. For instance, if we locate a region of the brain where these sense-impressions, memories and moods can be stimulated, then we can begin the partial mapping of conscious experience onto brain-states. But we still have not analysed consciousness itself. Nor do we know how brain-states permit volition – the choice of whether to move, and how and where to move, or, just as importantly, the freedom to think new thoughts. In short, how does our brain actually produce our states of mind, our personalities, and the entity we each call I? As neurologist Oliver Sacks noted in his book A Leg to Stand On, in which he drew on his personal experience of a freak mountaineering accident to consider the physical basis of personal identity:

Neuropsychology, like classical neurology aims to be entirely objective, and its great power, its advances, come from just this. But a living creature, and especially a human being, is first and last active – a subject, not an object. It is precisely the subject, the living ‘I’, which is being excluded. Neuropsychology is admirable, but it excludes the psyche – it excludes the experiencing, active, living ‘I’ 15

We as yet have no grounds whatsoever to suppose that science will ever be able to objectively observe and measure states of consciousness. In fact, what would that actually entail? For we do not have even the slightest inkling what consciousness is; nor, far more astonishingly, do we yet understand how consciousness is routinely and reversibly switched off with the use of general anaesthetics, even though general anaesthetics have been widely and effectively used in surgery for over a century and a half.

Moreover, having acknowledged its non-measurability, some scientists consider it permissible to casually relegate consciousness to the status of an epiphenomenon. That is, science takes the singular certainty of our everyday existence and declines to take any serious interest in its actual reality; in the most extreme case, proclaiming that it is purely illusory… Now think about that for a second: how can you have the ‘illusion of consciousness’? For what vehicle other than a conscious one can support or generate any kind of illusion at all? Although language permits us to frame the idea, it is inherently self-contradictory, and proclaiming the illusoriness of consciousness is akin to deciding on the insubstantiality of substance or the unwetness of water.

South African psychoanalyst and neuropsychologist Mark Solms, who has devoted his career to reconnecting these scientific disciplines, here makes a persuasive case, built upon studies of brain-damaged patients, that the source of consciousness cannot lie within the higher-level cortex, as has been generally supposed, but instead involves mechanisms operating within the brain stem:

Furthermore, the literal root of our modern terms ‘psychology’, ‘psychoanalysis’ and ‘psychiatry’ is the Greek word ‘psyche’, with its origins in ‘spirit’ and ‘soul’, and yet each of these disciplines has altogether abandoned that older conception in order to bring a strictly biomedical approach to questions of mind. No longer divorced from the brain, mind is thus presumed to be nothing more or less than the outputs of brain function, and so the task of today’s clinicians becomes one of managing these outputs by means of physical or chemical adjustments. To these ends, the origins and causes of mental illness are often presumed to be fully intelligible and detectable in abnormalities of brain physiology and most specifically in brain chemistry – something I will discuss in greater detail.

Taking such a deeply biochemical approach to mental illness also leads inexorably to questions of genetics, since there is no doubt that genes do predispose every person to certain illnesses, and so, with regard to the issue at hand, we might envisage some kind of psychological equivalent to the physical immune system. There is indeed no controversy in saying that the individual propensity to suffer mental illness varies, or that, if you prefer, we inherit differing levels of psychological immunity. Some people are simply more resilient than the average, and others less so, and this difference in propensity – one’s ‘psychological immune system’ – is to some extent innate.

Of course, if genetic propensity were the primary determinant of rates of mental illness, then within any given gene pool we ought to expect steady rates of diagnosis, given that variations within any gene pool arise comparatively slowly and over multiple generations. Evidently genetics alone cannot therefore explain any kind of sudden and dramatic rise in the incidence of health problems, whether mental or otherwise. One note of caution here: the newer field of epigenetics may yet have something to add to this discussion.

But psyche, to return to the main point, is not a purely biological phenomenon determined solely by genetics and other wholly material factors such as diet, levels of physical activity and so forth. For one thing, mind has an inherent and irreducible social component, and this is the reason solitary confinement and similar forms of deprivation of social stimulus are exceedingly cruel forms of punishment. Taking the still more extreme step of subjecting a victim to the fullest sensory deprivation becomes a terrifying form of torture, and one that rapidly induces psychological breakdown. All of this is well established, and yet still the scientific tendency is to treat minds as just highly sophisticated programmes running on the wetware of our brains. But the wetware, unlike the hardware and software of this computer in front of me, possesses both subjectivity and agency. Put the other way around: the brain isn’t the conscious agent; you are. And it is equally true to say, as the great theoretical physicist Max Planck elegantly pointed out, that consciousness is absolutely foundational:

I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness. 16

Planck is precisely right to say we cannot get behind consciousness. And by everything he quite literally means everything, including of course the brain, although unfortunately we are very much in the bad habit of forgetting this glaring fact.

With developments in neurology and biochemistry, science becomes ever more accomplished at measuring brain function and, with increasing refinement, at altering it, and in doing so, at altering states of consciousness. Yet even while a scientist or doctor is manoeuvring a patient’s mind, he remains deeply ignorant of how the change is achieved; and it is worth bearing in mind that methods for the alteration of states of consciousness were known and practised in all cultures long before the advent of science.

To offer a hopefully useful analogy, when tackling problems of consciousness our best scientists remain in the position of a motorist who lacks mechanical understanding. The steering wheel changes direction and two of the pedals make the car go faster or slower – yet another pedal does something more peculiar again that we needn’t dwell on here! Of course, our imaginary driver is able to use all these controls to manoeuvre the car – increasingly well with practice. Added to which, he is free to lift the bonnet and look underneath; however, without essential knowledge of engineering or physics, this provides no eye-opening additional insights. Such an analogy breaks down (if you’ll pardon my pun), of course, as every analogy here must, because, as Planck says, when it comes to consciousness all our understanding of the world, all concepts, are contingent on it, including, in this instance, the concept of mechanisms.

For these reasons we might quite reasonably ask which factors the psychiatrist ought to invest greater faith in: definite quantities or indefinite qualities? Measurable changes in electrical activity or a patient’s reports of mood swings? Rates of blood flow or recognisable expressions of anxiety? Levels of dopamine or the unmistakeable signs of the patient’s sadness and cheerfulness?

More philosophically, we might wonder more deeply about what awareness actually is. How do we navigate the myriad nooks and crannies of the world that our minds (in a very real sense) reconstruct – our perceptions being projections informed by sensory inputs and produced to give the appearance of external reality – in order to inquire into the nature of both the world and the organs of perception and cognition, when the precursory nature of awareness somehow remains tantalisingly beyond all reconstruction? When confronted by these questions science is struck dumb – it is dumbfounded. Obviously, so too is psychiatry.

Mathematician and physicist Roger Penrose has devoted a great deal of time to thinking about the nature of consciousness, and in his best-selling book The Emperor’s New Mind (1989) he explained why science is wrong in presuming consciousness to be a purely computational process. In conversation with AI researcher Lex Fridman, here Penrose again stresses our lack of basic scientific understanding of consciousness and proffers his own tentative ideas about where we might begin looking and, in particular, why investigating the causal mechanisms underlying general anaesthetics looks like a profitable place to begin:

*

In the early 1960s, tired of signing his name on the skin of naked women, transforming them instantly into living sculptures (and what’s not to like about that?), avant-garde Italian artist Piero Manzoni turned his hand instead to canning his own excrement and selling the tins to galleries. In May 2007, a single tin of Manzoni’s faeces was sold at Sotheby’s for more than £100,000; more recently in Milan another tin of his crap fetched close to a quarter of a million! It would be madness, of course, to pay anything at all for bona fide excrement (and it remains uncertain whether Manzoni’s labels reliably informed his customers of their literal contents), were it not for the fact that other customers were queuing up and happy to pay as much or more. Indeed, if anyone can ever be said to have had the Midas touch, then surely it was Manzoni; just a flick of his wrist miraculously elevating anything at all to the canonised ranks of high art – literally turning shit into gold.

But then the art world is an arena that excels in perversity and so pointing out its bourgeois pretensions and self-indulgent stupidities has itself become a cheap pursuit, while to the initiated it simply marks me out as another unenlightened philistine. What is blindingly obvious to the rest of us has instead become undetectable to the connoisseur, the banality obscured by fashion and their own self-gratification. In an era that is exceptionally cynical and commercial, it comes as no surprise therefore to find the art world reflecting and extolling works of commensurate cynicism and degeneracy. What is more interesting, however, is this contemporary notion that art has finally become anything done by an artist: for we might reasonably ask, does this same approach to validation apply across other disciplines too? For instance, if scientists collectively decide to believe in a particular method or theory, does this automatically make their shared belief somehow ‘scientific’? I pose this as a serious question.

What is more important here is to understand and recognise how all intellectual fields face a similar risk of losing sight of what is inherently valuable, becoming seduced by collective self-deception and wrapped up in matters of collective self-importance. Peer pressure. Groupthink. The bandwagon effect. If you’ve never seen the footage before then I highly recommend watching Solomon Asch’s ‘conformity experiments’, in which test subjects were found to consistently and repeatedly defer to a false majority opinion, in blatant contradiction of what they could see perfectly clearly right in front of their own eyes. 17

In short, most people will “go along to get along”, and this maxim applies across all levels of society and in all spheres of activity, including the sciences. Moreover, it is very seldom the case that any scientific paradigm changes because its opponents are suddenly won over by a novel framework of ideas due to its intrinsic elegance or power, but rather, as Max Planck put it most bluntly (at least as it is usually paraphrased): “Science progresses one funeral at a time”. 18

These problems are additionally compounded by reification: the mistaking of abstractions for solid aspects of reality; of confusing the map with the territory. Related to this is something William James once described as the “Psychologist’s fallacy”:

The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the ‘psychologist’s fallacy’ par excellence. 19

There are actually three ways of interpreting James’ statement here and each is equally applicable. The first and most general cautions against mistaking one’s personal perception and interpretation of an event for a perfectly accurate account – this strictly applies to all fields of objective research. The next is that it is easy to mistake another person’s experience and falsely imagine it is identical to your own. This ‘confusion of standpoints’ can cause you to believe you know why someone did what they did, assuming they were motivated in just the same way you would have been. Then finally, there is an error that applies whenever you are involved in studying another person’s mental state (for whatever reason and not necessarily in a clinical setting) and you suppose that the subject is likewise critically aware of their own thoughts and actions. This is called ‘attribution of reflectiveness’, and it may occur, for instance, if you come across someone blocking your way and then presume that they are fully aware of the obstruction they have caused to your progress and are obviously being inconsiderate.

Besides the issues of groupthink and the fallacies outlined above, there is a related difficulty that arises whenever you are constrained by any system of classification, and given how incredibly useful categories are (especially in the sciences), this is again hard to avoid. Whenever a system comes to be defined and accepted, the tendency will always be for adherents to look for and find examples that fit and support it; and if this means cherry-picking the facts then so be it. Within no time an entire discipline can spring up this way, as was the case with phrenology (a subject I shall come back to in a later chapter).

*

George Bernard Shaw nattily remarked that “all professions are conspiracies against the laity”. In the same spirit, we might extend his concern by adding that such conspiracies will tend to feign understanding, disguise ambiguity and perpetuate fallacies. The quip itself comes from Shaw’s play The Doctor’s Dilemma, and was most pointedly aimed at the medical profession. But then, in defence of doctors, medicine as a discipline is arguably the science most plagued by vagueness; a nearly intractable problem given how easily the symptoms of so many diseases can be muddled simply because of their inherent similarities. Consider, for instance, the thousand and one ailments that all have “flu-like symptoms”.

In turn, patients are equally prone to vagueness when giving accounts of their own symptoms, in part because symptoms are often rather difficult to describe – just how do you distinguish between the various feelings of pain, for instance? To make matters worse, human biology is already fiendishly complex. Textbooks provide only textbook examples: they show ideal anatomy, while real anatomies are seldom ideal, and it is a surprisingly common occurrence for actual patients to have organs with structures or locations that are markedly different.

The unavoidable outcome of all this uncertainty and peculiarity is that medical professionals do not understand nearly half so much as those without medical training are given to believe – and, importantly, choose to believe. Because, as patients, not only do we seek clear diagnoses, but we look to medicine for sure-fire remedies, all of which encourages the inclusion in medical nomenclature of elaborate, and preferably Latinised, labels for the full gamut of our daily complaints: a complete taxonomy that catalogues and accounts for every combination of symptoms and one or two half-glimpsed maladies. All of which brings us to the consideration of ‘syndromes’ and ‘disorders’.

When your doctor diagnoses abc-itis, then presuming the diagnosis is a correct one, it is all but certain that you have inflammation of your abc. Diagnoses of thousands of complaints and diseases are absolutely clear-cut like this. However, if told you are suffering from xyz syndrome, it may mean instead that you are presenting a cluster of symptoms which are recognised to occur in a specific combination; a grouping that crops up often enough to have acquired the label ‘xyz syndrome’, rather than a disease with a well-established or single underlying cause. In short, the term ‘syndrome’ will sometimes hide a lot more than it reveals.

Whenever patterns of symptoms have been rolled together and labelled for the sake of convenience under a single catch-all name, this is shorthand for saying: we recognise the signs, and though we can’t tell you the cause and as yet remain unable to recommend a cure, we are working on it! If the shorthand were unavailable, the clinician would instead have to shrug their shoulders and usher you away; and given that patients usually have a strong preference for receiving (at the very least) a name for the cause of their suffering, the more customary exchange allows both parties to leave the consultation far happier. We are often content, therefore, to indulge our medical (and other) experts in maintaining many of these Shavian “conspiracies” against us.

Returning to consider psychiatry, it is necessary to appreciate that all but the rarest of psychiatric diagnoses fall under the category of ‘disorders’ rather than diseases – and that the underlying aetiology in many cases is not just unknown but more or less unconsidered. It follows that, historically, the development of diagnoses and treatments has very often had recourse to little more than educated hunches and trial-and-error testing on (all too often) unwilling patients.

As former National Institute of Mental Health (NIMH) Director, Thomas Insel, pointed out:

“While DSM [Diagnostic and Statistical Manual of Mental Disorders] has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half century as we have understood that symptoms alone rarely indicate the best choice of treatment. Patients with mental disorders deserve better.” 20

*

Psychiatrist: Have you ever heard of the old saying “a rolling stone gathers no moss?”

Patient: Yeah.

Psychiatrist: Does that mean something to you?

Patient: Uh… it’s the same as “don’t wash your dirty underwear in public.”

Psychiatrist: I’m not sure I understand what you mean.

Patient: [smiling] I’m smarter than him, ain’t I? [laughs] Well, that sort of has always meant, is, uh, it’s hard for something to grow on something that’s moving.

If you’ve seen the film One Flew Over the Cuckoo’s Nest 21 then you may recognise the dialogue above. It comes when the central protagonist Randle McMurphy (brilliantly played by a young Jack Nicholson) is subjected to a follow-up evaluation carried out by a team of three psychiatrists trying to determine whether or not he is fit enough to be discharged.

The film was released only a couple of years after Rosenhan and his ‘pseudopatients’ had sneaked under the diagnostic radar, and, like Rosenhan and his associates (though for reasons which we need not go into), McMurphy is an apparently sane inmate plunged into an infuriating and intractable catch-22 situation.

Now the question posed to McMurphy appears an odd one, yet questions of precisely this kind, commonly based around well-known proverbs, were once regularly used for such diagnostic purposes. Just as with the better-known Rorschach inkblot test, there is no single ‘correct’ answer, but there were built-in ways a patient might fail such an examination. In this case, responses considered too literal were taken as evidence of pathology, on the grounds that they showed an inability on the patient’s part to think in anything other than concrete terms. Simply re-expressing the proverb in order to precisely account for how a rolling rock is an inhospitable environment for vegetation is therefore an ill-advised response.

Indeed, McMurphy’s second answer conclusively fails the test, whereas his first stab at saying something deliberately obtuse merely confuses the three doctors. Of course, in the film it is McMurphy’s deeply rebellious nature and truculent behaviour, rather than the results of tests of this sort, that ultimately seal his fate – and again there is no need for details here, but merely to add that whilst the ramifications of Rosenhan’s experiment challenged opinions within academic and professional circles, the multiple Academy Award-winning One Flew Over the Cuckoo’s Nest reached out to a far wider audience and helped to change the public perception of how we care for the mentally ill. Moreover, Rosenhan’s criticisms had been restrained, whereas the film – like the book – went straight for the jugular.

Ken Kesey, author of the book One Flew Over the Cuckoo’s Nest, was involved with the film adaptation, but for a variety of reasons, including a dispute over royalties, he left barely two weeks into production and later claimed never to have watched the final version. Embedded below is a short interview with Kesey talking about the main characters, interspersed with relevant clips:

In the wake of Rosenhan’s experiment (published in 1973) and Kesey’s fictional portrayal of life inside the asylum (published in 1962, released as a film in 1975), the ‘anti-psychiatry’ movement (a term coined in 1967 by one of its most prominent advocates, South African psychiatrist David Cooper) soon began to gain political traction. With the legitimacy of mainstream psychiatry subject to sustained attack and the very concept of mental illness suddenly coming under scrutiny, in the midst of this crisis the American Psychiatric Association (APA) made the decision to release its new manual: a fully updated directory that would authoritatively categorise and thus authenticate all forms of ‘mental disorder’.

The Diagnostic and Statistical Manual of Mental Disorders – soon after known as ‘the bible of psychiatry’ – is now in its fifth edition, DSM-5, and with each updated edition it has become an ever weightier tome, expanding at a faster rate than almost any other technical manual in history. This snowballing really started in 1968, when the revised second edition introduced an additional seventy-six ‘disorders’, thereby expanding the original 1952 catalogue by more than 70 percent. When revised again in 1980, DSM-III added a further 83 diagnostic categories, its list growing from 182 (DSM-II) to 265 (DSM-III) – a 150 percent increase on the original. The same trend continued, though less conspicuously, when DSM-IV was released in 1994, cataloguing a total of 410 disorders – almost a three-fold increase on the original.
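For anyone who wants to check those figures, here is the back-of-the-envelope arithmetic. It assumes, as the ‘more than 70 percent’ expansion implies, a 1952 baseline of roughly 106 categories; that baseline is not stated explicitly above and published counts vary slightly, so treat the numbers as approximate:

\[
\frac{182-106}{106}\approx 72\%,\qquad \frac{265-106}{106}\approx 150\%,\qquad \frac{410-106}{106}\approx 287\%\ \text{(very nearly a three-fold increase)}.
\]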

James Davies is a Reader in Social Anthropology and Mental Health at the University of Roehampton, a psychotherapist, and co-founder of the Council for Evidence-based Psychiatry. In trying to understand how the present manual had come to be constructed, he decided to speak to many of its authors directly, and so in May 2012 he took a trip to Princeton. There he was welcomed by Dr Robert Spitzer, who had chaired the core team of nine people who put together the seminal third edition of the DSM, which amongst other things established the modern diagnostic system still broadly in operation. It was this edition of the manual that introduced such household-name disorders as Borderline Personality Disorder and Post-Traumatic Stress Disorder. For these reasons, Spitzer is widely regarded as one of the most influential psychiatrists of the last century.

Davies began his interview by asking Spitzer what the rationale was behind the significant expansion in the number of disorders in the DSM-III edition, and Spitzer told him:

“The disorders we included weren’t really new to the field. They were mainly diagnoses that clinicians used in practice but which weren’t recognised by the DSM or the ICD.” 22

Davies then pressed further and asked how many of these disorders had been discovered in a biological sense. In reply Spitzer reminded him that “there are only a handful of mental disorders… known to have a clear biological cause”, adding that such organic disorders, like epilepsy, Alzheimer’s and Huntington’s, are “few and far between”; conceding that no biological markers have been identified for any of the remaining disorders in the DSM. With this established, Davies then asked how the DSM taskforce determined which new disorders to include. Spitzer explained:

“I guess our general principle was that if a large enough number of clinicians felt that a diagnostic concept was important in their work, then we were likely to add it as a new category. That was essentially it. It became a question of how much consensus there was to recognise and include a particular disorder.” 23

Davies also spoke to Dr Theodore Millon, another of the leading lights on Spitzer’s taskforce, to ask more about the construction of their manual. Millon told him:

“There was little systematic research, and much of the research that existed was really a hodgepodge – scattered, inconsistent, and ambiguous. I think the majority of us recognised that the amount of good, solid science upon which we were making our decisions was pretty modest.” 24

Afterwards, Davies put Millon’s points directly to Spitzer, who responded:

“Well it’s true that for many of the disorders that were added, there wasn’t a tremendous amount of research, and certainly there wasn’t research on the particular way that we defined these disorders… It is certainly true that the amount of research validating data on most psychiatric disorders is very limited indeed.”

Adding that:

“There are very few disorders whose definition was a result of specific research data.” 25

On the basis of Spitzer’s surprising admissions, Davies then tracked down other members of the same DSM team. For instance, he spoke on the phone to Professor Donald Klein, another leader on the taskforce, who said:

“We thrashed it out basically. We had a three-hour argument… If people [at the meeting] were still undecided the matter would be eventually decided by a vote.” 26

Davies finally decided to check what he was hearing from these members by looking through the minutes of the taskforce meetings, which are still held in the archives, and discovered that voting did indeed take place to make such determinations. Renee Garfinkel, a psychologist who participated in two DSM advisory subcommittees, told Davies more bluntly:

“You must understand what I saw happening in these committees wasn’t scientific – it more resembled a group of friends trying to decide where they want to go for dinner.”

She then cited the following concrete example of how one meeting had proceeded:

“As the conversation went on, to my great astonishment one Taskforce member suddenly piped up, ‘Oh no, no, we can’t include that behaviour as a symptom, because I do that!’ And so it was decided that that behaviour would not be included because, presumably, if someone on the Taskforce does it, it must be perfectly normal.” 27

Although put together by a rather small team, DSM-III has had a far-reaching and long-lasting influence on psychiatry. Spitzer told Davies:

“Our team was certainly not typical of the psychiatry community, and that was one of the major arguments against DSM-III: it allowed a small group with a particular viewpoint to take over psychiatry and change it in a fundamental way.

“What did I think of that charge? Well, it was absolutely true! It was a revolution, that’s what it was. We took over because we had the power.” 28

In any case, reliance upon a single definitive and encyclopaedic work of this kind presents a great many hazards. As Allen Frances, the former chairman of the psychiatry department at Duke University School of Medicine who led the taskforce that produced DSM-IV, has publicly admitted:

At its annual meeting this week [in May 2012], the American Psychiatric Association did two wonderful things: it rejected one reckless proposal that would have exposed nonpsychotic children to unnecessary and dangerous antipsychotic medication and another that would have turned the existential worries and sadness of everyday life into an alleged mental disorder.

But the association is still proceeding with other suggestions that could potentially expand the boundaries of psychiatry to define as mentally ill tens of millions of people now considered normal.

In the same op-ed published by the New York Times, Frances continued:

Until now, the American Psychiatric Association seemed the entity best equipped to monitor the diagnostic system. Unfortunately, this is no longer true. D.S.M.-5 promises to be a disaster — even after the changes approved this week, it will introduce many new and unproven diagnoses that will medicalize normality and result in a glut of unnecessary and harmful drug prescription. The association has been largely deaf to the widespread criticism of D.S.M.-5, stubbornly refusing to subject the proposals to independent scientific review.

Many critics assume unfairly that D.S.M.-5 is shilling for drug companies. This is not true. The mistakes are rather the result of an intellectual conflict of interest; experts always overvalue their pet area and want to expand its purview, until the point that everyday problems come to be mislabeled as mental disorders. Arrogance, secretiveness, passive governance and administrative disorganization have also played a role.

New diagnoses in psychiatry can be far more dangerous than new drugs. 29

In an earlier interview speaking with Wired magazine, Frances – credited as “the guy who wrote the book on mental illness” – made an even more startling confession, telling Gary Greenberg, who is himself a practicing psychotherapist:

“[T]here is no definition of a mental disorder. It’s bullshit. I mean, you just can’t define it… these concepts are virtually impossible to define precisely with bright lines at the boundaries.” 30

*

The entry of psychiatry into the province of science is comparatively recent. Indeed, in the ancient world and in times prior to the Enlightenment, some severe forms of mental illness would most likely have appeared to be the work of demons. And if a person was believed to be possessed, then religious protocols, informed by the opinion that their soul was in existential peril and would without intervention suffer eternal damnation, called for extremely drastic measures.

Indeed, the very word psychiatry derives (as mentioned above) from the Greek psukhē for ‘breath, life, soul’ (Psyche being also the Greek goddess of the soul), though in accordance with the strict biomedical model of mind, psychiatry today takes no interest in these ‘spiritual’ matters. Nevertheless, the interventions of psychiatry to save a person’s mind have often been as drastic as, and if anything crueller than, those inflicted throughout prior ages. The dark arts of exorcism or trepanning were superseded and upgraded by technological means: the unfortunate victims at first subjected to convulsions induced by the administration of an overdose of insulin, and latterly by means of high-voltage electric shocks passed between the temples (electroconvulsive therapy or ECT). Still more invasive treatments were also introduced throughout the twentieth century that excised a patient’s demons by means of irreversible surgical mutilation.

When we retrace the short but terrible history of psychiatry, it is rather easy to overlook how many of these barbaric pseudoscientific treatments were once lauded as state-of-the-art. As recently as 1949, Portuguese neurologist António Egas Moniz actually shared the Nobel Prize for Medicine for pioneering the lobotomy as a routine procedure; his original operation was later adapted and popularised by American neurologist Walter Freeman, who used an ice pick hammered through the eye socket to sever connections within the frontal lobes. Such horrific procedures were frequently performed without anaesthetic and led to the destruction of the minds – although I am tempted to say souls – of tens of thousands of people, the majority of whom were women (homosexuals were also predominant amongst the victims). This use of so-called ‘psychosurgery’ was phased out gradually, but lobotomies continued to be performed into the 1970s and even later. 31

Today it is also an open, if dirty, secret that throughout modern times psychiatry has played a pivotal role in the coercion of political opponents of the state. Many authoritarian regimes – the former Soviet Union being the most frequently cited – have operated their mental health systems as a highly efficient means of cracking down on dissidents (who more or less by definition failed to think ‘normally’). The abuse of psychiatry by western governments is less well known; however, at the height of the Cold War, the CIA carried out a whole range of experiments under Sidney Gottlieb’s MKUltra mind control programme.

One of the MKUltra researchers was Ewen Cameron, who had served as President of the American Psychiatric Association (APA) and who went so far as to attempt to erase his patients’ existing memories entirely, by means of massive doses of psychotropics and ECT, in an effort to reprogram the victim’s psyche from scratch. Decades later, some of the survivors won financial settlements as compensation for their part in this secret regime of state-sponsored torture. 32 Moreover, close collaboration between military intelligence services and the psychiatric and psychological professions has continued, and during the “War on Terror” a number of ‘operational psychologists’ are now known to have worked on the CIA’s “enhanced interrogation” torture programme. 33

Of course, state coercion is not only used to control political enemies. Minorities who have suffered discrimination for other reasons have likewise fallen victim to psychiatric abuse. In fact, prior to 1973 – when homosexuality was finally removed from the list of ‘mental disorders’ in the DSM ‘bible’ – otherwise healthy gay men were forcibly subjected to treatments involving aversion ‘therapies’ that included electric shocks to the genitals and nausea-inducing drugs administered simultaneously with the presentation of homoerotic stimuli. In Anthony Burgess’s novel A Clockwork Orange (1962) this was called “the Ludovico Technique”.

Thus historically, the insane subject – i.e., anyone diagnosed as mentally ill – has been uniquely deprived of their basic human rights: downgraded in social status and transformed de facto into a kind of second-class human. Even today, when clinical procedures are kinder, patients are routinely subjected to many involuntary treatments, including the long-term administration of powerful drugs and ECT.

*

Leaving aside the moral questions, this terrible history also casts a shadow over the whole science underpinning these treatments. What do we really know about the efficacy of ECT today that we didn’t know in the middle of the last century?

Or consider the now familiar labelling of drugs as ‘antipsychotic’ and ‘antidepressant’: terms that are wholly misleading and deeply unscientific, since the implication is that these are antidotes, acting much like antibiotics to cure a specific disease by targeting the underlying pathology. But this is entirely false, and the reason it is misleading is best understood by once again reviewing the history of psychiatry.

Firstly, it is important to recognise that none of the first generation of psychiatric drugs was ever developed for the purpose either of alleviating neurological dysfunction or of enhancing brain activity. Chlorpromazine (CPZ) – marketed under the brand names Thorazine and Largactil – was the earliest of these ‘antipsychotics’; it had previously been administered as an antihistamine to relieve shock in patients undergoing surgery, although it was in fact derived from a family of drugs called phenothiazines, originally used as antimalarials and to combat parasitic worm infestations. 34

It had been noticed, however, that many of the patients who received Thorazine would afterwards manifest mood changes and in particular experience a deadening in their emotional response to the external world while otherwise retaining full consciousness. In short, the drug happened to reproduce the effects observed in patients who underwent a surgical lobotomy (which in 1950 was still considered a highly effective treatment for psychosis of course).

On the other hand, ‘antidepressants’ emerged as a by-product of research into tuberculosis, after it was noticed that some patients in the trials became markedly more energised following their medication. Only in the aftermath of studies carried out during the 1960s did science finally begin to understand how these pharmaceuticals were having direct effects within the brains of patients, and specifically on processes involving, respectively, the neurotransmitters dopamine and serotonin. In patients suffering psychosis there was found to be an excess of the former, whereas those suffering depression showed an apparent deficit of the latter. The conclusion followed that the drugs must have been acting to correct an existing imbalance, very much as insulin does in the case of diabetes.

So the conclusions from these early studies were drawn wholly from the mechanism of action of the drugs. Since the antipsychotics were found to block dopamine receptors, the hypothesis formed that psychosis must be due to an excess of dopamine activity; likewise, since antidepressants held serotonin for longer in the synaptic cleft (the space that separates and forms a junction between neurons), thereby boosting its activity, it followed that depression was the result of low serotonin activity. This reasoning turns out to be inherently flawed, however, and as subsequent research quickly revealed, the actual differences in brain chemistry detected in patients were a feature not of any underlying pathology associated with their disorder, but a direct effect of the medications used to treat them. Indeed, for decades clued-up pharmacologists and many psychiatric practitioners have regarded the theory of ‘chemical imbalance’ not as a scientific model, but as nothing more than a metaphor: a means of explaining the treatment to patients and of encouraging them to persist with it.

This is what Ronald W. Pies, Editor-in-Chief Emeritus of Psychiatric Times, wrote a decade ago about the ‘theory of chemical imbalance’:

“I am not one who easily loses his temper, but I confess to experiencing markedly increased limbic activity whenever I hear someone proclaim, “Psychiatrists think all mental disorders are due to a chemical imbalance!” In the past 30 years, I don’t believe I have ever heard a knowledgeable, well-trained psychiatrist make such a preposterous claim, except perhaps to mock it. On the other hand, the “chemical imbalance” trope has been tossed around a great deal by opponents of psychiatry, who mendaciously attribute the phrase to psychiatrists themselves. And, yes—the “chemical imbalance” image has been vigorously promoted by some pharmaceutical companies, often to the detriment of our patients’ understanding. In truth, the “chemical imbalance” notion was always a kind of urban legend – never a theory seriously propounded by well-informed psychiatrists.” 35

David Cohen is Professor of Social Welfare and Associate Dean for Research and Faculty Development at UCLA Luskin. His research looks at psychoactive drugs (prescribed, licit, and illicit) and their desirable and undesirable effects as socio-cultural phenomena “constructed” through language, policy, attitudes, and social interactions. Here he explains how psychiatry has painted itself into a corner, becoming unable to look for treatments for mental illness that lie outside the biomedical model, which treats all conditions of depression, anxiety and psychosis as brain disorders:

Today we have become habituated to the routine ‘medication’ of our youth, with children as young as six years old being administered tranquilisers relabelled as ‘antidepressants’ and ‘antipsychotics’ that are intended ‘to cure’ dysfunctions like “oppositional defiant disorder”. These considerations bring us to the broader issue of what constitutes ‘mental health’, and by extension, what it is to be ‘normal’.

Moreover, it hardly needs saying that increased diagnosis and prescription of medication of every variety is demanded by the profit motive of the pharmaceutical industry, so for now I wish merely to add that we have no demonstrable proof that the identified rise in diagnoses is wholly attributable to a commensurate rise in mental illness, rather than being an artefact bound up with the medicalisation of the human condition. However, given that mental health is expressly bound up with, and to a great extent defined by, a person’s feelings of wellness, attempts to downgrade or dismiss patient testimony, or to overrule personal accounts of psychological distress by declaring some parts of them illusory, are not only callous but another kind of category mistake. Whatever terminology we apply, it is evident that more people than ever are suffering forms of psychological distress. I shall consider this at greater length in the final section.

*

Before continuing, I would like to introduce a genuinely serendipitous finding – a newspaper clipping torn out by someone I have never met, and left inside the cover of a second-hand book for reasons I shall never know. I cannot even reference this item because I have no idea in which newspaper it was originally printed, and so will simply label it Exhibit A (the author’s name is also redacted out of courtesy):

“Someone close to me has been smoking cannabis for many years,” the author tells us, adding “That person has never worked and lives in a state of euphoria.”

From these preliminary remarks it is actually hard to tell whether the writer is issuing a caution or an endorsement of pot smoking – or at least it would be hard to tell, were it not for our informed social prejudices, since the presumed social norm is that work is always good and drugs (meaning illegal ones) unconditionally bad. Suppose, however, this surmised state of euphoria had been ascribed to quite different causes. Let’s say, for example, that the person in question was in love, or that s/he’d found God, or alternatively that s/he had been prescribed a legally sanctioned medicine lifting them from a prior state of depression and anxiety, and this lasting euphoria was the outcome. Would this not be a good thing?

But the next part of the letter is perhaps the most interesting. It begins: “People on cannabis lose touch with reality. They cannot relate to normal life because they are in a perpetual state of relaxation, and doing everyday tasks or even getting up in the morning is a big deal. They drift from day to day.”

At this point, I ought to make a personal confession. The person described here is me – not me in all actuality, but another me, another drifter. It is me and a considerable number of my closest friends, who have spent a great many years smoking pot and “losing touch with reality”. Doubtless, it will describe the lives of some of the people who happen to read this too. Personally, I gave up smoking pot years ago for health reasons, and I do not advise others to follow my lead either way. Undeniably, there is some truth within the letter, but there is also a great deal of misunderstanding.

Do pot smokers live in realms of make-believe? Do we care about nothing? Interestingly, we could just as easily ask the same questions of those prescribed SSRI (selective serotonin reuptake inhibitor) antidepressants like Prozac, and all of the other legally sanctioned mind-altering substances. Leaving aside social acceptance, which surely owes much to the profit motive, what other distinction can we make here once we dismiss the false hypothesis of redressing a chemical imbalance?

Of course, none of us ever knows what might otherwise have been had we not done such and such. The road not taken is forever unknown. The only fair question therefore must involve regret, and I confess that I do not regret my decision to smoke pot, nor do I know any friends who have told me they regret their own choice in this regard. The important point I wish to emphasise is that legal determinations do not automatically establish what is conducive to our health and well-being, and nor do they determine what is right and wrong in a moral sense. Indeed, who dares to tell another adult how they ought to think, and equally who dares to say how one may or may not alter one’s consciousness by whatever means one sees fit? If we are not entirely free to think as we choose, then as creatures so fully submerged in our thoughts, we can hardly be said to be free at all.

*

Here is David Cohen again, discussing how psychiatric drugs are no different in kind from many street drugs:

David Nutt, Professor of Neuropsychopharmacology at Imperial College London, has closely studied the range of harms that legal and illegal drugs can do to individuals and society. On the basis of his findings, he has reached the perhaps surprising conclusion that policy should begin with an end to the criminalisation of all drug use and possession. In March 2021 he was interviewed by Novara Media’s Ash Sarkar:

Comedian Bill Hicks also had his own opinions on why some drugs are taxed while others are banned [warning: very strong language and opinions!]:

*

III      Driven crazy?

“[P]eople who experience themselves as automata, as robots, as bits of machinery, or even as animals… are rightly regarded as crazy. Yet why do we not regard a theory that seeks to transmute persons into automata or animals as equally crazy?”

— R. D. Laing 36

*

Type the words ‘mental health crisis’ into any search engine and you will find more than a million pages with links to reports from Australia, Canada, Europe and America, all presenting stark evidence that the Western world is in the grip of what in other contexts would certainly be called a pandemic: a plague of disease that is horribly debilitating, too often fatal, and affecting nearly one in ten of our population: men and women, children and the old alike. According to the latest surveys, in any given week in England 1 in 6 people report experiencing some kind of mental health problem. In just over two decades (1993 to 2014) the number of people experiencing mental health problems went up by 20%, while the proportion reporting severe mental health symptoms in any given week rose from 7% in 1993 to over 9% in 2014. 37 Indeed, this issue has now become so grave that it receives serious attention in political debates. Still more positively, ways to deal with it are today widely discussed, and the stigma associated with mental illness is at last aired and challenged across the mainstream. But one question very seldom addressed is this: what has generated so much suffering and distress in the first place? What is the cause of this now admitted mental health crisis?

Since the issue is obviously an extremely complex one, I propose that we break it down into three parts that can be abbreviated as three A’s: access, accountancy and aetiology. The most simplistic assumption we could make would be that our current crisis is a consequence of just one of these three factors. So, for instance, if the rise in case numbers is purely a matter of easier access to treatment, then it follows from our presumption that there is no underlying increase, but that sufferers of mental health problems are simply more able and willing to seek professional help. If true, then ‘the crisis’ has always existed, but previously most sufferers simply suffered in silence.

Alternatively, we might presume that the rise is a perceived one and that its origin lies entirely in changes of accountancy, in which case states of mind that in the past were undifferentiated from the norm have gradually been medicalised, as I have discussed above. Whereas improved access to care is a laudable good, if accountancy is to blame then society is increasingly in the business of treating the sane as if they were sick. Reclassifying normality as abnormality would mean psychiatry has helped create the illusion of an epidemic, although it is important to understand that it does not follow that the suffering itself is illusory, only that we have a tendency to see that suffering as psychiatric in nature.

Alternatively again, we might instead conclude that the rise in cases is real and unrelated to either ease of access or what has been described as “the medicalisation of misery”. In this case, we are necessarily drawn into the matter of aetiology and must extend the investigation to search for underlying external causes – causes that to some degree can be found to account for a genuine rise in mental illness.

Certainly these aren’t mutually exclusive considerations, but are these three A’s exhaustive? Broadly considered yes, however, a breakdown of this kind has indistinct fuzzy edges and all that is certain is a combination, or potentially even a synergy, operates between the three. Indeed, given that mental health is expressly bound up with and unavoidably defined by feelings of wellness, no psychiatric diagnosis can ever be scientifically objective in the strictest sense. Setting aside therefore the matter of access to better healthcare, which all else being equal, is wholly positive, my considerations in the remainder of this chapter are to disentangle the other strands.

In one sense the mental health crisis is undeniably real. More and more people are suffering forms of psychological distress and in no way do I mean to suggest otherwise. There is an urgent need therefore to get to the bottom of what is causing this crisis.

Johann Hari is a journalist who spent many years investigating the causes of depression, the reasons why the West is seeing such a rise in incidence, and how we might find better alternatives to treat this epidemic. It isn’t caused by a chemical imbalance in our brains, he notes at the outset, but crucially by changes in the way we are living:

*

The evidence of a connection between what happens in childhood and effects on later behaviour is very strong indeed. This is unsurprising of course. It is perhaps self-evident that mental illness grows out of trauma and hunger, which are the bitter fruits of abuse, neglect and abandonment, both physical and psychological. But to explain the ongoing rise (affecting adults as much as children) we would be hard pressed to attribute much of the cause to changes in parenting styles, given how steep the rise has been – a 20% increase over just two decades – and very definitely not if Philip Larkin is to be believed. 38

To be frank, parents have always “fucked you up”, as for that matter have our siblings, our peers, and undoubtedly, many of our fucked-up teachers. Of course, one significant change during recent decades is that parents spend more time working, thus leaving children with childminders or, if money is tight, with the keys to an empty house. Studies again unsurprisingly show that latchkey kids are more susceptible to behavioural problems.

A related issue affecting early development is the omnipresence of new technologies. Once the pacifier was television, but this single-room distraction has been slowly superseded by computer games, iPhones and the like. There is a widespread dependency on these kinds of electronic devices, and so, without any immediate control group, the psychological damage caused by habitually engaging in such virtual interactions will be extremely difficult to gauge.

Of course, television has been used as an infant pacifier ever since I can remember. No doubt it once pacified me too. But television itself has been radically transformed. It has become louder, brighter and more intense due to faster and slicker editing, and – since its sole purpose is to grab attention and transfix its audience – it is surely reasonable to presume it has become more and more intoxicating. Viewing TV today is a fundamentally altered experience compared to viewing it decades ago. Could any of this be having a knock-on effect with regard to attention span, cognitive skills, or, more importantly, our sense of self? This is a highly complex issue that I shall not delve into here – in the addendum I do however consider the psychological and societal impacts of advertising (I also dedicate a later chapter to the role advertising plays in our society).

What is known for certain is this: that other than in exceptional instances when the origin of severe mental illness can be directly traced to an underlying physical disease (syphilis is perhaps the best known example), the usual trigger for mental health problems is found to be either sudden or prolonged trauma – very often although not exclusively childhood trauma – and the development of the vast majority of mental disorders occurs therefore as a pathological but defensive response to trauma.

*

Following Freud, the causes of mental illness came to be thought of as buried deep within the patient’s unconscious. For this reason, Freud and the psychoanalysts pioneered their ‘talking cure’: conversational techniques that probed deep into the psyche. Various schools arose. They inquired into dreams, biography, sexuality, family relations or even spirituality, feeling down for the roots of their patient’s distress. With the psychical wound discovered, it might then be cleansed and disinfected by means of further introspection. Healing came about as nature took its course. Here the patient plays a central role in their own treatment.

R. D. Laing dignified his patients in another way. Refraining from excessive presumptions built on the unsteady and evolving theories of the unconscious – the Oedipal Complex, Penis Envy, and other fabulous chimera detected by Freud and his followers – Laing gave his patients the common respect the rest of us outside the padded walls of the asylum receive from our peers. No matter how superficially crazy, he adjudged every patient’s account of his or her lived experience to be as valid in the existential sense as the truthful account of any sane human being, including his own. This exceedingly hazardous (some might say reckless) approach to a patient’s illness did, however, produce remarkable outcomes – at least to begin with – as many of those he treated speedily recovered and were declared fit enough to return home.

However, Laing’s successes seldom lasted long, and predictably within a just few months, more than half would drift back into his care. Witnessing this cyclical pattern of decline had an interesting effect on Laing, for it caused him to reach a new and shocking conclusion. With no less conviction than before, he let it be known that social relationships, and especially ones within families, were the major triggers of his patients’ relapse. This was an audacious diagnosis which, unsurprisingly, met with general hostility, as the accused – not only the families but society as a whole – felt immediately affronted by the charge that they were fons et origo of the patient’s sickness.

Undaunted, Laing took his ideas to their logical extreme. He allowed his patients to play out their madness to the full, believing that for a lasting cure the condition must be allowed to run its course – and who can honestly say if and when madness is fully cured? Unconstrained by the boundaries of orthodox medicine, Laing and his fellow therapists would enter perilously into the worlds of their patients. Laing himself, by all accounts, went somewhat bonkers in the process, which is hardly surprising, since whatever madness is, it is most certainly contagious (and after all, this in a nutshell is really Laing’s central point). 39

As his conduct became morally questionable – sexual affairs with his patients creating troubles within his own family – his professional reputation was understandably tarnished, and alongside this reputational decline his ideas went out of fashion. In spite of this, Laing’s legacy persists in important ways. The more dignified respect now shown to sufferers of mental illness (who even today are sadly denied full human rights equivalence) owes a great deal to Laing’s daring intellectual courage and integrity. On the other hand, the true and lasting value of Laing’s work has been both forgotten and dismissed. For when he tells us that insanity is “a perfectly rational adjustment to an insane world” 40, then given the rise of today’s ‘mental health crisis’, our mental health professionals and society more broadly need to listen up.

In a world that’s ever slicker, faster, and as human contact becomes more distant and superficial, increasingly artificial indeed, the modern self (perhaps that should read ‘postmodern’) becomes more atomised and systematised than in Laing’s time (Laing died three decades ago). Cajoled to sacrifice ever more individuality for the sake of conformity, convenience, security and status; our given raison d’etre is to engorge our material well-being, either for its own pleasure or, more egotistically, with shows of conspicuous consumption. We are, as T.S. Eliot put it so elegantly, “distracted from distraction by distraction/ filled with fancies and empty of meaning”. 41

*

“The normal process of life contains moments as bad as any of those which insane melancholy is filled with, moments in which radical evil gets its innings and takes its solid turn. The lunatic’s visions of horror are all drawn from the material of daily fact. Our civilization is founded on the shambles, and every individual existence goes out in a lonely spasm of helpless agony.” 42

These are the grim observations of William James, another pioneer of the field of psychology, who is here trying to get to grips with “the unpleasant task of hearing what the sick souls, as we may call them in contrast to the healthy-minded, have to say of the secrets of their prison-house, their own peculiar form of consciousness”. James’ vocabulary is remarkably direct and unambiguous, so allow me to very briefly skim the thesis of what he saw as the underlying cause of madness, sticking closely to his original terminology wherever possible.

Their “morbid-minded way”, James reluctantly concedes, should not be too readily dismissed. “With their grubbing in rat-holes instead of living in the light; with their manufacture of fears, and preoccupation with every unwholesome kind of misery…” it may appear to the “healthy-minded” as “unmanly and diseased”, but, on the other hand, “living simply in the light of good”, although “splendid as long as it will work”, involves us in a partial denial of reality which “breaks down impotently as soon as melancholy comes”. Furthermore, says James:

“… there is no doubt that healthy-mindedness is inadequate as a philosophical doctrine, because the evil facts which it refuses positively to account for are a genuine portion of reality; and they may after all be the best key to life’s significance, and possibly the only openers of our eyes to the deepest levels of truth.”

With the advent of modern comforts and our immersion in historically unprecedented safety and security, it can appear that those of us born in the wealthiest regions of the world have little reason to grumble, certainly when compared to the conditions of previous generations. Indeed, for anyone in Britain born into the working class or above, the famous words of Tory Prime Minister Harold Macmillan that “we’ve never had it so good” do mostly still apply. Studies have shown, of course, that social equality is far more closely correlated with overall levels of happiness than absolute levels of wealth 43, but no less apparent is the more straightforward fact that, having become materially satisfied, what we might call ‘psychological immiseration’ is more widespread than ever.

With material wants met, we are left to tread a vertiginous tightrope that has been called ‘happychondria’: that perpetual and single-minded pursuit of happiness per se that makes us achingly self-aware of shortcomings in this narrow regard. And feelings of an ‘unbearable lightness of being’ become all the lighter once our striving to be happy burgeons into an all-consuming, monomaniacal fixation, since happiness alone is insufficient to ground us and make us feel real. Worse still, as James explains, perpetual happiness is absolutely unattainable, owing both to the inevitable travails of life and to most people’s countervailing urge to negotiate life’s experiences authentically. Or, putting matters the other way around, since most people inevitably fail to attain the levels of happiness socially demanded, such non-stop pursuit of happiness (and by ‘pursuit’ here I mean ‘chasing’ rather than ‘activity’ or ‘recreation’ 44) will inevitably have adverse effects and very likely result in neurosis and feelings of moroseness. The etymological root of our word ‘happiness’ is revealing in this regard: ‘hap’ meaning luck or good fortune. Dormant in the language is a vestigial memory that happiness is a gift bestowed, rather than a treasure seized.

*

Unable to function for long or to endure everyday states of consciousness, a growing number of people are now turning either to legally prohibited narcotics or to prescribed and medically endorsed opiates: drugs that lift the clouds of emptiness, or else numb the user to the tawdriness of everyday reality. These pills offer a surrogate escape when it can no longer be supplied by the local shopping mall, or, always more persuasively, by TV and similar distractions online – both places where the big pharmaceutical companies go to enhance their profits by continuously pushing more of their psychoactive wares.

To a great extent, these powerful industries, whether through lobbying or via other means of self-promotion, have gradually reshaped psychiatry itself. The patient who was once central to their own treatment has been made peripheral again, as the psychiatrist gets on with mending their mental apparatus. And by ‘mending’ it is better to read ‘made happier’, or else ‘made normal’, and thus subjected to a transformation which is centred on societal functioning but that may or may not be life-enhancing in a fuller and more meaningful sense. So does it finally matter if society becomes ‘happier’ and people are better able to cope due only to the widespread use of pharmaceuticals? And does it matter if children as young as six are prescribed a daily dose of mind-altering drugs just to fit in and get by? 45

What if anguish and sorrow are vital parts of an authentic experience of life, and, as a good friend and poet once put it, “woe is part of the worship”? To rebut sorrow and utterly disregard the origins of distress seems to me irredeemably Panglossian, which is surely no less life-denying than its opposite: a fatalistic surrender to misery. Because to be able truly to affirm in capitals – to say “YES” – is finally predicated on our capability to no less defiantly scream “NO!” In the finish it is zombies alone that are unable ever to scream “NO!”, especially once confronted by the recurring cruelties and stupidities of this sometimes monstrous world.

Fritjof Capra says that Laing once told him, “Mystics and schizophrenics find themselves in the same ocean, but the mystics swim whereas the schizophrenics drown.” And latent within even the most zombified of people, there must linger, no matter how faintly, an inextinguishable inner presence akin to spirit, to soul; a living force that cannot be fully disabled without untold consequences. It is this inner life that fights on and kicks against the main object it can kick against: those modes of thinking and behaving that the ‘normal world’ sanctions and calls ‘sane’, but which the organism (sometimes correctly) identifies as aspects of an inexplicable, incomprehensible and literally terrifying existential threat.

This is how Laing understood the nature of madness, and Laing was one of the sanest (both under legal and more popular definitions) ever to have stayed so close to its shadow. He studied the mad without ever flinching away; listening on with patient compassion to their plight. He stayed open and survived. In an important sense, he trusted their testimony. If we wish to understand what is happening to us, I believe we ought to trust just one of his findings too. As Laing concludes in the same preface to his book The Divided Self:

“Thus I would wish to emphasize that our ‘normal’ ‘adjusted’ state is too often the abdication of ecstasy, the betrayal of our true potentialities, that many of us are only too successful in acquiring a false self to adapt to false realities” 46

While on another occasion he wrote still more emphatically:

“From the alienated starting point of our pseudo-sanity, everything is equivocal. Our sanity is not ‘true’ sanity. Their madness is not ‘true’ madness. The madness of our patients is an artefact of the destruction wreaked on them by us and by them on themselves. Let no one suppose that we meet ‘true’ madness any more than that we are truly sane. The madness that we encounter in ‘patients’ is a gross travesty, a mockery, a grotesque caricature of what the natural healing of that estranged integration we call sanity might be. True sanity entails in one way or another the dissolution of the normal ego, that false self competently adjusted to our alienated social reality; the emergence of the ‘inner’ archetypal mediators of divine power, and through this death a rebirth, and the eventual reestablishment of a new kind of ego-functioning, the ego now being the servant of the divine, no longer its betrayer.” 47

As with death per se, we choose on the whole to remain oblivious to our all-embracing, deathly materialist existence, excepting a dwindling minority whom our secular society marginalises as deluded and misguided at best, and at worst as cranks or fanatics – and there are many religious cranks and fanatics, of course, just as there are no less fanatical anti-religious zealots. Perhaps, to paraphrase Philip Larkin, the rest of us really ought to be screaming. Whether stultified or petrified, inwardly many are, and that’s where the pills come in.

Laing did not mistake madness for normality, but understood perfectly well that normality can often be madness too. And normality, in turn, after being exposed as madness, has deliberately misunderstood Laing ever since.

Next chapter…

*

Addendum: Advertising vs. sanity

The following brief extract is drawn from an article by satirist Hugh Iglarsh based around an interview with activist and award-winning documentary filmmaker Jean Kilbourne that was published in Counterpunch magazine in October 2020.

HI: What kind of personality does advertising cultivate? How would you describe the ideal consumer or recipient of advertising?

JK: The ideal ad watcher or reader is someone who’s anxious and feels incomplete. Addicts are great consumers because they feel empty and want to believe that something out there, something for sale, can fill them up. Perhaps the ideal consumer is someone suffering from bulimia, because this person will binge and gorge and then purge, thus needing to start the cycle all over again.

HI: Addiction is one of the major themes of your book. How does advertising help foster addiction?

JK: The selling of addictive products is of course a big part of what advertisers do. They study addiction very closely, and they know how addicts think – they literally know what lights up their brains.

Advertisers understand that it is often disconnection in childhood that primes people for addiction. For many traumatized people, the first time they drink or smoke or take drugs may be the very first time they feel okay. Soon they feel they are in a relationship with the alcohol or the cigarette. Addicts aren’t stupid – the stuff they’re taking really does work, at least at first. It numbs the pain, which makes them feel connected to the substance. Eventually the drug or substance turns on them and makes all the problems they’re fleeing from worse.

What struck me about the genius of advertisers is how they exploit themes of tremendous importance to addicts, such as their fear of loneliness and desire for freedom. This is precisely what addiction does to you – it seems to offer you what you need, while actually making you more dependent, more alone. The ads promise freedom and connection, in the form of products that entrap users and weaken relationships. 48

In Chapter Eight, The Unreal Thing, I present my own thoughts on the detrimental impact of advertising on modern culture.

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 Extract from The Divided Self: An Existential Study in Sanity and Madness by R. D. Laing, first published 1959/60; “Preface to the Pelican Edition” written September 1964.

2 From an article entitled “Asylum tourism” by Jennifer L. Bazar and Jeremy T. Burman, published in Monitor on Psychology, February 2014, Vol 45, No. 2. https://www.academia.edu/11707191/Asylum_tourism_In_the_19th_century_travelers_visited_asylums_to_admire_the_institutions_architecture_and_grounds

3 Sometimes quoted in Latin as Quos Deus vult perdere, prius dementat (literally: Those whom God wishes to destroy, he first deprives of reason) or Quem Iuppiter vult perdere, dementat prius (literally: Those whom Jupiter wishes to destroy, he first deprives of reason) this expression has been used in English literature since at least the 17th century. In the form presented here it first appeared in the Reverend William Anderson Scott’s book Daniel, a Model for Young Men and then later in Longfellow’s poem The Masque of Pandora. Although falsely attributed to Euripides, earlier versions of this phrase do indeed have classical Greek origins.

4 The shift in attitude towards sexual practices as extreme as sadomasochism is a curious one. I take the liberal view that it is right to be fully tolerant of activities that do not injure innocent parties and so do not wish to infringe individual freedoms when they do not violate the freedom of others. Nevertheless, I tend to regard sexual practices such as sadomasochism as perverse, and not because I do not understand them, but because I do. I recognise the urge that twists pleasure and pain together; the same one that mixes up vulnerability with humiliation. The psychological dangers are abundantly clear to me and the fact that our society today actively promotes and normalises S/M is perhaps indicative of a traumatic breakdown in human relations.  It is wonderful that society has overcome so many of its hang-ups, but all taboos aren’t equal. Taboos against inflicting severe pain, even when consensual, do make sense.

Sarah Byrden, a sex educator and sacred sexuality teacher, says we are simultaneously (without realising it) “being bounced off the walls between pornography and Puritanism”:

5    A quote along these lines is certainly attributed to Salvador Dalí.

6

After calling the hospital for an appointment, the pseudopatient arrived at the admissions office complaining that he had been hearing voices. Asked what the voices said, he replied that they were often unclear, but as far as he could tell they said “empty,” “hollow,” and “thud.” The voices were unfamiliar and were of the same sex as the pseudopatient. The choice of these symptoms was occasioned by their apparent similarity to existential symptoms. Such symptoms are alleged to arise from painful concerns about the perceived meaninglessness of one’s life. It is as if the hallucinating person were saying, “My life is empty and hollow.” The choice of these symptoms was also determined by the absence of a single report of existential psychoses in the literature.

Beyond alleging the symptoms and falsifying name, vocation, and employment, no further alterations of person, history, or circumstances were made. The significant events of the pseudopatient’s life history were presented as they had actually occurred. Relationships with parents and siblings, with spouse and children, with people at work and in school, consistent with the aforementioned exceptions, were described as they were or had been. Frustrations and upsets were described along with joys and satisfactions. These facts are important to remember. If anything, they strongly biased the subsequent results in favor of detecting insanity, since none of their histories or current behaviors were seriously pathological in any way.

Immediately upon admission to the psychiatric ward, the pseudopatient ceased simulating any symptoms of abnormality. In some cases, there was a brief period of mild nervousness and anxiety, since none of the pseudopatients really believed that they would be admitted so easily. Indeed, their shared fear was that they would be immediately exposed as frauds and greatly embarrassed. Moreover, many of them had never visited a psychiatric ward; even those who had, nevertheless had some genuine fears about what might happen to them. Their nervousness, then, was quite appropriate to the novelty of the hospital setting, and it abated rapidly.

Apart from that short-lived nervousness, the pseudopatient behaved on the ward as he “normally” behaved. The pseudopatient spoke to patients and staff as he might ordinarily. Because there is uncommonly little to do on a psychiatric ward, he attempted to engage others in conversation. When asked by staff how he was feeling, he indicated that he was fine, that he no longer experienced symptoms. He responded to instructions from attendants, to calls for medication (which was not swallowed), and to dining-hall instructions. Beyond such activities as were available to him on the admissions ward, he spent his time writing down his observations about the ward, its patients, and the staff. Initially these notes were written “secretly,” but as it soon became clear that no one much cared, they were subsequently written on standard tablets of paper in such public places as the dayroom. No secret was made of these activities.

The pseudopatient, very much as a true psychiatric patient, entered a hospital with no foreknowledge of when he would be discharged. Each was told that he would have to get out by his own devices, essentially by convincing the staff that he was sane. The psychological stresses associated with hospitalization were considerable, and all but one of the pseudopatients desired to be discharged almost immediately after being admitted. They were, therefore, motivated not only to behave sanely, but to be paragons of cooperation. That their behavior was in no way disruptive is confirmed by nursing reports, which have been obtained on most of the patients. These reports uniformly indicate that the patients were “friendly,” “cooperative,” and “exhibited no abnormal indications.”

Extract taken from Rosenhan DL (January 1973) entitled “On being sane in insane places” published in Science 179 (4070): 250–8. http://web.archive.org/web/20041117175255/http://web.cocc.edu/lminorevans/on_being_sane_in_insane_places.htm

7    Ibid.

8    Ibid.

9

A psychiatric label has a life and an influence of its own. Once the impression has been formed that the patient is schizophrenic, the expectation is that he will continue to be schizophrenic. When a sufficient amount of time has passed, during which the patient has done nothing bizarre, he is considered to be in remission and available for discharge. But the label endures beyond discharge, with the unconfirmed expectation that he will behave as a schizophrenic again. Such labels, conferred by mental health professionals, are as influential on the patient as they are on his relatives and friends, and it should not surprise anyone that the diagnosis acts on all of them as a self-fulfilling prophecy. Eventually, the patient himself accepts the diagnosis, with all of its surplus meanings and expectations, and behaves accordingly. [Ibid.]

10  Ibid.

11 Physicists – at least all the ones I’ve known – whether they’ve heard it before or not (and they generally have heard it before), get the joke immediately; non-physicists, on the other hand, I refer to the old saw that “many a true word is spoken in jest.” For such blunt reductionism certainly does lie at the heart of physics, and indeed of all ‘hard science’: disciplines founded upon the simplification of the infinitely complex processes of the natural world. With its especial penchant for ‘elegance’ and parsimony, every physicist is trained through repeated worked examples, and eventually hard-wired, to consider the most straightforward and idealised case as the most productive first step in solving every problem: hence the spherical cow. The funny thing is how often it works!

Consider a Spherical Cow became the title of a book about methods of problem solving using simplified models written by Environmental Scientist John Harte, published in 1988.

In a letter to Science journal published in 1973 the author Steven D. Stellman instead postulated “A Spherical Chicken”. https://science.sciencemag.org/content/182/4119/1296.3

12 The fact that no-one is actually able to answer this question says a lot about time machines – but that’s for a separate discussion!

13    From the essay Night Walks written by Charles Dickens, originally published in his weekly journal All the Year Round in 1860, and appearing as Chapter 13 of The Uncommercial Traveller (1861).

14 From Aldous Huxley’s Brave New World Revisited (1958), chapter 8 “Chemical Persuasion”

15 From Oliver Sacks’s A Leg to Stand On (1984), chapter VII “Understanding”

16 From an interview in The Observer published January 25, 1931.

17 In 1951, Solomon Asch conducted his first conformity laboratory experiments, inviting groups of male college students to participate in a simple “perceptual” task, which involved deciding which of three lines labelled A, B and C matched the length of a comparator line shown on a different card. In reality, all but one of the participants were actors, and the true focus of the study was how the remaining participant would react to the actors’ behaviour. Each participant was asked in turn to say aloud which line matched the length of that on the first card, and the seating was arranged so that the real participant always responded last.

In the control group, with no pressure to conform to actors, the error rate on the critical stimuli was less than 1%. In the actor condition too, the majority of participants’ responses remained correct (63.2%), but a sizable minority of responses conformed to the actors’ incorrect answer (36.8%). The responses revealed strong individual differences: 5% of participants were always swayed by the crowd, and only 25% consistently defied majority opinion, the rest conforming on some trials. Overall, 75% of participants gave at least one incorrect answer across the 12 critical trials. Commenting on the results, Asch put it this way: “That intelligent, well-meaning, young people are willing to call white black is a matter of concern.”

18 This is sometimes called ‘Planck’s Principle’ and it is taken from the following passages drawn from Wissenschaftliche Selbstbiographie. Mit einem Bildnis und der von Max von Laue gehaltenen Traueransprache. [trans: Scientific Autobiography. With preface and obituary by Max von Laue] Johann Ambrosius Barth Verlag (Leipzig 1948), p. 22, in Scientific Autobiography and Other Papers, (1949), as translated by F. Gaynor, pp. 33–34, 97.

“A new scientific truth does not generally triumph by persuading its opponents and getting them to admit their errors, but rather by its opponents gradually dying out and giving way to a new generation that is raised on it. … An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.”

19 William James, The Principles of Psychology, Volume I, Chapter VII, p. 196 (1890).

20    Transforming Diagnosis, a post by former National Institute of Mental Health (NIMH) Director Thomas Insel, published by NIMH on April 29, 2013. https://www.nimh.nih.gov/about/directors/thomas-insel/blog/2013/transforming-diagnosis.shtml

21  The film (released in 1975) was an adaptation of the novel of the same name written by Ken Kesey and published more than a decade earlier, in 1962. Kesey based his story on his experiences working late shifts as an orderly at a mental health institution, as well as on more personal experiences of using psychedelics.

22 Quote taken from Cracked: Why Psychiatry is Doing More Harm Than Good (2012) by James Davies, Chapter 2, “The DSM – a great work of fiction?”

23 Ibid.

24 Ibid.

25 Ibid.

26 Ibid.

27 Ibid.

28 Ibid.

29 From an article entitled “Diagnosing the D.S.M.” written by Allen Frances, published in The New York Times on May 11, 2012. http://www.nytimes.com/2012/05/12/opinion/break-up-the-psychiatric-monopoly.html?_r=1

30 From an article entitled “Inside The Battle To Define Mental Illness” written by Gary Greenberg, published in Wired magazine on December 27, 2010. https://www.wired.com/2010/12/ff_dsmv/

31 The practice continued in France into the 1980s, whereas, perhaps surprisingly, it had already been banned on moral grounds in the Soviet Union by 1950.

32

The Montreal Experiments were carried out on patients suffering from schizophrenia and used sensory deprivation, ECT and drugs (including drug-induced coma) combined with “psychic driving”, an early form of brainwashing involving pre-recorded audio tapes played non-stop for days, with up to half a million repetitions altogether. One of Cameron’s victims was Jean Steel, whose daughter Alison (only four and a half at the time of her mother’s treatment) told CBC News in an interview:

“She was never able to really function as a healthy human being because of what they did to her.”

From an article entitled “Federal government quietly compensates daughter of brainwashing experiments victim” written by Elizabeth Thompson, published by CBC News on October 26, 2017. https://www.cbc.ca/news/politics/cia-brainwashing-allanmemorial-mentalhealth-1.4373590

Embedded below is an episode from CBC investigative documentary series The Fifth Estate entitled “MK Ultra: CIA mind control program in Canada” that was first broadcast in 1980:

33 An article titled “Rorschach and Awe” written by Katherine Eban, published in Vanity Fair in July 2007 reported that:

A psychologist named Jean Maria Arrigo came to see me with a disturbing claim about the American Psychological Association, her profession’s 148,000-member trade group. Arrigo had sat on a specially convened A.P.A. task force that, in July 2005, had ruled that psychologists could assist in military interrogations, despite angry objections from many in the profession. […]

Two psychologists in particular played a central role: James Elmer Mitchell, who was attached to the C.I.A. team that eventually arrived in Thailand, and his colleague Bruce Jessen. Neither served on the task force or are A.P.A. members. Both worked in a classified military training program known as SERE—for Survival, Evasion, Resistance, Escape—which trains soldiers to endure captivity in enemy hands. Mitchell and Jessen reverse-engineered the tactics inflicted on SERE trainees for use on detainees in the global war on terror, according to psychologists and others with direct knowledge of their activities. The C.I.A. put them in charge of training interrogators in the brutal techniques, including “waterboarding,” at its network of “black sites.” In a statement, Mitchell and Jessen said, “We are proud of the work we have done for our country.”

https://www.vanityfair.com/news/2007/07/torture200707?printable=true%C2%A4tPage=all

An article titled “The Black Sites” written by Jane Mayer, published in The New Yorker in August 2007 picked up the same story:

The use of psychologists [on the SERE program] was also considered a way for C.I.A. officials to skirt measures such as the Convention Against Torture. The former adviser to the intelligence community said, “Clearly, some senior people felt they needed a theory to justify what they were doing. You can’t just say, ‘We want to do what Egypt’s doing.’ When the lawyers asked what their basis was, they could say, ‘We have Ph.D.s who have these theories.’ ” He said that, inside the C.I.A., where a number of scientists work, there was strong internal opposition to the new techniques. “Behavioral scientists said, ‘Don’t even think about this!’ They thought officers could be prosecuted.”

Nevertheless, the SERE experts’ theories were apparently put into practice with Zubaydah’s interrogation. Zubaydah told the Red Cross that he was not only waterboarded, as has been previously reported; he was also kept for a prolonged period in a cage, known as a “dog box,” which was so small that he could not stand. According to an eyewitness, one psychologist advising on the treatment of Zubaydah, James Mitchell, argued that he needed to be reduced to a state of “learned helplessness.” (Mitchell disputes this characterization.)

https://www.newyorker.com/magazine/2007/08/13/the-black-sites

A subsequent Senate Intelligence Committee report from 2014 confirms that:

The CIA used two outside contract psychologists to develop, operate, and assess its interrogation operations. The psychologists’ prior experience was at the Air Force Survival, Evasion, Resistance and Escape (SERE) school. […]

The contractors developed the list of enhanced interrogation techniques and personally conducted interrogations of some of the CIA’s most significant detainees using those techniques. The contractors also evaluated whether detainees’ psychological state allowed for the continued use of the techniques, even for some detainees they themselves were interrogating or had interrogated. […]

In 2005, the psychologists formed a company to expand their work with the CIA. Shortly thereafter, the CIA outsourced virtually all aspects of the program. The CIA paid the company more than $80 million.

https://www.feinstein.senate.gov/public/index.cfm/senate-intelligence-committee-study-on-cia-detention-and-interrogation-program

34

“The discovery of phenothiazines, the first family of antipsychotic agents has its origin in the development of the German dye industry, at the end of the 19th century (Graebe, Liebermann, Bernthsen). Up to 1940 they were employed as antiseptics, antihelminthics and antimalarials (Ehrlich, Schulemann, Gilman). Finally, in the context of research on antihistaminic substances in France after World War II (Bovet, Halpern, Ducrot) the chlorpromazine was synthesized at Rhône-Poulenc Laboratories (Charpentier, Courvoisier, Koetschet) in December 1950. Its introduction in anaesthesiology, in the antishock area (lytic cocktails) and “artificial hibernation” techniques, is reviewed (Laborit), and its further psychiatric clinical introduction in 1952..”

From the abstract to a paper entitled “History of the Discovery and Clinical Introduction of Chlorpromazine” authored by Francisco Lopez-Muñoz, Cecilio Alamo, Eduardo Cuenca, Winston W. Shen, Patrick Clervoy and Gabriel Rubio, published in the Annals of Clinical Psychiatry, 17(3):113–135, 2005. https://www.researchgate.net/publication/7340552_History_of_the_Discovery_and_Clinical_Introduction_of_Chlorpromazine

35 “Psychiatry’s New Brain-Mind and the Legend of the ‘Chemical Imbalance’” written by Ronald W. Pies, Editor-in-Chief Emeritus, published by Psychiatric Times on July 11, 2011. http://www.psychiatrictimes.com/couch-crisis/psychiatrys-new-brain-mind-and-legend-chemical-imbalance

36 Extract from The Divided Self: An Existential Study in Sanity and Madness by R. D. Laing, first published 1959/60; Part 1, Chapter 1, “The Existential-Phenomenological Foundations for A Science of Persons”.

37 McManus S, Bebbington P, Jenkins R, Brugha T. (eds.) (2016). Mental health and wellbeing in England: Adult psychiatric morbidity survey 2014.

38 Larkin’s celebrated poem This Be The Verse, which begins with the lines “They fuck you up, your mum and dad/ They may not mean to, but they do”, was written and first published in 1971.

39 One of Laing’s great interests was in the “double bind” situation, which he came to diagnose as the root cause of most of the madness around him. Laing had adopted the idea of the “double bind” from the anthropologist Gregory Bateson. Bateson, in turn, had traced the notion back to a semi-autobiographical novel by the Victorian writer Samuel Butler, entitled The Way of All Flesh. But Butler had only described the condition and not named it, whereas Bateson rediscovered it and labelled it as an important cause of schizophrenia.

Hearing a parent say “I love you”, for instance, while seeing no expression that supported the evidence of that professed love, presented the patient with a “double-bind” situation. This is just one example, but Laing had witnessed this and many other kinds of “paradoxical communication” in his patients’ relationships with their nearest and dearest. He eventually came to believe, along with Bateson, that being caught in such a “double-bind” situation was existentially damaging and very commonly, therefore, psychologically crippling. In recognising this, Laing had undoubtedly discovered a fragment of the truth, and it is a shame that he then over-intellectualised the issue, as intellectuals are wont to do. Replace “double bind” with “mind game” and his case becomes much clearer. If people, especially those you are closest to and those you need to trust, constantly undermine your view of yourself and of your relationship to others, then the seeds of destruction are being sown. But to my mind, such details of Laing’s outlook are nothing like as interesting and illuminating as the general thrust of what he had to say about our society.

40 As quoted in Wisdom for the Soul: Five Millennia of Prescriptions for Spiritual Healing (2006) by Larry Chang, p. 412; this might be a paraphrase, as the earliest occurrence of this phrase thus far located is in the form: “Ronald David Laing has shocked many people when he suggested in 1972 that insanity can be a perfectly rational adjustment to an insane world.” in Studii de literatură română și comparată (1984), by The Faculty of Philology-History at Universitatea din Timișoara. A clear citation to Laing’s own work has not yet been found.

41 From the first of T.S. Eliot’s Four Quartets titled Burnt Norton.

42 This passage continues:

“If you protest, my friend, wait till you arrive there yourself! To believe in the carnivorous reptiles of geologic times is hard for our imagination—they seem too much like mere museum specimens. Yet there is no tooth in any one of those museum-skulls that did not daily through long years of the foretime hold fast to the body struggling in despair of some fated living victim. Forms of horror just as dreadful to their victims, if on a smaller spatial scale, fill the world about us to-day. Here on our very hearths and in our gardens the infernal cat plays with the panting mouse, or holds the hot bird fluttering in her jaws. Crocodiles and rattlesnakes and pythons are at this moment vessels of life as real as we are; their loathsome existence fills every minute of every day that drags its length along; and whenever they or other wild beasts clutch their living prey, the deadly horror which an agitated melancholiac feels is the literally right reaction on the situation.”

Extract taken from The Varieties of Religious Experience: A Study in Human Nature, Lectures VI and VII, “The Sick Soul”, William James (1902)

43 In their 2009 book The Spirit Level: Why More Equal Societies Almost Always Do Better, authors Richard G. Wilkinson and Kate Pickett examined the major impact that inequality has on eleven different health and social problems: physical health, mental health, drug abuse, education, imprisonment, obesity, social mobility, trust and community life, violence, teenage pregnancies, and child well-being. The related Equality Trust website, co-founded by the authors, also includes scatter plots from their book. One of these shows a remarkably close correlation between the prevalence of mental illness and income inequality, with the following explanatory notes attached:

“Until recently it was hard to compare levels of mental illness between different countries because nobody had collected strictly comparable data, but recently the World Health Organisation has established world mental health surveys that are starting to provide data. They show that different societies have very different levels of mental illness. In some countries only 5 or 10% of the adult population has suffered from any mental illness in the past year, but in the USA more than 25% have.

“We first showed a relationship between mental illness and income inequality in eight developed countries with WHO data – the USA, France, Netherlands, Belgium, Spain, Germany, Italy, and Japan. Since then we’ve been able to add data for New Zealand and for some other countries whose surveys of mental illness, although not strictly comparable, use very similar methods – Australia, the UK and Canada. As the graph [above] shows, mental illness is much more common in more unequal countries. Among these countries, mental illness is also more common in the richer ones.”

More Information

Pickett KE, James OW, Wilkinson RG. Income inequality and the prevalence of mental illness: a preliminary international analysis. Journal of Epidemiology and Community Health 2006;60(7):646-7.

Wilkinson RG, Pickett KE. The problems of relative deprivation: why some societies do better than others. Social Science and Medicine 2007; 65: 1965-78.

James O. Affluenza, London: Vermilion, 2007.

Friedli L. Mental health, resilience and inequalities: how individuals and communities are affected, World Health Organisation. 2009.

Wilkinson RG, Pickett KE. The Spirit Level. Penguin. 2009.

The notes and graph are also available by following the link: https://www.equalitytrust.org.uk/mental-health

44 A distinction I owe to the American archetypal psychologist and former Director of Studies at the C.G. Jung Institute in Zurich, James Hillman.

45 The facts speak for themselves really. For instance, a 2011 report from the Centers for Disease Control and Prevention (CDC) reveals that in just ten years antidepressant use in the US increased by a staggering 400%.

http://www.cbsnews.com/8301-504763_162-20123062-10391704.html

The report reveals that more than one in ten Americans aged 12 or over is taking antidepressants. But that’s okay, according to the authors of the report, because: “… many people who could benefit from antidepressants aren’t taking them. Only a third of people with symptoms of severe depression take antidepressants.”

The same report also reveals that a further 8% of Americans without depressive symptoms take the drugs for other reasons such as anxiety. And what about the population below 12 years old? Well, the following is taken from a report on what’s happening closer to home, published by the Guardian in March 2011, which begins:

“Children as young as four are being given Ritalin-style medication for behavioural problems in breach of NHS guidelines.”

http://www.guardian.co.uk/society/2011/mar/18/behaviour-drugs-four-year-olds

According to official UK guidelines, children over the age of six can now be prescribed mind-altering substances, even when these are to be administered on a daily basis.

46 Extract from The Divided Self: An Existential Study in Sanity and Madness by R. D. Laing, first published 1959/60; “Preface to the Pelican Edition” written September 1964. Laing adds: “But let it stand. This was the work of a young man. If I am older, I am now also younger.”

47 R. D. Laing, The Politics of Experience  (Ballantine Books, N.Y., 1967)

48 From an article entitled “Advertising vs. Democracy: An Interview with Jean Kilbourne” written by Hugh Iglarsh, published in Counterpunch magazine on October 23rd 2020. https://www.counterpunch.org/2020/10/23/advertising-vs-democracy-an-interview-with-jean-kilbourne/


roll up the red carpet!

The following article is Chapter Five of a book entitled Finishing The Rat Race which I am posting chapter by chapter throughout this year and beyond. Since blog posts are stacked in a reverse time sequence (always with the latest at the top), I have decided that the best approach is to post the chapters in reverse order.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a brief introductory article about the book itself and why I started writing it.

*

“All animals are equal
but some animals are more equal than others”

— George Orwell 1

*

I discovered recently and by happy accident that the author, Michael Young, who invented the term ‘meritocracy’, detested his own creation. Here’s how Young outlined his position in a Guardian article “Down with meritocracy”, published in 2001:

I have been sadly disappointed by my 1958 book, The Rise of the Meritocracy. I coined a word which has gone into general circulation, especially in the United States, and most recently found a prominent place in the speeches of Mr Blair.

The book was a satire meant to be a warning (which needless to say has not been heeded) against what might happen to Britain between 1958 and the imagined final revolt against the meritocracy in 2033.2

But I shall save further thoughts of Michael Young until later, and begin here by considering what lies in the shadows of a meritocracy. After all, and at first glance, what on earth can be wrong with the purposeful restructuring of society in ways that prioritise ‘merit’ above all else? Isn’t this the epitome of a fair system?

As with examining most ideas, it is helpful first to step back a little to gain perspective. In this case, it is important to get a fuller grasp of what ‘merit’ means when buried within the heart of ‘meritocracy’. What does ‘merit’, in this narrow political sense, finally equate to?

Throughout the last two hundred and more years, including under progressive administrations such as Clement Attlee’s reforming government in Britain and FDR’s earlier New Deal for America, the political systems in the West have remained very solidly rooted in capitalism, and being so, they have remained inherently utilitarian in design. It follows that ‘merit’ (in our narrow definitional sense) must be gauged on the scales of those extant utilitarian-capitalist conventions: that ‘merit’ therefore becomes an adjunct of ‘utility’ or, in other words, ‘usefulness’.

Advocates of capitalism like to evoke the invisible hand of the market, which they say enhances productivity and safeguards against wanton overproduction, thereby ensuring society’s needs are met. Thanks to the market, that which is wasteful falls away, and in consequence profits and earnings will flow to the most efficient producers. So it follows that within a meritocracy governed strictly by market forces, with the invisible hand steering our efforts unerringly toward ‘usefulness’, estimations of ‘merit’ ought to be fairly directly measurable in terms of salaries and wealth. Maximum profits and earnings tend to go to those who serve the most useful function and are, by dint of this, the most ‘merited’. The losers are those who merit little since they provide little to nothing of use, and, conversely, the winners contribute most gainfully in every sense…

There is already a suffocating tightness in this loop; a circularity that brings me to consider the first serious objection against meritocracy, if only the most trivial and conspicuous. For judged solely by its own terms just how meritocratic is our celebrated meritocracy? Hmmm – need I go on? Very well then, I shall offer this brisk reductio ad absurdum:

Let’s start where the debate ordinarily ends, with the topic of professional footballers… To most people, the excessive salaries paid to footballers stand out as an egregious example of unfairness. I share the same view, but wonder why we stop at footballers. They are not alone; not by a long chalk.

Indeed, given that our utilitarian-capitalist meritocracy does in fact function as it is presumed to function, then it follows that most top sportsmen (and to a lesser extent sportswomen), including footballers, but also tennis players, golfers, F1 drivers, cyclists, athletes, etc – participants in sports of lower popularity by comparison – as well as pop idols, TV celebrities and film stars (not forgetting agents and the retinue of hangers-on) are, by virtue of their fabulous incomes, not merely most deserving of such high rewards, but also, by direct extension, some of the most ‘productive’ amongst us. Would anyone care to defend this highly visible flaw in our socio-economic system? The truth is that many on this ever-expanding list are rewarded for just one thing: fame – thanks to another self-perpetuating cycle in which fame makes you wealthy, and then wealth makes you more famous again.

Nor does any such utilitarian calculus reliably account for the gargantuan salaries and bonuses (and who else gets bonuses in excess of their salaries!) of so many bankers, hedge fund managers and other financiers who callously wrecked our western economies. With annual remuneration that outstrips most ordinary workers’ lifetime earnings, the staggering rewards heaped upon those working in The City and on Wall Street bear little relationship to levels of productivity and usefulness; worse, remuneration is evidently disconnected from levels of basic competence. Instead we find that greedy ineptitude is routinely and richly rewarded, if only for the ‘made men’ already at the top and lucky enough to be “too big to fail”. In light of the crash of 2008, any further talk of “the classless society” ought to have us all running for the exits!

Then we come to the other end of our meritocratic muck-heap. And here amongst the human debris we find contradictions of an arguably more absurd kind. I am referring to those disgustingly unworthy winners of our many lotteries – you know the types: petty criminals, knuckle-draggers and wastrels (the tone here is strictly in keeping with the tabloid outrage on which it is based) who blow all their winnings on a binge of brash consumerism and a garage full of intoxicants. Conspicuous consumption of the most vulgar kinds! How dare they squander such hard, unearned dosh on having fun! But wait a minute… surely the whole point of running a lottery is that anyone can win. Have we forgotten the advertisement already? So if we are really serious about our meritocracy then perhaps we should be stricter: no lotteries at all! Yet a cursory consideration of this point presents us with far bigger hurdles. For if we are truly committed to the project of constructing a meritocracy (and we must decide precisely what this means), it is vital to acknowledge the fact that life is inherently beset with lotteries. Indeed when roundly considered, this represents an existential dilemma that potentially undermines the entire project.

For life begins with what might best be described as our lottery of inheritance. Where you are born and to whom, the postal code you reside in, the schools you attended, your religious (or not) upbringing, whether you happen to carry one or two x-chromosomes, and the colour of your skin… the whole nine yards. Your entire existence happened by extraordinary chance and each and every aspect of it owes an unfathomable debt to further blind chance.

Therefore, in our most puritanical understanding of meritocracy, lotteries relating to the guessing of random numbers would be abolished altogether, in order to set a precedent; yet these other lotteries, life’s lotteries, remain inescapable. Which is a devastating blow to the very concept of fully-fledged meritocracy, since whatever meritocracy we might choose to build will always remain a compromise of one kind or another.

In point of fact, however, we have been moving instead in the completely opposite direction. There has been a tremendous and rapid growth in lotteries of all shapes and sizes: from the casino economy working to the advantage of financial speculators at the top; to the rise of online casinos and the latest betting apps, mathematically honed to suck money from the pockets of the desperate and sometimes destitute pipedreamers at the bottom. Further indications of how far our society truly diverges from even the most rudimentary notions of meritocracy.
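As a brief aside, the arithmetic behind that honing is simple enough to sketch. The following few lines of Python are purely illustrative (my own example, using a standard European roulette bet rather than any particular app), but they show how a fixed house edge turns every wager into a small, reliable transfer from player to operator:

# Purely illustrative: the expected return of a single-number bet on a
# European roulette wheel (37 pockets, paying 35-to-1 on a win).
p_win = 1 / 37            # probability that the chosen number comes up
profit_if_win = 35        # profit, in units of the stake
loss_if_lose = -1         # otherwise the stake is gone

expected_return = p_win * profit_if_win + (1 - p_win) * loss_if_lose
print(f"Expected return per unit staked: {expected_return:.4f}")   # roughly -0.027
# Each pound wagered comes back as about 97.3p on average; repeated play
# compounds that small edge into a near-certain loss for the player.

The details differ from game to game and app to app, but a negative expectation of this general kind is the constant.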

So there is plenty of scope for devising a better version of meritocracy; one that isn’t so riddled with blatant inconsistencies and arbitrary rewards. A more refined meritocracy operating according to common sense fairness and consistency, with built-in checks and balances to ensure the winners are more consistently worthy than the losers. A more level playing field bringing us closer to the ideal – for surely a better devised version of meritocracy is the fairest system we can ever hope to live under. In fact, I beg to differ, but before entering further objections to the sham ideal of meritocracy, I wish first to celebrate the different areas in which greater equality has indeed been achieved and ones where it is still dangerously lacking.

During the Q&A session following a lecture entitled “Capitalist Democracy and its Prospects” that he delivered in Boston on September 30th, 2014, Noam Chomsky speaks to why the notion of a capitalist democracy is oxymoronic. He also discusses the widespread misinterpretation of Adam Smith’s economic thinking, especially amongst libertarians, and specifically the misuse of his terms ‘invisible hand’ and ‘division of labour’.

*

There is no denying that at the start of the twenty-first century our own society has, in a number of related ways, been made fairer and more equal than it was just thirty years ago when I was a school-leaver. Most apparent is the sweeping change in attitudes towards race and gender. Casual racism wasn’t merely permissible in seventies and early eighties Britain, but an everyday part of the mainstream culture. The few Black or Asian characters on TV were neatly allotted their long-established stereotypes, and comedians like the bilious stand-up Bernard Manning had free rein to defile the airwaves with their popular brands of inflammatory bigotry. Huge strides have been taken since then, and social attitudes are unalterably changed for the better. Today the issue of diversity is central to political debate, and social exclusion on the grounds of race and gender is outlawed.

In the prophetic words of abolitionist preacher Theodore Parker, “the arc of the moral universe is long but it bends toward justice”; words famously borrowed by Martin Luther King in a celebrated sermon he delivered in 1965.3 It was a momentous year: one that marked the official end of racial segregation in the Southern United States with the repeal of the horrendous Jim Crow laws, and the same year when Harold Wilson’s Labour government passed the Race Relations Act prohibiting discrimination in Britain on “grounds of colour, race, or ethnic and national origins”.

On August 28th (last Tuesday) ‘Democracy Now’ interviewed co-founder and chair of the Black Panther Party, Bobby Seale, who was arrested and indicted after speaking outside the 1968 Democratic National Convention in Chicago. He describes how during his trial Judge Julius Hoffman ordered him to be gagged and bound to his chair [from 9:15 mins]:

Did Bobby Seale’s treatment provide inspiration for Woody Allen’s madcap courtroom scene in ‘Bananas’? [from 5:00 mins]:

*

As Parker and King understood well, of course, the arc of the moral universe does not bend of its own accord but requires tremendous pressure from below. And so it was, again in 1965, that the white minority government of the former colony Rhodesia, under then-Prime Minister Ian Smith, declared independence in an effort to stave off the end of its apartheid system against pressure from Wilson’s government, and an armed struggle for black liberation ensued. It was a bloody struggle that would grind on throughout the 70s, but one that ended in triumph. Meanwhile, apartheid in neighbouring South Africa outlasted Rhodesia by a further decade and a half before it too was dismantled in 1994 and the rainbow flag could be hoisted.

In solidarity with Nelson Mandela and leading the armed struggle had been Joe Slovo, a commander of the ANC’s military wing Umkhonto we Sizwe (MK), who fought alongside his deputy Ronnie Kasrils; both were the sons of émigré Jews. Also prominent within the anti-apartheid resistance were other Jewish figures including Denis Goldberg, Albie Sachs, and Ruth First, an activist and scholar and the wife of Joe Slovo, who was murdered by a parcel bomb sent to her in Mozambique. Ironically, today Israel stands alone as the last remaining state that legally enforces racial segregation, but even the concrete walls and barbed wire dividing the West Bank and Gaza cannot hold forever.

This video footage was uploaded as recently as Wednesday 29th. It shows a young Palestinian girl living under Israeli control in Hebron having to climb a closed security gate just to get home:

The fence had been extended in 2012 and fitted with a single gate to provide entrance to the Gheith and a-Salaimeh neighborhoods in Hebron. The footage below was recorded by B’Tselem in May 2018 and shows other students unable to return from school and their mothers beseeching the Border Police officers to open it. The officers say in response that the gate is closed as “punishment” for stone throwing; a collective punishment that is prohibited under international law:

*

Likewise, homosexuality, which until astonishingly recent times remained a virtually unspoken taboo, was decriminalised as comparatively recently as 1967 – the year of my birth and, coincidentally, the same year Aboriginal Australians received full citizenship and the right to vote.

Before the Sexual Offences Act came into force, gay men faced prosecution and a prison sentence (lesbians slipped through the legal loophole owing to technicalities surrounding the delicate issue of penetration), whereas today they enjoy the equal right to marriage. Cynics will doubtless say this merely entitles them to an alternative form of imprisonment, but hurrah for that… since irrespective of one’s views on the institution of marriage, equality under law is indicative of genuine social progress. The same goes for the transformation of attitudes and of the legal framework countering discrimination on grounds of gender, disability and age. Discrimination based on all these prejudices is plain wrong, and liberation on all fronts, an unimpeachable good.

In these ways, our own society – like others across the globe – has become more inclusive, and, if we choose to describe it as such, more meritocratic. Yet many are still left out in the cold. Which people? Sadly, the truth is that all of the old prejudices linger on – maybe they always will – but prime amongst them is the malignant spectre of racism.

For overall, as we have become more conscious of racism and less accepting of it than in the past, the racists, in consequence, have adapted to fit back in. More furtive than old-style racism, which wore its spiteful intolerance so brashly on its sleeve, many in the fresh crop of bigots have learned to feign better manners. The foaming rhetoric of racial supremacy is greatly moderated, and there is more care taken to legitimise the targeting of the chosen pariahs. Where it used to be said that “the Coloureds” and “the Pakis” (and other labels very much more obscene again) were innately ‘stupid’, ‘lazy’, ‘doped-up’ and ‘dirty’ (the traditional rationalisations for racial hatred), the stated concern today is with difference per se. As former BNP leader Nick Griffin once put it:

[I]nstead of talking about racial purity, you talk about identity, and about the needs and the rights and the duty to preserve and enhance the identity of our own people.4

And note how identity politics here plays to the right wing just as it does to the left – better in fact, because it is a form of essentialism. In effect, Griffin is saying ‘white lives matter’, when of course what he really means is ‘white lives are superior’. But talk of race is mostly old hat to the new racists in any case, who prefer to attack ‘culture’ over ‘colour’.

In multicultural Britain, it is the Muslim minority, and especially Muslim women, who receive the brunt of the racial taunts and the physical abuse, and who have become the most preyed upon as victims of hate crimes, while the current hypocrisy lays the blame at their door for failing to adopt western values and mix in; a scapegoating that alarmingly recalls the Nazi denigration and demonisation of the Jews. It follows, of course, that it is not the racists who are intolerant but the oppressed minority of those who are, or who look like, Muslims. By this sleight of hand, Islamophobia (a very clumsy word for a vile creed) festers as the last manifestation of semi-respectable racism.

When it was released in 1974, “Blazing Saddles” shocked audiences. It is no less shocking today, but the difference is that today no-one could make it. No contemporary film in which every third word is a vile racist expletive would pass the censors. Yet as it plunges us headlong into a frenetic whirlwind of bigotry, and as all commonsense rationality is suspended, nothing remains besides the hilarious absurdity of racial prejudice. Dumb, crude, and daring: it is comedy of rare and under-appreciated genius. As Gene Wilder puts it, “They’ve smashed racism in the face and the nose is bleeding, but they’re doing it while you laugh” [6:15 mins]. Embedded below is a behind-the-scenes documentary tribute entitled “Back in the Saddle” [Viewer discretion advised]:

*

“It is only shallow people who do not judge by appearances,” quipped Oscar Wilde.5 And though the accusation at the heart of his bon mot may be contested, that most people certainly do judge by appearances really cannot be. Briefly then, I wish to consider a few of the most overlooked but widespread social prejudices, which though seldom so vicious and of less clear historical significance than other such virulent strains as sexism and racism, are long-standing and ingrained prejudices nonetheless. These tend to be prejudices against certain types of individual, rather than against interconnected “communities”. Prejudices so commonplace that some readers will doubtless see my digression as trivial, or even laughable, and yet there is good reason to delve into the matter as it opens up a bigger question, and, once expanded upon, more fundamentally challenges our whole notion of meritocracy. So here goes… (I am braced for the many titters and guffaws and encourage you to laugh along!)

Firstly, there is a permitted prejudice on the one hand against short blokes (trust me, I am one), and on the other against fat ladies. Short men and fat women being considered fair game for ridicule literally on the grounds that we don’t shape up. Which would be fine – believe me, I can take a joke – except that in playing down the deep-seated nature of such prejudice, as society generally does, there are all sorts of insidious consequences. For it means, to offer a hopefully persuasive example, that whenever satirists (and I use the term loosely, since genuine satire is rather thin on the ground) lampoon Nicolas Sarkozy, rather than holding him to account for his reactionary politics and unsavoury character, they go for the cheaper shot of quite literally belittling him (and yes, prejudice in favour of tallness saturates our language too). Worse still, Sarkozy had the gall to marry a taller and rather glamorous woman, which apparently makes him a still better target for wisecracks about being a short-arse (it’s okay, I’m reclaiming the word). As a result, Sarkozy is most consistently disparaged only for what he couldn’t and needn’t have altered, instead of what he could and should have. No doubt he takes it all on the chin… presuming anyone can actually reach down that far! Yes, it’s perfectly fine to laugh, just so long as we don’t all continue pretending that there is no actual prejudice operating.

Moreover, it is healthy for us to at least admit that there is a broader prejudice operating against all people regarded in one way or another as physically less attractive. Being fat, short, bald or just plain ugly are – in the strictest sense – all handicaps, which, though far from insurmountable, represent a hindrance to achieving success. Even the ginger-haired enjoy a less than even break, as Neil Kinnock (who was unfortunate enough to be a Welshman too) discovered shortly after he was elected leader of the Labour Party.

Indeed, most of us will have been pigeon-holed one way or another, and though we may sincerely believe that we don’t qualify to be categorised too negatively, our enemies will assuredly degrade us for reasons beyond our ken. But then, could we ever conceive of, for instance, the rise of something akin to, let’s say, an “ugly pride” movement? Obviously it would be composed solely of those self-aware and unblinkingly honest enough to see themselves as others actually see them. This envisaged pressure group would comprise an exceptionally brave and uncommon lot.

Then what of the arguably more delicate issues surrounding social class? Indeed, we might reasonably ask ourselves why is there such an animal as social class in the first place? And the quick answer is that people are inherently hierarchical. That “I look up to him because he is upper class, but I look down on him because he is lower class”, to quote again the famous skit from The Frost Report. But now pay proper attention to the vocabulary and its direct correspondence with the physical stature of the three comedians.6

*

Class and stature side-by-side, just as they are in the dictionary – and as they have been throughout recent history thanks to dietary deficiencies. Here is a visual gag with etymological parallels: the word ‘stature’ itself a double entendre. But, and unlike physical stature, class is already inextricably tied into levels of wealth and success, and virtually impossible to escape in any society – the Soviet system and Mao’s China were arguably more deeply class-riven than our own purportedly “classless” societies.

Incidentally, I in no way advocate the drafting of future legislation to close the gap on these alternative forms of everyday discrimination: demanding social justice for all those with unpopular body shapes, or who speak with the wrong accent, or stutter, or who have chosen to grow patches of hair in the wrong places, or whatever it is (beards became fashionable after I wrote this!). That would instantly make our lives intolerable in another way: it would be (as the Daily Mail loves to point out) “political correctness gone mad!” After all, prejudice and discrimination come in infinite guises, so where could we finally draw the line?

All of which brings me to our last great tolerated prejudice, and one that is seldom if ever acknowledged as a prejudice in the first place. It is our own society’s – and every other society’s for that matter – very freely held discrimination on the grounds of stupidity. And no, this is not meant as a joke. But that it sounds like a joke makes any serious discussion about it inherently tricky.

Because the dim (and I have decided to moderate my language to avoid sounding unduly provocative, which is not easy – I’ll come to other tags I might have chosen in a moment) cannot very easily stand up for themselves, even if they decide to try. Those willing to concede that their lives are held back by a deficit in braininess (sorry, but the lack of more appropriate words is unusually hampering) will very probably fail to grasp much, if anything at all, of the bigger picture, or be able to articulate any of the frustrations they may feel as daily they confront a prejudice so deeply entrenched that it passes mostly unseen. Well, it’s fun to pick on the idiots, blockheads, boneheads, thickos, cretins, dimwits, dunderheads, dunces, knuckleheads, dumbbells, imbeciles, morons, jerks, and simpletons of the world isn’t it? It is the cheaper half of every comedy sketch, and in all likelihood will remain so; with much of the rest that brings us merriment being the schadenfreude of witnessing the self-same idiots cocking up over and over again. And finally, is there really a nicer word that usefully replaces all the pejoratives above? Our casual prejudice against the dim has been indelibly written into our dictionaries.

On May 13th, 1999, comedian George Carlin was invited to deliver a speech to the National Press Club at Washington D.C. He used the occasion to poke fun at the tortuous abuse of language by politicians as well as the growing tyranny of an invented “soft language”, which includes what he describes as ‘the tedious liberal labeling’ of minorities. His speech is followed by an entertaining Q&A session:

Here’s a little more from Carlin dishing the dirt on political correctness:

*

Now if I’d been writing say a hundred years ago (or even more recently) the available vocabulary would have been a little different. For it was permissible during the first half of the last century to speak and write about the problem of ‘feeble-mindedness’ – a term that implies an innate (and thus inherited) ‘disability’. Moreover, as part of a quasi-scientific conversation, social reformers including intellectuals and political thinkers got into the habit of discussing how this affliction (as it was then regarded) might best be eradicated.

Those on the political left were no less shameful in this regard than those on the right, with radical thinkers like H.G. Wells7 and George Bernard Shaw chipping in alongside the youthful Winston Churchill8; all scratching their high brows to think up ways of preventing the spread of such evidently bad stock from ruining good society – ‘the feeble-minded’ being, for reasons never dwelt on by the pioneering eugenicists, not the least bit incapable of passing on their enfeebled genes.

Thanks again to genuine social progress it is unacceptable to speak (openly) about the elimination of the underclasses in our societies today, or to openly speculate on means of halting their uncontrolled and unwanted proliferation (though I write very much in terms that Wells, Shaw and Churchill would have understood). But eugenics, we should constantly remind ourselves, was a great deal more fashionable not so very long ago; even after the concentration camps, and worryingly under alternative names, it finds advocates still today (for instance, the Silicon Valley techies gather nowadays for conferences on transhumanism, the artificial ‘enhancement’ of humanity, which is one way in which eugenics has reemerged9).

Today’s progressives (and keep in mind that Wells and Shaw both regarded themselves as progressives of their own times) prefer to adopt a more humanitarian position. Rather than eliminating ‘feeble-mindedness’, the concern is to assist ‘the disadvantaged’. A shift in social attitude that is commendable, but it brings new hazards in its stead. For implicit in the new phraseology is the hope that since disparities stem from disadvantage, all differences between healthy individuals might one day be overcome. That aside from those suffering from disability, everyone has an approximately equivalent capacity when it comes to absorbing knowledge and learning skills of one form or another, and that society alone, to the advantage of some and detriment of others, makes us smart or dim. But this is also false, and cruelly so – though not yet barbarously.

For differences in social class, family life, access to education, and so forth (those things we might choose to distinguish as environment or nurture) are indeed significant indicators of later intellectual prowess (especially when our benchmark is academic performance). So it makes for a comfortable presupposition that when it comes to intelligence (an insanely complex matter to begin with) the inherent difference between individuals is slight and upbringing is the key determinant – but where’s the proof? And if this isn’t the whole picture – as it very certainly isn’t – then what if, heaven forfend, some people really are (pro)created less cognitively proficient than others? Given that they did indeed receive equivalent support through life, it follows that failure is “their own fault”, does it not?

In any case, intelligence, like attractiveness, must be to some degree a relative trait. During any historical period, particular forms of mental gymnastics are celebrated when others are overlooked, and so instruments to measure intelligence will automatically be culturally biased (there is a norm and there are fashions) to tally with the socially accepted idea of intelligence which varies from place to place and from one era to the next. There can never be an acid test of intelligence in any pure and absolute sense.10

Furthermore, whatever mental abilities happen to confer the mark of intelligence at any given time or place obviously cannot be equally shared by everyone. As with other human attributes and abilities, there is likely to be a bell curve. It follows, therefore, that whatever braininess is or isn’t (and doubtless it takes many forms), during every age and across all nations, some people will be treated as dimmer, or brighter, than their fellows. And notwithstanding that whatever constitutes intelligence is socially determined to some extent, and that estimates of intelligence involve us in a monumentally complex matter, it remains the case that an individual’s capacity for acquiring skills and knowledge must be in part innate. This admission is both exceedingly facile and exceedingly important, and it is one that brings us right to the crux of meritocracy’s most essential flaw.
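For readers who prefer to see the point rather than take it on trust, here is a small illustrative sketch in Python (my own toy model, with made-up numbers, not a claim about how intelligence is actually constituted): a ‘trait’ assembled from many small, independent influences falls naturally into a bell curve, and once it does, a bottom decile exists by definition:

import random

random.seed(1)
population = 100_000

# each person's "score" is the sum of 40 small, independent influences,
# standing in for whatever mixture of nature and nurture one prefers
scores = sorted(sum(random.uniform(-1.0, 1.0) for _ in range(40))
                for _ in range(population))

def percentile(p):
    # the value below which p percent of the population falls
    return scores[int(p / 100 * (population - 1))]

for p in (10, 50, 90):
    print(f"{p}th percentile score: {percentile(p):+.2f}")
# Whatever units we invent, somebody always occupies the bottom decile:
# a distribution guarantees relative 'dimness' and 'brightness' by construction.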

For how can those who are thought dim be left in charge of important things? They can’t. Which means that it would be madness to give the dimmest people anything other than the least intellectually demanding jobs. The meritocratic logic then follows, of course, that being less capable (and thus relegated to performing only the most menial tasks) makes you less worthy of an equal share, and yet this cuts tangentially across the very principle of ‘fairness’ which meritocracy is supposed to enshrine. For wherein lies the fairness in the economic exclusion of the dim? To reiterate what I wrote above, our prejudice is so deeply ingrained that to many such exclusion will still appear justified. As if being dim is your own lookout.

For whether an individual’s perceived failure to match up to society’s current gauge of intelligence is primarily down to educational ‘disadvantage’ (in the completest sense) or due to reasons of an altogether more congenital kind, we may justifiably pass over the comfortable view that equal opportunity (laudable as this is) can entirely save the day. Degrees of intellectual competence – whether these turn out to be more socially or biologically determined – will always be with us, unless, that is, like Wells, Shaw and Churchill (together with many other twentieth century social reformers including Theodore Roosevelt, Woodrow Wilson, Alexander Graham Bell, and the founder of Planned Parenthood, Margaret Sanger) we opt instead for the eugenic solution – and I trust we do not. But bear in mind that programmes of forced sterilisation kept running in regions of the western world long after WWII, right up to the 1970s.11 The earlier calls to weed out the “feeble-minded” never fully went away; they simply went underground.

On March 17th 2016, ‘Democracy Now!’ interviewed Adam Cohen, co-editor of TheNationalBookReview.com and author of “Imbeciles: The Supreme Court, American Eugenics, and the Sterilization of Carrie Buck”, who explained how:

After World War II, we put the leading Nazis on trial for some of the worst things that the Nazis did. One of those very bad things was they set up a eugenics program where they sterilized as many as 375,000 people. So we put them on trial for that. And lo and behold, as the movie [“Judgment at Nuremberg”] shows, their defense was: “How can you put us on trial for that? Your own U.S. Supreme Court said that sterilization was constitutional, was good. And it was your own Oliver Wendell Holmes, one of your most revered figures, who said that. So, why are we the bad guys in this story?” They had a point.

Click here to watch on the Democracy Now! website.

*

Now for those further thoughts from the man we might describe as “the father of meritocracy” – even though he would certainly hate it! This is Michael Young speaking out about his accidental bastard child and the decisive role it has played in reshaping our societies:

I expected that the poor and the disadvantaged would be done down, and in fact they have been. If branded at school they are more vulnerable for later unemployment.

They can easily become demoralised by being looked down on so woundingly by people who have done well for themselves.

It is hard indeed in a society that makes so much of merit to be judged as having none. No underclass has ever been left as morally naked as that.12

This meritocracy we live in today, as Michael Young points out, is not just a distant remove from the fairest society imaginable, but in other ways – psychological ones especially – arguably crueller than any of the older, and less enlightened, -ocracies.

Embedded below is one of a series of lectures, “Biology as Ideology”, given by the distinguished geneticist and evolutionary biologist Richard Lewontin in 1990. Lewontin here explains how erroneous theories of biological determinism have been used to validate and support the dominant sociopolitical theories and vice versa. He also offers his subversive thoughts on meritocracy:

*

Inevitably, ‘merit’ is equated with, and thus mistaken for, ‘success’, and this is true not only for our self-declared meritocracy, but universally. Think about it: if millions of people love to read your books, or to listen to your songs, or just to watch your delightful face on their TV screens, then who would not leap to the conclusion that what they do is of the highest ‘merit’? How else did they rise to stand above the billions of ordinary anonymous human drones?

The converse is also true. That those who remain anonymous are often in the habit of regarding themselves as less significant – in fact psychologically less real – than others in the limelight they see and admire: the celebrities and the VIPs. Which brings me to a lesson my father taught me; an observation which reveals in aphoristic form the inbuilt fault with all conceptions of meritocracy: VIP being a term that makes him curse. Why? For the clinching fact that every one of us is a “very important person”. If this sounds corny or trite then ask yourself sincerely, as my father once asked me: “Are you a very important person…?”

*

Famously, Van Gogh sold just a single painting in his lifetime13, but then we all know that millions of terrible painters have also sold one (or less than one!) Not so widely known is that a great deal of Schubert’s music was lost when, in the immediate aftermath of his death, it was recycled as waste paper; but then again, thousands of dreadful composers have also had their music posthumously binned. So the odds are that if you can’t sell your music or publish your book, then you’re just another of the billions, rather than an as yet unappreciated master and another Van Gogh or Schubert. For aside from posterity, and no matter how much we might like to conjure one up, there is no established formula for separating ‘merit’ from ‘success’, and no good reason for supposing we will ever discover such a razor.

In reality therefore, any form of meritocracy will only ever be a form of success-ocracy, and in our own system, money is the reification of success. A system in which success and thus money invariably breeds more success and more money, because unavoidably it contains feedback loops: success compounds success just as failure compounds failure. For this reason the well-established ruling oligarchies will never be unseated by means of any notional meritocracy – evidence of their enduring preeminence being, somewhat ironically, more apparent in the American republic, where dynasties, and especially political ones, are less frowned upon, and in consequence have remained more visible than in the class-ridden island kingdom it abandoned and then defeated. But even if our extant aristocracies were one day uprooted wholesale, then meritocracy simply opens the way for that alternative uber-class founded by the “self-made man”.
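To make that feedback loop concrete, here is a deliberately crude toy model in Python (my own illustration, not drawn from any study cited here): a population starts out perfectly equal, and each round’s reward goes to someone chosen in proportion to what they already hold. No difference in ‘merit’ is modelled at all, yet the rewards concentrate:

import random

random.seed(7)
agents = 1000
wealth = [1.0] * agents        # everyone starts out equal

for _ in range(10_000):
    # each new unit of reward goes to an agent picked in proportion to current wealth
    winner = random.choices(range(agents), weights=wealth, k=1)[0]
    wealth[winner] += 1.0      # success breeds further success

wealth.sort(reverse=True)
top_share = sum(wealth[:agents // 100]) / sum(wealth)
print(f"Share held by the top 1%: {top_share:.1%}")   # an equal split would be 1.0%

Compounded luck alone does the sorting here, which is precisely why ‘success’ is so easily mistaken for ‘merit’.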

Indeed, ‘aristocracy’, deriving from the Greek ἀριστοκρατία (aristokratia) and literally meaning “rule of the best”, sounds a lot like ‘meritocracy’ to me. Whereas governance by those selected as most competent (the other way ‘meritocracy’ is sometimes defined) is better known by an alternative name too – ‘technocracy’ in this case – with the select order of technocrats working to whose betterment, we might reasonably ask. Meritocracy of both kinds – and every meritocratic system must combine these twin strands – has fascistic overtones.

The promise of meritocracy has been seductive largely because of its close compatibility with neoliberalism, today’s predominant, in fact unrivalled, politico-economic ideology. Predicated on the ‘realist’ premise that humans do indeed have an ingrained predisposition to social hierarchy (something that traditional concepts of egalitarianism sought to abolish), it offers a reconfigured market solution to foster a sort of laissez-faire egalitarianism: the equalisation of wealth and status along lines that are strictly “as nature intended”. Furthermore, it appeals to some on the left by making a persuasive case for “equality of opportunity”, if always to the detriment of the more ambitious goal of “equality of outcome”. This sidelining of “equality of outcome” has led to a dramatic lowering of the bar with regard to what even qualifies as social justice.

Moreover, the rightward drift to meritocracy involves the downplaying of class politics in favour of today’s more factional and brittle politics of identity. This follows because under meritocracy the rigid class barriers of yesteryear are ostensibly made permeable and in the long run must slowly crumble away altogether. In reality, of course, social mobility is heavily restricted for reasons already discussed at length. But this abandonment of class politics in favour of the divisiveness of identity politics is, of course, greatly to the benefit of the ruling establishment. Divide and conquer remains their oldest maxim.

Interestingly, of the many advocates of meritocracy – from Thatcher to Reagan; Brown to Blair; Cameron to Obama; Merkel to May – none have bothered to define their terms very precisely. What do they mean to imply by ‘merit’ and its innately slippery counterpart ‘fairness’? And whilst they talk of ‘fairness’ over and over again – ‘fairness’ purportedly underlying every policy decision they have ever taken – the actual direction in which all this ‘fairness’ has been leading has caused a few to wonder whether ‘fairness’ might be wrong in principle! Like other grossly misappropriated abstract nouns – ‘freedom’ and ‘democracy’ spring instantly to mind – the difficulty here is that ‘fairness’ is a handy fig-leaf.

Instead, and if we genuinely wish to live in a society striving for greater equality, then the political emphasis ought not to be placed too heavily on woolly notions like ‘merit’ or ‘fairness’ but upon enabling democracy in the fullest sense. The voice of the people may not be the voice of God, but it is, to paraphrase Churchill (who mostly hated it), the least worst system.14 One person, one vote, if not quite the bare essence of egalitarianism, serves both as a fail-safe and a necessary foundation.

Of course, we must always guard against the “tyranny of the majority” by means of a constitutional framework that ensures basic rights and freedoms for all. Democracy offers an imperfect solution, but cleverly conceived and justly organised it is not, as so many right-wing libertarians are quick to tell you, “two wolves and a sheep deciding what to have for dinner”. This sideswipe is not just glib, but a far better description of the extreme right-wing anarchy they advocate. In reality, it is their beloved ‘invisible hand’ that better ensures rampant inequality and social division, and for so long as its influence remains unseen and unfettered, it will continue to do so, by rigging elections and tipping the scales of justice.

Democracy – from its own etymology: rule by the people – is equality in its most settled form. Yet if such real democracy is ever to arise and flourish then we must have a free-thinking people. So the prerequisite for real democracy is real education – sadly we are a long way short of this goal too and once again heading off in the wrong direction. But that’s for a later chapter.

Next chapter…

*

Addendum: our stakeholder society and the tyranny of choice

Prior to the rise of Jeremy Corbyn and, to a lesser extent, Bernie Sanders (for further thoughts on Sanders read my earlier posts), mainstream politics in Britain and America, as more widely, had converged to such a high degree that opposition parties were broadly in agreement. Left and right had collapsed to form a single “centrist” amalgam concurring across a wide range of issues spanning race relations, gender equality, immigration, environmentalism and foreign policy, and, most remarkably, economics. In Britain, as in America, the two major parties ceased even to disagree over the defining issue of nationalisation versus privatisation because both sides now approved of the incorporation of private sector involvement into every area of our lives. “Big government”, our politicians echoed in unison, is neither desirable nor any longer possible. Instead, we shall step aside for big business, and limit ourselves to resolving “the real issues”.

The real issues? Why yes, with the business sector running all the fiddly stuff, governments pivoted to address the expansion of individual opportunity and choice. Especially choice. Choice now became the paramount concern.

Even the delivery of essential public services, once the duty of every government (Tory and Labour alike), began to be outsourced. No sacred cows. It became the common doctrine that waste and inefficiency in our public services would be abolished by competition, including the introduction of internal markets and public-private partnerships, which, aside from helping to foster efficiency, would, importantly, diversify customer choice once again.

Under the new social arrangement, we, the people, became “stakeholders” in an altogether more meritocratic venture. Here is Tony Blair outlining his case for our progressive common cause:

“We need a country in which we acknowledge an obligation collectively to ensure each citizen gets a stake in it. One Nation politics is not some expression of sentiment, or even of justifiable concern for the less well off. It is an active politics, the bringing of a country together, a sharing of the possibility of power, wealth and opportunity…. If people feel they have no stake in society, they feel little responsibility towards it, and little inclination to work for its success. ….”15

Fine aspirations, you may think. But wait, and let’s remember that Blair was trained as a lawyer, so every word here counts. “Sharing in the possibility of power…” Does this actually mean anything at all? Or his first sentence which ends: “…to ensure each citizen gets a stake in it” – “it” in this context presumably meaning “the country” (his subject at the beginning). But every citizen already has a stake in the country, doesn’t s/he? Isn’t that what being a citizen means: to be a member of a nation state with an interest, or ‘stake’ (if we insist) in what goes on. However, according to Blair’s “One Nation” vision, members of the public (as we were formerly known) are seemingly required to become fully paid-up “stakeholders”. But how…?

Do we have to do something extra, or are our “stakeholder” voices to be heard simply by virtue of the choices we make? Is this the big idea? The hows and wheres of earning a salary, and then of spending or else investing it; is this to be the main measure of our “stakeholder” participation? In fact, is “stakeholder” any different from “stockholder” in UK plc? Or is it less than this? Is “stakeholder” substantially different from “consumer”? According to the Financial Times lexicon’s definition, a stakeholder society is:

“A society in which companies and their employees share economic successes.”16

Well, I certainly don’t recall voting for that.

*

We are increasingly boggled by choice. Once there was a single electricity supply and a single gas supply – one price fitting all. Now we have literally dozens of companies offering different deals – yet all these deals finally deliver an entirely identical supply of electricity and gas. The single difference is the price, but still you have to choose. So precious moments of our once around the sun existence are devoted to worrying about which power company is charging the least amount. And the companies know all this, of course, so they make their deals as complicated as possible. Perhaps you’ll give up and choose the worst of options – for the companies concerned, this is a winning strategy – thinking about it, this is their only winning strategy! Or, if you are of a mind to waste a few more of your precious never to be returned moments of existence, you may decide to check one of the many comparison websites – but again, which one? Just one inane and frustrating choice after another. And more of those tiresome tickboxes to navigate.

But choice is everything. So we also need to worry more about the latest school and hospital league tables. It is vital to exercise our right to choose in case an actual ambulance arrives with its siren already blaring. In these circumstances we need to be sure that the ambulance outside is bound for a hospital near the top of the league, because it is in the nature of leagues that there is always a bottom – league tables give a relative assessment, ensuring both winners and losers.

And given an entirely free choice – one not based on catchment areas – what parent in their right mind elects to send their offspring to a worse school over a better one? So are we just to hope our nearest school and/or hospital is not ranked bottom? Thankfully, house prices save us much of the time in helping to make these determinations.

Meantime I struggle to understand what our politicians and civil servants get up to in Whitehall these days. Precisely what do those who walk the corridors of power find to do each day? Reduced to the role of managers, what is finally left for them to manage?

And where is all of this choice finally leading? In the future, perhaps, in place of elections, we will be able to voice our approval/dissatisfaction by way of customer surveys. With this in mind, please take a moment to select the response that best reflects your own feelings:

Given the choice, would you say you prefer to live in a society that is:

 More fair

Less fair

Not sure

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 Quote taken from Chapter 10 of George Orwell’s satirical fairytale Animal Farm, published in 1945. After the animals have seized power at the farm they formulate “a complete system of thought” which is designed to unite the animals as well as prevent them from returning to the evil ways of the humans. The seventh and last of these original commandments of ‘Animalism’ is straightforwardly that “All animals are equal”; however, after the pigs have risen to dominance again, the sign is revised so that this last commandment reads “All animals are equal, but some animals are more equal than others”.

2 From an article entitled “Down with meritocracy: The man who coined the term four decades ago wishes Tony Blair would stop using it” written by Michael Young, published in the Guardian on June 29, 2001. http://www.guardian.co.uk/politics/2001/jun/29/comment

3 Quote taken from a sermon by Martin Luther King Jr. delivered at Temple Israel of Hollywood delivered on February 25, 1965. In fuller context, he said:

“And I believe it because somehow the arc of the moral universe is long but it bends toward justice. We shall overcome because Carlyle is right: “No lie can live forever.” We shall overcome because William Cullen Bryant is right: “Truth crushed to earth will rise again.” We shall overcome because James Russell Lowell is right: “Truth forever on the scaffold, wrong forever on the throne. Yet, that scaffold sways the future and behind the dim unknown standeth God within the shadow, keeping watch above his own.”

An audio recording of King’s speech and a full transcript is available here: http://www.americanrhetoric.com/speeches/mlktempleisraelhollywood.htm

4 Quote taken from a meeting on April 22nd, 2000 with American white supremacist and former Grand Wizard of the Ku Klux Klan, David Duke, which was recorded as an “American Friends of the British National Party” video.

In fuller context Griffin says:

“Perhaps one day, once by being rather more subtle we got ourselves in a position where we control the British Broadcasting media and then we tell ’em really how serious the immigration problem was, and we tell them the truth about a lot of the crime that’s been going on, if we tell ’em really what multiracialism has meant and means for the future, then perhaps one day the British people might change their mind and say yes every last one must go.  Perhaps they will one day.  But if you hold that out as your sole aim to start with, you’re going to get absolutely nowhere. So instead of talking about racial purity, you talk about identity, and about the needs and the rights and the duty to preserve and enhance the identity of our own people.  My primary identity quite simply is there (points to veins in wrist). That’s the thing that counts.”

The clip was shown in BBC1’s Panorama: Under the Skin first broadcast on November 25, 2001.

The transcript is available here: http://news.bbc.co.uk/hi/english/static/audio_video/programmes/panorama/transcripts/transcript_25_11_01.txt

5 Although these words are frequently attributed to Wilde himself, they actually belong to one of his characters, Lord Henry Wotton, who says: “To me, beauty is the wonder of wonders. It is only shallow people who do not judge by appearances. The true mystery of the world is the visible, not the invisible.” Taken from Chapter 2 of Wilde’s once scandalous novel The Picture of Dorian Gray.

6 The “Class Sketch” was first broadcast on April 7, 1966 in an episode of David Frost’s satirical BBC show The Frost Report. It was written by Marty Feldman and John Law, and performed by John Cleese, Ronnie Barker and Ronnie Corbett in descending order of height!

7 Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought (1901) is one of H.G. Wells’ earliest blueprints for the future. Looking ahead to the year 2000, a youthful Wells (aged 34) suggested an altogether more matter-of-fact solution to the problem of what he then called “the People of the Abyss” than a promise of education, education, education (the commentary is my own of course):

“It has become apparent that whole masses of human population are, as a whole, inferior in their claim upon the future, to other masses, that they cannot be given opportunities or trusted with power as superior peoples are trusted, that their characteristic weaknesses are contagious and detrimental in the civilizing fabric, and that their range of incapacity tempts and demoralises the strong. To give them equality is to sink to their level, to protect and cherish them is to be swamped in their fecundity…”

Which is putting it most politely! Oh dear, oh dear! What has happened to the clarion call for freedom and equality (and here I mean equality of opportunity, since to be fair Wells was ever the egalitarian, consistently keener on meritocracy than any of the more radical ideals of wealth redistribution)? Might it be that the young Mr Wells was showing his truer colours? Let us go on a little:

“The new ethics will hold life to be a privilege and a responsibility, not a sort of night refuge for base spirits out of the void; and the alternative in right conduct between living fully, beautifully, and efficiently will be to die.”

Just who are the hideous hordes whom Wells so pities and despises (in roughly equal measure)? Let us read on:

“…the small minority, for example, afflicted with indisputably transmissible diseases, with transmissible mental disorders, with such hideous incurable habits of the mind as the craving for intoxication…”

But he’s jesting… isn’t he?

“And I imagine also the plea and proof that a grave criminal is also insane will be regarded by them [the men of the New Republic] not as a reason for mercy, but as an added reason for death…”

Death? Why not prison and rehabilitation…?

“The men of the New Republic will not be squeamish either, in facing or inflicting death, because they will have a fuller sense of the possibilities of life than we possess…”

Ah, I see, yes since put like that… yes, yes, death and more death, splendid!

“All such killing will be done with an opiate, for death is too grave a thing to be made painful or dreadful, and used as a deterrent for crime. If deterrent punishments are to be used at all in the code of the future, the deterrent will neither be death, nor mutilation of the body, nor mutilation of the life by imprisonment, nor any horrible things like that, but good scientifically caused pain, that will leave nothing but memory…”

An avoidance of nasty old pain… that’s good I suppose.

“…The conscious infliction of pain for the sake of pain is against the better nature of man, and it is unsafe and demoralising for anyone to undertake this duty. To kill under the seemly conditions science will afford is a far less offensive thing.”

Death, yes, a more final solution, of course, of course…

This is horrifying, of course, especially in light of what followed historically.

Deep down Wells was an unabashed snob, though hardly exceptional for his time. Less forgivably, Wells was a foaming misanthropist (especially when sneering down at the hoi polloi). But mostly he longed to perfect the human species, and as a young man had unflinchingly advocated interventions no less surgical than those needed to remove any other cancerous organ. But then of course, it was once fashionable for intellectual types to seek scientific answers to social problems: programmes of mass-sterilisation and selective reproduction.

His Fabian rival George Bernard Shaw had likewise talked of selective breeding in his own quest to develop a race of supermen, whilst Julian Huxley, Aldous’s big brother, was perhaps the foremost and pioneering advocate of eugenics, later coining the less soiled term ‘transhumanism’ to lessen the post-Nazi stigma. Judged in the broader historical context therefore, Wells was simply another such dreaming ideologue.

That Wells was also one of the first to use the term “new world order” may be of little lasting significance, however totalitarian his visions of World Socialism; what matters more is that Wells was never in a position to realise his grander designs, in spite of being sufficiently well-connected to arrange private meetings with President Franklin D. Roosevelt, who entertained him over dinner, and with Joseph Stalin at the Kremlin. In the end, he was unable to inspire enough significant others to engage in his “open conspiracy”.

All extracts above are taken from Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought, Chapman & Hall, 1901.

8

Like most of his contemporaries, family and friends, he regarded races as different, racial characteristics as signs of the maturity of a society, and racial purity as endangered not only by other races but by mental weaknesses within a race. As a young politician in Britain entering Parliament in 1901, Churchill saw what were then known as the “feeble-minded” and the “insane” as a threat to the prosperity, vigour and virility of British society.

The phrase “feeble-minded” was to be defined as part of the Mental Deficiency Act 1913, of which Churchill had been one of the early drafters. The Act defined four grades of “Mental Defective” who could be confined for life, whose symptoms had to be present “from birth or from an early age.” “Idiots” were defined as people “so deeply defective in mind as to be unable to guard against common physical dangers.” “Imbeciles” were not idiots, but were “incapable of managing themselves or their affairs, or, in the case of children, of being taught to do so.” The “feeble-minded” were neither idiots nor imbeciles, but, if adults, their condition was “so pronounced that they require care, supervision, and control for their own protection or the protection of others.”

Extract taken from a short essay called “Churchill and Eugenics” written by Sir Martin Gilbert, published on May 31, 2009 on the Churchill Centre website. http://www.winstonchurchill.org/support/the-churchill-centre/publications/finest-hour-online/594-churchill-and-eugenics

9 “Population reduction” is another leftover residue of the old eugenics programme, freshly justified on purportedly scientific and seemingly less terrible neo-Malthusian grounds. When earlier “population reduction” was unashamedly justified and executed on the basis of the pseudoscience of eugenics, the pruning was always done from the bottom up, of course.

10 Aside from having its roots in the psychometrics of pioneering eugenicist Francis Galton, the IQ test was a pseudo-scientific approach that first appeared to be validated thanks to the research of Cyril Burt, who had devised ‘twin studies’ to prove the heritability of IQ. However, those studies turned out to be fraudulent:

“After Burt’s death, striking anomalies in some of his test data led some scientists to reexamine his statistical methods. They concluded that Burt manipulated and probably falsified those IQ test results that most convincingly supported his theories on transmitted intelligence and social class. The debate over his conduct continued, but all sides agreed that his later research was at least highly flawed, and many accepted that he fabricated some data.”

From the current entry in Encyclopaedia Britannica. http://www.britannica.com/EBchecked/topic/85886/Sir-Cyril-Burt

11

Eugenics is now rightly abjured, if only for its abominable record of cruelty. But the cruelty of the many twentieth-century programmes of eugenics was hardly incidental. Any attempt to alter human populations to make them fit an imposed social structure by means of the calculated elimination and deliberate manipulation of genetic stock automatically reduces people to the same level as farm animals.

It should be remembered too that what the Nazis tried to achieve by mass murder across Europe was only novel in terms of its extremely barbarous method. Eugenics programmes to get rid of “inferior” populations by forced sterilisation had been introduced earlier in America and continued surreptitiously into the 1970s. For instance, there was a secret programme for the involuntary sterilisation of Native American women long after World War II.

http://muse.jhu.edu/login?auth=0&type=summary&url=/journals/american_indian_quarterly/v024/24.3lawrence.html

12 From the same Guardian article entitled “Down with meritocracy” written by Michael Young, published in June, 2001.

13 Van Gogh famously sold only one painting during his lifetime, The Red Vineyard at Arles, a painting that now resides at the Pushkin Museum in Moscow. The rest of Van Gogh’s more than 900 paintings were neither sold nor came to public attention until after his death.

14

“Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”

— Winston Churchill in a speech to the House of Commons, November 11, 1947.

15 Tony Blair speaking in Singapore on January 7, 1996.

16 The source for this definition is given as the Longman Business English Dictionary (although the link is lost). http://lexicon.ft.com/Term?term=stakeholder-society


all work and no play

The following article is Chapter Six of a book entitled Finishing The Rat Race which I am posting chapter by chapter throughout this year (and beyond). Since blog posts are stacked in a reverse time sequence (always with the latest at the top), I have decided that the best approach is to post the chapters in reverse order.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a brief introductory article about the book itself and why I started writing it.

*

BOSWELL, “But, sir, the mind must be employed, and we grow weary when idle.”
JOHNSON, “That is, sir, because others being busy, we want company; but if we were all idle, there would be no growing weary; we should all entertain one another… But no man loves labour for itself.”
1

*

Leaving aside the various species of bats and whales, very nearly all mammals are land-dwelling creatures. In fact, nearly all land animals spend their lives earthbound. For millennia humans too occupied the same earthbound sphere alongside fellow ground-dwelling organisms. So consider then the following: at this precise moment upwards of six thousand scheduled airliners are aloft in our skies, and at peak times as many as ten thousand are flying high above the clouds. Each of these airborne vessels is packed with hundreds of perfectly ordinary human beings sat in rows, hurtling above our heads at altitudes exceeding thirty thousand feet and at speeds above 500 miles per hour. This equates to literally millions of people airborne at each and every moment of each and every day – the population of a sizeable city, permanently aloft!
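As a rough sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python; the flight counts are those quoted above, while the passengers-per-flight figure is purely my own illustrative assumption.

```python
# Back-of-the-envelope estimate of how many people are airborne at once.
# The flight counts come from the text above; the average load per flight
# is an illustrative assumption, not a measured figure.
flight_counts = (6_000, 10_000)   # scheduled airliners aloft: typical and peak
passengers_per_flight = 200       # assumed average load per aircraft

for flights in flight_counts:
    airborne = flights * passengers_per_flight
    print(f"{flights:,} flights x {passengers_per_flight} passengers = {airborne:,} people aloft")

# 6,000 flights  -> roughly 1,200,000 people
# 10,000 flights -> roughly 2,000,000 people
```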

Now consider this: prior to December 17th 1903, only a relative handful of our species had ever lifted off the surface of the planet by any means at all and not a single human being had ever experienced powered flight. But then, on that fateful day, Orville and Wilbur Wright made four successful flights between them. On the first take-off, Orville covered 120 feet, remaining airborne for just 12 seconds. On the final flight of the day, Wilbur managed a valiant 852 feet in 59 seconds, all at an altitude of only around ten feet. A century on, we have Airbus – note the humdrum name of the company! – and the launch of its A380, the world’s largest passenger jet, which accommodates between 525 and 853 individuals and is capable of flying over 9,000 miles nonstop. Thus, thanks to technology we have grown wings and been transformed into a semi-airborne species; entirely forgetting to be astonished by this remarkable fact is perhaps the final measure of our magnificent achievement.

*

“The world is undergoing immense changes. Never before have the conditions of life changed so swiftly and enormously as they have changed for mankind in the last fifty years. We have been carried along – with no means of measuring the increasing swiftness in the succession of events. We are only now beginning to realize the force and strength of the storm of change that has come upon us.

These changes have not come upon our world from without. No meteorite from outer space has struck our planet; there have been no overwhelming outbreaks of volcanic violence or strange epidemic diseases; the sun has not flared up to excessive heat or suddenly shrunken to plunge us into Arctic winter. The changes have come through men themselves. Quite a small number of people, heedless of the ultimate consequence of what they did, one man here and a group there, have made discoveries and produced and adopted inventions that have changed all the conditions of social life.”

These are the opening paragraphs from a lesser-known work by H.G. Wells. The Open Conspiracy, an extended essay written in 1928, was the first of Wells’ most earnest attempts to set the world to rights. Stumbling across it one day, I was struck by how this voice from ninety years ago still chimes. I couldn’t help wondering indeed if we aren’t still in the midst of those same “immense changes”, being swept along by an, as yet, undiminished “storm of change”.

Wells, who uses the word ‘change’, in various formulations, no fewer than seven times (in a mere eight sentences), goes on to compare our modern wonders to the seven wonders of the ancient world, intending to emphasise their novel potency:

“Few realized how much more they were than any “Wonders.” The “Seven Wonders of the World” left men free to go on living, toiling, marrying, and dying as they had been accustomed to for immemorial ages. If the “Seven Wonders” had vanished or been multiplied three score it would not have changed the lives of any large proportion of human beings. But these new powers and substances were modifying and transforming – unobtrusively, surely, and relentlessly – every particular of the normal life of mankind.”

Wells had been trained as a scientist, and more than this, a scientist at a time when science was reaching its apogee. At the Royal College of Science2, he had studied biology under the tutelage of T. H. Huxley, the man who most publicly defended Darwin’s theory. In the debates against the Bishop of Oxford, Samuel Wilberforce, it was Huxley who challenged and defeated the permitted orthodoxy of divine creation by showing how Science makes a better account of our origins than religious authority; so in an important sense, Huxley must be seen as one of the pioneers of this scientific revolution. With religion rather abruptly and rudely dismissed, it was open to the scientists and technologists to lead us all to salvation.

Wells was keen to get involved, if only as one of science and technology’s most passionate and outspoken advocates.  Growing up in late Victorian Britain, he was well acquainted with how systems of mass production had mostly superseded manual methods to become the predominant form of industrial process. Likewise, he had witnessed the spread of agricultural machines for planting seeds and harvesting crops, and of automotive machines transporting loads and providing ever more reliable and comfortable means for human transit. These innovations had led to a dramatic increase both in production and, more importantly, in productivity, and machine processes were set to become ever more versatile and reliable.

Wells was amongst the first to seriously consider how these new modes of manufacture with their greater efficiencies and capacities for heavier constructions, not to mention for longer range transportation and communication, would bring rapid and sweeping changes to ordinary life. Most importantly, he understood that since technology potentially allowed the generation of almost limitless power, its rise would unstoppably alter human affairs forever, and by extension, impact upon the natural world too.

Quite correctly, Wells went on to forecast an age to come (our age), in which ordinary lives are transformed to an extent so far beyond the technological transformations of past ages that life is unutterably and irreversibly altered. Yet the widespread access to these “wonders”, as he insistently calls them, causes us to regard them as so ordinary that we seldom, if ever, stop to wonder about them.

For machines are nowadays embedded quite literally everywhere – one is in fact translating the movement of my fingertips into printed words, whilst another happens to be reproducing the soulful precision of Alfred Brendel’s rendition of one of Franz Schubert’s late sonatas on a machine of still older conception (the piano) via yet another machine that preserves sound in the form of electrical impulses. Thanks to machines of these kinds, not only the sheet-music – those handwritten frequency-time graphs so painstakingly drafted, perhaps by candlelight, and very certainly using only a feather quill and inkpot – but thousands upon thousands of musical (and other) performances can be conjured up with literally “a click”. The snapping fingers of an emperor could never have summoned such variety. But then the internet is a wonder far exceeding even H.G. Wells’ far-seeing imagination.

*

More than a century ago, the poet, satirist and social commentator Oscar Wilde was another who looked forward to a time of such “wonders”. For Wilde, as for Wells, they presented reasons to be cheerful:

“All unintelligent labour, all monotonous, dull labour, all labour that deals in dreadful things, and involves unpleasant conditions, must be done by machinery. Machinery must work for us in coal mines, and do all sanitary services, and be the stoker of steamers, and clean the streets, and run messages on wet days, and do anything that is tedious and distressing… There is no doubt at all that this is the future of machinery; and just as trees grow while the country gentleman is asleep, so while Humanity will be amusing itself, or enjoying cultivated leisure – which, and not labour, is the aim of man – or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work. The fact is that civilization needs slaves… [But] Human slavery is wrong, insecure and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends.”3

Wilde and Wells were optimists, but cautious ones, and each foretold new dangers that potentially lay in wait for us. Wells wrote:

“They [the new “wonders”] increased the amount of production and the methods of production. They made possible “Big-Business,” to drive the small producer and the small distributor out of the market. They swept away factories and evoked new ones. They changed the face of the fields. They brought into the normal life, thing by thing and day by day, electric light and heating, bright cities at night, better aeration, new types of clothing, a fresh cleanliness. They changed a world where there had never been enough into a world of potential plenty, into a world of excessive plenty.”4

Wells believed that the very successes which brought about large-scale manufacturing and distribution, as well as commensurate developments in fields such as agriculture, sanitation and medicine – developments already extending the average life-expectancy – might still feasibly bring heavier burdens to bear on the planet. Left unchecked, he argued, our species would finish by using up everything, whilst exponentially crowding ourselves out of existence. So these new “wonders” were a double-edged sword. And then what of “excessive plenty” – of too much of a good thing – how do we avoid replacing one set of miseries with another? Such were Wells’ concerns, but then Wells owed a great deal to the eternal pessimist Thomas Malthus.

By contrast, writing at the dusk of the Victorian era, Wilde is not so much bothered, as Wells is, by the prospect of society overrun by a burgeoning and profligate mass of humanity, but by how the new prosperity, so long awaited and desperately overdue, can be fairly distributed. After all, progress had until then been primarily technological in form and not social, and it appeared to Wilde that the costs of industrialisation were still hugely outweighing its benefits.

The centuries of Industrial Revolution had claimed so many victims. Not only those trapped inside the mills and the mines, the wage-slaves working all the hours God sends for subsistence pay, but those still more benighted souls incarcerated in the workhouses, alongside their malnourished children, who from ages six upwards might be forced underground to sweat in the mines or else to clamber about in the more choking darkness of chimneystacks.5 Industrial development meant that for the majority of adults and children (boys and girls) life was sunk into a routine of unremitting hardship and ceaseless backbreaking labour, as the poor were ruthlessly sacrificed to profit their masters – one big difference today, of course, is that our own sweatshops are more distant.

To abolish this class-ridden barbarism, Wilde therefore proposed an unapologetically radical solution:

“Up to the present, man has been, to a certain extent, the slave of machinery, and there is something tragic in the fact that as soon as man had invented a machine to do his work he began to starve. This, however, is, of course, the result of our property system and our system of competition. One man owns a machine which does the work of five hundred men. Five hundred men are, in consequence, thrown out of employment, and having no work to do, become hungry and take to thieving. The one man secures the produce of the machine and keeps it, and has five hundred times as much as he should have, and probably, which is of more importance, a great deal more than he really wants. Were that machine the property of all, everyone would benefit by it.”6

*

In case Wilde’s enthusiasm for collective ownership encourages you to think it, then please be assured that he was not exactly a Leninist (as you will see), nor, in any traditional sense, was he a fully-fledged Marxist. In fact, if anything Wilde was an anarchist, heaping special praise on Peter Kropotkin, whom he once described as: “a man with a soul of that beautiful white Christ which seems coming out of Russia.”7

Now it is interesting and worthwhile, I think, to compare the views of Wilde, writing just a few decades earlier, with those of H.G. Wells, for both held notionally left-leaning sympathies and both were broadly hopeful; each underscored the special importance of science and technology when it came to achieving such desirable goals as ending poverty and rebuilding a fairer society. Yet in some regards, Wilde’s perspective is orthogonally different to Wells’ – and it is Wells who made the better communist (though he remained deeply antagonistic towards Marx for other reasons).

For Wells was an unflinching collectivist, and thus forever seeking solutions in terms of strict autocratic control. For instance, in one of the concluding chapters of The Open Conspiracy, Wells outlines “seven broad principles” that will ensure human progress of which the sixth reads as follows:

“The supreme duty of subordinating the personal career to the creation of a world directorate capable of these tasks [ones that will ensure the betterment of mankind] and to the general advancement of human knowledge, capacity, and power”8.

Wilde, on the contrary, unswervingly insisted that above all else the sovereign rights of the individual must be protected. That personal freedom must never be horse-traded, since “the true personality of man”, as he puts it, is infinitely more precious than any amount of prospective gains in comfort and security. This is precisely where Wilde is at his most prescient, foreseeing the dangers of socialist authoritarianism a full two decades before the Russian revolution, and instinctively advising a simple cure:

“What is needed is Individualism. If the Socialism is Authoritarian; if there are governments armed with economic power as they are now with political power; if, in a word, we are to have Industrial Tyrannies, then the last state of man will be worse than the first.”9

So compare Wilde’s earlier views to those of Wells nearly four decades on, by which time the Soviet model was up and running, and yet Wells is still advocating the need for a more widespread and overarching central authority: ultimately, a world government to coerce and co-ordinate the masses into the new age of socialism; even to the point of eradicating misfits for the sake of the greater good.

For Wells, every answer for resolving humanity’s problems involved the implementation of top-down governance, with the patterns of individual behaviour controlled by means of an applied political force-field, whereas Wilde was equally insistent that individuals are not uniformly alike, like so many atoms, and must be permitted, so far as is humanly possible, to organise themselves. It is a fundamental difference in outlook that is reflected in their attitudes towards work.

*

The inherent value of work is rarely questioned by Wells. In his earlier work A Modern Utopia (1905) he answers his own inquiry “will a Utopian be free to be idle?” as follows:

“Work has to be done, every day humanity is sustained by its collective effort, and without a constant recurrence of effort in the single man as in the race as a whole, there is neither health nor happiness. The permanent idleness of a human being is not only burthensome to the world, but his own secure misery.”10

Wells is expressing a concern that once the labouring masses are relieved of their back-breaking obligation to work, they may “develop a recalcitrance where once there was little but fatalistic acquiescence”:

“It is just because labour is becoming more intelligent, responsible, and individually efficient that it is becoming more audible and impatient in social affairs. It is just because it is no longer mere gang labour, and is becoming more and more intelligent co-operation in detail, that it now resents being treated as a serf, housed like a serf, fed like a serf, and herded like a serf, and its pride and thoughts and feelings disregarded. Labour is in revolt because as a matter of fact it is, in the ancient and exact sense of the word, ceasing to be labour at all.”11

For these reasons, Wells senses trouble ahead, whereas for Wilde, these same changes in modes of employment serve as further reasons to be cheerful:

“[And] as I have mentioned the word labour, I cannot help saying that a great deal of nonsense is being written and talked nowadays about the dignity of labour. There is nothing necessarily dignified about manual labour at all, and most of it is absolutely degrading. It is mentally and morally injurious to man to do anything in which he does not find pleasure, and many forms of labour are quite pleasureless activities, and should be regarded as such. To sweep a slushy crossing for eight hours on a day when the east wind is blowing is a disgusting occupation. To sweep it with joy would be appalling. Man is made for something better than disturbing dirt. All work of that kind should be done by machine.”12

In his essay, Wilde, unlike Wells, is unabashed in confessing to his own Utopianism, writing:

“Is this Utopian? A map of the world that does not include Utopia is not worth even glancing at, for it leaves out one country at which Humanity is always landing. And when humanity lands there, it looks out, and, seeing a better country, sets sail. Progress is the realization of Utopias.”13

But then, both Wilde and Wells were dreaming up Utopias during an age when dreaming about Utopia remained a permissible intellectual pursuit. It is just that Wilde’s dream is so much grander than any vision of Wells’. Wells was certainly an astute forecaster and could see with exceptional acuity what immediately awaited humanity around the next few corners, but Wilde, on the other hand, sought to navigate across a wider ocean. He did not wish to be constrained by the tedious encumbrances of his own time, and regarded the complete abolition of hard labour as an absolutely essential component of a better future. Even then, he was far from alone.

*

Writing in the thirties, Bertrand Russell was another outspoken advocate of cultured laziness. Russell, who is now venerated by some almost as a secular saint, was nothing of the sort. Many of his views on politics and society were highly disagreeable and he was arguably one of the dreariest philosophers ever published, but this aside he was a supreme mathematician. It is noteworthy therefore that in order to support his own expressed desire for reducing the average workload, he did a few very simple sums. These led him to what he regarded as the most important, yet completely overlooked, lesson to be learned from the Great War.

At a time when the majority of the able-bodied population were busily fighting or else engaged in other means of facilitating the destructive apparatus of war, new modes of production had maintained sufficiency, and yet, as Russell pointed out, the true significance of this outstanding triumph of the new technologies was altogether masked by the vagaries of economics. He writes:

“Modern technique has made it possible to diminish enormously the amount of labour required to secure the necessaries of life for everyone. This was made obvious during the war. At that time all the men in the armed forces, and all the men and women engaged in the production of munitions, all the men and women engaged in spying, war propaganda, or Government offices connected with the war, were withdrawn from productive occupations. In spite of this, the general level of well-being among unskilled wage-earners on the side of the Allies was higher than before or since. The significance of this fact was concealed by finance: borrowing made it appear as if the future was nourishing the present. But that, of course, would have been impossible; a man cannot eat a loaf of bread that does not yet exist. The war showed conclusively that, by the scientific organization of production, it is possible to keep modern populations in fair comfort on a small part of the working capacity of the modern world. If, at the end of the war, the scientific organization, which had been created in order to liberate men for fighting and munition work, had been preserved, and the hours of the week had been cut down to four, all would have been well. Instead of that the old chaos was restored, those whose work was demanded were made to work long hours, and the rest were left to starve as unemployed.”

And so to the sums – easy stuff for a man who had previously tried to fathom a complete axiomatic system for all mathematics:

“This is the morality of the Slave State, applied in circumstances totally unlike those in which it arose. No wonder the result has been disastrous. Let us take an illustration. Suppose that, at a given moment, a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world, everybody concerned in the manufacturing of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way, it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?”

His conclusion is that everyone could and would work far fewer hours, if only the system permitted us to:

“If the ordinary wage-earner worked four hours a day, there would be enough for everybody and no unemployment – assuming a certain very moderate amount of sensible organization. This idea shocks the well-to-do, because they are convinced that the poor would not know how to use so much leisure.”14

It was still only 1932 remember – technology’s “wonders” have moved on a lot since Russell’s day…
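To make Russell’s pin arithmetic concrete, here is a minimal sketch in Python of the two outcomes he contrasts – the shared four-hour day versus the eight-hour day with half the workforce discarded. The workforce size and output rate are illustrative assumptions of my own; only the doubling matters.

```python
# Russell's pin-factory arithmetic, with illustrative figures.
workers = 1_000                 # assumed workforce before the invention
hours_per_day = 8
pins_per_worker_hour = 10       # assumed output rate before the invention

# The world needs only as many pins as were already being made.
daily_demand = workers * hours_per_day * pins_per_worker_hour

# The invention doubles output per worker-hour.
new_rate = 2 * pins_per_worker_hour

# "Sensible world": everyone keeps their job and works fewer hours.
shared_hours = daily_demand / (workers * new_rate)            # -> 4.0 hours a day

# "Actual world": hours stay fixed, so only half the workers are needed.
workers_needed = daily_demand / (hours_per_day * new_rate)    # -> 500 workers

print(f"Shared workload : {workers} workers x {shared_hours:.0f} hours a day")
print(f"Actual outcome  : {workers_needed:.0f} workers x {hours_per_day} hours; "
      f"the other {workers - workers_needed:.0f} are thrown out of work")
```

Either way the total leisure created is identical; Russell’s point is simply about how it gets distributed.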

*

Apis mellifera, the honey-bearing bee, is the paragon of industriousness. It’s a pleasure just to watch them humming their way from flower to flower. Working all the hours the apian god sends, without a care in the world. We ascribe tremendous social virtue to our arthropodous familiars, the busy, busy bees. However, if we are to judge bees fairly then we ought properly to consider more critically what it is that our conscientious little friends actually get up to day in, day out…

For though we say that the bees are “at work” – the infertile females who carry out the majority of tasks being technically denominated “workers” – their most celebrated activity, the foraging for nectar from flowers, can hardly be considered a “real job” at all. Unless by “real job” we allow that gorging oneself on the sweetest food available automatically qualifies as work. For, after supping up an abdomenful of nectar (I exaggerate a little for effect), these “workers” then return home to empty the contents of their bellies, as any professional drinker might. Back at the hive, their sister bees also collaborate in the transformation of the incoming nectar, collectively “manufacturing” honey by means of repeated consumption, partial digestion and regurgitation – and apologies to anyone who has suddenly lost their appetite for honey, but bear in mind that milk and eggs are no less strange when you stop to think about them.

By chance, it happens that humans (and other creatures) are partial to the sticky end product of a bee’s binge drinking session. I personally love it. And so we steal away their almost intoxicating amber syrup and attach an attractive price tag to it. The bees receive compensation in the form of sugar, and being apparently unaware of our cheap deception, are extolled as paragons of virtue.

In fact, whenever we take to judging or appraising human conduct of any kind, there is a stubborn tendency to take direction either from Religion, or, if Religion is dismissed, to look for comparisons from Nature. If doing something “isn’t natural”, a lazy kind of reasoning goes, then evidently – evidentially, in fact – there must be something wrong with it. For it cannot be right and proper to sin against Religion or to transgress against Nature. Thus, behaviour that is unorthodox and deviant in relation to a received norm is denounced, in accordance with strict definition indeed, as perversion.

This fallacious “appeal to nature” argument also operates in reverse: whenever a particular behaviour is thought virtuous or worthwhile, then – and generally without the slightest recourse to further identifiable evidence – ipso facto, it becomes “natural”. Of the tremendous variety of human activities, work seems outstanding in this regard. For throughout historic times, societies have consistently upheld that work is self-evidently “natural”; the Protestant “work ethic” is perhaps the most familiar and unmistakeably religious variant of a broader sanctification of labour. Although it is surely worth noting that God’s punishment for Adam’s original sin was that he should be expelled from Paradise “to till the ground from whence he was taken.”15 (Most probably booming “the world doesn’t owe you a living, my son!” before slamming the gates to paradise shut.) Protestant mill-owners, of course, found it convenient to overlook how hard labour was God’s original punishment.

But then, atheistic societies have been inclined to extol work more highly still, and not simply because it is “natural” (the commonest surrogate for Religion), but because atheism is inherently materialist, and since materials depend upon production, productivity is likewise deemed more virtuous and worthwhile. Thus, under systems both Capitalist and Communist, work reigns supreme.

Stalin awarded medals to his miners and his manufacturers – and why not? Medals for production make more sense than medals for destruction. Yet this adoration of work involves a doublethink, with Stalin, for example, on the one hand glorifying the hard labour of labour heroes like, most famously, Alexey Stakhanov, and meanwhile dispatching his worst enemies to the punishment of hard labour in distant work camps, as did Mao and as did Hitler. “Arbeit macht frei” is an horrific lie, yet in some important sense the Nazi leaders evidently believed their own lie, for aside from war and genocide, the Nazi ideology once again extolled work above all else. In the case of Communism, the exaltation of the means of production was to serve the collective ends; in Fascism, itself the twisted apotheosis of Nature, work being natural ensures it is inherently a still greater good.

Yet oddly, whenever you stop to think about it, very little modern humans do is remotely natural, whether or not it is decent, proper and righteous. Cooking food isn’t natural. Eating our meals out of crockery by means of metal cutlery isn’t remotely natural either. Sleeping in a bed isn’t natural. Wearing socks, or hats, or anything else for that matter, isn’t natural… just ask the naturists! And structuring our lives so that our activities coincide with a predetermined time schedule isn’t the least bit natural. Alarm clocks aren’t natural folks! Wake up!

But work is indeed widely regarded as an especially (one might say uniquely) exemplary activity, as well as a wholesomely natural one. Consider the bees, the ants, or whatever other creature fits the bill, and see how tremendously and ungrudgingly productive they all are. See how marvellously proactive and business-like – such marvellous efficiency and purpose! In reality, however, the bees, ants and all the other creatures are never working at all – not even “the workers”. Not in any meaningful sense that corresponds to our narrow concept of “working”. The bees, the ants and the rest of the critters are all simply being… being bees, being ants. Being and “playing”, if you prefer: “playing” certainly no less valid as a description than “working”, and arguably closer to reality once understood from any bee or ant’s perspective (presuming they have one).

No species besides our own (an especially odd species) willingly engages in drudgery and toil; the rest altogether more straightforwardly simply eat, sleep, hunt, drink, breathe, run, swim and fly. The birds don’t do it! The bees don’t do it either! (Let’s leave the educated fleas!) Nature natures and this is all. It is we who anthropomorphise such natural activities and by attaching inappropriate labels transform ordinary pleasures into such burdensome pursuits that they sap nature of vitality. So when Samuel Johnson says, “No man loves labour for itself!” he is actually reminding us all of our true nature.

*

Whether or not we welcome it, “manpower” (humanpower, that is), like horsepower before it, is soon to be superseded by machine-power. Indeed, a big reason this profound change hasn’t made a greater impact already is that manpower (thanks to contemporary forms of wage slavery and the more distant indentured servitude of sweatshop labour) has remained comparatively cheap. For now the human worker is also more subtle and adaptable than any automated alternative. All of this, however, is about to be challenged, and the changeover will come with unfaltering haste.

To a considerable extent our switch to automation has already happened. On the domestic front, the transfer of labour is rather obvious, with the steady introduction and accumulation of so many labour-saving devices. For instance, the introduction of electric washing machines, which eliminate the need to use a washboard, to hand-rinse or to squeeze clothes through a mangle, spares us a full day of labour per week. When these became automatic washer-dryers, the only required task was to load and unload the machine. In my own lifetime the spread of these at first luxury appliances has become complete throughout the Western world. Meantime, the rise and rise of factory food and clothing production means ready meals and socks are so inexpensive that fewer of us actually bother to cook and scarcely anyone younger than me even remembers what darning is. The bored housewife was very much a late twentieth-century affliction – freed from cooking and cleaning there was suddenly ample time for stuffing mushrooms.

Outside our homes, however, the rise of the machine has had a more equivocal impact. Indeed, it has been counterproductive in many ways, with new technologies sometimes adding to the workload instead of subtracting from it. The rise of information technologies is an illustrative example: the fax machine, emails, the internet and even mobile phones have enabled businesses to extend working hours beyond our traditional and regular shifts, and in other ways, work has been multiplied as the same technologies unnecessarily interfere to the detriment of real productive capacity.

Today’s worker is faced with more assessments to complete, more paperwork (albeit usually in digital form), more evaluation, and an ever-expanding stack of office emails to handle – enough demands for swift replies to circulars and a multitude of other paper-chasing obligations that we spend half our days stuck in front of a monitor or bent over the office photocopier. Every member of “the team” is now recruited to this singular task of administration.

But these mountains of paper (and/or terabytes of zeroes and ones) needing to be reprocessed into different forms of paper and/or digital records are only rising in response to the rise of the office. In fact, it is this increase in bureaucracy which provides the significant make-weight to mask the more general underlying decline in gainful (meaning productive) employment. Yet still, this growth in administration is a growth that only carries us so far, and a growth that can and ultimately will be eliminated, if not for perfectly sound reasons of practicability, then by automation. Ultimately, office workers are no more immune to this process of technological redundancy than the rest of us.

First broadcast by Channel 4 in 1993, the final episode of Tim Hunkin’s wonderful “Secret Life of the Office” served up a humorous take on the social engineering that led to the Twentieth Century’s rise of the office:

*

That the robots are coming is no longer science fiction, any more than the killer robots circling high over Pakistan and Yemen, armed with their terrifyingly accurate AGM-114 Hellfire missiles, are science fiction. In fact, all our future wars will be fought by means of killer robots, and, unless such super-weapons are banned outright or, at the very least, controlled by international treaties, subsequent generations of these ‘drones’ will become increasingly autonomous – the already stated objective is to produce fully autonomous drones; an horrific prospect. It is also a prospect that perhaps most graphically illustrates how sophisticated today’s robotic systems have become, even if, as with all cutting-edge technology, the military enjoys the most advanced systems. In short, the grim robot fleets are with us, and set to become swarms unless nations act to outlaw their deployment, whereas their more beneficial robotic descendants still wait placidly in the wings. The arrival of both fleets heralds a new age – one for the better and one decidedly for the worse.

Of course, the forthcoming workforce of robots might also be for the worse. Yet the choice is ultimately ours, even if we cannot hold off that choice indefinitely, or even for very much longer. For all our robotic rivals (once perfected) hold so many advantages over a human workforce. Never grumbling or complaining, never demanding a pay rise or a holiday, and, in contrast to human drones, never needing any sleep at all, let alone scheming against their bosses or dreaming up ways to escape.

And the new robots will not stick to manufacturing, or cleaning, or farming the land, or moving goods around in auto-piloted trucks (just as they already fly planes), but soon, by means of the internet, they will be supplying a host of door-to-door services – indeed, a shift in modes of distribution is already beginning to happen. In the slightly longer term, robots will be able to provide all life’s rudimentary essentials – the bare necessities, as the song goes – quietly, efficiently and ungrudgingly constructing and servicing the essential infrastructure of a fully functioning civilisation. In the longer term still, robots will be able to take care of the design, installation and upgrading of everything, including their own replacement robots. In no time, our drudgery (as well as the mundane jobs performed by those trapped inside Third World sweatshops) will have been completely superseded.

This however leads us to a serious snag and a grave danger. For under present conditions, widespread automation ensures mass redundancy and long-term ruin for nearly everyone. And though there are few historical precedents, surely we can read between the lines of history to see how societies, yielding to the dictates of their ruling elites (in our times, the bureaucrats and technocrats working at the behest of unseen plutocrats), will likely deal with those superfluous populations. It is unwise to expect much leniency, especially in view of the current dismantlement of existing social safety nets and welfare systems. The real clampdown on the “useless eaters” is only just beginning.

It is advisable, therefore, to approach this arising situation with eyes wide open, recognising such inexorable labour-saving developments for what they are: not merely a looming threat but potentially, at least, an extraordinary and unprecedented opportunity. However, this demands a fresh ethos: one that truly values all human life for its own sake and not merely for its productive capacity. More specifically, it requires a steady shift towards reduced working hours and greatly extended holidays: a sharing out of the ever-diminishing workload and a redistribution of resources (our true wealth), which will of course remain ample in any case (the robots will make sure of that).

This introduction of a new social paradigm is now of paramount concern, because if we hesitate too long in making our transition to a low-work economy, then hard-line social and political changes will instead be imposed from above. Moves to counter what will be perceived as a crisis of under-employment will mean the implementation of social change, but only to the benefit of the ruling establishment, who for abundantly obvious reasons will welcome the rise in wealth and income disparity along with the further subjugation of the lower classes – the middle class very much included.

As physicist Stephen Hawking said in response to the questions “[D]o you foresee a world where people work less because so much work is automated?” and “Do you think people will always either find work or manufacture more work to be done?”:

“If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”16

It is an answer that closely echoes Wilde’s foresight of more than a century ago; the difference being one of emphasis. Hawking stresses the threat of what he calls the “second option”, whereas Wilde encourages us to press ahead in order to realise Hawking’s “life of luxurious leisure” for everyone.

Of course, there will always be a little useful work that needs doing. Robots will ultimately be able to perform all menial, most manual and the vast majority of mental tasks far more efficiently than a human brain and hand, but there will still be the need and the place for the human touch. In education, in medicine and nursing, in care for the elderly and sick, and in a host of other, sometimes mundane tasks and chores: emotionally intricate, kindly and compassionate roles that are indispensable to keeping all our lives ticking pleasantly along. The big question for our times, however, is really this: given the cheapness and abundance of modern labour-saving equipment, how is it that, even in the western world, instead of contracting, working hours are continuing to rise? The question for tomorrow – one that the first question contains and conceals – is this: given complete freedom and unrestricted choice, what would we actually prefer to be doing in our daily lives? As Bertrand Russell wrote:

“The wise use of leisure, it must be conceded, is a product of civilization and education. A man who has worked long hours all his life will become bored if he becomes suddenly idle. But without a considerable amount of leisure a man is cut off from many of the best things. There is no longer any reason why the bulk of the population should suffer this deprivation; only a foolish asceticism, usually vicarious, makes us continue to insist on work in excessive quantities now that the need no longer exists…”

“Modern methods of production have given us the possibility of ease and security for all; we have chosen, instead, to have overwork for some and starvation for others. Hitherto we have continued to be as energetic as we were before there were machines; in this we have been foolish, but there is no reason to go on being foolish forever.”17

*

I was about twelve when I took my first flight. It was on board a Douglas DC-9 and I was travelling to Vienna on an exchange trip. I was so excited and not afraid at all – or at least not afraid of the flight. Indeed, I recall how this was the main question older relatives kept asking, and I found their obsession puzzling more than anything. But as I have grown older I have sadly developed a fear of flying. This is annoying in the extreme. Why now… when I’m middle-aged and have so much less to lose? But fear is only seldom a purely rational impulse.

Not that it is half so irrational as we are told, to feel severe anxiety about being catapulted inside a thin metal capsule six miles up and at close to the speed of sound. Statistics are one thing but being in the presence of sheer physical danger is another. That said, fear of flying is surely as much about loss of control as anything. For why else did my own fear of flying worsen as I got older? Children are more accustomed than adults to feeling powerless, and so better able to relish the excitement of situations totally outside of their control.

Whole societies – or at least majority sections of societies – also suffer with phobias. Like our private fears, these collective fears held by social groups are frequently rooted in some sense of an impending loss of control. Fear of foreigners, fear of financial collapse, and fear of “terror”. But seldom considered is another societal phobia: our collective ‘fear of flying’. Flying in the poetic sense, that is: of fully letting go of the mundane. Instead, it seems our common longing is to be grounded: an understandable desire.

Why else, scarcely a century since the Wright Brothers’ miraculous first flights, do today’s air passengers find flying (that ancient dream) so tiresome that our commercial airlines serve up non-stop distractions to divert attention away from the direct experience? Indeed, listening to those familiar onboard announcements bidding us a pleasant flight, we are inclined (and very likely reclined) to hear the incidental underlying message: “we are sorry to put you through the dreary inconvenience of this journey”.

We fly and yet we don’t fly – or not as those who first dreamt of flight imagined. Flight has instead been transformed from visionary accomplishment into a nuisance, taken entirely for granted by us clock-watchers impatiently kicking our heels beneath the slow-turning departure boards.

And just why are today’s airports such sterile and soul-destroying anti-human spaces? Presumably because this is again what modern humans have come to expect! The same can be said for so many facets of modern life. If we can transform the miracle of flight into a chore, then it follows that we can turn just about any activity into one.

Next chapter…

*

In 1958 Mike Wallace interviewed psychoanalyst and social critic, Erich Fromm. What Fromm says about society, materialism, relationships, religion, and happiness is remarkably prescient, as is his analysis of a growing alienation as we become diminished to the role of products in an age of consumerism:

*

Addendum: the future of work and Universal Basic Income

Due to its historical roots in workers’ movements18, the political left has tended to hold a somewhat conflicted position when it comes to appraising the value of work. The understandable and perfectly legitimate elevation of the worker has had a countervailing effect in terms of accentuating the virtuousness of work per se, thereby adding to the weight of received wisdom that to endure toil and hardship is somehow intrinsically valuable. This is why the left has fallen into the habit of making a virtue out of the central object of the oppression it faces.

So what is the goal of the political left (of socialism, if you prefer)? What is its aim, if not, so far as it is possible, to fully emancipate the individual? For whatever dignifies and ennobles labour, celebrating work for its own sake – however understandable it may be as a strategy – disguises the base truth that work is only seldom edifying, and more often just a millstone, frequently a terrible one, which, if we are ever to become truly “free at last”, ought to be joyfully laid aside.

In 2013 David Graeber, professor of anthropology at the LSE, wrote an excoriating essay on modern work for Strike! magazine. “On the Phenomenon of Bullshit Jobs” was read over a million times and translated into seventeen different languages within weeks. Embedded below is a lecture Graeber gave to the RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce), expanding on this phenomenon and exploring how the proliferation of meaningless jobs – more readily associated with the 20th-century Soviet Union than with latter-day capitalism – has impacted modern society:

Since writing most of the above chapter, the Zeitgeist has shifted remarkably. Suddenly technological unemployment is treated as a serious prospect and debated as part of a wider political discourse on future trends. Introduced into this new debate, especially on the left, is the proposal for a ‘universal basic income’, i.e. money provided to everyone by the state to cover basic living expenses. Importantly, this payment would be provided irrespective of how many hours a person works and would come with no other (discernible) strings attached.

UBI is certainly a very bold initiative as well as a plausible solution to the diminishing need for human workers in the coming hi-tech era. Unsurprisingly, I very much welcome it, at least in principle, but wish also to offer a small note of caution. Before large numbers of us are able to live solely by means of a state-provided UBI, it will be essential to adjust societal norms relating to work. There can be no stigma in idleness. For if UBI comes to be seen as merely a state handout and its recipients as welfare dependents, then we place them all in serious jeopardy.

After all, work has historically equated to status and money, and until this ingrained relationship is eroded away, anyone subsisting on UBI alone would rather quickly sink to the level of a second-class citizen. Which is why I propose that the better approach to UBI is to advance by baby steps: reducing days and hours, increasing holidays, lowering the pensionable age, and expanding education – we must in fact think of eventually offering the luxury of lifelong education for all. Given where we start from today, to attempt the leap in one giant stride is surely too much of a risk. If UBI is truly our goal then we might best reach it by trimming work back until it barely exists at all.

*

Please note that for the purposes of ‘publishing’ here I have taken advantage of the option to incorporate hypertext links and embed videos – in order to distinguish additional commentary from the original text, all newly incorporated text has been italicised.

*

1 Quotes taken from The Life of Samuel Johnson, LL.D by James Boswell (1791). In the original version, the section substituted by ellipsis reads as follows: “There is, indeed, this in trade:– it gives men an opportunity of improving their situation. If there were no trade, many who are poor would always remain poor.”

2 Now part of Imperial College (my own alma mater).

3 Extract taken from The Soul of Man Under Socialism by Oscar Wilde (first published 1891).

4 The Open Conspiracy was published in 1928, subtitled “Blue Prints for a World Revolution”. These extracts are taken from Chapter 1 entitled “The present crisis in human affairs”. Interestingly, in a letter to Wells, albeit a begging letter, Bertrand Russell said of the work: “… I do not know of anything with which I agree more entirely”. The Open Conspiracy was later revised and republished as “What Are We to Do with Our Lives?” in 1931. http://www.voltairenet.org/IMG/pdf/Wells_The_Open_Conspiracy.pdf

5 Many boys and girls suffocated and others fell to their deaths. Nor were matters helped by the practice among master sweeps of lighting a fire beneath them in order to force them to climb faster.

6 Quote taken from The Open Conspiracy.

7

“Two of the most perfect lives I have come across in my own experience are the lives of [the French Symbolist poet, Paul] Verlaine and of Prince Kropotkin: both of them men who have passed years in prison: the first, the one Christian poet since Dante; the other, a man with a soul of that beautiful white Christ which seems coming out of Russia.”

Taken from “De Profundis”, meaning literally “from the depths”; Wilde’s celebrated cri de coeur was intended, in part at least, as an extended letter and impassioned rebuke to his lover Lord Alfred Douglas. It was written during his imprisonment in Reading Gaol between January and March 1897, and has since been publicly released in various expurgated versions, the first of which was published in 1905. A complete version was finally released in 1962.

8

From The Open Conspiracy by H.G. Wells. The full set of seven “broad principles” reads as follows:

(1) The complete assertion, practical as well as theoretical, of the provisional nature of existing governments and of our acquiescence in them;

(2) The resolve to minimize by all available means the conflicts of these governments, their militant use of individuals and property, and their interferences with the establishment of a world economic system;

(3) The determination to replace private, local or national ownership of at least credit, transport, and staple production by a responsible world directorate serving the common ends of the race;

(4) The practical recognition of the necessity for world biological controls, for example, of population and disease;

(5) The support of a minimum standard of individual freedom and welfare in the world; and

(6) The supreme duty of subordinating the personal career to the creation of a world directorate capable of these tasks and to the general advancement of human knowledge, capacity, and power;

(7) The admission therewith that our immortality is conditional and lies in the race and not in our individual selves.

In light of what was about to come, this last item of the seven is perhaps the most perturbing. Wells introduces it as follows:

“And it is possible even of these, one, the seventh, may be, if not too restrictive, at least unnecessary. To the writer it seems unavoidable because it is so intimately associated with that continual dying out of tradition upon which our hopes for an unencumbered and expanding human future rest.”

9 Extract from The Soul of Man Under Socialism by Oscar Wilde (first published 1891).

10 From A Modern Utopia by H. G. Wells (published 1905). The same passage continues:

“But unprofitable occupation is also intended by idleness, and it may be considered whether that freedom also will be open to the Utopian. Conceivably it will, like privacy, locomotion, and almost all the freedoms of life, and on the same terms – if he possess the money to pay for it.”

11 Extract from The Open Conspiracy by H.G. Wells (first published 1928).

12 Extract from The Soul of Man Under Socialism by Oscar Wilde (first published 1891).

13 Ibid.

14 Extract taken from In Praise of Idleness by Bertrand Russell (1932). Note that Russell’s reference to pin manufacture is a deliberate allusion to Adam Smith’s famous hypothetical pin factory in which he illustrated the benefits of ‘division of labour’ in The Wealth of Nations.

15 From Genesis 3:23 (KJV)

16 In answer to a question posed during a Reddit Ask Me Anything session on October 8, 2015. https://www.reddit.com/r/science/comments/3nyn5i/science_ama_series_stephen_hawking_ama_answers/cvsdmkv

17 Extract taken from In Praise of Idleness by Bertrand Russell (1932).

18 Without an upwelling of righteous indignation amongst the oppressed rank and file of working people, no leftist movement would ever have arisen and gained traction. Yet the political left also owes its origins to the early co-operative movements, to a spontaneous awakening of enlightenment humanists, to the Romantics, and, most importantly, to fringe religious groups. Tony Benn famously said that the formation of the Labour Party in Britain owed “more to Methodism than Marx”.

In 1832 six agricultural labourers formed a friendly society to protest against their meagre wages. George Loveless, a Methodist local preacher, was the leader of this small union – the other members included his brother James (also a Methodist preacher), James Hammett, James Brine, Thomas Standfield (Methodist and co-founder of the union) and Thomas’s son John. These men were subsequently arrested, convicted and sentenced to transportation. Three years later, and following a huge public outcry which involved a march on London and petitions to parliament, they were issued pardons and allowed to return to England as heroes. This small band of men is now collectively remembered as the Tolpuddle Martyrs.

But the origins of socialism in Britain can really be traced as far back as the English Civil War, and earlier still to Wat Tyler’s Peasants’ Revolt of 1381, when the workers of the Middle Ages, inspired by the teachings of the radical priest John Ball, took their demands directly to King Richard II, who reneged on his concessions and had them hunted down.


lessons in nonsense

The following article is Chapter Seven of a book entitled Finishing The Rat Race which I am posting chapter by chapter throughout this year. Since blog posts are stacked in a reverse time sequence (always with the latest at the top), I have decided that the best approach is to post the chapters in reverse order.

All previously uploaded chapters are available (in sequence) by following the link above or from the category link in the main menu, where you will also find a brief introductory article about the book itself and why I started writing it.

*

’Tis strange how like a very dunce,
Man, with his bumps upon his sconce,
Has lived so long, and yet no knowledge he
Has had, till lately, of Phrenology—
A science that by simple dint of
Head-combing he should find a hint of,
When scratching o’er those little pole-hills
The faculties throw up like mole hills.

Thomas Hood1

*

I am a teacher and so people often ask if I like teaching, and sometimes I say I do, but then at other times I tell them I don’t. That’s work basically, except for an exceptional few who truly love, as opposed to merely tolerate, all aspects of the work they have to do. Having said that, teaching is a suitable occupation for me. It keeps me thinking about a favourite subject, and introduces me to some new and interesting people, albeit in rather formal circumstances.

Naturally enough I told myself that I’d never become a teacher – many teachers will say the same, at least when they’re being honest. But that’s work again, unless you’re one of the fortunate few. So what’s my beef? Well, just that really. Here I am being honest with you and yet I know that what I’m saying isn’t enough. Okay, let me expound more fully.

A few years ago I was offered redundancy and accepted. So I was back on the shelf. Needing another job, and to give myself any realistic chance of success, I knew I’d have to recast myself somewhat. Imagine if I turned up at the interview and said more or less what I’ve just told you.

“Tell us why you want the job”, they’d ask, and my honest answer: “I need some money. I’m a decent teacher and I have a firm grasp of my subject. This could be one of the best offers I’ll get…” Well it just won’t do. No, as I say, I’d need to recast myself. Something more like this:

“I’m a highly experienced professional, looking for an exciting new challenge. I enjoy working as part of a team. I have excellent communication skills. I have excellent organisational skills. I have excellent people skills. I have excellent skills in personally organising communications. I have excellent skills in communicating to organised persons. I have excellent skills in organising communications personnel. Because of the outcomes-based nature of my teacher-training programme, I have developed a thorough understanding of the collection of evidence and portfolio-based approach to assessment. I’m very good at filing. I welcome the opportunity to work with students of different ages, cultures, ethnicities, genders and sexual orientations. I believe that I am ideally suited to the post of part-time classroom assistant and I want to have your babies…”

Well okay then, just try getting a job if you say otherwise.

*

I used to work in the public education sector. I ought perhaps to protect the name of the establishment itself, so let’s just say that for almost a decade I lectured A-level physics to a mix of students, with a range of abilities and nationalities, in a typical northern town… which covers the CV more or less.

As with every other college and university today, we were quite literally in the business of education; further education colleges having been “incorporated” by John Major’s government under the Further and Higher Education Act (FHEA) of 1992. Once, at a meeting, I was informed of my monetary value to the institution (which wasn’t much); for the most important thing was that the college had to break even – although, as time went on, it rarely did.

Being in business also meant dealing with competition – primarily from other local schools and colleges. “Promotion”, then, which happens to be one of “the four Ps of marketing”2, involved pitching our unique selling points – in this case, a national BTEC diploma in forensic science which was ideal for attracting budding students away from the latest series of CSI: Crime Scene Investigation and daytime re-runs of Quincy.

Meanwhile, an impressive new body of staff dedicated to marketing and publicity had to be gradually assembled, and then another sizeable team assigned to deal with “student services”. It was the marketing department who coined our corporate mission statement: “Meeting learner needs and aspiring to excellence”, which as a dedicated workforce we committed to memory, to draw upon for inspiration during dreary afternoon classes in Key Skills Information Technology.

But no college is just a business, and in spite of appealing to a foreign market (a small number of students having been attracted from as far afield as China and Hong Kong), by far the biggest part of each year’s fresh cohort were home students, with funding provided out of the public purse. So the regulatory agency Ofsted, with its own teams of inspectors, would come now and then to tick their own assessment boxes. The “quality of learning provision” was apparently not guaranteed by market forces alone, because our adopted business model only went so far – markets are generally supposed to ensure quality too, but not, it seems, in education.

So to ensure that our annual government targets were being reached, new management roles in “quality assurance” also opened up. The further paperwork, combined with already tight budgets made tighter by administrative growth, meant it was harder still to actually balance the books, or at least to reduce the losses. Eventually, a firm of management consultants was hired, and then another, putting together reports that were either promptly forgotten or used to justify a multiplication of methods for cutting costs: these included laying off more teaching staff and generating yet more paperwork. A vicious circle justified on the basis of ‘quality’ and ‘efficiency’ resulted in conditions for both staff and students that simply got worse and worse.

So it’s funny to remember a time, not very long ago, when colleges had operated with hardly any management or administrative staff at all. The odd secretary, a few heads of section, and a principal were quite sufficient to keep the wheels turning in most educational establishments. Whereas ours, the very model of a modern FE college, was plagued by bureaucratic waste and inefficiency, hampered at every turn by tiers of micro-management, and left with insufficient funding for the real business of education. John Major’s incorporation of the FE sector had also led to year-on-year declines in real-terms wages for the teaching staff, who were increasingly made to feel like an unwanted overhead. “Struggling to survive and steadily achieving less” is not a mission statement, but it would at least have been more honest and to the point. Or, alternatively, I suppose we could have gone with: “do we look bothered?”

*

In one way, the problem here goes back all the way to Isaac Newton, and then to just a little before him. It was Newton, after all, who had decisively proved a truth that, whenever I pause to reflect on it, I still find rather startling: that the universe behaves according to elegant mathematical laws. Little surprise, then, that following the unprecedented success of Newton’s approach to establishing universal laws – laws which so elegantly replaced the everyday disorder of earlier natural philosophies – those working in other fields would also try out the Newtonian approach of quantifying, theorising and testing, intent upon finding equivalent fundamental laws operating within their own specialisms. Scientists were to become the high-priests and priestesses of our post-Newtonian age, so what better model to follow?

But why does science work at all? Is it simply that by applying careful measurement and numerical analysis we might make smarter decisions than by using common sense alone, or is it that the universe really is in some sense mathematically accountable? That it works because God is inexplicably into algebra and geometry? The truth is that no-one knows.

But if the universe were not conducive to such logical and numerical analysis, then natural phenomena could still be measured, data collected and collated, and yet all of this cataloguing would be to no avail. For outcomes can be forecast, within limits that can themselves be precisely determined, only because maths accurately accounts for the behaviours of atoms, and forces, and so forth. God may or may not play dice (and the jury is perhaps still out when it comes to the deeper philosophic truth of quantum mechanics), but when you stop and think about it, it’s strange enough that the universe plays any game consistently enough for us to discover its rules. “The most incomprehensible thing about the world,” said Einstein, “is that it is comprehensible.”

So what of the experts in the other widely varied disciplines? Disciplines rather more susceptible to the capriciousness of our human follies and foibles. Ones that are now called the ‘social sciences’, and following these to still lower rungs, the so-called theories of management and business. Taking their lead from Newton, experts in all these fields have turned to quantification, to the collection and collation of data, setting off with these data to formulate theories which are in some sense assumed universal – ‘theory’, in any case, being a word that takes a terrible bashing these days.

In science, the measure of any theory lies in two things: predictability and repeatability, because any scientific theory must allow some way for itself to be tested – and here I mean tested to destruction. If rocks didn’t fall to Earth with constant acceleration then Newton would be rejected. If the Earth didn’t bulge at the equator, if the tides didn’t rise and fall as they do, and if for other reasons Newton couldn’t account for the extraordinary multiplicity of natural phenomena, then Newton must step aside – as Newton finally has done (to an extent). But where is the predictability and repeatability in the theories of the social sciences, or in those taught in the business and management schools?

About two centuries ago, in the early eighteen hundreds, a German physician named Franz Joseph Gall noticed that the cerebral cortex (the so-called ‘grey matter’) of humans was significantly larger than in other animals. Naturally enough, he drew the conclusion that it must be this exceptional anatomical feature that made humans intellectually, and thus morally, superior.

Gall also became convinced that the physical features of the cortex were directly reflected in the shape and size of the skull. He concluded that since the shape of the outside of the cranium is related to the shape of the inside, and thus to the general structure of the cerebral cortex, the bumps on someone’s head ought to be a potentially decipherable indicator of the way that person thinks, and therefore a sign of their innate character.

Gall’s ideas led to the discipline known as phrenology – the reading of the bumps on your bonce – which became a popular and rather serious area of study. Throughout the Victorian era, but especially during the first half of the nineteenth century, there were phrenological