[This is part two of a four-part series. Part one can be found here.]

Separationism

It is often remarked that Western philosophy is more individualist than Eastern philosophy. With a particular meaning of “Western individualism” in mind, this post explores some of its historical manifestations and developments.

I consider three main themes. The first concerns the apprehension of persons (and things) as entities fundamentally distinct from one another; the second concerns the Scientific Revolution’s analytic-reductionist approaches to philosophy and natural philosophy; and the third is the tradition of adversarialism in Western culture.

As we shall see, at times these factors developed in concert. I’ll argue that this connexion is inevitable, as their foundations share an underlying unity of perspective. For want of a better term, and in thrall to the beauty of the oxymoron, I’ll call that unity “separationism”, meaning the tendency to separate and delineate.

The Soul and its Afterlife

The notion that people are distinct from one another is next to self-evident. In the Western established discourse, this is strongly associated with the existence of a personal “soul” – the metaphysical essence of a person.

For instance, Socrates (5th century BC) reasons at length about the indivisibility and individuality of the eternal human soul. And he shows that his abstract cogitations have practical implications for his mortal audience, the temporary custodians of their souls. Socrates challenges his followers to reflect critically on whether their existing notions of purity and honour and so forth are really in the best interests of their soul, and how they might employ reason to become more responsible stewards.

This line of thought passed intact into the theology of the Abrahamic religions (Judaism, Christianity, and Islam). Whether God is portrayed as judgemental or forgiving, revelatory or inscrutable, He is nevertheless held to be the author and cultivator of humanity, personally concerned with its fate and foibles. Given this interest, each individual must venerate God, both directly and through their behaviour, in order to secure reward for their soul in the afterlife.

Thus, adherents of prominent Western traditions have claimed that the individuality of the soul is the essential delimitation of personhood. In consequence, there flourished a broad scorn for the mortal body, whose miserable limitations and flaws invariably taint the purity of the soul.

Individuality Survives

Despite today’s ongoing disengagement from the conventional strictures of organised religion, and despite the concomitant abasement of the soul, recapitulations of the above sentiments are still very familiar: individuality, personal agency, and the necessity of conquering and controlling our bodies (mind over matter), remain ingrained. Indeed, they are continually reinforced.

What might Nietzsche say about this? [Click the disc!]

[Some readers have advised me to rein in my tangential style, and focus more on getting to the point. I must say that for me the tangents frequently are the point; but it's a fair criticism. I'm experimenting now with putting my more flagrant asides in collapsible sections to indicate that they can be skipped with only moderately dire consequences.]


Friedrich Nietzsche might have seen this as evidence of our innate tendency to seek the abstract, to pry ourselves away from our lived breathing reality, and to torment ourselves with the perfect and the divine.


In seeming contradiction to this tendency, the Renaissance brought humanism to art, science and philosophy, reinventing the human body (and more broadly the natural world) as fit objects of study and celebration. This was the beginning of our liberation from religious doctrine and from the abnegation of ourselves.


However, I reckon Nietzsche would complain that this shift was superficial, that it did not go nearly far enough: for though we continue to dispense with a mysterious God, still we are enslaved by our obsession with mystery itself.

The fact that to me (a Westerner), such paradigms seem self-evident to the point of tautology is not an indication of universal truth but a matter of cultural psyche. I hope this will become clearer when the contrasts with Eastern philosophy are made. But taking my claim on faith for the moment, it is interesting to understand exactly why the personal soul persists in all but name.

To this end, it’s worth reflecting on the genesis of modern secular paradigms during the intellectual and cultural movements known as the Renaissance, the Scientific Revolution, and the Enlightenment. With reference to the Scientific Revolution in particular, I hope to trace some aspects of the soul’s evolution.

The Scientific Revolution

First, some background. The Scientific Revolution, spanning the mid-16th to late-17th century and focussed in Western Europe, marks the development of a new relationship with knowledge. Of course, only a fool would try to summarise it in a few paragraphs.

Though the Middle Ages certainly saw gradual advances in military and maritime technology, the Scientific Revolution taught us that, for the first time since the collapse of the Roman Empire, genuinely new knowledge could be created. This came after the (scientific) Renaissance, during which the rediscovery of many Classical texts had led to a vast expansion of the philosophical and scientific canon.

But once the excitement subsided a little, scholars began to realise that the ancients sometimes got things wrong. At first, few dared to publicly admit such heresy; but it gradually came to be accepted that blind obedience to the writings of Classical thinkers was just as bad as blind obedience to religious scripture, and that it should be possible to adopt their methods of inquiry and exploration, rather than relying on their findings alone.

Furthermore, the approaches used in this exploration emphasised the use of mathematical arguments. As my beloved Johannes Kepler (astronomer, mathematician and physicist) wrote in 1618,

Nothing can be known completely except by quantities: the conclusions of mathematics are most certain and indubitable.

And according to my reviled (but eloquent) Galileo Galilei in 1623,

The Universe cannot be read until we have learned the language in which it is written: the language of mathematics.

Unsurprisingly, this interest in numerical arguments evolved alongside the increasing use and refinement of quantitative measurement techniques. It’s worth emphasising that though such approaches seem natural to us today, prior to the 16th century – and even in the time of Socrates and Plato – they would have seemed superficial and even absurd.

What might Plato say about this?

A Platonist would scoff that one will understand nothing of the nature of a flower from numerical measurements, e.g. of its size or weight. Figures bring us no closer to a concept of "flowerhood", to the pure Form of a flower of which any physical manifestation is but a poor distorted shadow and an epistemological distraction.

Indeed, Aristotle (who was extremely influential even before the Renaissance) believed that the only trustworthy conclusions were those arrived at through deductive reasoning. The tangible world might provide the stimulus for a line of thought, but what we would now call “empirical evidence” could never be regarded as conclusive.

Though there were certainly prominent thinkers who continued the deductive tradition – such as Immanuel Kant and René Descartes (more on him later) – the Scientific Revolution bred a generation of staunch empiricists like Francis Bacon and Robert Boyle, whose approach to knowledge was founded upon careful experimentation and observation of the physical world. Empirical evidence came to be seen as a form of knowledge in its own right, even when divorced from an over-arching Aristotelian teleological framework. Kepler again:

Without proper experiments, I conclude nothing.

Analysis and Reduction

Leonardo Da Vinci's notebooks.

In the 19th century, historians and archivists employed in cataloguing the work of Leonardo Da Vinci diligently categorised the contents of his notebooks under headings such as "art", "engineering" and "anatomy". If this required splitting a single page into multiple categories, so be it.


It was only later that historians came to ascribe value to the *unity* of these works – to the fact that, before the separation was undertaken, Leonardo had grasped and created it all together, free from post-hoc distinctions. Today there is a prevailing sentiment that the most honest accounting of our intellectual heritage requires us to interpret the original work in a manner befitting its author. The sundered pages were reunited.


The contrasting approaches of Leonardo, of the post-Revolution scholars, and finally of the post-post-Revolution scholars, nicely illustrate one of the changes in approach that took place during the Scientific Revolution.

The Scientific Revolution witnessed the adoption of reductionist goals, to be achieved via the process of analysis. A reductionist seeks simple underlying rules which can explain a diversity of complex phenomena; and analysis denotes the splitting of complicated phenomena into smaller components, with a view to identifying the factors of primary consequence. Here are a couple of illustrative examples.

In chemistry, analytic experimental techniques developed by Boyle, Joseph Proust and others probed the fundamental structure of matter. By performing repeated experiments in which all conditions were held fixed except for one, which was carefully varied in small steps, the properties of the material under scrutiny could be related unambiguously to the varying parameter. On the basis of their findings, they advanced the (reductionist) idea that all the materials we find in the world are in fact made from a limited variety of fundamental, indivisible components called atoms.

In medicine, physicians began to question the humoral theory of health (which is holistic in the sense that it requires one to treat the whole body, indeed the whole person, at once [see expandable section below]), and began to consider each organ and tissue of the body independently, trying to pin down its exact function and to connect it with specific diseases. Through such investigations, they discovered important unities – for example, the similarity between human physiology and that of other mammals and animals.

The four humours

Humourism was the principal medical tradition in Europe from the time of Hippocrates until the eighteenth century – a span of two millennia. It held that human health was determined by the interplay of four fundamental bodily fluids (humours): blood, phlegm, yellow bile and black bile. Disease arose from an imbalance of the humours, and doctors would devise lifestyle and medicinal interventions to re-establish equilibrium. Diet and daily routine were closely regulated; diuretics, emetics and laxatives would be prescribed; and various forms of blood-letting (e.g. using leeches) were common.


The humours' natural balance was thought also to manifest in a person's character and predispositions (hence adjectives such as phlegmatic and melancholy). In fact, humourism was an integral part of a coherent theory of the Universe and each individual's place within it: it was connected to music, astrology, material textures, seasons, profession, and more. A physician would have to take all of these factors – mental, physical, and circumstantial – into account when deciding on a course of treatment.


Nevertheless, the relative simplicity and coherence of the framework, coupled with its impressive compass, is at least a partial explanation for humourism's longevity in medicine. [Incidentally, the decline of its popularity, in favour of modern medical concepts and techniques, was accompanied by an increase in the complexity of medicine and consequently a growing inequality in the relationship between patient and physician. But that's a story for another time.]

Mechanism and Descartes

The connexion of all this to the persistence of individualism and separationism (defined above) will, I hope, begin to become clear by the end of this section (you may have already seen the hallmarks of separationism in the analytic method).

But continuing for the moment with the history of medicine: in the Early Modern era, before the Scientific Revolution, the animation of living things (including humans) was understood to stem from interactions between the body and the “vital principle” – a mysterious energy which, though not identical with previous interpretations of the soul, nevertheless possessed many of the same characteristics.

The analytic-reductionist paradigm that came to challenge this theory was “mechanistic” physiology, which sought to translate the workings of the body into the language of machines: the heart became a pump, limbs were viewed as levers, etc. It was thought that such language would yield insights not only into the function of our bodily systems, but also into therapies for its malfunctioning components.

And that put paid to the vital principle once and for all?

No. "Vitalism" remained a valid model of living matter, and was still popular in mainstream science until the late nineteenth century. It fell out of favour for several reasons, one of which was the discovery of the Law of Energy Conservation. Increasingly refined experiments on the energy balance of living systems, conducted for instance by the physicist and physiologist Hermann von Helmholtz, found there was no need to invoke extra-physical substances to explain the motility of organisms.


Today, versions of vitalism can still be found in alternative medical systems, such as homeopathy.

Given the soul-like nature of the vital principle, it’s perhaps unsurprising that the new mechanistic physiology had consequences for metaphysics: indeed, it provided part of the inspiration for Descartes’ dualistic theory of matter and mind. In crude terms, this theory held that the material body was merely an elaborate clockwork whose sole purpose was to provide a conduit for the mind (i.e. the seat of consciousness, the ego, the person). Mind, for its part, was simply a substance distinct from physical matter (albeit one with very interesting properties).

Mind and free will

Descartes held that, unlike matter, mind cannot be subject to deterministic natural laws, for otherwise the future and the past could be uniquely determined by the present, and the "self-evident" fact of free will would be undermined.

Descartes’ theory contrasts with both the vitalist view and the soul-centred view, because the mind is not equivalent to the soul or the vital principle – and consequently their respective definitions of what constitutes “the body” also differ. Descartes diminishes the role of the body in personhood (it is just a vehicle); whereas in both the soul-centred and the vitalist theories, a person (the subject of experiences, the ego) requires both components: the soul / vital principle represents the eternal / non-physical essence of the person, but not the person themself. (Descartes also rejected claims about the afterlife and other religious connotations in connection with mind.) Yet despite these differences, the individuality of persons remained intact – it was now ascribed to the ego rather than to the soul.

Anyway, with the body thus reduced to a mere puppet of an imperfect mind, our connexion to the physical world around us became that little bit more tenuous. Descartes’ famous dictum, “I think, therefore I am”, which is rooted perhaps in vestigial Abrahamic contempt for the fallibility of the mortal realm, establishes the existence of the self as the primary principle of existence. (It is a small step from here to the “solipsistic” conclusion that it is the only principle of existence: I think, therefore I am the Universe.)

A Pause to Breathe

Let’s re-cap. I started out talking about what I termed “separationism”, which merely describes an outlook which tends towards demarcation and categorisation. Now we have seen several instances where separationism has governed Western conceptions of the world.

Perhaps the most obvious manifestation of separationism is personal individualism – the conviction that people are entities distinct from one another. For Socrates, and in the Abrahamic traditions, the distinction was encapsulated in eternal, indivisible souls. The later semi-idealist theory of Descartes, while dispensing with the soul, still retained the essential element of distinguishing each ego as intrinsically different from all the others.

(Subsequent Enlightenment principles of personal liberty and self-determination are also clear descendants of this paradigm. In fact, I tentatively claim that any philosophy founded on the Aristotelian deductive approach will fit in, or at least not contradict, this aspect of individualism.)

Another thing these theories have in common is the separation of body from soul, or body from mind (to re-iterate, the meaning of body changes subtly between the two cases), and the need to prioritise the soul / mind over the theologically / intellectually irrelevant body: we must exert whatever influence we can muster to keep the body from straying. I will return to this combative relationship in the [collapsible] section after next.

What might an existentialist say?

The existentialist philosophies of Søren Kierkegaard and others are marked by the torment of a dreadful weight – the weight of individual responsibility for each moment of one's life. So in a sense they were extreme individualists. (But one could also argue that Kierkegaard was a trans-individualist; for though he defined human existence in terms of each individual's curation of its self, he simultaneously strove to explore as many selves as possible. Or something like that.)

Finally, the method of analysis which regained popularity during the Scientific Revolution is separationist to the core. The analytic mode of thinking has proved very successful and quite robust, and persists to this day in the majority of the natural sciences.

The Conundrum of Objective Reality

Moving on, I have to admit that I’ve been selective in the history I’ve presented so far. You may well ask, What about the materialists? Or more broadly, How do reductionists fit into this? Because surely anyone who is interested in finding the unifying principles underlying seemingly disparate phenomena cannot be a separationist?

A critic may add, moreover, that the Scientific Method is an approach which explicitly rejects qualitative experience in favour of objective criteria upon which everyone can agree. Surely such a programme has no room for Descartes’ egoism, or for individualism of any kind.

These are fair points; but I think they might miss the bigger picture. Though I have claimed that particular philosophical theories or historical figures have been separationist, I do not claim this for all theories and figures. I do, however, claim something which is arguably deeper: namely that separationism is a mode of thinking which underlies much of Western philosophy.

For instance, even though the Scientific Revolution arguably rejected “individualism” in the literal sense by appealing to an objective truth beyond human fallibility, even though it prescribes the overthrow of the individual self, this rejection and overthrow was inextricable from the practice of separation, because the scientific method is analytic. I’m saying that modern knowledge in the Western world rests on tacit assumptions about how we even try to understand anything – i.e. through a deeply separationist paradigm.

If this still isn’t clear, I’m hoping things will improve in the next instalment, when I can make comparisons with Eastern philosophy. For now, you can take my argument or leave it.

Individualism and Conflict

Me Versus Not-Me; Me Versus Nature

[This section considers some ways that conflict is related to what's been discussed so far. But the post is already long, and since this isn't quite so relevant to the main story, I'm treating it as a hefty aside.]


A near-inevitable consequence of individualism – by which I mean the distinctness of persons – is the separation of the world into the complementary categories of "me" and "not me". This might seem to be a sufficient condition for the existence of conflict (i.e. once there is "me" and "not me" conflict is inevitable), but in some circumstances I think it's only a necessary condition (i.e. conflict requires "me" and "not me", but still might not happen even if they exist). Here's an example.


Prior to the Enlightenment, the Abrahamic religions regarded human suffering as a gift from God, a test of devotion and an assurance of compensation in the afterlife. But as thinkers became preoccupied with sub-divine matters (or even rejected the divine outright), these comforting stories about the meaningfulness of suffering receded from them. They had to develop an entirely new relationship with suffering, seeing it not as a gift, but as a matter of conflict with one's circumstances. Thus, whereas hardships inflicted on one's individual person formerly gave cause for rejoicing (i.e. no conflict), the advent of humanism rendered them unwelcome.


Conflict arose in more straightforward forms as well. Pre-Renaissance Europe had endured centuries of obedience to doctrines both religious and scientific. The gradual adoption of classical *methods* of inquiry (rather than just the antiquated *results* of such inquiries), which emphasised the questioning of authority and the advancement of knowledge, elevated the notion of conflictual "overcoming" in new ways. The emerging Scientific Method, for instance, contains the overthrow of old ideas at its core. (And though science today is mostly civil, the bitter disputes between e.g. Isaac Newton and Robert Hooke certainly have their counterparts in the modern literature.)


The combination of (a) the growing rejection of the divinity of adverse circumstances, and (b) improved technologies and methodologies to overcome them (courtesy of science), was a lucky one indeed – at least for the educated elite. Now humanity had both impetus and framework for working towards improving its circumstances. Such "arrogance" (as it may have been seen by puritanical reactionaries) already had precedent in the Western tradition. Take, for example, the Greek myth of Prometheus, who steals the technology of fire from the gods and gives it to humankind. Thus equipped, we were partially liberated from the whims of the gods, and could take charge of our own destiny.


Of course, one of the most prominent examples of conflict arising from individualism is capitalism and its colonial consequences. I'm not going there this time.

Summary

My discussion in this post touched upon two themes: (1) the separateness of people, the logical ramifications of which we have seen in the metaphysics of Descartes and the ethics of Kierkegaard. And (2) scientific reductionism and analysis, which provided a framework for intellectual exploration through the atomisation of complexity.

Each theme has origins in ancient philosophies; and more relevant to the world today is their re-emergence during the Renaissance and Scientific Revolution. But most importantly, each is founded independently on a substratum of separationism, which is the very lens through which theories pertaining to (1) and (2) are constructed.

In the next post, I shall attempt to contrast this stream of Western thought with the broadly non-separationist traditions of China and India. I solemnly promise this will be shorter.