Collecting books


Until about five years ago, I would hesitate to buy books if I had other, unfinished books that I was currently reading. It seemed irresponsible to “start on something new” without finishing things that were in progress. This is the kind of attitude that leads you to visit every single room and see every single exhibit in a museum, exhausting yourself (thus precluding visits to other museums for a while). In retrospect, this was an unwise approach.

Umberto Eco (whom I learned of via Nassim Taleb), and others before him, advocated the notion of an antilibrary: books that one has not read are clearly more valuable than books that one has read. So simple, and so obvious. One should fill one’s shelves with unread books.

Of course, this does not mean indiscriminate acquisition. We should curate, buying books on the basis of potential value – at present or at some time in the future – and look for links between books: associations, counterpoints, juxtapositions. Thus we build a space – both literary and physical – that is instantly accessible, offering up its riches. We can jump immediately from book to book, trace connections and make new ones, a quadratically increasing number of potential contrasts…
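The “quadratically increasing” remark can be made concrete: n books admit n(n−1)/2 pairwise juxtapositions, so the number of potential contrasts grows with the square of the collection’s size. A minimal sketch (purely illustrative; the function name is mine):

```python
from math import comb

def juxtapositions(n: int) -> int:
    """Number of distinct pairs among n books: C(n, 2) = n*(n-1)/2."""
    return comb(n, 2)

print(juxtapositions(10))    # 45
print(juxtapositions(100))   # 4950
print(juxtapositions(1000))  # 499500
```

Tripling the shelf space roughly multiplies the potential contrasts by nine.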

Talking to a new acquaintance for ten hours does not hold ten times as much “utility” or interest as talking to him or her for one hour. Trying to exhaust or deplete one person before moving on to make another acquaintance would be rude, clumsy, pointless and tiring. Although we may sometimes wish to converse with someone for days or weeks immediately upon meeting them, sometimes a few minutes is enough to have a crucial insight.

A metaphor, and an obvious insight now, but one that bears repetition. Finally: it is important that the collection is physical – concrete shelves holding volumes with physical mass. No digital interface, however convenient, can make up for the lack of physicality; such interfaces are complementary at best.

Historical noise? Simulation and essential/accidental history

Scientists and engineers around the world are, with varying degrees of success, racing to replicate biology and intelligence in computers. Computational biology is already simulating the nervous systems of entire organisms. Each year, artificial intelligence replicates more tasks formerly thought to be the sole preserve of man. Many of the results are stunning. All of this is done on digital circuits and/or Church–Turing computers (two terms that for my purposes here are interchangeable – we could also call it symbol manipulation). Expectations are clearly quite high.

What should we realistically hope for? How far can these advances actually go? If they do not culminate in “actual” artificial biology (AB) and artificial intelligence (AI), then what will they end in – what logical conclusion will they reach, what kind of wall would they run up against? What expectations do we have of “actual” AB and AI?

These are extremely challenging questions. When thinking about them, we ought always to keep in mind that minds and biology are both, as far as science knows, open-ended systems, open worlds – in the sense that we do not know all existing facts about them (unlike classical mechanics or integer arithmetic, which we can reduce to sets of rules). For all intents and purposes, given good enough equipment, we could make an indefinite number of observations and data recordings from any cell or mind. Conversely, we cannot construct a cell or a mind from scratch, starting from pure chemical compounds. Even given godlike powers in a perfectly controlled space, we would not know what to do. We cannot record in full detail the state of a (single!) cell or a mind, we cannot make perfect copies, and we cannot configure the state of a cell or mind with full precision. This is in stark contrast to digital computation, where we can always make an indefinite number of perfect copies, and where we know the lower bound of all relevant state – we know the smallest detail that matters. There is no perceivable high-level difference between a potential difference of 5.03 volts and one of 5.04 volts in our transistors at the lowest level.
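The digital side of this contrast can be shown in a few lines. Because digital state bottoms out at the bit, copies can be made indefinitely and verified as bit-identical; nothing beneath that level matters to the computation. A small illustrative sketch (the choice of hash and the example bytes are mine):

```python
import hashlib

# A digital "state" is just a sequence of bytes. The bit is the lower
# bound of relevant detail; there is no deeper level to worry about.
state = b"the complete relevant state of a digital artifact"

# Make many copies; each one is a perfect copy of the original.
copies = [bytes(state) for _ in range(1000)]

# Every copy hashes to the same digest, i.e. is verifiably bit-identical.
digest = hashlib.sha256(state).hexdigest()
assert all(hashlib.sha256(c).hexdigest() == digest for c in copies)
print("all 1000 copies are bit-identical")
```

No analogous procedure exists for a cell or a mind: there is no known level at which we could read out “the complete relevant state”, let alone duplicate it and check the duplicates for equality.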

(Quantum theory holds that ultimately, energy can only exist in discrete states. It seems that one consequence would be that a given volume of matter can only represent a finite amount of information. For practical purposes this does not affect our argument here, since measurement and manipulation instruments in science are very far from being accurate and effective at a quantum level. It may certainly affect our argument in theory, but who says that we will not some day discover a deeper level that can hold more information?)

In other words, we know the necessary and sufficient substrate (theoretical and hardware basis) for digital computation, but we know of no such substrate for minds or cells. Furthermore, there are reasons to think that any such substrate would lie much deeper, and at a much smaller scale, than we tend to believe. We repeatedly discover new and unexpected functions of proteins and DNA. Junk DNA, a name that has more than a hint of hubris to it, was later found to have certain crucial functions – not exactly junk, in other words.

Attempts at creating artificial minds and/or artificial biology are attempts at creating detached versions of the original phenomena. They would exist inside containers, independently of time and entropy, for as long as sufficient electrical charge or storage integrity is maintained. Their ability to affect the rest of the universe, and to be affected by it, would be very strictly limited (though not nonexistent – for example, memory errors may occur in a computer as a result of electromagnetic interference from the outside). We may call such simulations unrooted, or perhaps hovering. This is the quality that allows digital circuits to preserve information reliably: interference and noise are screened out, removed.

In attempting to answer the questions posed above, then, we should consider two alternative scenarios.

Scenario 1. It is possible to find a sufficient substrate for biology and/or minds. Beneath a certain level, no further microscopic detail is necessary in the model to replicate the full range of phenomena. Biology and minds are then reduced to a kind of software; a finite amount of information, an arrangement of matter. No doubt such a case would be comforting to many of the logical positivists at large today. But it would also have many strange consequences.

Each of us as a living organism, society around us, and every entity has a history that stretches back indefinitely far. The history of cells needs a long pre-history and evolution of large molecules to begin. A substrate, in the above sense, exists and can be practically used if and only if large parts of history are dispensable. If we could create a perfect artificial cell on some substrate (in software, say) in a relatively short time span – an hour, or, why not, less than a year – then it means that nature took an unnecessarily long way to its goal. (Luckily, efficient, rational, enlightened humans have now come along and found a way to cut out all that waste!) Our shorter way to the goal would then be something that cuts out all the accidental features of history, leaving only the essential parts in place. So a practically usable substrate, which allows for shortcuts in time, seems to imply a division between the essential and the accidental history of the thing we wish to simulate! (I say “practically” usable, since an impractical alternative is a working substrate that requires as much time as natural history in the “real” world. In this scenario, getting to the first cell on the substrate takes as long as it did in reality, starting from, say, the beginning of the universe. Not a practical scenario, but an interesting thought experiment.) Note that if we were able somehow to run time faster in the simulation than in reality, this would also mean that parts of history (outside the simulation) are dispensable: some time would have been wasted on unnecessary processes.

Scenario 2. Such a substrate does not exist. This scenario follows if no history is accidental – if the roundabout historical process taken by the universe to reach, say, the first cell or the first mind is actually the only way that such things can be attained. It is just as astounding as the first, since it implies that each of us depends fully on all of the history and circumstances that led up to this moment.

In deciding which of the two scenarios is more plausible, we should note that both biology and minds seem to be mechanisms for recording history in tremendous detail. Recording ability gives them advantages. This, I think, speaks in favour of the second scenario. The “junk DNA” problem becomes transposed to history itself (of matter, of nature, of societies, of the universe). Is there such a thing as junk history, events that are mere noise?

In writing the above, my aim has not been to discourage any existing work or research. But the two possibilities above must be considered, and they could point the way to the most worthwhile research goals for AI and AB. If the substrates can be found, then all is “well”, and we would need to truly grapple with the fact that we ourselves, body and mind, are mere patterns, mere arrangements of building blocks, mere software. If the substrates cannot be found, as I am inclined to think, then perhaps we should begin to think about completely new kinds of computation, which could somehow incorporate the parts that are missing from mere symbol manipulation. We should also consider much more seriously how closed-world systems, such as the world of digital information, can coexist harmoniously with what would be open-world systems, such as biology and minds. It seems that these problems are scarcely given any thought today.

Worlds on display

In fashion shop interiors, I often see objects that suggest a certain environment, assemblages that seem to be taken from a different setting altogether. For example, very old sewing machines to suggest craftsmanship (even as the clothes are made in China with the latest equipment). Or piles of old books, sometimes surprisingly carefully selected (who picks them out?), or even musical instruments. There may be exceptions, but I think it’s fair to say that in the majority of cases, the manufacturing, design and retail process, as well as the customers themselves, have no relation to these objects other than the fact that they are physically present in the shops.

The practice of erecting an assemblage of objects to suggest a world that is not actually present might be called citing or quoting a world (a world being a referential totality of beings, in Heidegger’s sense). The little world, or worldlet, stands on a little stage, like a picture in a frame.

A parallel practice occurs in, for example, furniture shops. Certain shops, in Tokyo at least, carry genuinely old and worn furniture. Once I saw a big used work table from France that had no doubt supported a fair amount of actual work, perhaps some kind of craft. It was on sale for use in a large, fashionable home (judging by the price and the other items in the shop). In that fashionable home, the work table will quote a world just as the books and sewing machines do in fashion shops. Presumably, this will all be considered tasteful.

It would not be as tasteful if the owner of the home set up an actual work table in his living room and did heavy carpentry or welding on it, only to later sweep the work aside and serve dinner to his guests among the scratches and dust (not even if the table was properly cleaned). But it would be more honest. A quoted world at a comfortable distance — contained and framed — can sometimes be appreciated by polite society where a living, actual world could not.


Heidegger’s question

Why is Heidegger interesting?

For Heidegger, the question that philosophy should concern itself with above all is the meaning of being. What is the meaning of being? What does it mean for something to be? Before this question, language itself begins to break down.

What is it to be? This question is not the same as “what is?” or “what kinds of beings are there?” The latter are questions about particular beings – ontical questions, in Heidegger’s words. The meaning of being itself is an ontological question – indeed, the question that precedes all ontology. What is ontically closest is ontologically farthest: we make use of the concept “being” constantly in our everyday lives, but perhaps for that very reason it is very hard to theorise about it and become conscious of what it is.

Aristotle understood beings as substances with properties. This seems to lead quite directly to our western subject–verb–object languages, and to predicate logic as we know it in, for example, computer science and mathematics. The stone is hard. hard(stone). P(x).
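The substance-with-properties picture maps straight onto how facts are modelled in computing: an individual falls under a predicate, and a knowledge base is a set of such atoms. A minimal sketch (the names and facts are mine, chosen to echo the example above):

```python
# Aristotle's "substance with properties", rendered as predicate logic:
# hard(stone) asserts that the individual "stone" falls under the
# predicate "hard". A knowledge base is then a set of such atoms.
facts = {("hard", "stone"), ("heavy", "stone"), ("soft", "wax")}

def holds(predicate: str, subject: str) -> bool:
    """P(x): does the predicate apply to the subject?"""
    return (predicate, subject) in facts

print(holds("hard", "stone"))  # True
print(holds("hard", "wax"))    # False
```

Everything here is an inventory of beings and their properties – an ontical catalogue, in Heidegger’s terms; the being of the stone itself appears nowhere in the data structure.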

In Heidegger’s view, since Aristotle, the sciences have been busy constructing ontologies of this kind – enumerating categories, describing what kind of things there are and what properties they have, and how the properties can be manipulated – but always in forgetfulness of being. The very core that such ontologies are meant to illuminate was left in the dark.

Being and Time is Heidegger’s major work. It is well worth the effort it takes to get through (I recommend Hubert Dreyfus’ Berkeley lectures for anyone attempting this on their own). Here, Heidegger tries to relate being and time to each other in such a way that each becomes the other’s horizon: being becomes intelligible in terms of time, and time becomes intelligible in terms of being. Dasein – humans, beings such as ourselves – is the being which always already has an understanding of being, and lives in it. The questioning departs from this implicit, pre-ontological understanding.

“If it is said that ‘Being’ is the most universal concept, this cannot mean that it is the one which is clearest or that it needs no further discussion. It is rather the darkest of all.” (Being and Time)

When we interrogate Dasein in order to gain an understanding of being, are we asking about humans, or about the universe? For Heidegger, the separation of the two is not possible. Any understanding of the universe that we experience – we humans, we as Dasein – is always dependent on our practices, our intellectual history, the understanding of being that we always already have. An objective, in the sense of being utterly separated, understanding of the universe is not possible (which is not to say that scientific efforts to be objective have no value – on the contrary). Inquiring about being seems to be simultaneously about the conditions for understanding ourselves and about the conditions for understanding the universe. These are not two separate domains.

Heidegger’s account is not always crystal clear, but it does open up dramatically new perspectives on the world, on science, on life. It shows us that the everyday understanding of so much that we take for granted is utterly obscure.


Jung and Heidegger

Division Two of Heidegger’s Being and Time devotes considerable effort to building up and establishing the notion of authentic resoluteness. Heidegger’s Dasein may strive to be authentically resolute. I cannot claim to fully understand this concept, but it involves notions such as being-towards-death, maintaining openness to anxiety, and choosing to have a conscience. Somehow, through anxiety and confrontation with death or the Nothing (instead of fleeing in the face of them, as most people usually do), Dasein becomes able to exist authentically.

C. G. Jung’s psychology is largely about the process of individuation: the mind’s natural growth and progress towards becoming an integrated whole. For Jung, psychological health is largely a matter of resolving obstacles to the individuation process. A big part of this process is the integration of the mind’s unconscious contents (such as the Self) with its conscious contents. Integration here does not seem to mean that they become a homogeneous unity, but rather that they become interwoven and are allowed to influence each other in a natural way.

My hunch, which I cannot argue very convincingly, is that this kind of existential, phenomenological philosophy (Heidegger) and this kind of psychology (Jung) sometimes aim at the same affects, phenomena or states of mind – whichever we choose to call them. Jung makes a big point of differentiating between symbols and concepts. The Self is not a concept but a symbol: it is too large to grasp fully with the conscious mind. Heidegger’s Nothing (or even Being) sometimes looks like this kind of symbol too: something that cannot be grasped by concepts but which is essential for all concepts to be intelligible as such in the first place – a source of intelligibility, the fount from which other notions flow. Turning this around and twisting it a bit, the unconscious can be said to be a kind of nothing, a shadow, and we only have a conscious and definite personality in so far as we also have a shadow to go with it. Our (Jungian) shadow seems to enable our definite character almost in the same way that the Nothing enables beings to stand out “as radically other with respect to the nothing” (“What Is Metaphysics?”).

This is mere speculation, but if I am right, then we are led to ask: how is it that Heidegger, who builds his castles (I think) on a kind of language craft and on labyrinthine but highly effective prose, can achieve the same thing that Jung achieves with methods such as dream analysis and active imagination? Could these methods, which seem so different at first, really be aiming at the same goal?