Mysteries of the scientific method

The scientific method can be understood as the following steps: formulating a hypothesis, designing an experiment, carrying out the experiment, and drawing conclusions. Conclusions can feed into hypothesis formulation again, so that a different (related or unrelated) hypothesis can be tested, and we have a cycle. This feedback can also take place via a general theory that conclusions contribute to and hypotheses draw from. The theory comes to represent everything we have learned about the domain so far. Some of the steps may be expanded into sub-steps, but in principle this cycle is how we generally think of science.
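Written out schematically, the cycle looks almost mechanical. A toy sketch (every function here is a hypothetical placeholder, not a real library):

```python
# A deliberately naive sketch of the cycle described above.
def formulate_hypothesis(theory):   # bounded by our imagination
    return f"a hypothesis drawn from {theory!r}"

def run_experiment(hypothesis):     # presupposes designed equipment
    return f"data from testing {hypothesis!r}"

def draw_conclusions(data):         # bounded by our intuition
    return f"conclusions from {data!r}"

theory = "what we have learned about the domain so far"
for _ in range(3):                  # in reality an open-ended cycle
    hypothesis = formulate_hypothesis(theory)
    data = run_experiment(hypothesis)
    theory = draw_conclusions(data)  # conclusions feed back into theory
```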

This looks quite simple, but is it really? Let’s think about hypothesis formulation and drawing conclusions. In both of these steps, the results are bounded by our imagination and intuition. Thus, something that never enters anybody’s imagination will never be established as scientific fact. In view of this, we should hope that scientists do have vivid imaginations. It is easy to imagine that there might be very powerful findings out there, on the other side of our current scientific horizon, that nobody has yet been creative enough to speculate about. It is not at all obvious that we can see the low-hanging fruit or even survey this mountainous landscape well – particularly in an age of hyper-specialisation.

But scientists’ imaginations are probably quite vivid in many cases – thankfully. Ideas come to scientists from somewhere, and some ideas persist more strongly than others. Some ideas seduce scientists into years of hard labour, even when the results are meagre at first. Clearly this intuition, this sense that something is worth investigating, is crucial to high-quality results.

A hypothesis might be: there is a force that makes bodies with mass attract each other, with a strength inversely proportional to the square of the distance between them. To formulate this hypothesis we need concepts such as force, body, mass, distance and attraction. Even though the hypothesis might be formulated in mere words, these words all depend on experience and practices – and thus on equipment (even if the equipment used in some cases is simply our own bodies). If this hypothesis is successfully confirmed, then a new concept becomes available: the law of gravity. This concept in turn may be incorporated into new hypotheses and experiments, paving the way for ever higher and more complex levels of science and scientific phenomena.
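In modern notation the hypothesis condenses into a single formula – though each symbol (the force $F$, the masses $m_1$ and $m_2$, the distance $r$, the constant $G$) is itself a concept that had to be constructed and measured:

$$F = G \, \frac{m_1 m_2}{r^2}$$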

Our ability to form hypotheses, to construct equipment and to draw conclusions seems to be a human capacity that is not easy to automate.

Entities such as matter, energy, atoms and electrons become accessible – I submit – primarily through the concepts and equipment that give access to them. In a world with a history different from ours, it is conceivable that entirely different concepts and ideas would explain the same phenomena that our physics explains. For science to advance, new equipment and new concepts need to be constructed continually. This process is itself almost an organic growth.

Can we have automated science? Do we no longer need scientific theory? Can computers one day carry out our science for us? Only if either a) science is not an essentially human activity, or b) computers become able to take on this human essence, including the responsibility for growing the conceptual-equipmental boundary. Data mining in the age of “big data” is not enough, since it (as far as I know) operates within a fixed equipmental boundary. As such, it can only be a scientific aid, not a substitute for the whole process. Can findings that do not result in concepts and theories ever be called scientific?

If computer systems ever start designing and building new I/O-devices for themselves, maybe something in the way of “artificial science” could be achieved. But it is not clear that the intuition guiding such a system could be equivalent to the human intuition that guides science. It might proceed on a different path altogether.

Collecting books

Until about five years ago, I would hesitate to buy books if I had other, unfinished books that I was currently reading. It seemed irresponsible to “start on something new” without finishing things that were in progress. This is the kind of attitude that leads you to visit every single room and see every single exhibit in a museum, exhausting yourself (thus precluding visits to other museums for a while). In retrospect, this was an unwise approach.

Umberto Eco (I learned of this via Nassim Taleb), and others before him, advocates the notion of an antilibrary. Books that one has not read are clearly more valuable than books that one has read. So simple, and so obvious. One should fill one’s shelves with unread books.

Of course, this does not mean indiscriminate acquisition. We should curate, buying books on the basis of potential value – at present or at some time in the future. Look for links between books: associations, counterpoints, juxtapositions. Thus we build a space – both literary and physical – that is instantly accessible, offering up its riches. We can immediately jump from book to book, trace connections and make new ones, among a quadratically increasing number of potential contrasts…
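To make “quadratically” concrete: a shelf of $n$ volumes offers

$$\binom{n}{2} = \frac{n(n-1)}{2}$$

possible pairings, so a hundred books already hold 4,950 potential juxtapositions – before we even consider triples.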

Talking to a new acquaintance for ten hours does not hold ten times as much “utility” or interest as talking to him or her for one hour. Trying to exhaust or deplete one person before moving on to make another acquaintance would be rude, clumsy, pointless and tiring. Although we may sometimes wish to converse with someone for days or weeks immediately upon meeting them, sometimes a few minutes is enough to have a crucial insight.

A metaphor, and an obvious insight now, but one that bears repetition. Finally: it is important that the collection is physical, concrete shelves with physical volume and mass. No digital interfaces, however convenient, can make up for the lack of physicality. They are complementary at best.

Historical noise? Simulation and essential/accidental history

Scientists and engineers around the world are, with varying degrees of success, racing to replicate biology and intelligence in computers. Computational biology is already simulating the nervous systems of entire organisms. Each year, artificial intelligence seems able to replicate more tasks formerly thought to be the sole preserve of man. Many of the results are stunning. All of this is done on digital circuits and/or Church–Turing computers (two terms that for my purposes here are interchangeable – we could also call it symbol manipulation). Expectations are clearly quite high.

What should we realistically hope for? How far can these advances actually go? If they do not culminate in “actual” artificial biology (AB) and artificial intelligence (AI), then where will they end – what logical conclusion will they reach, what kind of wall will they run up against? What expectations do we have of “actual” AB and AI?

These are extremely challenging questions. When thinking about them, we ought always to keep in mind that minds and biology are both, as far as science knows, open-ended systems – open worlds – in the sense that we do not know all existing facts about them (unlike classical mechanics or integer arithmetic, which we can reduce to sets of rules). For all intents and purposes, given good enough equipment, we could make an indefinite number of observations and data recordings from any cell or mind. Conversely, we cannot construct a cell or a mind from scratch out of pure chemical compounds. Even given godlike powers in a perfectly controlled space, we wouldn’t know what to do. We cannot record in full detail the state of a (single!) cell or mind, we cannot make perfect copies, and we cannot configure the state of a cell or mind with full precision. This is in stark contrast to digital computation, where we can always make an indefinite number of perfect copies, and where we know the lower bound of all relevant state – we know the smallest detail that matters. There is no perceivable high-level difference between a potential of 5.03 volts and one of 5.04 volts in our transistors at the lowest level.
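This noise immunity is easy to sketch. A minimal illustration (the 2.5-volt threshold is a hypothetical value, not any particular logic family):

```python
def to_bit(voltage: float, threshold: float = 2.5) -> int:
    """Quantise an analog voltage into a logic level: anything at or
    above the threshold reads as 1, anything below as 0."""
    return 1 if voltage >= threshold else 0

# Noisy analog readings that all denote the same logical bit:
readings = [5.03, 5.04, 4.97, 5.01]
assert {to_bit(v) for v in readings} == {1}
```

Everything below the threshold of discrimination is simply discarded – which is exactly what lets digital state be copied indefinitely without degradation.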

(Quantum theory holds that, ultimately, energy can only exist in discrete states. One consequence would seem to be that a given volume of matter can only represent a finite amount of information. For practical purposes this does not affect our argument here, since the measurement and manipulation instruments of science are very far from being accurate and effective at the quantum level. It may affect our argument in theory – but who is to say that we will not some day discover a deeper level that can hold more information?)
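For what it is worth, this finite-information intuition has a standard formalisation in physics: the Bekenstein bound, which caps the entropy – and hence the information – that a sphere of radius $R$ containing total energy $E$ can hold:

$$S \le \frac{2 \pi k R E}{\hbar c}$$

Whether such bounds are the final word is, as the parenthesis above suggests, exactly the open question.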

In other words, we know the necessary and sufficient substrate (the theoretical and hardware basis) for digital computation, but we know of no such substrate for minds or cells. Furthermore, there are reasons to think that any such substrate would lie much deeper, and at a much smaller scale, than we tend to believe. We repeatedly discover new and unexpected functions of proteins and DNA. “Junk DNA”, a name with more than a hint of hubris to it, was later found to have crucial functions – not exactly junk, in other words.

Attempts at creating artificial minds and/or artificial biology are attempts at creating detached versions of the original phenomena. They would exist inside containers, independently of time and entropy, for as long as sufficient electrical charge or storage integrity is maintained. Their ability to affect the rest of the universe, and to be affected by it, would be very strictly limited (though not nonexistent – memory errors may occur in a computer as a result of electromagnetic interference from outside, for example). We may call such simulations unrooted, or perhaps hovering. This is the quality that allows digital circuits to preserve information reliably: interference and noise are screened out, removed.

In attempting to answer the questions posed above, then, we should think through two alternative scenarios.

Scenario 1. It is possible to find a sufficient substrate for biology and/or minds. Beneath a certain level, no further microscopic detail is necessary in the model to replicate the full range of phenomena. Biology and minds are then reduced to a kind of software; a finite amount of information, an arrangement of matter. No doubt such a case would be comforting to many of the logical positivists at large today. But it would also have many strange consequences.

Each of us as a living organism, the society around us, and every other entity has a history that stretches back indefinitely far. The history of cells needs a long pre-history and the evolution of large molecules to begin. A substrate, in the above sense, exists and can be practically used if and only if large parts of history are dispensable. If we could create a perfect artificial cell on some substrate (in software, say) in a relatively short time span – say an hour, or, why not, less than a year – then nature took an unnecessarily long way to its goal. (Luckily, efficient, rational, enlightened humans have now come along and found a way to cut out all that waste!) Our shorter way to the goal would then be something that cuts out all the accidental features of history, leaving only the essential parts in place. So a practically usable substrate, one that allows for shortcuts in time, seems to imply a division between the essential and the accidental history of the thing we wish to simulate! (I say “practically” usable, since the impractical alternative is a working substrate that requires as much time as natural history did in the “real” world. In that scenario, getting to the first cell on the substrate takes as long as it did in reality, starting from, say, the beginning of the universe. Not a practical scenario, but an interesting thought experiment.) Note that if we were somehow able to run time faster in the simulation than in reality, this too would mean that parts of history (outside the simulation) are dispensable: some time would have been wasted on unnecessary processes.

Scenario 2. Such a substrate does not exist. This scenario is implied if no history is accidental – if the roundabout historical process taken by the universe to reach, say, the first cell or the first mind is actually the only way that such things can be attained. It is just as astounding as the first, since it implies that each of us depends fully on all of the history and circumstances that led up to this moment.

In deciding which of the two scenarios is more plausible, we should note that both biology and minds seem to be mechanisms for recording history in tremendous detail, and that this recording ability gives them advantages. This, I think, speaks in favour of the second scenario. The “junk DNA” problem then becomes transposed to history itself – the history of matter, of nature, of societies, of the universe. Is there such a thing as junk history, events that are mere noise?

In writing the above, my aim has not been to discourage any existing work or research. But the two possibilities must be considered, and they could point the way to the most worthwhile research goals for AI and AB. If the substrates can be found, then all is “well”, and we would have to truly grapple with the fact that we ourselves, body and mind, are mere patterns – arrangements of building blocks, mere software. If the substrates cannot be found, as I am inclined to think, then perhaps we should begin to think about completely new kinds of computation, which could somehow incorporate the parts that are missing from mere symbol manipulation. We should also consider much more seriously how closed-world systems, such as the world of digital information, can coexist harmoniously with open-world systems, such as biology and minds. These problems seem scarcely to be given any thought today.

Worlds on display

In fashion-shop interiors, I often see objects that suggest a certain environment – assemblages that seem to be taken from a different setting altogether. For example: very old sewing machines to suggest craftsmanship (even as the clothes are made in China with the latest equipment), piles of old books, sometimes surprisingly carefully selected (who picks them out?), or even musical instruments. There may be exceptions, but I think it’s fair to say that in the majority of cases, the manufacturing, design and retail process, as well as the customers themselves, have no relation to these objects beyond the fact that they are physically present in the shops.

The practice of erecting an assemblage of objects to suggest a world that is not actually present might be called citing or quoting a world (a world being a referential totality of beings, in Heidegger’s sense). The little world, or worldlet, stands on a little stage, like a picture in a frame.

A parallel practice occurs in, for example, furniture shops. Certain shops, in Tokyo at least, carry genuinely old and worn furniture. Once I saw a big used work table from France that had no doubt supported a fair amount of actual work, perhaps some kind of craft. It was on sale for use in a large, fashionable home (judging by the price and the other items in the shop). In that fashionable home, the work table will quote a world just as the books and sewing machines do in fashion shops. Presumably, this will all be considered tasteful.

It would not be as tasteful if the owner of the home set up an actual work table in his living room and did heavy carpentry or welding on it, only to later sweep the work aside and serve dinner to his guests among the scratches and dust (not even if the table was properly cleaned). But it would be more honest. A quoted world at a comfortable distance — contained and framed — can sometimes be appreciated by polite society where a living, actual world could not.

Heidegger’s question

Why is Heidegger interesting?

For Heidegger, the question that philosophy should concern itself with above all is the meaning of being. What is the meaning of being? What does it mean for something to be? Before this question, language itself begins to break down.

What is it to be? This question is not the same as “what is there?” or “what kinds of beings are there?” The latter would be questions about particular beings – ontical questions, in Heidegger’s words. The meaning of being itself would be an ontological question – indeed, the question that precedes all ontology. What is ontically closest is ontologically farthest: we somehow make use of the concept “being” constantly in our everyday lives, yet maybe for that very reason it is very hard to theorise about it and become conscious of what it is.

Aristotle understood beings as substances with properties. This schema seems to lead quite directly to our Western subject-verb-object languages, and to predicate logic as we know it, for example, in computer science and mathematics. The stone is hard. hard(stone). P(x).
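The same schema carries over almost unchanged into programming. A toy sketch (all names here are hypothetical, chosen only to mirror the example above):

```python
# A being as a substance carrying properties, in the Aristotelian schema.
stone = {"substance": "stone", "properties": {"hard", "grey", "heavy"}}

def hard(x) -> bool:
    """The predicate hard(x): does the being x carry the property 'hard'?"""
    return "hard" in x["properties"]

assert hard(stone)  # "The stone is hard." / hard(stone) / P(x)
```

Categories and properties, all enumerable and manipulable – but, as the next paragraph notes, the being of the stone itself appears nowhere in the structure.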

In Heidegger’s view, the sciences since Aristotle have been busy constructing ontologies of this kind – enumerating categories, describing what kinds of things there are, what properties they have, and how those properties can be manipulated – but always in forgetfulness of being. The very core that such ontologies are meant to illuminate was left in the dark.

Being and Time is Heidegger’s major work, and it is well worth the effort it takes to get through (I recommend Hubert Dreyfus’ Berkeley lectures for anyone attempting this on their own). In it, Heidegger tries to relate being and time to each other in such a way that each becomes the other’s horizon: being becomes intelligible in terms of time, and time becomes intelligible in terms of being. Dasein – humans, beings such as ourselves – is the being which always already has an understanding of being, and lives in it. The questioning departs from this implicit, pre-ontological understanding.

“If it is said that ‘Being’ is the most universal concept, this cannot mean that it is the one which is clearest or that it needs no further discussion. It is rather the darkest of all.” (Being and Time)

When we interrogate Dasein in order to gain an understanding of being, are we asking about humans, or about the universe? For Heidegger, the two cannot be separated. Any understanding of the universe that we experience – we humans, we as Dasein – is always dependent on our practices, our intellectual history, the understanding of being that we always already have. An objective understanding of the universe – objective in the sense of being utterly separated from us – is not possible (which is not to say that scientific efforts to be objective have no value – on the contrary). Inquiring about being is thus simultaneously inquiring about the conditions for understanding ourselves and about the conditions for understanding the universe. These are not two separate domains.

Heidegger’s account is not always crystal clear, but it does open up dramatically new perspectives on the world, on science, on life. It shows us that the everyday understanding of so much that we take for granted is utterly obscure.