Tag: nature


Method and object. Horizons for technological biology

March 22nd, 2016 — 10:32pm

(This post is an attempt at elaborating the ideas I outlined in my talk at Bio-pitch in February.)

The academic and investigative relationship to biology – our discourse about biology – is becoming increasingly technological. In fields such as bioinformatics and computational biology, the technological/instrumental relationship to nature is always at work, constructing deterministic models of phenomena. By using these models, we may repeatedly extract predictable results from nature. An example would be a cause-effect relationship like: exposing a cell to heat causes “heat shock proteins” to be transcribed and translated.

The implicit understanding in all of these cases is that nature can be turned into engineering. Total success, in this understanding, would amount to one or both of the following:

  1. Replacement/imitation as success. If we can replace the phenomena under study with a model (concretely, a machine or a simulation), we have achieved success.
  2. Control as success. If we can consistently place the phenomena under study in verifiable, fully defined states, we have achieved success. (Note that this ideal implies that we also possess perfect powers of observation, down to a hypothetical “lowest level”).

These implicitly held ideals are not problematic as long as we acknowledge that they are mere ideals. They are very well suited as horizons for these fields to work under, since they stimulate the further development of scientific results. But if we forget that they are ideals and begin to think that they really can become realities, or if we prematurely think that biology really must be like engineering, we might be in trouble. Such a belief conflates the object of study with our relatedness to that object. It misunderstands the role of the equipment-based relationship. The model – and associated machines, software, formulae, et cetera – is equipment that constitutes our relatedness to the phenomena. It cannot be the phenomena themselves.

Closely related to the ideals of replacement and control is the widespread application of abstraction and equality in engineering-like fields (and their application to new fields that are presently being clad in the trappings of engineering, such as biology). Abstraction and equality – the notion that two entities, instances, moments, etc., are in some way the same – allow us to introduce an algebra, to reason in the general and not in specifics. And this is of course what computers do. It also means that two sequences of actions (laboratory protocols, for example), whether they are different sequences or the same sequence carried out at different points in time, can lead to the same result. Just as 3+1 and 2+2 both “equal” 4. In other words, history becomes irrelevant; the specific path taken no longer means very much. But it is not clear that this can ever truly be the case outside of an algebra, and that is what risks being forgotten.
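To make the point concrete, here is a minimal sketch in Python. The function names and numbers are purely illustrative, not any real protocol: two different sequences of steps count as “equal” under the abstraction because they end in the same computed value, just as 3+1 and 2+2 both equal 4.

    # A sketch of path-independence under abstraction: two different sequences
    # of steps (hypothetical protocol operations, named only for illustration)
    # count as equal because they produce the same final value.

    def dilute(volume_ul: float, factor: float) -> float:
        """Return the sample volume after dilution by the given factor."""
        return volume_ul * factor

    def split(volume_ul: float, parts: int) -> float:
        """Return the volume of one aliquot after splitting the sample."""
        return volume_ul / parts

    # Protocol A: dilute 1:4, then split into two aliquots.
    a = split(dilute(100.0, 4), 2)

    # Protocol B: split first, then dilute the aliquot 1:4.
    b = dilute(split(100.0, 2), 4)

    assert a == b == 200.0   # different histories, "equal" results

Inside the algebra the two paths are interchangeable; whether the actual samples on the bench are interchangeable is precisely what the abstraction cannot tell us.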

We might call all this the emergence of technological biology, or technological nature, the conquest of biology by λόγος, et cetera. The principal danger seems to be the conflation of method with object, of abstraction with the specific. And here we see clearly how something apparently simple – studying RNA expression levels in the software package R, for example – opens up the deepest metaphysical abysses. One of the most important tasks right now, then, would be the development of a scientific and technological culture that keeps the benefits of the technological attitude without losing sight of a more basic non-technological relatedness. The path lies open…

Comment » | Bioinformatics, Computer science, Philosophy

Historical noise? Simulation and essential/accidental history

June 24th, 2015 — 4:58pm

Scientists and engineers around the world are, with varying degrees of success, racing to replicate biology and intelligence in computers. Computational biology is already simulating the nervous systems of entire organisms. Each year, artificial intelligence seems able to replicate more tasks formerly thought to be the sole preserve of man. Many of the results are stunning. All of this is done on digital circuits and/or Turing-Church computers (two terms that for my purposes here are interchangeable — we could also call it symbol manipulation). Expectations are clearly quite high.

What should we realistically hope for? How far can these advances actually go? If they do not culminate in “actual” artificial biology (AB) and artificial intelligence (AI), then what will they end in – what logical conclusion will they reach, what kind of wall will they run up against? What expectations do we have of “actual” AB and AI?

These are extremely challenging questions. When thinking about them, we ought always to keep in mind that minds and biology are both, as far as science knows, open-ended systems, open worlds. This is in the sense that we do not know all existing facts about them (unlike classical mechanics or integer arithmetic, which we can reduce to sets of rules). For all intents and purposes, given good enough equipment, we could make an indefinite number of observations and data recordings from any cell or mind. Conversely, we cannot, starting from scratch, construct a cell or a mind from pure chemical compounds. Even given godlike powers in a perfectly controlled space, we wouldn’t know what to do. We cannot record in full detail the state of a (single!) cell or a mind, we cannot make perfect copies, and we cannot configure the state of a cell or mind with full precision. This is in stark contrast to digital computation, where we can always make an indefinite number of perfect copies, and where we know the lower bound of all relevant state – we know the smallest detail that matters. We know that there’s no perceivable high-level difference between a potential difference of 5.03 volts and one of 5.04 volts in our transistors at the lowest level.

(Quantum theory holds that ultimately, energy can only exist in discrete states. It seems that one consequence would be that a given volume of matter can only represent a finite amount of information. For practical purposes this does not affect our argument here, since measurement and manipulation instruments in science are very far from being accurate and effective at a quantum level. It may certainly affect our argument in theory, but who says that we will not some day discover a deeper level that can hold more information?)

In other words, we know the necessary and sufficient substrate (theoretical and hardware basis) for digital computation, but we know of no such substrate for minds or cells. Furthermore, there are reasons to think that any such substrate would lie much deeper, and at a much smaller scale, than we tend to believe. We repeatedly discover new and unexpected functions of proteins and DNA. Junk DNA, a name that has more than a hint of hubris to it, was later found to have certain crucial functions – not exactly junk, in other words.

Attempts at creating artificial minds and/or artificial biology are attempts at creating detached versions of the original phenomena. They would exist inside containers, independently of time and entropy, as long as sufficient electrical charge or storage integrity is maintained. Their ability to affect the rest of the universe, and to be affected by it, would be very strictly limited (though not nonexistent – for example, memory errors may occur in a computer as a result of electromagnetic interference from the outside). We may call such simulations unrooted or perhaps hovering. This is the quality that allows digital circuits to preserve information reliably. Interference and noise are screened out, removed.

In attempting to answer the questions posed above, we should think about two alternative scenarios, then.

Scenario 1. It is possible to find a sufficient substrate for biology and/or minds. Beneath a certain level, no further microscopic detail is necessary in the model to replicate the full range of phenomena. Biology and minds are then reduced to a kind of software; a finite amount of information, an arrangement of matter. No doubt such a case would be comforting to many of the logical positivists at large today. But it would also have many strange consequences.

Each of us as a living organism, the society around us, and every other entity has a history that stretches back indefinitely far. The history of cells needs a long pre-history and evolution of large molecules to begin. A substrate, in the above sense, exists and can be practically used if and only if large parts of history are dispensable. If we could create a perfect artificial cell on some substrate (in software, say) in a relatively short time span, say an hour, or, why not, less than a year, then it means that nature took an unnecessarily long way to get to its goal. (Luckily, efficient, rational, enlightened humans have now come along and found a way to cut out all that waste!) Our shorter way to the goal would then be something that cuts out all the accidental features of history, leaving only the essential parts in place. So the practically usable substrate, which allows for shortcuts in time, seems to imply a division between essential and accidental history of the thing we wish to simulate! (I say “practically” usable, since an impractical alternative is a working substrate that requires as much time as natural history in the “real” world. In this scenario, getting to the first cell on the substrate takes as long as it did in reality starting from, say, the beginning of the universe. Not a practical scenario, but an interesting thought experiment.) Note that if we were able to somehow run time faster in the simulation than in reality, then it would also mean that parts of history (outside the simulation) are dispensable: some time would have been wasted on unnecessary processes.

Scenario 2. Such a substrate does not exist. If no history is accidental, if the roundabout historical process taken by the universe to reach the goal of, say, the first cell or first mind, is actually the only way that such things can be attained, then this scenario would be implied. This scenario is just as astounding as the first, since it implies that each of us depends fully on all of the history and circumstances that led up to this moment.

In deciding which of the two scenarios is more plausible, we should note that both biology and minds seem to be mechanisms for recording history in tremendous detail. Recording ability gives them advantages. This, I think, speaks in favour of the second scenario. The “junk DNA” problem becomes transposed to history itself (of matter, of nature, of societies, of the universe). Is there such a thing as junk history, events that are mere noise?

In writing the above, my aim has not been to discourage any existing work or research. But the two possibilities above must be considered, and they could point the way to the most worthwhile research goals for AI and AB. If the substrates can be found, then all is “well”, and we would need to truly grapple with the fact that we ourselves are mere patterns/arrangements of building blocks, mere software, body and mind. If the substrates cannot be found, as I am inclined to think, then perhaps we should begin to think about completely new kinds of computation, which could somehow incorporate the parts that are missing from mere symbol manipulation. We should also consider much more seriously how closed-world systems, such as the world of digital information, can coexist harmoniously with open-world systems, such as biology and minds. It seems that these problems are scarcely given any thought today.

4 comments » | Bioinformatics, Computer science, Philosophy

Is our ability to detect fractals underdeveloped?

March 8th, 2013 — 3:38pm


Fractals appear in many places in biology and ecology, in society, in man-made artefacts. Yet the concept itself is quite new. Fractal phenomena existed for a long time before Benoit Mandelbrot formally investigated them as such. Amazingly, the Greeks, who did so much, do not seem to have had the notion of a fractal.

In the age of software, we can easily understand that fractals are simply the result of a function applied to its own output at different levels of scale. We know what that function is if we have written the software ourselves, but if a fractal is detected in nature, say, it may not be so easy to work out what the generating function is.
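As a concrete illustration (a sketch only, using the classic middle-thirds construction rather than anything found in nature), the rule below is applied to its own output again and again, and self-similarity at ever smaller scales falls out of that repetition:

    # A fractal as "a function applied to its own output": the middle-thirds
    # (Cantor set) rule, applied repeatedly to the intervals it produced in
    # the previous step.

    def remove_middle_third(intervals):
        """Apply the generating rule once: replace each interval (a, b)
        by its two outer thirds, at one third of the previous scale."""
        out = []
        for a, b in intervals:
            third = (b - a) / 3
            out.append((a, a + third))
            out.append((b - third, b))
        return out

    intervals = [(0.0, 1.0)]
    for level in range(1, 6):
        intervals = remove_middle_third(intervals)
        print(f"level {level}: {len(intervals)} intervals of length {3.0 ** -level:.5f}")

Given the code, the generating function is obvious; given only the resulting point set, recovering it is exactly the inverse problem described above.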

It seems that today we have instruments for observing all kinds of basically linear things at many different scales: microscopes, telescopes, oscilloscopes and so on. Yet, there is no good instrument for detecting self-similar phenomena that appear at multiple different orders of magnitude. For example, how could I look for fractals in the genome? In the organisation of my local community? What methods should I use to extract the process that generates the self-similarity?
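One standard numerical tool that does exist is box counting: cover the object with boxes of size s, count how many are occupied, and watch how that count scales as s shrinks. The sketch below (applied to the illustrative Cantor-style point set from the previous example, not to real genomic or social data) estimates a fractal dimension from that scaling. It measures self-similarity in data already in hand rather than detecting it in the wild, so it is still far from the instrument, or the process-extraction method, asked for above.

    # Box counting: how many boxes of size s are needed to cover a point set,
    # and how does that count grow as s shrinks? The slope of log(count)
    # against log(1/s) estimates a fractal dimension. The point set here is
    # illustrative (endpoints of Cantor-set intervals).
    import math

    def box_count(points, box_size):
        """Number of 1-D boxes of the given size occupied by the points."""
        return len({math.floor(p / box_size) for p in points})

    points = [0.0, 1.0]
    for _ in range(8):
        points = [p / 3 for p in points] + [2 / 3 + p / 3 for p in points]

    sizes = [3.0 ** -k for k in range(1, 7)]
    xs = [math.log(1 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]

    # Least-squares slope of log(count) against log(1/size).
    n = len(xs)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
        n * sum(x * x for x in xs) - sum(xs) ** 2)
    print(f"estimated dimension: {slope:.2f}")  # roughly log(2)/log(3), about 0.63

The same counting idea generalises to two or three dimensions, but it only quantifies the self-similarity; it says nothing about what process produced it.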

We are very comfortable with thinking about linear quantities and smooth shapes, but applying linear methods to fractal phenomena will often miss the point. This is one of the essential points that we may take from Nassim Taleb’s Antifragile.

Comment » | Bioinformatics, Philosophy

The limitations and fundamental nature of systems are not understood

December 22nd, 2012 — 7:31pm

Recently, I’ve become more and more aware of the limitations of conscious thought and formal models of entities and systems. We don’t understand how political systems make decisions, how world events occur, or even how we choose what to wear on any particular day. Cause and effect doesn’t exist in the form it is commonly imagined. We do not know what our bodies are capable of. We certainly don’t understand the basis of biology or DNA. Aside from the fact that there are so many phenomena we cannot explain yet, the models of chemistry and physics are an artificial mesh that is superimposed upon a much messier world. They work within reason, up to and including the phenomena that they can predict, but to confuse them with reality is insanity. In this vein it is interesting to also contemplate, for instance, that we don’t understand all the capabilities that a computer might have. Its CPU and hardware, while highly predictable, are fashioned out of the sub-conceptual and non-understood stuff that the world is made of. One day we may stumble upon software that makes them do something highly unexpected.

What’s the purpose of all this negative arguing then? What I want to get at when I say that we don’t understand this and we don’t understand that is a new, deeper intellectual honesty and a willingness to face the phenomena anew, raw, fresh, as they really appear to us. There’s a world of overlooked stuff out there.

Comment » | Bioinformatics, Computer science, Philosophy, Software development

Complex data: its origin and aesthetics

June 4th, 2012 — 10:28pm

Kolmogorov complexity is a measure of the complexity of data. It is simple to define but appears to have deep implications. The K. complexity of a string is defined as the size of the shortest possible program, with respect to a given underlying computer, that can generate the data. For example, the string “AAAAAA” has lower complexity than “ACTGTT”, since the former can be described as “output ‘A’ 6 times”, but the latter has no obvious algorithmic generator. This point becomes very clear if the strings are very long. If no obvious algorithm is available, one has no option but to encode the whole string in the program.

In this case, when writing the program “output ‘A’ 6 times”, I assumed an underlying computer with the operations “repetition” and “output”. Of course a different computer could be assumed, but provided that a Turing-complete computer is used, the shortest possible program will have a similar length: the difference is bounded by a constant that depends only on the two computers, not on the string.

An essential observation to make here is that the output of a program can be much longer than the program itself. For example, consider the program “output ‘A’ 2000 times”. K. complexity has an inverse relation to compressibility: data with low K. complexity is generally very easy to compress. Compression basically amounts to constructing a minimal program that, when run, reproduces the given data. Data with high K. complexity cannot, by definition, be compressed to a size smaller than the K. complexity itself.
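K. complexity itself is uncomputable, but an off-the-shelf compressor gives a crude, computable upper bound, which is enough to see the contrast. A sketch in Python (the strings are illustrative only):

    # Compressed size as a rough upper-bound proxy for K. complexity
    # (which is itself uncomputable). The strings are illustrative only.
    import random
    import zlib

    repetitive = b"A" * 6000                 # "output 'A' 6000 times"
    random.seed(1)
    patternless = bytes(random.choice(b"ACGT") for _ in range(6000))

    for label, data in [("repetitive", repetitive), ("patternless", patternless)]:
        compressed = zlib.compress(data, level=9)
        print(f"{label:12s} raw: {len(data)} bytes, compressed: {len(compressed)} bytes")

The repetitive string collapses to a few dozen bytes, while the “patternless” one compresses far less: zlib can still exploit the four-letter alphabet, but finds no deeper structure. Note, though, that the second string actually has low K. complexity – it comes from a short seeded pseudo-random generator – a reminder that compressors only ever bound K. complexity from above.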

Now that the concept is clear, where does data with high K. complexity come from? Can we generate it? What if we write a program that generates complex programs that generate data? Unfortunately this doesn’t work – it seems that, because we can embed an interpreter for a simple language within a program itself, a program-generating program doesn’t create data with higher K. complexity than (roughly) the size of the initial, first-level program. A high-complexity algorithm is necessary, and this algorithm must be produced by a generating process that cannot itself be reduced to a simple algorithm. So if a human being were to sit down and type in the algorithm, they might have to actively make sure that they are not inserting patterns into what they type.
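The reason can be seen in miniature below (a sketch; everything in it is illustrative). The short first-level program writes and runs a much larger second-level program, but precisely because it does so, the long final output could always have been described by the short program plus a fixed-size interpreter, so its K. complexity stays bounded by roughly the size of the first level.

    # A short first-level program that generates and runs a much larger
    # second-level program. The long final output is still describable by this
    # short file (plus a fixed-size Python interpreter), so its K. complexity
    # is bounded by roughly the size of the first-level program.
    import io
    from contextlib import redirect_stdout

    # First level: generate the source of a bigger, second-level program.
    generated_source = "\n".join(
        f"print('block {i}: ' + 'AB' * {i})" for i in range(1000)
    )
    print("generated program size:", len(generated_source), "characters")

    # Second level: run the generated program (exec plays the embedded
    # interpreter) and measure how much output it produces.
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        exec(generated_source)
    print("generated output size :", len(buffer.getvalue()), "characters")

However deep the tower of generators, the whole tower is itself a short description.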

But we can obtain vast amounts of high-complexity data if we want it. We can do it by turning our cameras, microphones, thermometers, telescopes and seismic sensors toward nature. The data thus recorded comes from an immensely complex process that, as far as we know, is not easily reduced to a simple algorithm. Arguably, this also explains aesthetic appeal. We do not like sensory impressions that are easily explained or reduced to simple rules. At first glance at least, hundreds of blocks of identical high-density houses are less attractive than low-density houses that have grown spontaneously over a long period of time (although we may eventually change our minds). Objects made by artisans are more attractive than those produced in high volumes at low cost. Life is more enjoyable when we can’t predict (on some level, but not all) what the next day will contain.

The deep aesthetic appeal of nature may ultimately have as its reason the intense complexity of the process that generates it. Even a view of the sky, the desert or the sea is something complex, not a case of a single color repeated endlessly but a spectrum of hues that convey information.

 

Comment » | Computer science, Philosophy
