Method and object. Horizons for technological biology

March 22nd, 2016 — 10:32pm

(This post is an attempt at elaborating the ideas I outlined in my talk at Bio-pitch in February.)

The academic and investigative relationship to biology – our discourse about biology – is becoming increasingly technological. In fields such as bioinformatics and computational biology, the technological/instrumental relationship to nature is always at work, constructing deterministic models of phenomena. By using these models, we may repeatedly extract predictable results from nature. An example would be a cause-effect relationship like: exposing a cell to heat causes “heat shock proteins” to be transcribed and translated.
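
As a deliberately naive sketch of this instrumental stance, the cause-effect relationship can be rendered as a pure function in Python. The threshold value below is an invented placeholder, not a measured biological parameter:

    HEAT_SHOCK_THRESHOLD_C = 42.0  # hypothetical activation temperature

    def heat_shock_response(temperature_c: float) -> list[str]:
        """A deterministic model: the same input state (temperature)
        always yields the same output state (expressed proteins)."""
        if temperature_c >= HEAT_SHOCK_THRESHOLD_C:
            return ["HSP70", "HSP90"]  # chaperones induced under stress
        return []

    assert heat_shock_response(45.0) == ["HSP70", "HSP90"]
    assert heat_shock_response(37.0) == []

The model is of course a caricature; the point is only the form of the relationship it encodes.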

The implicit understanding in all of these cases is that nature can be turned into engineering. Total success, in this understanding, would amount to one or both of the following:

  1. Replacement/imitation as success. If we can replace the phenomenon under study with its model (concretely, a machine or a simulation), we have achieved success.
  2. Control as success. If we can consistently place the phenomenon under study in verifiable, fully defined states, we have achieved success. (Note that this ideal implies that we also possess perfect powers of observation, down to a hypothetical “lowest level”.)

These implicitly held ideals are not problematic as long as we acknowledge that they are mere ideals. They are very well suited as horizons for these fields to work under, since they stimulate the further development of scientific results. But if we forget that they are ideals and begin to think that they really can become realities, or if we prematurely think that biology really must be like engineering, we might be in trouble. Such a belief conflates the object of study with our relatedness to that object. It misunderstands the role of the equipment-based relationship. The model – and associated machines, software, formulae, et cetera – is equipment that constitutes our relatedness to the phenomena. It cannot be the phenomena themselves.

Closely related to the ideals of replacement and control is the widespread application of abstraction and equality in engineering-like fields (and their application to new fields that are presently being clad in the trappings of engineering, such as biology). Abstraction and equality – the notion that two entities, instances, moments, etc., are in some way the same – allow us to introduce an algebra, to reason in the general and not in specifics. And this is of course what computers do. It also means that two sequences of actions (laboratory protocols, for example), although they are different sequences, or the same sequence but at different instances in time, can lead to the same result. Just as 3+1 and 2+2 both “equal” 4. In other words, history becomes irrelevant; the specific path taken no longer means very much. But it is not clear that this can ever truly be the case outside of an algebra, and that is what risks being forgotten.
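
A small sketch makes the erasure of history concrete. The two “protocols” here are invented for illustration; once we compare only results, the different paths become indistinguishable:

    # Two different sequences of operations ("protocols") that end in
    # the same abstract state. Under equality, only the final value
    # survives; the path taken is discarded.

    def protocol_a(x: int) -> int:
        x = x + 3  # first step
        x = x + 1  # second step
        return x

    def protocol_b(x: int) -> int:
        x = x + 2  # a different first step
        x = x + 2  # a different second step
        return x

    # 3 + 1 and 2 + 2 both "equal" 4: the histories are indistinguishable.
    assert protocol_a(0) == protocol_b(0) == 4

In a laboratory, by contrast, two such runs would differ in countless unrecorded details; the algebra deliberately forgets them.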

We might call all this the emergence of technological biology, or technological nature, the conquest of biology by λόγος, et cetera. The principal danger seems to be the conflation of method with object, of abstraction with the specific. And here we see clearly how something apparently simple – studying RNA expression levels in the software package R, for example – opens up the deepest metaphysical abysses. One of the most important tasks right now, then, would be the development of a scientific and technological culture that keeps the benefits of the technological attitude without losing sight of a more basic non-technological relatedness. The path lies open…

Comment » | Bioinformatics, Computer science, Philosophy

The inexhaustible wealth of appearance, information and specificity

December 13th, 2015 — 2:36pm


When perceiving an object, for example a chair, the statement “this is X” (this is a chair) is almost entirely uninteresting. The concept by which we identify the object is a mere word, and in a sense entirely devoid of meaning.

That concept does help us align this object with other entities in space and time. It sets expectations about what has been done and what can be done to and with it, and it links the object to social practices. But none of these things are very interesting. After all, we understand quite well what society expects from chairs.

What is more interesting is all the other statements we could make about a particular chair, that is, all the qualities, information, phenomena and experiences that do not fit the general concept of a chair. Call this the chair’s particularity. It may be unusually sturdy or rickety. It may evoke a sense of sorrow or longing for a person who used to sit on it. It may make us think about economics. Its shape may even have something spiritual about it. It may, if it is a chair in an abandoned house, be decomposing. And even this is just scratching the surface.

In all likelihood, we are able to produce an unbounded number of interesting statements about this locus that is the chair. (Recall the famous school assignment about writing a story several hundred words long about the face of a coin.) And this would hold true both when we speak freely, metaphorically and poetically, and when we restrict ourselves to testable, scientific (in the modern sense) statements. New metaphors can always be invented, new scientific equipment may always be constructed. These additional modes of relatedness to the locus provide, perhaps, the basis for new statements.

How are we to understand this fundamental overflowing, this exuberant blossoming, the profound potential wealth that we draw upon and realise when we articulate statements about an entity such as this chair? It is not part of the concept “chair”. This concept is overlaid as an afterthought in order to make the surplus of impressions manageable and graspable. We are used to economising the use of our consciousness, dispensing it only sparingly, through the shielding, buffering and deflection that concepts afford us.

For Heidegger, being is the basis of intelligibility, a carrier of meaning. Language and intelligibility exist only on the basis of primordial being. He makes it his task to inquire into what this being is.

For Georges Bataille, all activity that involves redistribution of energy, human and otherwise, accumulates a surplus that necessarily must be released in some way.

Myths and archetypes repeat themselves throughout history and society, in constantly renewed forms which are both always the same and always made from different specific constituent parts. They can always be repeated in a different way. The hero myth exists in every culture (see for example Jung or Campbell). Conversely, this myth in all its specific detail is always different each time it appears.

In Difference and Repetition, Deleuze argues that conceptual machinery is constantly at work, extracting difference from whatever the underlying basis is.

Genetic material successfully reproduces and preserves itself, and perhaps prospers, only through the continual introduction of difference and variation at an appropriate rate.

The digital world, on the other hand, denies the possibility of generating an unbounded number of statements from some entity (such as a record in a database). In fact, its essence is the possibility of perfect copying, which happens only when the information being carried is strictly circumscribed and limited.

All these concepts, it seems, have something in common – the interaction between a specific form and the possibility of an infinite number of variations of and departures from that form.
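
Of these, the digital case is the easiest to make concrete. In this sketch (the record and its fields are invented for the purpose), a copy is exhaustively equal to its original precisely because the record carries no information beyond its fixed schema:

    import copy

    # A database-like record: its information content is strictly
    # circumscribed by its fields, which is what makes a perfect,
    # indistinguishable copy possible.
    record = {"id": 1, "name": "chair", "material": "oak"}
    duplicate = copy.deepcopy(record)

    # Every statement the system can make about the original, it can
    # make about the copy: the two are exhaustively equal.
    assert duplicate == record

A physical chair admits no such copy; its particularity overflows any schema we fix in advance.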

4 comments » | Philosophy

Mysteries of the scientific method

November 7th, 2015 — 10:48am

The scientific method can be understood as the following steps: formulating a hypothesis, designing an experiment, carrying out experiments, and drawing conclusions. Conclusions can feed into hypothesis formulation again, so that a different (related or unrelated) hypothesis may be tested, and we have a cycle. This feedback can also take place via a general theory that conclusions contribute to and hypotheses draw from. The theory comes to represent everything we have learned about the domain so far. Some of the steps may be expanded into sub-steps, but in principle this cycle is how we generally think of science.
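
Schematically, as a sketch in Python (each step is a trivial stand-in; only the feedback structure matters):

    def formulate_hypothesis(theory: set[str]) -> str:
        # In reality this step is bounded by imagination and intuition.
        return f"hypothesis-{len(theory)}"

    def run_experiment(hypothesis: str) -> bool:
        return True  # stand-in for designing and performing an experiment

    def draw_conclusion(hypothesis: str, supported: bool) -> str:
        return f"{hypothesis}: {'supported' if supported else 'rejected'}"

    def scientific_cycle(theory: set[str], rounds: int) -> set[str]:
        for _ in range(rounds):
            hypothesis = formulate_hypothesis(theory)
            result = run_experiment(hypothesis)
            theory = theory | {draw_conclusion(hypothesis, result)}  # feedback
        return theory

    print(scientific_cycle(set(), rounds=3))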

This looks quite simple, but is it really? Let’s think about hypothesis formulation and drawing conclusions. In both of these steps, the results are bounded by our imagination and intuition. Thus, something that doesn’t ever enter anybody’s imagination will not be established as scientific fact. In view of this, we should hope that scientists do have vivid imaginations. It is easy to imagine that there might be very powerful findings out there, on the other side of our current scientific horizon, that nobody has yet been creative enough to speculate about. It is not at all obvious that we can see the low-hanging fruit or even survey this mountainous landscape well – particularly in an age of hyper-specialisation.

But scientists’ imaginations are probably quite vivid in many cases – thankfully. Ideas come to scientists from somewhere, and some ideas persist more strongly than others. Some ideas seduce scientists into years of hard labour, even when the results are meagre at first. Clearly this intuition, this sense that something is worth investigating, is absolutely crucial to high-quality results.

A hypothesis might be: there is a force that makes bodies with mass attract each other, in a way that is inversely proportional to the square of the distance between them. To formulate this hypothesis we need concepts such as force, bodies, mass, distance, attraction. Even though the hypothesis might be formulated in mere words, these words all depend on experience and practices – and thus on equipment (even if the equipment used in some cases is simply our own bodies). If this hypothesis is successfully confirmed, then a new concept becomes available: the law of gravity. This concept in turn may be incorporated into new hypotheses and experiments, paving the way for ever higher and more complex levels of science and scientific phenomena.
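
The hypothesis can be rendered as a formula, F = G·m1·m2 / r², and checked for its characteristic behaviour; the masses and distances below are arbitrary example values:

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gravitational_force(m1: float, m2: float, r: float) -> float:
        """F = G * m1 * m2 / r**2"""
        return G * m1 * m2 / r ** 2

    # Doubling the distance reduces the force by a factor of four.
    near = gravitational_force(5.0, 10.0, 2.0)
    far = gravitational_force(5.0, 10.0, 4.0)
    assert abs(near / far - 4.0) < 1e-9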

Our abilities to form hypotheses, to construct equipment and to draw conclusions seem to be human capacities that are not easy to automate.

Entities such as matter, energy, atoms and electrons become accessible – I submit – primarily through the concepts and equipment that give access to them. In a world with a history different from ours, it is conceivable that entirely different concepts and ideas would explain the same phenomena that are explained by our physics. For science to advance, new equipment and new concepts need to be constructed continually. The process is itself almost an organic growth.

Can we have automated science? Do we no longer need scientific theory? (!?) Can computers one day carry out our science for us? Only if either: a) science is not an essentially human activity, or b) computers become able to take on this human essence, including the responsibility for growing the conceptual-equipmental boundary. Data mining in the age of “big data” is not enough, since this (as far as I know) operates with a fixed equipmental boundary. As such, it would only be a scientific aid and not a substitute for the whole process. Can findings that do not result in concepts and theories ever be called scientific?

If computer systems ever start designing and building new I/O-devices for themselves, maybe something in the way of “artificial science” could be achieved. But it is not clear that the intuition guiding such a system could be equivalent to the human intuition that guides science. It might proceed on a different path altogether.

1 comment » | Bioinformatics, Computer science, Philosophy

Science and non-repeatable events

May 29th, 2014 — 11:33am

Scientific method is fundamentally concerned with repeatable events. The phenomena that science captures most easily may be described using the following formula: once conditions A have been established, if B is done, then C happens. 

This kind of science is a science of reactions, of the reactive. But what about a science of the active? Is such a science possible?

To phrase what I have in mind in a different way, suppose that there are events in our universe that are not reproducible or repeatable. They would not be the consequence of some stimulus or trigger. But neither would they be the act of some imaginary god. They might simply be part of the same underlying, mysterious generator that is responsible for what we call scientific laws (patterns of reproducibility). (So far we have inferred some of the properties of this generator, but we are very far from apprehending it or understanding its totality and boundaries. Intellectual humility is crucial.) Would science be able to record and theorise about such events? Certainly not. Modern scientific method is firmly aimed at eliminating irreproducible results.
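
A toy simulation can make the filtering visible. The “generator” below is an invented stand-in and the event probability is arbitrary; the point is only that a repetition-based filter retains the lawlike pattern and discards whatever occurs once:

    import random

    # Mostly lawlike behaviour, with a rare event that never recurs in
    # the same form. The numbers are arbitrary illustration values.
    def run_trial(rng: random.Random) -> str:
        if rng.random() < 0.001:
            return f"singular event #{rng.randrange(10**9)}"  # one-off
        return "C follows B under conditions A"

    def replicated_findings(n_trials: int, seed: int = 0) -> set[str]:
        """Keep only outcomes that were observed more than once."""
        rng = random.Random(seed)
        counts: dict[str, int] = {}
        for _ in range(n_trials):
            outcome = run_trial(rng)
            counts[outcome] = counts.get(outcome, 0) + 1
        return {o for o, c in counts.items() if c > 1}

    # The singular events happened, but they leave no accepted trace.
    print(replicated_findings(10_000))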

To put it in still another way, we are able to verify determinism in those cases where it holds up, but we are always unable to verify the absence of cases (in the past or in the future) where the deterministic rules break down.

This is a quandary, since it does seem that the world contains phenomena that are difficult to reproduce. The belief that the world can ultimately be reduced to a set of deterministic rules is not at all uncontroversial (and perhaps many physicists have given it up already). Particularly in biology, we constantly struggle to understand phenomena in terms of such rules. However, we can perhaps see biology residing at the boundary between the reactive/deterministic and the active/irreproducible. Gradual determinism? —

5 comments » | Bioinformatics, Computer science, Philosophy

Is our ability to detect fractals underdeveloped?

March 8th, 2013 — 3:38pm


Fractals appear in many places in biology and ecology, in society, in man-made artefacts. Yet the concept itself is quite new. Fractal phenomena existed for a long time before Benoit Mandelbrot formally investigated them as such. Amazingly, the Greeks, who did so much, do not seem to have had the notion of a fractal.

In the age of software, we can easily understand that fractals are simply the result of a function applied to its own output at different levels of scale. We know what that function is if we have written the software ourselves, but it may not be so easy to identify it when a fractal is detected in nature, say.
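
The Cantor set is perhaps the simplest concrete instance of such a rule feeding on its own output:

    # A fractal as a function applied to its own output: each application
    # maps every interval to two smaller copies of itself.
    def cantor_step(intervals: list[tuple[float, float]]) -> list[tuple[float, float]]:
        out = []
        for a, b in intervals:
            third = (b - a) / 3
            out.append((a, a + third))  # keep the left third
            out.append((b - third, b))  # keep the right third
        return out

    intervals = [(0.0, 1.0)]
    for _ in range(3):
        intervals = cantor_step(intervals)  # same rule, smaller scale

    print(len(intervals))  # 8 self-similar pieces after 3 applications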

It seems that today we have instruments for observing all kinds of basically linear things at many different scales: microscopes, telescopes, oscilloscopes and so on. Yet, there is no good instrument for detecting self-similar phenomena that appear at multiple different orders of magnitude. For example, how could I look for fractals in the genome? In the organisation of my local community? What methods should I use to extract the process that generates the self-similarity?
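
For point sets there is at least a crude such instrument: box counting, which looks for a power law N(s) ~ s^(-D) across scales. The sketch below applies it to a stand-in Cantor-like set, where the estimate should approach log 2 / log 3 ≈ 0.63; encoding a genome or a community so that the method applies at all is precisely the hard part:

    import math

    def cantor_points(depth: int) -> list[float]:
        """Left endpoints of the intervals of a depth-limited Cantor set."""
        segs = [(0.0, 1.0)]
        for _ in range(depth):
            segs = [piece for a, b in segs
                    for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
        return [a for a, _ in segs]

    def box_count(points: list[float], s: float) -> int:
        """Number of boxes of size s that contain at least one point."""
        return len({math.floor(p / s + 1e-9) for p in points})

    points = cantor_points(10)
    for k in range(1, 6):
        s = 3.0 ** -k
        n = box_count(points, s)
        # The slope of log N against log(1/s) estimates the dimension.
        print(f"s = 3^-{k}: N = {n}, estimate = {math.log(n) / math.log(1 / s):.3f}")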

We are very comfortable with thinking about linear quantities and smooth shapes, but applying linear methods to fractal phenomena will often miss the point. This is one of the essential points that we may take from Nassim Taleb’s Antifragile.

Comment » | Bioinformatics, Philosophy
