Tag: epistemology

Entering into bioinformatics

April 14th, 2012 — 10:37pm

As of now, I have been working with bioinformatics in the Mizuguchi lab at NIBIO, Osaka, for about two weeks. The lab environment is stimulating and I feel quite fortunate to be here.

It is interesting to compare computer science and bioinformatics with just the hindsight of this short period. In computer science and electronics, we study systems that have been built from the ground up out of well-known components at various scales. They have been designed with certain high-level functionalities in mind, and it is always known how these high-level functions are realised and what makes them tick. In biology, in contrast, we encounter systems designed by nature. These systems, organisms, have certain high-level functions we are aware of. We are also aware of some of the low-level components, such as molecules, atoms and cells (although we may have only a partial understanding of some of these). The problem in biology is to explain what makes a high-level function tick or not tick, and how to steer, enhance, or suppress it. The intermediate steps are not always revealed to us, and we must painstakingly tease them out with experiments. As always with empirical science, we can never be sure that we have obtained the whole picture.

This difference – the fact that we must reconstruct all the design principles and intermediate mechanisms for organisms, but not for computers – leads to different styles of teaching and thinking. Biology texts appear to be very top-down, focusing on what has been observed and what it appears to mean. Technology texts can be bottom-up, building up a complex design smoothly by adding one layer at a time, starting from the core, confident that nothing is being omitted. The contrast is striking.

A limitation that both biology and computer science share is the problem of defining the exact capabilities of an organism or a system. In biology we often do not know, and I doubt we ever will, considering how complex the genetic code is. In computer science, the capabilities of very simple programs can be understood completely, but understanding a nontrivial program — for example, verifying that it does what is desired and nothing that is undesired — usually requires nontrivial formal methods, if it is possible at all.
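To make the contrast concrete, here is a minimal sketch (the function and its specification are my own invented illustration, not from any particular system): for a trivial program over a bounded input domain, a specification can be verified completely by exhaustive checking. Nothing of the kind scales to programs with unbounded state; Rice's theorem tells us that no general algorithm can decide nontrivial semantic properties of arbitrary programs.

```python
def clamp(x):
    """Clamp an integer to the range [0, 255]."""
    return max(0, min(255, x))

def spec_holds(x):
    """Specification: the result is always in range, and
    in-range inputs are returned unchanged."""
    y = clamp(x)
    return 0 <= y <= 255 and (y == x if 0 <= x <= 255 else True)

# Exhaustive verification over a bounded domain: feasible for this
# toy program, impossible in general for programs with unbounded state.
assert all(spec_holds(x) for x in range(-1000, 1000))
print("specification verified on the bounded domain")
```

For anything beyond toy examples, this brute-force approach gives way to model checking, proof assistants, or property-based testing, each of which trades completeness against effort in a different way.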

Comment » | Bioinformatics, Life

Platonism and the dominant decomposition

October 26th, 2011 — 2:33am

I’m in Portland, Oregon for the SPLASH conference. There’s a lot of energy and good ideas going around.

I gave a talk about my project, Poplar, at the FREECO workshop. At the same workshop there was a very interesting talk given by Klaus Ostermann, outlining some of the various challenges facing software composition. He linked composition of software components to concepts in classical logic, and informally divided composition into a light side and a dark side. On the light side are ideal concepts such as monotonicity (the more axioms we have, the more we can prove), absence of side effects and a single, canonical decomposition of everything. On the dark side are properties such as side effects, the absence of a single decomposition, knowledge that invalidates previously obtained theorems, and so on.

One of the ideas that resonated most with me is the tyranny of the dominant decomposition (for instance, a single type hierarchy). Being forced to decompose a system in a single way at all times implies having only a single perspective on it. Is this not Platonism coming back to haunt us in programming languages? (Ostermann did indeed say that he suspects mathematics and the natural sciences have had too much influence on programming.) What we might need now is an anti-Platonism in programming: subjectivist, perspectivist programming languages. If components can view their peer components in different ways, depending on their domain and their interests (i.e. what kind of stakeholders they are), we might truly obtain flexible, evolvable, organic composition.

Comment » | Philosophy, Software development

Assessing research quality

April 28th, 2011 — 4:48pm

Academic research is difficult to evaluate. In order to know the significance of an article, a result or an experiment, one must know a lot about the relevant field. It is probably fair to say that few people read research articles in great depth unless they work in exactly the area the article is in. PhD theses might cite hundreds of articles, but it seems natural that not all of these articles will be read with the same degree of scrutiny by the author of the thesis.

Hence the trouble with obtaining funding for research. In order to obtain funding, you have to communicate something that seems incommunicable without the full commitment of the reader. Funding agencies want a number on a scale ("what is the quality of this paper, between 0 and 1?"), but this quality number cannot be communicated separately from the full substance of the paper and its context. And thus we end up with keywords, catchphrases that become associated with quality for short periods of time, as a way of bypassing this complexity: an approximate way of indicating that you are doing research on something worthwhile.

This reflects a broader problem in society: the evaluation of authorities. I cannot evaluate my doctor's, my dentist's, or my lawyer's work, since I lack the necessary competence. Accordingly, I base my trust on the person and some of their superficial attributes, instead of judging the work itself. The same kind of thing sometimes becomes necessary when choosing which researchers to fund.

It also points to a faculty that must have evolved in human beings over millennia: the capacity to evaluate, very quickly, important properties of things we do not understand well, whether for danger, nutrition, or the like. Unfortunately, this faculty does not translate well to research…

Comment » | Computer science

Objective and subjective reality; perspectivism

March 31st, 2011 — 9:45pm

Nietzsche rejects the idea of an objective reality. He appears to give a generative status to the faculty of interpretation, in effect saying that the subject creates the world through her interpretations. Simultaneously, he champions the “intellectual conscience” and the value of scientific method and inquiry. How to make sense of this apparent contradiction?

It might be thought at first that the assertion that all judgments are subjective has some exceptions. After all, maybe we all agree that matters of taste and style are inherently more subjective than measurements of the length of a pencil or the weight of a stone. Maybe we would be tempted to posit a hierarchy of degrees of subjectivity. But Nietzsche rejects this too, emphatically expressing that there is no objective basis to which observations can be reduced, no judgment that is absolutely and irreducibly validated. For Nietzsche, the world seems to consist of multiple interlocking interpretations that support each other, a bit like an M. C. Escher drawing.

Elsewhere, Nietzsche invokes the death of God. Christoph Cox, in his “Nietzsche: Naturalism and Interpretation”, points out that the death of God as an idea has only been understood in its most shallow form if it is seen as a mere rejection of Christianity. For Nietzsche, Platonism, “the thing in itself”, the “forms”, “truth”, “paradise” and “objective facts” are all — maybe paradoxically — ways of rejecting reality, rejecting the world. They are dogma. The death of God, Cox asserts, is the end of all these various forms of dogma, not just of Christianity.

As for the intellectual conscience, Cox asserts that by this, Nietzsche simply means that one must question and attack one’s perspectives and interpretations as much as possible, and that a refusal to do this — an acceptance of dogmatic thought — would be a betrayal of the intellectual conscience. In this view, Nietzsche seems to state that in order to best know the world, we must entertain multiple parallel perspectives and harden each one as much as possible through questioning.

A naive questioning of objective truth can lead to a naive relativism, in which every assertion appears equally valuable or equally true. It is often on this account that social philosophers and thinkers of today are criticised: as champions of a mindless relativism that destroys and levels all valuation. However, the idea of the intellectual conscience does seem to point the way to new and quite concrete valuation. Nietzsche's project is ultimately a constructive one which seeks to show a way forward.

What of science then, and its claims to empirical, objective truth, found through experiment and measurement? It seems that scientific thought and scientific findings are in no way invalidated through a Nietzschean epistemology. Science would merely have to forge relationships with other perspectives and find useful ways of relating to them, instead of claiming to be the sole valid way of viewing the world.

After all, what evidence is there that the world exists objectively and independent of the mind? And if there is no evidence either way, let us use Occam’s razor. Which alternative is the simplest explanation?

3 comments » | Philosophy
