The bounded infinity of language

Works of art, including film, painting, sculpture, literature and poetry, have a seemingly inexhaustible quality. As we keep confronting them, renewing our relationship with them over time, we continually extract more meaning from them. Some works truly appear to be bottomless. Reaching the bottom easily is, of course, a sure sign that a work will not have much lasting value.

Of the forms listed above, (written) poetry and literature have the particular property that they are crafted out of a demonstrably finite medium: text. A finite alphabet, a finite vocabulary, and a finite number of pages. As long as one disregards details such as paper quality, typography and binding, perfect copies can be made; the text can be transcribed in its entirety without information loss. Somehow, reading Goethe on a Kindle is an experience that still holds power, although he presumably never intended his books to be read on Kindles (and some might argue that reading him in this way is ignoble).

How is it then that the evocative power of something finite can seem to be boundless? This curious property is something we might call the poetic or metaphorical qualities of a text. (Works of film, painting, sculpture and so on most likely also have this power, but it is trickier to demonstrate that they are grounded in a finite medium.) Through this mysterious evocative power, the elements that make up a work of art allow us to enter into an infinity that has been enclosed in a finite space. It will be argued that what is evoked comes as much from the reader as from the text, but this duality applies to all sensation.

With this in mind we turn, once again, to programming and formal “languages”. Terms in programming languages receive their meaning through a formal semantics that describes, mathematically, how the language is to be translated into an underlying, simpler language. This process takes place on a number of levels, and eventually the lowest underlying language is machinery. This grounds the power of a program to command electrons. But this is something different from the meaning of words in a natural language. The evocative power described above is clearly absent, and computer programs today do not transcend their essential finitude. With brute force, we could train ourselves to read source code metaphorically or poetically, but in most languages I know, this would result in strained, awkward and limited metaphors. (Perhaps mostly because programming languages to a large extent reference a world different from the human world.)
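To make the layered-translation picture concrete, here is a minimal sketch (the toy language, its tuple syntax, and the `desugar` and `evaluate` functions are my own illustrative inventions, not any standard formal semantics): a surface construct receives its meaning only by being rewritten into a smaller core language, which is in turn given meaning by a simpler underlying layer.

```python
# Toy illustration of semantics as layered translation.
# Surface language: ('num', n), ('neg', e), ('add', e1, e2), ('sub', e1, e2)
# Core language: the same minus 'neg', which exists only by translation.

def desugar(expr):
    """Translate surface syntax into the smaller core language:
    ('neg', e) is rewritten as ('sub', ('num', 0), e)."""
    tag = expr[0]
    if tag == 'num':
        return expr
    if tag == 'neg':
        return ('sub', ('num', 0), desugar(expr[1]))
    if tag in ('add', 'sub'):
        return (tag, desugar(expr[1]), desugar(expr[2]))
    raise ValueError(f"unknown form: {tag}")

def evaluate(core):
    """Give the core language its meaning in the underlying layer
    (here, plain Python arithmetic standing in for machinery)."""
    tag = core[0]
    if tag == 'num':
        return core[1]
    if tag == 'add':
        return evaluate(core[1]) + evaluate(core[2])
    if tag == 'sub':
        return evaluate(core[1]) - evaluate(core[2])
    raise ValueError(f"unknown form: {tag}")

# ('neg', 5) has no meaning of its own; it means 0 - 5 by translation.
print(evaluate(desugar(('neg', ('num', 5)))))  # -5
```

The point is the structure, not the content: each layer's meaning is defined entirely in terms of the layer below it, down to machinery. The chain grounds a program's power to command electrons, but, as argued above, it carries no evocative surplus.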

Consider how this inability to transcend finitude impacts our ability to model a domain in a given programming language. With an already formal domain, such as finance or classical mechanics, the task is straightforward, since what needs to happen is a mere translation. Other domains, such as biology, resist formalisation – and perhaps this is one of their essential properties. Here we would like to draw on the evocative, poetic, and metaphorical capacities of natural language – for the sake of program comprehension and perhaps also to support effective user interfaces – while also writing practical programs. But we have yet to invent a formal language that is both practical and evocative to the point that works of art could be created in it.

an ancient pond / a frog jumps in / the splash of water

(Bashou, 1686)


Science and non-repeatable events

Scientific method is fundamentally concerned with repeatable events. The phenomena that science captures most easily may be described using the following formula: once conditions A have been established, if B is done, then C happens. 

This kind of science is a science of reactions, of the reactive. But what about a science of the active? Is such a science possible?

To phrase what I have in mind in a different way, suppose that there are events in our universe that are not reproducible or repeatable. They would not be the consequence of some stimulus or trigger. But neither would they be the act of some imaginary god. They might simply be part of the same underlying, mysterious generator that is responsible for what we call scientific laws (patterns of reproducibility). (So far we have inferred some of the properties of this generator, but we are very far from apprehending it or understanding its totality and boundaries. Intellectual humility is crucial.) Would science be able to record and theorise about such events? Certainly not. Modern scientific method is firmly aimed at eliminating irreproducible results.

To put it in still another way, we are able to verify determinism in those cases where it holds up, but we are always unable to verify the absence of cases (in the past or in the future) where the deterministic rules break down.

This is a quandary, since it does seem that the world contains phenomena that are difficult to reproduce. The belief that the world can ultimately be reduced to a set of deterministic rules is not at all uncontroversial (and perhaps many physicists have given it up already). Particularly in biology, we constantly struggle to understand phenomena in terms of such rules. However, we can perhaps see biology as residing at the boundary between the reactive/deterministic and the active/irreproducible. Gradual determinism?


Innocent knowing


Knowledge can be associated with weight, heaviness, obligation, cynicism. Depending on one’s attitude, it can in many cases be seen as opposed to more “innocent” qualities such as beauty and play.

The more we know of our own history, and the more honestly we face it, the gloomier we might become about the prospects for our future. This is true for individuals and for societies. Truly facing up to our past mistakes might erase all of our faith in a positive future. Thus a negative, passive kind of nihilism is born. On this basis, Nietzsche discusses, in “On the Use and Abuse of History for Life” and elsewhere, how ignorance actually preserves life and health in many cases. Truth can be a poison. The courage to leap ahead in defiance of the past, trying anew, which is essential for life, can be associated with stupidity as well as with heroism.

The connection between knowledge and guilt/heaviness goes even further back: in the Bible, the fall of man in the Garden of Eden is associated with the acquisition of knowledge and the shedding of ignorance.

Against this gloomy view, Nietzsche later begins to formulate the ideal of “the gay science”, joyful knowing or “wild wisdom”. In Zarathustra he writes:

Three transformations of the spirit I name for you: how the spirit becomes a camel, and the camel a lion, and the lion at last a child.

The meaning of these transformations is mysterious and subject to much interpretation. But it is usually understood that becoming a “camel” involves taking on a heavy load – the burden of knowledge, the burden of history and wisdom. The lion involves attaining the power to create new values. Finally, the child is a regained innocence.

Innocence with knowledge, play with science, beauty with honesty. Is this not one of the most difficult and profound, and also most worthy, formulas to strive for?



Small Tools for Bioinformatics

Pjotr Prins has published a Small Tools Manifesto for Bioinformatics, which is well worth a read for anyone who develops bioinformatics software.

In essence it’s about increased adoption of the Unix design philosophy. I fully support the manifesto, which in many ways is reminiscent of the ideas that Gabriel Keeble-Gagnere and I presented in our Friedrich paper at PRIB2012. The idea of designing software as small parts that can be recombined freely, instead of as a huge black box with a glossy surface, is an extremely powerful one, particularly in the research space.


Scott Aaronson has misunderstood continental philosophy

It is first with delight and then with a growing feeling of sadness that I read Luke Muelhauser’s interview with the computer scientist Scott Aaronson at the Machine Intelligence Research Institute. As a computer scientist, Aaronson has contributed much to our understanding of complexity theory and other areas. He has even written popular science books on the field. I am happy to read that he seems to feel strongly about the links between computer science and philosophy. I agree with Aaronson about a lot of things. Certainly, computer science and philosophy are fields that cross-fertilise each other a lot, and my feeling is that this process is only getting started; much more can be done. Perhaps this mating of the two fields is even severely lagging behind what we would need in today’s world. Without a doubt, the study of formal models of rewriting and interpretation is extremely interesting and sheds light on questions about the nature of language, complexity, knowledge, understanding, communication, equipment, and the abilities of the human mind.

But then, just as I am about to call Aaronson one of my intellectual heroes, he stumbles:

By far the most important disease, I’d say, is the obsession with interpreting and reinterpreting the old masters, rather than moving beyond them.

And then he stumbles severely:

One final note: none of the positive or hopeful things that I said about philosophy apply to the postmodern or Continental kinds. As far as I can tell, the latter aren’t really “philosophy” at all, but more like pretentious brands of performance art that fancy themselves politically subversive, even as they cultivate deliberate obscurity and draw mostly on the insights of Hitler and Stalin apologists. I suspect I won’t ruffle too many feathers here at MIRI by saying this.

The unfortunate continental-analytic pseudo-divide

Who are the “Hitler and Stalin apologists”? I hope that this embarrassing epithet is not supposed to refer to Nietzsche and Marx, for example, since even a very casual reader of Nietzsche would quickly discover that he does not like nationalism or anti-semitism. Instead, his thinking was twisted and selectively misused by Nazi ideologues. It is true that thinkers like Heidegger and Foucault for a time supported Nazism and the Khomeini revolution, respectively, and there are other examples of controversial associations. But using this as an excuse not to read these thinkers, let alone to dismiss all of continental thinking, is very superficial.

A comment like this would not normally be worth a serious reply, and it seems Aaronson is just throwing it out carelessly, expecting an audience with views similar to his own. But since it comes from someone who is clearly very intelligent and who clearly wants to bridge philosophy and computer science (which I also want to do), I felt that I should counter the position I imagine he is coming from. In doing so I will not be responding to Aaronson’s interview as a whole, which is for the most part an excellent read, full of interesting viewpoints. Instead, I will focus on these two unfortunate remarks only, and the misguided viewpoint that I believe generated them.

The artificial 20th-century split between “continental” (French, German, etc.) philosophy and “analytic” (mostly Anglo-Saxon) philosophy is extremely unfortunate, and one hopes that it can be bridged one day. Aaronson exemplifies a general theme. He is Anglo-Saxon, a scientist and logician, has a limited interest in the humanities, and is thoroughly modern in that he has lost sight of the unitary origin of the scattered, fragmented array of academic fields and disciplines that we have today. The writers he likes are great ones, but they stand on one side of the continental-analytic divide only. He is doing great work, but he could potentially be doing so much more.

On solvent abuse

I believe that I understand Aaronson’s intellectual background to some degree. I studied for my undergraduate degree at Imperial College London, which, like MIT, is a place full of people who are very technically oriented. For me that was a great education in many ways, but it would not be an overstatement to say that very little attention was (and probably still is) given to the humanities there. This was by design. A certain deep but ultimately restricted kind of vision was cultivated there. The pure rationalist perspective functions exactly like bleach: it disinfects, killing harmful bacteria, but it might kill healthy tissue too if applied too liberally. It also removes colour. For reasons unclear to me – perhaps partly as a reaction – my interest in the humanities nevertheless flickered to life during my final year there, and intensified when I began my graduate studies here in Tokyo. I became very interested in the viewpoints that philosophy could offer me, and especially in continental writers. It is as someone who has made a difficult migration from a very restrictive logical/scientific viewpoint to a more inclusive one that I write these comments. My hope is that Aaronson will also make this leap and expand the range of his work to include the truly useful – if his funding agencies would let him, that is.

The unified root of knowledge

As Aaronson says, Einstein, Bohr, Gödel and Turing had views outside of the scientific fields they are remembered for. It even seems that they might have been so successful in part because of their breadth. Blaise Pascal is remembered in some circles as a mathematician, but we could equally well call him a philosopher who did some mathematics on the side. Francis Bacon thought not only scientifically but also meta-scientifically, imagining the limits of science and scientific method. The Pythagoreans approached mathematics not as something to be contemplated as a formal exercise at a desk with pen and paper, but as part of something esoteric and mystical. In ancient Greece, education emphasised an integrated, well-balanced body and mind, and training in a wide range of theoretical and practical fields was important for one’s stature. The Greeks preferred this kind of multiplicity, and would have been horrified at the suggestion that the focus on specialties and separate disciplines that we have today should somehow be better. But today we have thoroughly rejected the idea that all knowledge and understanding is connected and stems from a single source.

In the first of the two remarks that I have singled out above, Aaronson complains that academic philosophy continually gets into the “hermeneutic trap” of reinterpreting the same passages by dead writers again and again. What is decisive here, as in so many things, is the attitude with which one carries out this interpretation. If this exercise is carried out for the sake of getting a grade at a modern university, passing a class, or for academic promotion, then the result can be nothing but junk, artificial, forced thinking and writing, and a bad reputation for the activity as a whole. The necessary attitude that gives this activity its true value is grounded in a desire to return to the origin – the root that many applied scientists mistakenly believe their branch of the tree constitutes – and then use the insights from there to bring society forward. The suggestion that this activity has no value is ridiculous. Would Aaronson also say that we don’t need to study history, that we should let every generation invent society anew? Maybe he’d recommend burning books older than 50 years? I’m very far from making some kind of blanket endorsement of conservatism, but I would certainly endorse a selective conservatism that critiques the past in order to learn from its experience and create a better future. Only the earnest interpretation of old texts can renew our connection with the origin of our thinking. (This is not to say that what goes on in humanities departments today is such an earnest interpretation, but that discussion belongs elsewhere.)

“Utility” and what is truly useful

Many of us moderns are obsessed with a particular notion of utility, which comes to dictate what is worth doing. Everybody understands that it is easy to fund computer science because it leads to applications, be they commercial, scientific or military, that can immediately be exchanged for money. (It is through luck that academics doing good work of true value are sometimes able to dress up their work as “useful” to the markets and funders. If this didn’t happen, institutional thinking would be even more diseased and withered than it already is.) It is difficult to fund a study of hermeneutics or existentialism, because the markets don’t care and consumers are not interested. But just as democracies are unable to make long-term decisions, instead making decisions that will please voters today, what is “useful” from computer science in the short term – for example for fighting battles in Afghanistan or for making a new iPad – is not necessarily what is needed for the long term: the furthering and evolution of culture; new, inspiring and vital visions for society and for the future; spiritual height. The suggestion by Clark Glymour that Aaronson refers to (but thankfully doesn’t endorse), that philosophy departments should be defunded unless they contribute something applicable to other disciplines, might be the single worst idea I have ever encountered.

Poetry, prose and contradiction; style as a conduit of meaning

Heidegger’s Being and Time is a very difficult text to read. Is it, to use Aaronson’s words, a pretentious brand of performance art? Is the difficulty there only for the sake of being difficult? To put it another way, is the difficulty accidental and contrived or is it essential?

Accidental difficulty should obviously be removed as much as possible from any work, so that it can be made more accessible. “As simple as possible, but no simpler”. But I contend that the difficulty in this and other, similar texts is an essential one. There is no simpler way of phrasing the argument. The arguments in mathematics, and to a large extent in computer science, can be phrased in a formal calculus and can be expressed with (apparent) elegance and simplicity. But philosophy would be severely limited if reduced to a formal calculus. The arguments made by Heidegger, for example, are in some way deeply bound up with language itself. In order to receive his teaching, it is necessary to feel and engage with his words and his phrasing. To reduce the arguments to simpler but apparently similar sentences would be to remove some of their essence. In other words, when reading this kind of work we should not insist on trying to separate “form” and “content”. This is to some degree true for all continental philosophers I’ve read, but especially clear with Heidegger – as anyone who has seriously tried to get through Being and Time would probably agree. There is also no doubt that this kind of writing sheds light on something. And who would dispute that illumination of the world and our conditions of existence is one essential aim of philosophy?

If one finds it difficult to read texts that do not present strictly logical arguments, but communicate meaning in other ways, then the only way around this difficulty would be to invest time, effort and patience into the reading process, just as one does when trying to understand or formulate a mathematical proof.

Reaching towards the extralogical

A typical reaction from someone who has spent too much time with “analytic” thinking exclusively and then encounters a continental thinker would be something like: “This makes no sense. I do not understand what facts are being stated or what propositions are being proven. The writer is even contradicting himself. How can anyone take this seriously?”.

In order to move beyond this kind of hasty judgment, it is necessary to step outside the realm of the mathematical. The following points may serve to indicate where this realm lies (here it is very much the case that it lies just before our eyes — actually, in our eyes, in our nerves, in our very being — and we do not see it).

1. There are things that cannot be expressed in logic but are worth studying. The way that we approach ethics and “utility” is for the most part extralogical. One’s identity and sense of direction in life are extralogical. A logical system is not worth much without axioms or applications, i.e. without bridges into and out of it. Art is one of the most important sources of such bridges. Insisting on a fundamental separation of the artistic and the useful/valuable, in the way that Aaronson seems to do, is ridiculous.

2. Mathematics and even computer science depend vitally on artistic elements, however contrived, personal and inexpressible they might be, to receive their salience, their sense of height and gravity.

3. Do politics, world history, human society and biology move according to the rules of logic? Dubious. Should these things be enslaved to logic in an ideal world? Highly dubious!

4. Poetry can express meaning that cannot be captured in logical arguments. Poetry can circumscribe and indicate. Contradiction is one particular poetic element and as such it can carry meaning. This is one reason why self-contradiction is not, by itself, an argument against a philosophical text.

5. Attitude, grasping, understanding, and vision that gives a particular kind of access to the world — these are complementary to and as important as facts that can be expressed as propositions. Questioning, having the ability to persist in uncertainty, is sometimes more valuable than definite propositions about something.


Computer science is now a rapidly growing scientific and cultural force, and computer scientists must be critical of their roots, their style of thinking, and their methods, to avoid making serious mistakes. Computer scientists should reach deeply into the humanities, just as the humanities should reach into computer science. One hopes that the Machine Intelligence Research Institute understands that machinery (and logic) is not an infinite space that encompasses everything intelligible. It is necessary to understand the boundaries of that space in order to work inside it and build good bridges to its exterior.

Having said all this, I feel somewhat guilty for having singled out Aaronson as a representative of a larger group of technologists who thumb their noses at the humanities (French and German humanities in particular). He is far from the worst in this category. My only excuse is that the sense of wasted potential I get is especially great here – it would be sad if Aaronson went through the rest of his career never reaching into continental thinking. I would recommend that Aaronson read Nietzsche’s writings on appearance, masks, becoming and truth, and then reflect on complexity in the light of that; read Heidegger’s writings on being in order to get a new idea of what meaning is, and reflect on artificial intelligence in the light of that; and read Foucault’s writings on power, visibility and control, and reflect on the overall social role of computers in the light of that. As a bridge between mathematical and continental thinking, I recommend Manuel DeLanda, whose books truly touch both of the “continents”.

Thinking does not stop where logic ends, if indeed it has begun at that point.

