

Dreyfus and Bostrom. Four AI assumptions and two books.

April 23rd, 2017 — 9:09pm

At first glance, Hubert Dreyfus’ 1992 book What Computers Still Can’t Do (WCSCD, originally published in 1972 as What Computers Can’t Do) seems untimely in the current business climate, which favours massive and widespread investment in AI (these days, often understood as being synonymous with machine learning and neural networks). However, being untimely may in fact allow us to act “against our time and thus hopefully also on our time, for the benefit of a time to come” (Nietzsche). And the book’s argument might not be outdated, but simply forgotten in the frenzy of activity that is our present AI summer.

Dreyfus outlines four assumptions that he believes were (in many cases, still are) implicitly made by AI optimists.

The biological assumption. On some level, the (human) brain functions like a digital computer, processing discrete information.

The psychological assumption. The mind, rather than the brain, functions like a digital computer, even if the brain doesn’t happen to do so.

The epistemological assumption. Even if neither minds nor brains function like digital computers, this formalism is still sufficient to explain and generate intelligent behaviour. An analogy: planets moving in orbits are presumably not solving differential equations, yet differential equations are adequate tools for describing and understanding their movement.

The ontological assumption. Everything essential to intelligent behaviour — such as information about the environment — can in principle be formalised as a set of discrete facts.

These assumptions all relate to the limitations of computation (as we currently understand it) and of propositional logic.

Dreyfus is famous for interpreting thinkers such as Heidegger and Merleau-Ponty, and consistently draws upon these thinkers in his arguments. In fact, as he points out in WCSCD, the phenomenological school attacks the very long philosophical tradition that sees mind and world as strictly separate, and that assumes that the mind functions by way of a model that somehow can be reduced to logical operations (we can see why the field of AI has implicitly, and in many cases unwittingly, taken over this tradition). Historically, this tradition reached perhaps one of its purest expressions with Descartes. Indeed Being and Time, Heidegger’s major work, is very anti-Cartesian. Heidegger’s account of intelligibility demands that one (Dasein) is in a world which appears primarily as meaningful interrelated beings (and not primarily as atomic facts, or sources thereof, to be interpreted), and is historically in a situation, making projections on the basis of one’s identity. Here, calculation and correspondence-based theories of truth are derived and secondary things. There is no clear separation between world and “model” since there is no model, just the world and our ability to relate to it.

I will hazard a guess that most neuroscientists today would not take the first two assumptions seriously. In all kinds of biology and medicine, we regularly encounter new phenomena and mechanisms that could not be captured by the simple models we originally came up with, forcing us to revise our models. Making brains (bodies) and/or minds somehow isomorphic to symbolic manipulation seems wholly inadequate. More interesting, and much harder to settle unambiguously, are the epistemological and the ontological assumptions. If the epistemological assumption is false, then we will not be able to generate “intelligent behaviour” entirely in software. If the ontological assumption is false, then we will not be able to construct meaningful (discrete and isolated) models of the world.

The two latter assumptions are the more fundamental of the four. If the epistemological assumption turns out to be invalid, then the biological and psychological assumptions must also be invalid, since each of them entails it: if brains or minds did function like digital computers, that formalism would thereby suffice to generate intelligent behaviour. The ontological assumption is closely related and similarly fundamental.
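The dependence among the assumptions can be stated as two simple implications, where B, P and E stand for the biological, psychological and epistemological assumptions respectively:

```latex
B \Rightarrow E, \qquad P \Rightarrow E
\quad\therefore\quad
\neg E \;\Rightarrow\; (\neg B \wedge \neg P)
```

That is, refuting the epistemological assumption would refute the biological and psychological assumptions in one stroke, which is why it carries the most weight in the argument.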

By contrast, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a more recent (2014) and very different book. While they are certainly worth serious investigation, theories about a possible technological singularity can be somewhat hyperbolic in tone. But Bostrom comes across as very level-headed as he investigates how a superintelligence might be formed (as an AI, or otherwise), how it might or might not be controlled, and the political implications of such an entity coming into existence. For the most part, the book is engrossing and interesting, though clearly grounded in the “analytical” tradition of philosophy. It becomes more compelling because of the potential generality of its argument. Does a superintelligence already exist? Would we know if it did? Could it exist as a cybernetic actor, a composite of software, machines, and people? It is interesting to read the book, in parallel, as a speculation on (social, economic, geopolitical, technological, psychological or composites thereof) actors that may already exist but that are beyond our comprehension.

Bostrom’s arguments resemble how one might think about a nuclear arms race. He argues that the first superintelligence to emerge might have a decisive strategic advantage and, once in place, prevent (or be used to prevent) the emergence of competing superintelligences. At the same time it would bestow upon those who control it (if it can be controlled) a huge tactical advantage.

Even though Bostrom’s argument is mostly very general, at times it is obvious that much of the thinking is inspired by or based on the idea of AI as software running on a digital computer. To me this seemed implicit in many of the chapters. For example, Bostrom talks about being able to inspect the state of a (software agent’s) goal model, to be able to suspend, resume, and copy agents without information loss, to measure hedonic value, and so on. Bostrom in many cases implies that we would be able to read, configure and copy an agent’s state precisely, and sometimes also that we would be able to understand this state clearly and unambiguously, for example in order to evaluate whether our control mechanisms are working. Thus many of Bostrom’s arguments seem tightly coupled to the Church-Turing model of computation (or at least to a calculus/operational substrate that allows for inspection, modification and duplication of state). Some of his other arguments are, however, sufficiently general that we do not need to assume any specific substrate.

Bostrom, it seems to me, implicitly endorses at least the epistemological assumption throughout the book (and possibly also the ontological one). Even as he rightly takes pains to avoid stating specifically how technologies such as superintelligences or whole brain emulation would be implemented, it is clear that he imagines the formalism of digital computers as “sufficient to explain and generate intelligent behaviour”. In this, but perhaps not in everything he writes, he is a representative of current mainstream AI thinking. (I would like to add that even if he has wrongly taken over these assumptions, the extreme caution he advises us to proceed with regarding strong AI deserves to be taken seriously – the risks in practice are sufficiently great for us to be quite worried. I do not wish to undermine his main argument.)

It is conceivable but unlikely that in the near future, through a resounding success (which could be an academic, industrial or commercial one, for example), the epistemological assumption will be proven true. What I hold to be more likely (for reasons that have been gradually developed on this blog) is that current AI work will converge on something that may well be extremely impressive and that may affect society greatly, but that we will not consider to be human-like intelligence. The exact form that this will take remains to be discovered.

Hubert Dreyfus passed away in April 2017, while I was in the middle of writing this post. Although I never had the privilege of attending his lectures in person, his podcasted lectures and writings have been extremely inspirational and valuable to me. Thank you.


Rice fields and rain

October 5th, 2016 — 11:58am


Humans primarily live in a world of beings, each of which has meaning. Meaningful beings appear to us interconnected, referencing practices and other beings in a referential totality. Buttons suggest pushing, chairs suggest sitting, a tractor suggests farming. A (Japanese) rice paddy may suggest the heavy labour that goes into the rice harvest each year, the tools and equipment that go with it, as well as the gradual depopulation of the village, since the young ones prefer a different line of work elsewhere. It may be part of the site and locus of an entire set of concerns and an outlook on life.

The world of beings is the one that is most immediate to us, and a world of molecules, atoms, energy or recorded data, useful as it may be, is something much further away. In each case it must be derived and renewed through the use of a growing and complex apparatus of equipment, practices and bodies of concepts, such as the traditions of physics or mathematics. Yet nobody would dispute that these worlds – the world of beings and the calculated world – are interrelated. In some cases they are even deeply intertwined.

But how can we reconcile the calculated world with the world of beings? How exactly do they influence each other? And if the calculated world is expanding aggressively, thanks to the spread of computational machinery and its servants, is the world of beings being pushed back? Receding? Are we abandoning it, since it is no longer good enough for us? Refusing to touch it, other than with thick gloves?

The calculated world concerns itself with propositions, true facts, formal models, records. A conceptual basis is needed to codify and engage with it. A record is formed when an observation is made, and the observer writes down what was observed. Initially, it retains an intimate connection with the world (of beings). The record is interpreted in light of the world and allowed to have its interplay with other beings. The observation “it rained heavily this week” is allowed to mean something in the context of farming, in the context of a possible worry about floods, or as a comment on an underwhelming holiday. Depending on who the reader is and what their concerns are, all these meanings can be grasped. The record may thus alter the reader’s outlook in a way similar to what direct experience of the rainfall would do.

At this level, the only facts we may record are that it rained or did not rain, and whether the rain was heavy or light. But given that we have some notion of space or time, as human beings do, repetition becomes possible. Scales for measuring time and space can be constructed. The rainfall can now be 27 or 45 mm. We are now further away from the world of farming, floods and holidays – “45 mm” of rain needs to be interpreted in order to be assigned any meaning. It has been stripped of most of the world where it originated. The number 45 references only the calculable repetition of an act of measurement. Enabled by the notions of space and time, it already tries to soar above any specific place or time to become something composable, calculable, nonspecific. Abstraction spreads its wings and flaps them gently to see if they will hold.

So on all the way up to probability distributions, financial securities, 27 “likes” in a day on social media and particle physics. At each level of the hierarchy, even when we purport to move “downward” into the “fundamentals” of things, layers of meaning are shed and a pyramid of proverbial ivory soars to the sky.

Spatial and temporal observations depend on measurement on linear scales, such as a stopwatch or a ruler. Such scales are first constructed through repeated alignment of some object with another object. Such repeated alignment depends on counting, which in turn depends on the most basic and most impoverished judgment: whether something is true or false, does or does not accord. Thus something can have the length of 5 feet or the duration of 3 hourglasses: it accords with the basic unit a certain number of times. This accordance is the heavily filtered projection of a being through another. The side of a plot of land is measured, in the most basic case, by viewing the land through a human foot – how many steps or feet suffice to get from one side to the other? Even though the foot is actually able to reveal many particularities of the land being measured – its firmness, its dampness, its warmth – the only record that this attitude cares to make is whether or not spatial distance accords, and how many times in succession it will accord. All kinds of measurement devices, all quantitative record making, follow this basic principle. Thus, the calculable facts are obtained by a severe discarding of a wealth of impressions. This severity is obvious to those who are being trained to judge quantitatively for the first time, but soon internalised and accepted as a necessity. Today, these are precisely the facts we are accustomed to calling scientific and objective.
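The picture of measurement as counted accordance can be sketched in a few lines of code (a minimal illustration, with hypothetical names; the unit and length values are arbitrary):

```python
# Measurement as counting accordance: how many times does the unit "fit"
# into the thing being measured? Every other quality of the thing - and
# even the remainder - is discarded from the record.
def measure_in_units(length: float, unit: float) -> int:
    count = 0
    remaining = length
    while remaining >= unit:
        remaining -= unit
        count += 1
    return count  # the record keeps only the count of accordances

# e.g. a plot of land "viewed through" a unit of 5: it accords 3 times
steps = measure_in_units(17.0, 5.0)
assert steps == 3
```

The function returns nothing but the count; the leftover 2 units of land, like the land's firmness or dampness, never enter the record.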

But the accordance of beings with distance or time is, of course, very far from the only thing we can perceive about them. The being emits particular shapes, configurations, spectra that make impressions on us and on other beings. Thus it is that we may perceive any kind of similarity – for example the notion that two faces resemble each other, that a dog resembles its owner, or that a constellation of stars looks like a warrior. We delight in this particularity, which in a way is the superfluous or excess substance of beings – it is not necessary for their perception but it forms and adds to it. Thus the stranger I met is the stranger with a yellow shirt and not merely the stranger. He can also be the stranger with a yellow shirt and unkempt hair, or the stranger with a yellow shirt and unkempt hair and a confident smile, and so on – any number of details may be recorded, any number of concepts may be brought into the description. These details are not synthetic or arbitrary. But they are also not independent of the one who observes. They would depend both on a richness that is of the being under observation, and on the observer’s ability to form judgments and concepts, to see metaphorically, creatively and truthfully.

Such impressions, which carry a different and perhaps more immediate kind of truth than the truth that we derive from calculations and records, may now have become second class citizens in the calculated world that grows all around us.


AI and the politics of perception

August 1st, 2016 — 11:36am

Elon Musk, entrepreneur of some renown, believes that the sudden eruption of a very powerful artificial intelligence is one of the greatest threats facing mankind. “Control of a super powerful AI by a small number of humans is the most proximate concern”, he tweets. He’s not alone among Silicon Valley personalities to have this concern. To reduce the risks, he has funded the OpenAI initiative, which aims to develop AI technologies in such a way that they can be distributed more evenly in society. Musk is very capable, but is he right in this case?

The idea is closely related to the notion of a technological singularity, as promoted by, for example, Kurzweil. In some forms, the idea of a singularity resembles a God complex. In C G Jung’s view, as soon as the idea of God is expelled (for example by saying that God is dead), God appears as a projection somewhere. This is because the archetype or idea of God is a basic feature of the (western, at least) psyche that is not so easily dispensed with. Jung directs this criticism at Nietzsche in his Zarathustra seminar. (Musk’s fear is somewhat more realistic and, yes, proximate, than Kurzweil’s idea, since what is feared is a constellation of humans and technology, something we already have.)

But if Kurzweil’s singularity is a God complex, then the idea of the imminent dominance of uncontrollable AI, about to creep up on us out of some dark corner, more closely resembles a demon myth.

Such a demon myth may not be useful in itself for understanding and solving social problems, but its existence may point to a real problem. Perhaps what it points to is the gradual embedding of algorithms deeply into our culture, down to our basic forms of perception and interaction. We have in effect already merged with machines. Google and Facebook are becoming standard tools for information finding, socialising, getting answers to questions, communicating, navigating. The super-AI is already here, and it has taken the form of human cognition filtered and modulated by algorithms.

It seems fair to be somewhat suspicious — as many are — of fiat currency, on the grounds that a small number of people control the money supply, and thus, control the value of everybody’s savings. On similar grounds, we do need to debate the hidden algorithms, controlled by a small number of people (generally not available for perusal, even on request, since they would be trade secrets), and pre-digested information that we now use to interface with the world around us almost daily. Has it ever been so easy to change so many people’s perception at once?

Here again, as often is the case, nothing is truly new. Maybe we are simply seeing a tendency that started with the printing press and the monotheistic church, taken to its ultimate conclusion. In any case I would paraphrase Musk’s worry as follows: control of collective perception by a small number of humans is the most proximate concern. How we should address this concern is not immediately obvious.


Method and object. Horizons for technological biology

March 22nd, 2016 — 10:32pm

(This post is an attempt at elaborating the ideas I outlined in my talk at Bio-pitch in February.)

The academic and investigative relationship to biology – our discourse about biology – is becoming increasingly technological. In fields such as bioinformatics and computational biology, the technological/instrumental relationship to nature is always at work, constructing deterministic models of phenomena. By using these models, we may repeatedly extract predictable results from nature. An example would be a cause-effect relationship like: exposing a cell to heat causes “heat shock proteins” to be transcribed and translated.
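In software terms, such a cause-effect model reduces to a deterministic mapping from stimulus to predicted response. A minimal sketch (all names and responses here are hypothetical simplifications, not an actual biological model):

```python
# A toy deterministic cause-effect model of the kind described above:
# the same stimulus always yields the same predicted response.
def cell_response(stimulus: str) -> str:
    """Map an environmental stimulus to a predicted transcriptional response."""
    responses = {
        "heat": "heat shock proteins transcribed and translated",
        "cold": "cold shock proteins transcribed and translated",
    }
    return responses.get(stimulus, "no known response")

assert cell_response("heat") == "heat shock proteins transcribed and translated"
```

The determinism is the point: the model, unlike the cell, can never surprise us, which is precisely what makes it useful for repeatedly extracting predictable results.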

The implicit understanding in all of these cases is that nature can be turned into engineering. Total success, in this understanding, would amount to one or both of the following:

  1. Replacement/imitation as success. If we can replace the phenomenon under study by its model (concretely, a machine or a simulation), we have achieved success.
  2. Control as success. If we can consistently place the phenomenon under study in verifiable, fully defined states, we have achieved success. (Note that this ideal implies that we also possess perfect powers of observation, down to a hypothetical “lowest level”.)

These implicitly held ideals are not problematic as long as we acknowledge that they are mere ideals. They are very well suited as horizons for these fields to work under, since they stimulate the further development of scientific results. But if we forget that they are ideals and begin to think that they really can become realities, or if we prematurely think that biology really must be like engineering, we might be in trouble. Such a belief conflates the object of study with our relatedness to that object. It misunderstands the role of the equipment-based relationship. The model – and associated machines, software, formulae, et cetera – is equipment that constitutes our relatedness to the phenomena. It cannot be the phenomena themselves.

Closely related to the ideals of replacement and control is the widespread application of abstraction and equality in engineering-like fields (and their application to new fields that are presently being clad in the trappings of engineering, such as biology). Abstraction and equality – the notion that two entities, instances, moments, etc., are in some way the same – allow us to introduce an algebra, to reason in the general and not in specifics. And this is of course what computers do. It also means that two sequences of actions (laboratory protocols, for example), although they are different sequences, or the same sequence at different instances in time, can lead to the same result, just as 3+1 and 2+2 both “equal” 4. In other words, history becomes irrelevant; the specific path taken no longer means very much. But it is not clear that this can ever truly be the case outside of an algebra, and that is what risks being forgotten.
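The way an algebra erases history can be made concrete in a few lines (a minimal sketch; the "protocols" are of course stand-ins for the laboratory protocols mentioned above):

```python
# Two different sequences of actions that the algebra of addition
# judges "the same", because only the final value is recorded.
protocol_a = [3, 1]   # one sequence of steps: add 3, then add 1
protocol_b = [2, 2]   # a different sequence: add 2, then add 2

result_a = sum(protocol_a)
result_b = sum(protocol_b)

# The paths differ, but equality collapses them into one value:
assert result_a == result_b == 4
```

Nothing in the results records which path was taken; within the algebra, that information simply does not exist.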

We might call all this the emergence of technological biology, or technological nature, the conquest of biology by λόγος, et cetera. The principal danger seems to be the conflation of method with object, of abstraction with the specific. And here we see clearly how something apparently simple – studying RNA expression levels in the software package R, for example – opens up the deepest metaphysical abysses. One of the most important tasks right now, then, would be the development of a scientific and technological culture that keeps the benefits of the technological attitude without losing sight of a more basic non-technological relatedness. The path lies open…


Is bioinformatics possible?

February 21st, 2016 — 5:43pm

I recently gave a talk at the Bio-Pitch event at the French-Japanese institute. I was fortunate to be able to speak about some of the ideas I’ve been developing here among so many interesting projects (MetaPhorest, HTGAA, Yoko Shimizu, Tupac Bio, Bento Lab etc).

The topic of my talk was “Is bioinformatics possible?” A deliberate provocation, since of course many people, including myself, work with this every day. I simply mean to suggest that there are intrinsic problems in the field that are not usually discussed or thought about, and that it might be valuable to confront those problems.

The slides are available, if anyone is interested.

The bigger topic that is hinted at, but not discussed, might be the instrumental relationship of humans to nature. I hope to return to this problem soon.

