Action, traces and perception

A sketch of the ways that concepts allow us to make sense of traces of action in the world (or simply of processes, if we do not wish to posit an actor).

Actions (or processes) leave traces. These include beings, such as houses, roads, animals and plants, and also non-beings, some of which may be potential beings: for example, new species or scientific phenomena to be named in the future.

The intelligibility of traces depends on having access to meaningful concepts, such as the concept of an oak or an owl. Not only must we have developed the relevant concept in ourselves and become sufficiently familiar with it, but it must also present itself at the right time when we encounter pre-conceptual oak-indications or owl-indications (or traces of an oak-making process). Some doubt as to whether the traces are of an oak or of a different tree is permissible at first, but it recedes as the learner becomes more experienced in the world of trees.

What presents itself is not merely an instance of the concept “oak” but also qualities of the oak. It may be towering, withered, majestic or small. Weather conditions and parasites may have left all kinds of marks that interleave themselves with the basic impression. The oak’s particularity is inexhaustible. “I saw an oak” is in no way a complete account of what was seen. Indeed the task of seeing the oak itself may be time-consuming and difficult if taken seriously. A world where all oaks were merely pure instances of the oak concept would be a completely meaningless one.

If what is perceived is man-made, then it will be the perception of a process that contains in part a sequence of actions carried out by humans (but necessarily has its ultimate origin in a non-human process). Here the additional dimension of intent may be added to the act of perception. Through our understanding of ourselves and of our culture, we may be able to work out what was created, and for what purpose. The case of a neighbour redecorating their garden is comparable in quality to that of encountering a foreign culture and trying to understand its religious ceremonies and objects. In a time of conflict, we may look at the object as a source of potential hostility or friendliness.

Man-made objects will be the easiest ones to imitate, since intent and human actions may be extracted from the traces. Seeing a man-made object will in many cases allow someone with sufficient pre-existing skill to create a similar object. Natural processes are considerably harder to imitate. We are as yet unable to manufacture oaks or owls from scratch (which is not the same as sowing an acorn or hatching an egg). Laboratories, biomedical and otherwise, are constantly at work translating the processes of nature into sequences of human actions (e.g. molecular cloning protocols). Thus science works by expanding the space of what is, or can be, man-made.

Interactive toxicogenomics

If you work in toxicology or drug discovery, you might be familiar with Open TG-GATEs, a large transcriptomics database that catalogues gene expression responses to well-known drugs and toxins. It was developed over many years by Japan’s Toxicogenomics Project, a public-private partnership, and remains a very valuable resource. As with many large datasets, despite its openness, accessing and working with the data can require considerable work. Data must always be placed in a context, and these contexts must be continually renewed. One user-friendly interface that simplifies access to this data is Toxygates, which I began developing as a postdoc in the Mizuguchi Lab at NIBIOHN in 2012 (and of which I am still the lead developer). As a web application, Toxygates lets you view data of interest in context, together with annotations such as gene ontology terms and metabolic pathways, and provides visualisation tools.
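To give a feel for what “placing data in a context” involves, here is a minimal offline sketch in Python. This is not Toxygates code, and the file and column names are hypothetical (Open TG-GATEs distributes its data in its own formats); it only illustrates the join between measurements and annotations that such an interface automates.

```python
# Hypothetical offline sketch: joining expression values with annotations.
# File and column names are invented for illustration; Open TG-GATEs
# distributes its data in its own formats.
import pandas as pd

# A flat export of per-probe expression values across samples (hypothetical)
expr = pd.read_csv("tggates_expression.csv", index_col="probe_id")

# Probe annotations: gene symbols, GO terms, pathways (hypothetical)
annot = pd.read_csv("probe_annotations.csv", index_col="probe_id")

# Join the two so that each expression row carries its biological context
in_context = expr.join(annot[["gene_symbol", "go_terms", "pathway"]])
print(in_context.head())
```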

We are now releasing a new major version of Toxygates, which, among many other new features, allows you to perform and visualise gene set clustering analyses directly in the web browser. Gene sets can also be easily characterised through an enrichment function, which is supported by the TargetMine data warehouse. Last but not least, users can now upload their own data and cluster and analyse it in context, together with the Open TG-GATEs data.
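For readers who want a feel for what such an analysis involves, here is a small offline sketch of gene clustering and a basic enrichment test. This is not how Toxygates implements these features (Toxygates runs in the browser and uses TargetMine for enrichment); the expression matrix and the enrichment numbers below are placeholders.

```python
# Offline sketch: hierarchical gene clustering followed by a simple
# hypergeometric enrichment test. Not Toxygates' implementation;
# the expression matrix here is randomly generated.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 8))  # 50 genes x 8 conditions (placeholder)

# Cluster genes by correlation distance with average linkage,
# then cut the tree into four clusters
tree = linkage(pdist(expr, metric="correlation"), method="average")
clusters = fcluster(tree, t=4, criterion="maxclust")

# Enrichment of a known gene set within one cluster (toy numbers):
# population M, gene set size n, cluster size N, observed overlap k
M, n, N, k = 20000, 300, 50, 6
p_value = hypergeom.sf(k - 1, M, n, N)  # P(overlap >= k by chance)
print(clusters, p_value)
```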

Our new paper in Scientific Reports documents the new version of Toxygates and illustrates the use of the new functions through a case study performed on the hepatotoxic drug WY-14643. If you are curious, give it a try.

When I began the development as a quick prototype, I had no idea that the project would still be evolving many years later. Toxygates represents considerable work and many learning experiences for me as a researcher and software engineer, and I’m very grateful to everybody who has collaborated with us, supported the project, and made our journey possible.

Dreyfus and Bostrom. Four AI assumptions and two books.

At first glance, Hubert Dreyfus’ 1992 book What Computers Still Can’t Do (WCSCD, originally published in 1972 as What Computers Can’t Do) seems untimely in the current business climate, which favours massive and widespread investment in AI (these days often understood as synonymous with machine learning and neural networks). However, being untimely may in fact allow us to act “against our time and thus hopefully also on our time, for the benefit of a time to come” (Nietzsche). And the book’s argument may not be outdated at all, but simply forgotten in the frenzy of activity that is our present AI summer.

Dreyfus outlines four assumptions that he believes were (and in many cases still are) implicitly made by AI optimists.

The biological assumption. On some level, the (human) brain functions like a digital computer, processing discrete information.

The psychological assumption. The mind functions like a digital computer, even if the brain does not.

The epistemological assumption. Even if neither minds nor brains function like digital computers, the formalism of the digital computer is still sufficient to explain and generate intelligent behaviour. An analogy: planets moving in orbits are presumably not solving differential equations, but differential equations are adequate tools for describing and understanding their movement.
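To make the analogy concrete, consider the standard Newtonian two-body equation of motion (nothing here is specific to Dreyfus; it is simply the textbook formalism the analogy points at):

```latex
\ddot{\mathbf{r}} = -\frac{GM}{\lVert \mathbf{r} \rVert^{3}}\,\mathbf{r}
```

A planet computes nothing, yet this equation describes and predicts its orbit. The epistemological assumption transfers the same kind of descriptive adequacy from differential equations and orbits to discrete symbol manipulation and intelligent behaviour.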

The ontological assumption. Everything essential to intelligent behaviour, such as information about the environment, can in principle be formalised as a set of discrete facts.

These assumptions all relate to the limitations of computation (as we currently understand it) and of propositional logic.

Dreyfus is famous for interpreting thinkers such as Heidegger and Merleau-Ponty, and consistently draws upon them in his arguments. In fact, as he points out in WCSCD, the phenomenological school attacks the very long philosophical tradition that sees mind and world as strictly separate, and that assumes that the mind functions by way of a model that somehow can be reduced to logical operations (we can see why the field of AI has implicitly, and in many cases unwittingly, taken over this tradition). Historically, this tradition reached perhaps one of its purest expressions with Descartes. Indeed Being and Time, Heidegger’s major work, is deeply anti-Cartesian. Heidegger’s account of intelligibility demands that one (Dasein) is in a world which appears primarily as meaningful, interrelated beings (and not primarily as atomic facts, or sources thereof, to be interpreted), and is historically in a situation, making projections on the basis of one’s identity. Here, calculation and correspondence-based theories of truth are derivative and secondary. There is no clear separation between world and “model”, since there is no model, just the world and our ability to relate to it.

I will hazard a guess that most neuroscientists today would not take the first two assumptions seriously. In all kinds of biology and medicine, we regularly encounter new phenomena and mechanisms that could not be captured by the simple models we originally came up with, forcing us to revise our models. Making brains (bodies) and/or minds somehow isomorphic to symbolic manipulation seems wholly inadequate. More interesting, and much harder to settle unambiguously, are the epistemological and the ontological assumptions. If the epistemological assumption is false, then we will not be able to generate “intelligent behaviour” entirely in software. If the ontological assumption is false, then we will not be able to construct meaningful (discrete and isolated) models of the world.

The two latter assumptions are indeed the stronger ones of the four, in the sense of being harder to refute: if the epistemological assumption turns out to be invalid, then the biological and psychological assumptions, each of which implies it, would necessarily also be invalid. The ontological assumption is closely related and similarly strong.

By contrast, Nick Bostrom‘s Superintelligence: Paths, Dangers, Strategies is a more recent (2014) and very different book. While they are certainly worth serious investigation, theories about a possible technological singularity can be somewhat hyperbolic in tone. But Bostrom comes across as very level-headed as he investigates how a superintelligence might be formed (as an AI, or otherwise), how it might or might not be controlled, and the political implications of such an entity coming into existence. For the most part, the book is engrossing and interesting, though clearly grounded in the “analytical” tradition of philosophy. It becomes more compelling because of the potential generality of its argument. Does a superintelligence already exist? Would we know if it did? Could it exist as a cybernetic actor, a composite of software, machines, and people? It is interesting to read the book, in parallel, as a speculation on (social, economic, geopolitical, technological, psychological or composites thereof) actors that may already exist but that are beyond our comprehension.

Bostrom’s arguments resemble how one might think about a nuclear arms race. He argues that the first superintelligence to emerge might have a decisive strategic advantage and, once in place, prevent (or be used to prevent) the emergence of competing superintelligences. At the same time it would bestow upon those who control it (if it can be controlled) a huge tactical advantage.

Even though Bostrom’s argument is mostly very general, at times it is obvious that much of the thinking is inspired by or based on the idea of AI as software running on a digital computer. To me this seemed implicit in many of the chapters. For example, Bostrom talks about being able to inspect the state of a (software agent’s) goal model, to be able to suspend, resume, and copy agents without information loss, to measure hedonic value, and so on. Bostrom in many cases implies that we would be able to read, configure and copy an agent’s state precisely, and sometimes also that we would be able to understand this state clearly and unambiguously, for example in order to evaluate whether our control mechanisms are working. Thus many of Bostrom’s arguments seem tightly coupled to the Church-Turing model of computation (or at least to a calculus/operational substrate that allows for inspection, modification and duplication of state). Some of his other arguments are, however, sufficiently general that we do not need to assume any specific substrate.

Bostrom, it seems to me, implicitly endorses at least the epistemological assumption throughout the book (and possibly also the ontological one). Even as he rightly takes pains to avoid stating specifically how technologies such as superintelligences or whole brain emulation would be implemented, it is clear that he imagines the formalism of digital computers as “sufficient to explain and generate intelligent behaviour”. In this, but perhaps not in everything he writes, he is a representative of current mainstream AI thinking. (I would like to add that even if he has wrongly taken over these assumptions, the extreme caution he advises us to proceed with regarding strong AI deserves to be taken seriously – the risks in practice are sufficiently great for us to be quite worried. I do not wish to undermine his main argument.)

It is conceivable but unlikely that in the near future, through a resounding success (academic, industrial or commercial, for example), the epistemological assumption will be proven true. What I hold to be more likely (for reasons that have been gradually developed on this blog) is that current AI work will converge on something that may well be extremely impressive and that may affect society greatly, but that we will not consider to be human-like intelligence. The exact form that this will take remains to be discovered.

Hubert Dreyfus passed away in April 2017, while I was in the middle of writing this post. Although I never had the privilege of attending his lectures in person, his podcasted lectures and writings have been extremely inspirational and valuable to me. Thank you.

Brexit and globalisation

Two momentous events that took place last year were the election of Donald Trump to the presidency of the United States, and the UK’s referendum on EU membership that led to the “Brexit” decision to leave the union. The two are often lumped together and seen as symptoms of a single larger force, which they probably are. But in one respect they are different. The Trump presidency has an expiry date, but it is hard to see how Brexit might be reversed in the foreseeable (or even distant) future.

As a student and then an engineer in London during 2003-2007, one of the first vivid, intense impressions I got was that the UK was a much better integrated society than Sweden. Manifestly, people from all kinds of cultural backgrounds were – it seemed to the 19-year-old me – living and working together smoothly on many social levels. During my life in Sweden until then, I had never seen immigration working out in this way. It was mostly seen and talked about as a problem that had to be addressed (and on a much smaller scale than what we have now).

This may of course reflect the fact that London has long been, until now, one of the most global cities in the world (Tokyo has nothing on it in this respect, although it has a massive energy and dynamism of a different kind), and the place I came from was rather rural. Countryside Britain was never as well integrated as London. World cities tend to be sharply different from the surroundings that support them. But on balance, the UK came across to me as a successfully global society.

In the years since, Sweden has, it seems to me, successfully integrated a lot of people and there are plenty of success stories. It has become a far more global society than it was in, say, 2003. At the same time, xenophobia has been on the rise, just as it has in the rest of Europe and the US, and now Swedish politics must, lamentably, reckon with a very powerful xenophobic party. Reactive (in the Nietzschean sense) forces are having a heyday. Ressentiment festers.

The global society is probably here to stay. The ways of life and work, the economic entities that now bestride the earth, are all firmly globalised. This is an ongoing process that may not end for some time. (However, it probably will never erase the importance of specific places and communities. To be rooted in something is in fact becoming ever more important.) But globalisation, to use that word, has plainly not brought prosperity to everyone. In fact, many have been torn out of prosperity by economic competition and technological advances. Witness American coal miners voting for Trump. In my view, though not everyone will agree, a well-protected middle class is necessary for a stable democratic society. Witness what happens when that protection is eroded too far. Neglecting this – which has been a failure of politics on a broad scale – is playing with fire. General frustration becomes directed at minorities.

Being somewhat confused ourselves, and living with weak or failing, if not xenophobic or corrupt, politicians and governments, we – western/globalised society – may need something that is utterly lacking: new ideology, new thinking, new dreams. Not a wishful return to the 90s, the 70s, or some other imagined lost paradise, but something that we can strive for positively, and in the process perhaps reconfigure our societies, politics and economies. For this to happen, people may need to think more, debate more, read more books, and be more sincere. Sarcasm and general resignation lead nowhere. One needs to look sincerely at one’s own history, inward into the soul, as well as outward.

A successful form of such new politics probably will not involve a departure from the global society. But it may involve a reconfiguration of one’s relationship with it. So as Theresa May’s government proceeds to negotiate the withdrawal of the UK from the EU – which must be a bitter, gruelling task for many of those involved – I hope that what she is initiating is such a reconfiguration. I hope that Britain can draw on its past success as a highly global society and constructively be part of the future of the West.

Synthesis is appropriation

In contemporary society, we make use of the notion that things may be synthetic. Thus we speak of synthetic biology, “synthesizers” (synthetic sound), synthetic textiles, and so on. Such things are supposed to be artificial and not come from “nature”.

However, the Greek root of the word synthesis actually seems to refer to the conjoining of pre-existing things, rather than something being purely man-made. But what does it mean to be purely man-made?

Furniture, bricks, bottles, roads and bread are all made in some sense; they are the result of human methods, tools and craft applied to some substrate. But they do not ever lose the character of the original substrate, and usually this is the point – we would like to see the veins of wood in fine furniture, and when we eat bread, we would like to ingest the energy, minerals and other substances that are accumulated in grains of wheat.

Products like liquid nitrogen or pure chlorine, created in laboratories, are perhaps the ones most readily called “synthetic”, or the ones that most readily would form the basis for something synthetic. This is owing to their apparent lack of specific character or particularity, such as the veins of wood or the minerals in wheat. On the other hand, it is apparent that they possess such non-character only from a point of reference that takes atoms as the lowest level. If we take into consideration ideas from string theory or quantum mechanics, the bottom level most likely shifts, and the pure chlorine no longer seems so homogeneous.

Accordingly, if we follow this line of thought to the end, as long as we have not established the bottom or ground level of nature – and it is questionable if we ever shall – all manufacture, all making and synthesis, is only a rearrangement of pre-existing specificity. Our crafts leave traces in the world, such as objects with specific properties, but do not ever bring something into existence from nothing.

Synthesis is appropriation: making is taking.