Interactive toxicogenomics

If you work in toxicology or drug discovery, you might be familiar with Open TG-GATEs, a large transcriptomics database that catalogues gene expression responses to well-known drugs and toxins. This database was developed over many years by Japan’s Toxicogenomics Project, a public-private sector partnership, and remains a very valuable resource. As with many large datasets, despite the openness, accessing and working with this data can require considerable work. Data must always be placed in a context, and these contexts must be continually renewed. One user-friendly interface that simplifies access to this data is Toxygates, which I began developing as a postdoc at NIBIOHN in the Mizuguchi Lab in 2012 (and am still the lead developer of). As a web application, Toxygates lets you examine data of interest in context, together with annotations such as gene ontology terms and metabolic pathways, and provides visualisation tools.

We are now releasing a new major version of Toxygates, which, among many other new features, allows you to perform and visualise gene set clustering analyses directly in the web browser. Gene sets can also be easily characterised through an enrichment function, which is supported by the TargetMine data warehouse. Last but not least, users can now upload their own data and cluster and analyse it in context, together with the Open TG-GATEs data.
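As a rough illustration of what such a clustering involves – a minimal sketch, not the actual Toxygates implementation, and with hypothetical gene names and expression values – a hierarchical clustering of an expression matrix might look like this in Python with SciPy:

```python
# Minimal sketch of gene clustering on an expression matrix.
# Gene/sample names and values are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

genes = ["Cyp1a1", "Gstp1", "Hmox1", "Fabp1", "Acot1", "Cpt1a"]
# Rows: genes; columns: samples (e.g. log2 fold change vs. control).
expression = np.array([
    [2.1, 2.3, 0.2, 0.1],
    [1.9, 2.0, 0.3, 0.2],
    [0.1, 0.2, 1.8, 1.7],
    [0.2, 0.1, 1.9, 2.0],
    [2.0, 1.8, 0.1, 0.3],
    [0.3, 0.2, 2.1, 1.9],
])

# Agglomerative clustering: correlation distance, average linkage.
Z = linkage(expression, method="average", metric="correlation")

# Cut the dendrogram into two flat clusters; each is a gene set.
labels = fcluster(Z, t=2, criterion="maxclust")
for cluster_id in sorted(set(labels)):
    members = [g for g, lab in zip(genes, labels) if lab == cluster_id]
    print(f"Cluster {cluster_id}: {', '.join(members)}")
```

Each resulting gene set could then be characterised through enrichment against annotations such as pathways or GO terms, which is the role the TargetMine-backed enrichment function plays in Toxygates.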

Our new paper in Scientific Reports documents the new version of Toxygates and illustrates the use of the new functions through a case study performed on the hepatotoxic drug WY-14643. If you are curious, give it a try.

When I began development as a quick prototype, I had no idea that the project would still be evolving many years later. Toxygates represents considerable work and many learning experiences for me as a researcher and software engineer, and I’m very grateful to everybody who has collaborated with us, supported the project, and made our journey possible.


Dreyfus and Bostrom. Four AI assumptions and two books.

At first glance, Hubert Dreyfus’ 1992 book What Computers Still Can’t Do (WCSCD, originally published in 1972 as What Computers Can’t Do) seems untimely in the current business climate, which favours massive and widespread investment in AI (these days often understood as synonymous with machine learning and neural networks). However, being untimely may in fact allow us to act “against our time and thus hopefully also on our time, for the benefit of a time to come” (Nietzsche). And the book’s argument might not be outdated, but simply forgotten in the frenzy of activity that is our present AI summer.

Dreyfus outlines four assumptions that he believes were (and in many cases still are) implicitly made by AI optimists.

The biological assumption. On some level, the (human) brain functions like a digital computer, processing discrete information.

The psychological assumption. The mind, rather than the brain, functions like a digital computer, even if the brain doesn’t happen to do so.

The epistemological assumption. Even if neither minds nor brains function like digital computers, this formalism is still sufficient to explain and generate intelligent behaviour. An analogy would be that planets moving in orbits are perhaps not solving differential equations, yet differential equations are adequate tools for describing and understanding their movement.

The ontological assumption. Everything essential to intelligent behaviour – such as information about the environment – can in principle be formalised as a set of discrete facts.

These assumptions all relate to the limitations of computation (as we currently understand it) and of propositional logic.

Dreyfus is famous for interpreting thinkers such as Heidegger and Merleau-Ponty, and consistently draws upon them in his arguments. In fact, as he points out in WCSCD, the phenomenological school attacks the very long philosophical tradition that sees mind and world as strictly separate, and that assumes that the mind functions by way of a model that somehow can be reduced to logical operations (we can see why the field of AI has implicitly, and in many cases unwittingly, taken over this tradition). Historically, this tradition reached perhaps one of its purest expressions with Descartes. Indeed, Being and Time, Heidegger’s major work, is deeply anti-Cartesian. Heidegger’s account of intelligibility demands that one (Dasein) is in a world which appears primarily as meaningful, interrelated beings (and not primarily as atomic facts, or sources thereof, to be interpreted), and is historically in a situation, making projections on the basis of one’s identity. Here, calculation and correspondence-based theories of truth are derivative and secondary. There is no clear separation between world and “model”, since there is no model, just the world and our ability to relate to it.

I will hazard a guess that most neuroscientists today would not take the first two assumptions seriously. In all kinds of biology and medicine, we regularly encounter new phenomena and mechanisms that could not be captured by the simple models we originally came up with, forcing us to revise our models. Making brains (bodies) and/or minds somehow isomorphic to symbolic manipulation seems wholly inadequate. More interesting, and much harder to settle unambiguously, are the epistemological and the ontological assumptions. If the epistemological assumption is false, then we will not be able to generate “intelligent behaviour” entirely in software. If the ontological assumption is false, then we will not be able to construct meaningful (discrete and isolated) models of the world.

The latter two assumptions are indeed the stronger ones of the four: if the epistemological assumption turns out to be invalid, then the biological and psychological assumptions would necessarily also be invalid. The ontological assumption is closely related and similarly strong.

By contrast, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a more recent (2014) and very different book. While they are certainly worth serious investigation, theories about a possible technological singularity can be somewhat hyperbolic in tone. But Bostrom comes across as very level-headed as he investigates how a superintelligence might be formed (as an AI, or otherwise), how it might or might not be controlled, and the political implications of such an entity coming into existence. For the most part, the book is engrossing and interesting, though clearly grounded in the “analytical” tradition of philosophy. It becomes more compelling because of the potential generality of its argument. Does a superintelligence already exist? Would we know if it did? Could it exist as a cybernetic actor, a composite of software, machines, and people? It is interesting to read the book, in parallel, as a speculation on (social, economic, geopolitical, technological, psychological or composites thereof) actors that may already exist but that are beyond our comprehension.

Bostrom’s arguments resemble how one might think about a nuclear arms race. He argues that the first superintelligence to emerge might have a decisive strategic advantage and, once in place, prevent (or be used to prevent) the emergence of competing superintelligences. At the same time it would bestow upon those who control it (if it can be controlled) a huge tactical advantage.

Even though Bostrom’s argument is mostly very general, at times it is obvious that much of the thinking is inspired by or based on the idea of AI as software running on a digital computer. To me this seemed implicit in many of the chapters. For example, Bostrom talks about being able to inspect the state of a (software agent’s) goal model, to be able to suspend, resume, and copy agents without information loss, to measure hedonic value, and so on. Bostrom in many cases implies that we would be able to read, configure and copy an agent’s state precisely, and sometimes also that we would be able to understand this state clearly and unambiguously, for example in order to evaluate whether our control mechanisms are working. Thus many of Bostrom’s arguments seem tightly coupled to the Church-Turing model of computation (or at least to a calculus/operational substrate that allows for inspection, modification and duplication of state). Some of his other arguments are, however, sufficiently general that we do not need to assume any specific substrate.

Bostrom, it seems to me, implicitly endorses at least the epistemological assumption throughout the book (and possibly also the ontological one). Even as he rightly takes pains to avoid stating specifically how technologies such as superintelligences or whole brain emulation would be implemented, it is clear that he imagines the formalism of digital computers as “sufficient to explain and generate intelligent behaviour”. In this, but perhaps not in everything he writes, he is a representative of current mainstream AI thinking. (I would like to add that even if he has wrongly taken over these assumptions, the extreme caution he advises us to proceed with regarding strong AI deserves to be taken seriously – the risks in practice are sufficiently great for us to be quite worried. I do not wish to undermine his main argument.)

It is conceivable but unlikely that in the near future, through a resounding success (which could be academic, industrial or commercial, for example), the epistemological assumption will be proven true. What I hold to be more likely (for reasons that have been gradually developed on this blog) is that current AI work will converge on something that may well be extremely impressive and that may affect society greatly, but that we will not consider to be human-like intelligence. The exact form this will take remains to be discovered.

Hubert Dreyfus passed away in April 2017, while I was in the middle of writing this post. Although I never had the privilege of attending his lectures in person, his podcasted lectures and writings have been extremely inspirational and valuable to me. Thank you.

Brexit and globalisation

Two momentous events that took place last year were the election of Donald Trump to the presidency of the United States, and the UK’s referendum on EU membership that led to the “Brexit” decision to leave the union. The two are often lumped together and seen as symptoms of a single larger force, which they probably are. But in one respect they are different. The Trump presidency has an expiry date, but it is hard to see how Brexit might be reversed in the foreseeable (or even distant) future.

As a student and then an engineer in London from 2003 to 2007, one of the first vivid, intense impressions I received was that the UK was a much better integrated society than Sweden. Manifestly, people from all kinds of cultural backgrounds were – it seemed to the 19-year-old me – living and working together smoothly on many social levels. During my life in Sweden until then, I had never seen immigration working out in this way. It was mostly seen and talked about as a problem to be addressed (and on a much smaller scale than what we have now).

This may of course reflect the fact that London has long been, until now, one of the most global cities in the world (Tokyo has nothing on it in this respect, although it has a massive energy and dynamic of a different kind), and the place I came from was rather rural. Countryside Britain was never as well integrated as London. World cities tend to be sharply different from the surroundings that support them. But on balance, the UK came across to me as a successfully global society.

In the years since, Sweden has, it seems to me, successfully integrated a lot of people and there are plenty of success stories. It has become a far more global society than it was in, say, 2003. At the same time, xenophobia has been on the rise, just as it has in the rest of Europe and the US, and now Swedish politics must, lamentably, reckon with a very powerful xenophobic party. Reactive (in the Nietzschean sense) forces are having a heyday. Ressentiment festers.

The global society is probably here to stay. The ways of life and work, the economic entities that now bestride the earth, are all firmly globalised. This is an ongoing process that may not end for some time. (However, this probably will never erase the importance of specific places and communities. To be rooted in something is in fact becoming ever more important.) But globalisation, to use that word, has plainly not brought prosperity to everyone. In fact, many have been torn out of prosperity by economic competition and technological advances. Witness American coal miners voting Trump. In my view, though not everyone will agree, a well-protected middle class is necessary to achieve a stable democratic society. Witness what happens when that protection is too far eroded. Neglecting this – which has been a failure of politics on a broad scale – is playing with fire. General frustration becomes directed at minorities.

Being somewhat confused ourselves, and living with weak or failing, if not xenophobic or corrupt, politicians and governments, we – western/globalised society – may need something that is utterly lacking: new ideology, new thinking, new dreams. Not a wishful return to the 90s, the 70s or some other imagined lost paradise, but something that we can strive for positively, and in the process perhaps reconfigure our societies, politics and economies. For this to happen, people may need to think more, debate more, read more books, and be more sincere. Sarcasm and general resignation lead nowhere. One needs to look sincerely at one’s own history, inward into the soul, as well as outward.

A successful form of such new politics probably will not involve a departure from the global society. But it may involve a reconfiguration of one’s relationship with it. So as Theresa May’s government proceeds to negotiate the withdrawal of the UK from the EU – which must be a bitter, gruelling task for many of those involved – I hope that what she is initiating is such a reconfiguration. I hope that Britain can draw on its past success as a highly global society and constructively be part of the future of the West.

Synthesis is appropriation

In contemporary society, we make use of the notion that things may be synthetic. Thus we speak of synthetic biology, “synthesizers” (synthetic sound), synthetic textiles, and so on. Such things are supposed to be artificial and not come from “nature”.

However, the Greek root of the word synthesis actually seems to refer to the conjoining of pre-existing things, rather than something being purely man-made. But what does it mean to be purely man-made?

Furniture, bricks, bottles, roads and bread are all made in some sense; they are the result of human methods, tools and craft applied to some substrate. But they do not ever lose the character of the original substrate, and usually this is the point – we would like to see the veins of wood in fine furniture, and when we eat bread, we would like to ingest the energy, minerals and other substances that are accumulated in grains of wheat.

Products like liquid nitrogen or pure chlorine, created in laboratories, are perhaps the ones most readily called “synthetic”, or the ones that would most readily form the basis for something synthetic. This is owing to their apparent lack of specific character or particularity, such as the veins of wood or the minerals in wheat. On the other hand, it is apparent that they possess such non-character only if atoms are taken as the lowest level of reference. If we take into consideration ideas from string theory or quantum mechanics, the bottom level most likely shifts, and the pure chlorine no longer seems so homogeneous.

Accordingly, if we follow this line of thought to the end, as long as we have not established the bottom or ground level of nature – and it is questionable if we ever shall – all manufacture, all making and synthesis, is only a rearrangement of pre-existing specificity. Our crafts leave traces in the world, such as objects with specific properties, but do not ever bring something into existence from nothing.

Synthesis is appropriation: making is taking.

Rice fields and rain


Humans primarily live in a world of beings, each of which has meaning. Meaningful beings appear to us interconnected, referencing practices and other beings in a referential totality. Buttons suggest pushing, chairs suggest sitting, a tractor suggests farming. A (Japanese) rice paddy may suggest the heavy labour that goes into the rice harvest each year, the tools and equipment that go with it, as well as the gradual depopulation of the village, since the young ones prefer a different line of work elsewhere. It may be part of the site and locus of an entire set of concerns and an outlook on life.

The world of beings is the one that is most immediate to us, and a world of molecules, atoms, energy or recorded data, useful as it may be, is something much further away. In each case it must be derived and renewed through the use of a growing and complex apparatus of equipment, practices and bodies of concepts, such as the traditions of physics or mathematics. Yet nobody would dispute that these worlds – the world of beings and the calculated world – are interrelated. In some cases they are even deeply intertwined.

But how can we reconcile the calculated world with the world of beings? How exactly do they influence each other? And if the calculated world is expanding aggressively, thanks to the spread of computational machinery and its servants, is the world of beings being pushed back? Receding? Are we abandoning it, since it is no longer good enough for us? Refusing to touch it, other than with thick gloves?

The calculated world concerns itself with propositions, true facts, formal models, records. A conceptual basis is needed to codify and engage with it. A record is formed when an observation is made, and the observer writes down what was observed. Initially, it retains an intimate connection with the world (of beings). The record is interpreted in light of the world and allowed to have its interplay with other beings. The observation “it rained heavily this week” is allowed to mean something in the context of farming, in the context of a possible worry about floods, or as a comment on an underwhelming holiday. Depending on who the reader is and what their concerns are, all these meanings can be grasped. The record may thus alter the reader’s outlook in a way similar to what direct experience of the rainfall would do.

At this level, the only facts we may record are that it rained or did not rain, and whether the rain was heavy or light. But given that we have some notion of space or time, as human beings do, repetition becomes possible. Scales for measuring time and space can be constructed. The rainfall can now be 27 or 45 mm. We are now further away from the world of farming, floods and holidays – “45 mm” of rain needs to be interpreted in order to be assigned any meaning. It has been stripped of most of the world where it originated. The number 45 references only the calculable repetition of an act of measurement. Enabled by the notions of space and time, it already tries to soar above any specific place or time to become something composable, calculable, nonspecific. Abstraction spreads its wings and flaps them gently to see if they will hold.

And so on, all the way up to probability distributions, financial securities, 27 “likes” in a day on social media, and particle physics. At each level of the hierarchy, even when we purport to move “downward” into the “fundamentals” of things, layers of meaning are shed and a pyramid of proverbial ivory soars to the sky.

Spatial and temporal observations depend on measurement on linear scales, such as a stopwatch or a ruler. Such scales are first constructed through repeated alignment of some object with another object. Such repeated alignment depends on counting, which in turn depends on the most basic and most impoverished judgment: whether something is true or false, does or does not accord. Thus something can have the length of 5 feet or the duration of 3 hourglasses: it accords with the basic unit a certain number of times. This accordance is the heavily filtered projection of a being through another. The side of a plot of land is measured, in the most basic case, by viewing the land through a human foot – how many steps or feet suffice to get from one side to the other? Even though the foot is actually able to reveal many particularities of the land being measured – its firmness, its dampness, its warmth – the only record that this attitude cares to make is whether or not spatial distance accords, and how many times in succession it will accord. All kinds of measurement devices, all quantitative record making, follow this basic principle. Thus, the calculable facts are obtained by a severe discarding of a wealth of impressions. This severity is obvious to those who are being trained to judge quantitatively for the first time, but it is soon internalised and accepted as a necessity. Today, these are precisely the facts we are accustomed to calling scientific and objective.
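To make this concrete, here is a toy sketch (my own illustration; the fields and values are hypothetical, invented for this example): of everything a step across the land could register, the quantitative record keeps nothing but the count of accordances with the unit.

```python
# A toy illustration of measurement as counting accordance.
# The fields and values are hypothetical, invented for this sketch.
from dataclasses import dataclass

@dataclass
class Step:
    """What the foot could notice at one step across the land."""
    firmness: str
    dampness: str
    warmth: str
    accords_with_unit: bool  # the only judgment the measurement keeps

def length_in_feet(steps: list[Step]) -> int:
    # Everything but the accordance count is discarded.
    return sum(1 for s in steps if s.accords_with_unit)

walk = [
    Step("soft", "damp", "cool", True),
    Step("firm", "dry", "warm", True),
    Step("firm", "dry", "warm", True),
]
print(f"The side of the plot measures {length_in_feet(walk)} feet.")
```

The record “3 feet” is composable and calculable precisely because the firmness, dampness and warmth that the foot also revealed have been thrown away.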

But the accordance of beings with distance or time is, of course, very far from the only thing we can perceive about them. The being emits particular shapes, configurations, spectra that make impressions on us and on other beings. Thus it is that we may perceive any kind of similarity – for example the notion that two faces resemble each other, that a dog resembles its owner, or that a constellation of stars looks like a warrior. We delight in this particularity, which in a way is the superfluous or excess substance of beings – it is not necessary for their perception but it forms and adds to it. Thus the stranger I met is the stranger with a yellow shirt and not merely the stranger. He can also be the stranger with a yellow shirt and unkempt hair, or the stranger with a yellow shirt and unkempt hair and a confident smile, and so on – any number of details may be recorded, any number of concepts may be brought into the description. These details are not synthetic or arbitrary. But they are also not independent of the one who observes. They depend both on a richness that belongs to the being under observation, and on the observer’s ability to form judgments and concepts, to see metaphorically, creatively and truthfully.

Such impressions, which carry a different and perhaps more immediate kind of truth than the truth that we derive from calculations and records, may now have become second class citizens in the calculated world that grows all around us.