Photography

One of my recent interests has been film photography. Of course, I was interested in exploring the difference between digital and analog technology, and having taken more than my share of smartphone pictures in my life, I was ready to jump to the opposite end of the spectrum. It also helps that Japan has an excellent second-hand market for vintage cameras and lenses. Some manual focus lenses made here in the 1970s and 1980s are still considered excellent performers with today’s latest “mirrorless” digital cameras.

I have been surprised by the richness of this activity. Film photography forces a higher level of consciousness than the easy point-and-click photography of smartphones, which must now be almost as automatic as breathing for many. With film, it is necessary to compose the shot, consider, and then wait for the result. Of course, there will be no previews until the film has been processed. Not only am I forced to think more about each shot, but I’m also forced to consider what photography is, becoming aware of myself as someone who observes and records.

Susan Sontag has argued clearly enough that photography is not objective truth. Unless some kind of scientific attitude is applied, there is too much framing, selection and cherry-picking. Yet photography is perhaps the art form that most convincingly lays claim to objective truth. A phenomenology of photography, of the taking of photos and their viewing, would be something rich and complex. For me as a photographer, photography is almost a pure exploration of the psyche and of my own reaction to subjects. Other people viewing my photographs would, I expect, usually discover a completely different meaning from the one I have already attached to them.

Considerations of truth and meaning aside, impressions of the physical world are on some level captured in photographs, digital as well as analog. Photography exemplifies several ways of relating to particularity through instruments and attitudes. Digital photography imposes a final alphabet and ground level of measurements, and a digital image is thus effectively a number in a very large integer space. Film photography impresses the image upon silver halide crystals, which are not homogeneous, not square-shaped, and whose physical properties may or may not have been fully elucidated. In some sense the ground of film photography may be said to be open in a way that digital photography is not. For all that, of course, in 2018 digital photography may be the quickest and most practical way to get sharp and high-quality images, by most people’s common-sense standards. But it is hard to suppress the feeling that something must be lacking there, that we tend to make the leap too easily and quickly.
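To make the “very large integer space” concrete, here is a minimal sketch of the arithmetic; the sensor resolution and bit depth are illustrative assumptions rather than a claim about any particular camera:

```python
import math

# Illustrative assumptions: a 24-megapixel sensor with 8 bits per RGB channel.
width, height = 6000, 4000     # assumed pixel dimensions
bits_per_pixel = 3 * 8         # three colour channels, 8 bits each

total_bits = width * height * bits_per_pixel
decimal_digits = math.ceil(total_bits * math.log10(2))

# Every possible image is one integer in the range [0, 2**total_bits).
print(f"Each image is an integer of {total_bits:,} bits,")
print(f"i.e. roughly {decimal_digits:,} decimal digits.")
```

Whatever the exact numbers, the space is finite and fully enumerable, which is the sense in which the digital ground is closed.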

 

 

Nietzschean toxicology

Although one of my main projects is software for toxicology and toxicogenomics, my background in toxicology is not as strong as in, for example, computer science, and I’m lucky to be able to rely on experienced collaborators. With that said, I’d still like to try to speculate about the field through a mildly Nietzschean lens.

Toxicology focuses in the main on identifying mechanisms of degradation. Ingesting large quantities of the painkiller acetaminophen will cause liver damage and necrosis of liver cells. This will seriously harm the organism, since the liver is such an important organ, and many essential functions that the body depends on will be degraded or perhaps vanish completely. Untreated acute liver failure is fatal. It is very clearly a degradation.

Toxicology wishes to understand the mechanisms that lead to such degradation. If we understand the sequence of molecular events that eventually leads to the degradation, perhaps we can either make some drug or compound safer, by blocking those events, or we can distinguish between safe and unsafe compounds or stimuli.

Safety testing of a new drug, however, is done in aggregate, on a population of cells (or, in a clinical trial for example, on a group of animals or even humans, after a high degree of confidence has been established). If even a few individuals out of a large population develop symptoms, the drug is considered unsafe. But in practice, different individuals have different metabolisms, different versions of molecular pathways, different variants of genes and proteins, and so on. Accordingly, personalised medicine holds the promise – once we have sufficient insight into individual metabolism – of prescribing drugs that are unsafe for the general population only to those individuals who can safely metabolise them.

It is easy to take a mechanism apart and stop its functioning. However, while a child can take a radio apart, often he or she cannot put it back together again, and only very rarely can a child improve a radio. And in which way should it be improved? Should it be more tolerant to noise, play sound more loudly, receive more frequencies, perhaps emit a pleasant scent when receiving a good signal? Some of these improvements are as hard to identify, once achieved, as they might be to effect. Severe degradation of function is trivial both to effect and to identify, but improvement is manifold, subtle, may be genuinely novel, and may be hard to spot.

An ideal toxicology of the future should, then, be personalised, taking into account not only what harms people in the average case, but what harms a given individual. In the best case (a sophisticated science of nutrition) it should also take into account how that person might wish to improve themselves, a problem that is psychological and ethical as much as it is biological, especially when such improvement involves further specialisation or a trade-off between different possibilities of life. Here the need for consent is even more imperative than with more basic medical procedures that simply aim to preserve or restore functioning.

In fact, the above issues are relevant not only for toxicology but also for medicine as a whole. Doctors can only address diseases and problems after viewing them as a form of ailment. Such a viewpoint is grounded in a training that takes the average human being as its subject. But species and individuals tend towards specialisation, and perhaps the greatest problems are never merely average problems. Personalised medicine as a field may eventually turn out to be much more complex than we can now imagine, and place entirely new demands on physicians.

Action, traces and perception

A sketch of the ways that concepts allow us to make sense of traces of action in the world (or simply of processes, if we do not wish to posit an actor).

Actions (or processes) leave traces. Traces of such processes include beings, such as houses, roads, animals and plants, and also non-beings, some of which may be potential beings, for example new species or scientific phenomena to be named in the future.

The intelligibility of traces depends on having access to meaningful concepts, such as the concept of an oak or an owl. Not only must we have developed the relevant concept in ourselves and become sufficiently familiar with it, but it must also present itself at the right time when we encounter pre-conceptual oak-indications or owl-indications (or traces of an oak-making process). Some doubt as to whether the traces are of an oak or of a different tree is allowed at first, but less so later, as the learner becomes more experienced in the world of trees.

What presents itself is not merely an instance of the concept “oak” but also qualities of the oak. It may be towering, withered, majestic or small. Weather conditions and parasites may have left all kinds of marks that interleave themselves with the basic impression. The oak’s particularity is inexhaustible. “I saw an oak” is in no way a complete account of what was seen. Indeed the task of seeing the oak itself may be time-consuming and difficult if taken seriously. A world where all oaks were merely pure instances of the oak concept would be a completely meaningless one.

If what is perceived is man-made, then it will be the perception of a process that contains in part a sequence of actions carried out by humans (but necessarily has its ultimate origin in a non-human process). Here the additional dimension of intent may be added to the act of perception. Through our understanding of ourselves and of our culture, we may be able to work out what was created and why, and for what purpose. The case of a neighbour redecorating their garden is comparable in quality to that of encountering a foreign culture and trying to understand its religious ceremonies and objects. In a time of conflict, we may look at the object as a source of potential hostility or friendliness.

Man-made objects will be the easiest ones to imitate since intent and human actions may be extracted from the traces. Seeing a man-made object will in many cases allow someone with sufficient pre-existing skill to create a similar object. Natural processes are considerably harder. We are as yet unable to manufacture oaks or owls from scratch (not the same as sowing an acorn or hatching an egg). Laboratories, biomedical and otherwise, are constantly at work translating the processes of nature into sequences of human actions (e.g. molecular cloning protocols). Thus science works by expanding the space of what is, or can be, man-made.

 

 

 

Interactive toxicogenomics

If you work in toxicology or drug discovery, you might be familiar with the database Open TG-GATEs, a large transcriptomics database that catalogues gene expression responses to well-known drugs and toxins. This database was developed over many years by Japan’s Toxicogenomics Project, a public-private partnership, and remains a very valuable resource. As with many large datasets, despite its openness, accessing and working with this data can require considerable work. Data must always be placed in a context, and these contexts must be continually renewed. One user-friendly interface that simplifies access to this data is Toxygates, which I began developing as a postdoc at NIBIOHN in the Mizuguchi Lab in 2012 (and am still the lead developer of). As a web application, Toxygates lets you look at data of interest in context, together with annotations such as gene ontology terms and metabolic pathways, as well as visualisation tools.

We are now releasing a new major version of Toxygates, which, among many other new features, allows you to perform and visualise gene set clustering analyses directly in the web browser. Gene sets can also be easily characterised through an enrichment function, which is supported by the TargetMine data warehouse. Last but not least, users can now upload their own data and cluster and analyse it in context, together with the Open TG-GATEs data.
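For readers who would like a feel for what such a clustering involves, here is a minimal offline sketch using SciPy. It only illustrates the general technique on random placeholder data; it is not the code running inside Toxygates, and the matrix is not real Open TG-GATEs measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: rows are genes, columns are samples (e.g. dose/time points).
# In a real analysis the matrix would hold expression values such as log-ratios.
rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 6))

# Hierarchical clustering of genes by their expression profiles.
tree = linkage(expression, method="ward", metric="euclidean")

# Cut the tree into, say, four gene clusters (the number is arbitrary here).
clusters = fcluster(tree, t=4, criterion="maxclust")

for c in sorted(set(clusters)):
    members = np.where(clusters == c)[0]
    print(f"cluster {c}: {len(members)} genes")
```

In Toxygates itself, each resulting gene cluster can then be characterised through the enrichment function, backed by TargetMine, for example against gene ontology terms.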

Our new paper in Scientific Reports documents the new version of Toxygates and illustrates the use of the new functions through a case study performed on the hepatotoxic drug WY-14643. If you are curious, give it a try.

When I began the development as a quick prototype, I had no idea that the project would still be evolving many years later. Toxygates represents considerable work and many learning experiences for me as a researcher and software engineer, and I’m very grateful to everybody who has collaborated with us, supported the project, and made our journey possible.

 

Dreyfus and Bostrom. Four AI assumptions and two books.

At first glance, Hubert Dreyfus’ 1992 book What Computers Still Can’t Do (WCSCD, originally published in 1972 as What Computers Can’t Do) seems untimely in the current business climate, which favours massive and widespread investment in AI (these days, often understood as being synonymous with machine learning and neural networks). However, being untimely may in fact allow us to act “against our time and thus hopefully also on our time, for the benefit of a time to come” (Nietzsche). And the book’s argument might in fact not be outdated, but simply forgotten in the frenzy of activity that is our present AI summer.

Dreyfus outlines four assumptions that he believes were (in many cases, still are) implicitly made by AI optimists.

The biological assumption. On some level, the (human) brain functions like a digital computer, processing discrete information.

The psychological assumption. The mind, rather than the brain, functions like a digital computer, even if the brain doesn’t happen to do so.

The epistemological assumption. Even if neither minds nor brains function like digital computers, this formalism is still sufficient to explain and generate intelligent behaviour. An analogy would be that planets moving in orbits are perhaps not solving differential equations, but differential equations are adequate tools for describing and understanding their movement.

The ontological assumption. Everything essential to intelligent behaviour – such as information about the environment – can in principle be formalised as a set of discrete facts.

These assumptions all relate to the limitations of computation (as we currently understand it) and of propositional logic.

Dreyfus is famous for interpreting thinkers such as Heidegger and Merleau-Ponty, and consistently draws upon these thinkers in his arguments. In fact, as he points out in WCSCD, the phenomenological school attacks the very long philosophical tradition that sees mind and world as strictly separate, and that assumes that the mind functions by way of a model that somehow can be reduced to logical operations (we can see why the field of AI has implicitly, and in many cases unwittingly, taken over this tradition). Historically, this tradition reached perhaps one of its purest expressions with Descartes. Indeed Being and Time, Heidegger’s major work, is very anti-Cartesian. Heidegger’s account of intelligibility demands that one (Dasein) is in a world which appears primarily as meaningful, interrelated beings (and not primarily as atomic facts, or sources thereof, to be interpreted), and is historically in a situation, making projections on the basis of one’s identity. Here, calculation and correspondence-based theories of truth are derivative and secondary. There is no clear separation between world and “model” since there is no model, just the world and our ability to relate to it.

I will hazard a guess that most neuroscientists today would not take the first two assumptions seriously. In all kinds of biology and medicine, we regularly encounter new phenomena and mechanisms that could not be captured by the simple models we originally came up with, forcing us to revise our models. Making brains (bodies) and/or minds somehow isomorphic to symbolic manipulation seems wholly inadequate. More interesting, and much harder to settle unambiguously, are the epistemological and the ontological assumptions. If the epistemological assumption is false, then we will not be able to generate “intelligent behaviour” entirely in software. If the ontological assumption is false, then we will not be able to construct meaningful (discrete and isolated) models of the world.

The two latter assumptions are indeed the more far-reaching of the four. If the epistemological assumption turns out to be invalid, then the biological and psychological assumptions would necessarily also be invalid. The ontological assumption is closely related and similarly far-reaching.

By contrast, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a more recent (2014) and very different book. While they are certainly worth serious investigation, theories about a possible technological singularity can be somewhat hyperbolic in tone. But Bostrom comes across as very level-headed as he investigates how a superintelligence might be formed (as an AI, or otherwise), how it might or might not be controlled, and the political implications of such an entity coming into existence. For the most part, the book is engrossing and interesting, though clearly grounded in the “analytical” tradition of philosophy. It becomes more compelling because of the potential generality of its argument. Does a superintelligence already exist? Would we know if it did? Could it exist as a cybernetic actor, a composite of software, machines, and people? It is interesting to read the book, in parallel, as a speculation on actors – social, economic, geopolitical, technological, psychological, or composites thereof – that may already exist but are beyond our comprehension.

Bostrom’s arguments resemble how one might think about a nuclear arms race. He argues that the first superintelligence to emerge might have a decisive strategic advantage and, once in place, prevent (or be used to prevent) the emergence of competing superintelligences. At the same time it would bestow upon those who control it (if it can be controlled) a huge tactical advantage.

Even though Bostrom’s argument is mostly very general, at times it is obvious that much of the thinking is inspired by or based on the idea of AI as software running on a digital computer. To me this seemed implicit in many of the chapters. For example, Bostrom talks about being able to inspect the state of a (software agent’s) goal model, to be able to suspend, resume, and copy agents without information loss, to measure hedonic value, and so on. Bostrom in many cases implies that we would be able to read, configure and copy an agent’s state precisely, and sometimes also that we would be able to understand this state clearly and unambiguously, for example in order to evaluate whether our control mechanisms are working. Thus many of Bostrom’s arguments seem tightly coupled to the Church-Turing model of computation (or at least to a calculus/operational substrate that allows for inspection, modification and duplication of state). Some of his other arguments are, however, sufficiently general that we do not need to assume any specific substrate.

Bostrom, it seems to me, implicitly endorses at least the epistemological assumption throughout the book (and possibly also the ontological one). Even as he rightly takes pains to avoid stating specifically how technologies such as superintelligences or whole brain emulation would be implemented, it is clear that he imagines the formalism of digital computers as “sufficient to explain and generate intelligent behaviour”. In this, but perhaps not in everything he writes, he is a representative of current mainstream AI thinking. (I would like to add that even if he has wrongly taken over these assumptions, his advice that we proceed with extreme caution regarding strong AI deserves to be taken seriously – the risks in practice are sufficiently great for us to be quite worried. I do not wish to undermine his main argument.)

It is conceivable but unlikely that in the near future, through a resounding success (which could be an academic, industrial or commercial one, for example), the epistemological assumption will be proven true. What I hold to be more likely (for reasons that have been gradually developed on this blog) is that current AI work will converge on something that may well be extremely impressive and that may affect society greatly, but that we will not consider to be human-like intelligence. The exact form that this will take remains to be discovered.

Hubert Dreyfus passed away in April 2017, while I was in the middle of writing this post. Although I never had the privilege of attending his lectures in person, his podcasted lectures and writings have been extremely inspirational and valuable to me. Thank you.