Nietzschean toxicology

Although one of my main projects is software for toxicology and toxicogenomics, my background in toxicology is not as strong as in, for example, computer science, and I’m lucky to be able to rely on experienced collaborators. With that said, I’d still like to try to speculate about the field through a mildly Nietzschean lens.

Toxicology focuses mainly on identifying mechanisms of degradation. Ingesting large quantities of the painkiller acetaminophen causes liver damage and necrosis of liver cells. This seriously harms the organism: the liver is a vital organ, and many essential functions the body depends on will be degraded or lost entirely. Untreated acute liver failure is fatal. It is very clearly a degradation.

Toxicology wishes to understand the mechanisms that lead to such degradation. If we understand the sequence of molecular events that eventually produces the degradation, perhaps we can make a drug or compound safer by blocking those events, or we can distinguish between safe and unsafe compounds or stimuli.

Safety testing of a new drug, however, is done in aggregate, on a population of cells (or, in a clinical trial for example, on a group of animals or even humans, after a high degree of confidence has been established). If even a few individuals out of a large population develop symptoms, the drug is considered unsafe. But in practice, different individuals have different metabolisms, different versions of molecular pathways, different variants of genes and proteins, and so on. Accordingly, personalised medicine holds the promise of being able, once we have sufficient insight into individual metabolism, to prescribe drugs that are unsafe for the general population to only those individuals who can safely metabolise them.

It is easy to take a mechanism apart and stop its functioning. However, while a child can take a radio apart, often he or she cannot put it back together again, and only very rarely can a child improve a radio. And in which way should it be improved? Should it be more tolerant to noise, play sound more loudly, receive more frequencies, perhaps emit a pleasant scent when receiving a good signal? Some of these improvements are as hard to identify, once achieved, as they might be to effect. Severe degradation of function is trivial both to effect and to identify, but improvement is manifold, subtle, may be genuinely novel, and may be hard to spot.

An ideal toxicology of the future should, then, be personalised, taking into account not only what harms people in the average case, but what harms a given individual. In the best case (a sophisticated science of nutrition) it should also take into account how that person might wish to improve themselves, a problem that is psychological and ethical as much as it is biological, especially when such improvement involves further specialisation or a trade-off between different possibilities of life. Here the need for consent is even more imperative than with more basic medical procedures that simply aim to preserve or restore functioning.

In fact, the above issues are relevant not only for toxicology but for medicine as a whole. Doctors can only address diseases and problems after viewing them as a form of ailment. Such a viewpoint rests on a training whose subject is the average human being. But species and individuals tend towards specialisation, and perhaps the greatest problems are never merely average problems. Personalised medicine as a field may eventually turn out to be far more complex than we can now imagine, and may place entirely new demands on physicians.

Category: Bioinformatics, Philosophy