Utilitarianism and computability

I’ve started watching Michael Sandel’s Harvard lecture series on political philosophy, “Justice”. In this series, Sandel introduces the ideas of major political and moral philosophers, such as Bentham, Locke, and Kant, as well as some libertarian thinkers I hadn’t heard of. I’m only halfway through the series, so I’m sure there are other big names coming up. The accessibility of the lectures belies their substance: what starts out with simple examples and challenges to the audience in the Socratic style often ends up being very engaging and meaty. (Incidentally, it turns out that Michael Sandel has also become fairly famous in Japan, his lectures having been aired on NHK, Japan’s biggest broadcaster.)

One of the first schools of thought he brings up is utilitarianism, whose central idea appears to be that the value of an action lies in its consequences, and in nothing else: not in the intention behind the action, nor in the idea that certain categories of actions are inherently good or inherently evil. Whatever causes the greatest happiness for the greatest number of people is good, simple as that. From these definitions a huge amount of difficulty follows immediately. For instance, is short-term happiness as good as long-term happiness? How long a term is long enough to be valuable? Is the pleasure of ignorant people as valuable as that of enlightened people? And so on. But let’s leave all this aside and try to bring some notion of computability into the picture.
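To make the maxim concrete before the difficulties pile on, here is a minimal sketch (my own formalisation, not anything Sandel presents): given a set of candidate actions and a frankly imaginary table predicting each person’s happiness under each action, the utilitarian choice is simply the action that maximises the sum.

```python
# A minimal, hypothetical formalisation of the utilitarian maxim:
# choose the action that maximises total happiness across the population.
# The `predicted_happiness` table is pure invention; no such oracle
# exists, which is exactly where the difficulties begin.

predicted_happiness = {
    # action -> predicted happiness of each person in a tiny population
    "build_park":    [0.6, 0.4, 0.7],
    "build_factory": [0.9, 0.1, 0.2],
    "do_nothing":    [0.5, 0.5, 0.5],
}

def utilitarian_choice(outcomes):
    """Return the action with the greatest total happiness."""
    return max(outcomes, key=lambda action: sum(outcomes[action]))

print(utilitarian_choice(predicted_happiness))  # -> build_park
```

Note how every question above reappears as a modelling choice: short-term versus long-term happiness becomes a discount factor on future terms, and whose pleasure counts more becomes a weight on each entry of the sum.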

Assume that we accept “the greatest happiness for the greatest number of people” as a good maxim, and that we seek to achieve it. We must weigh the consequences of actions and choices so as to maximise this value. But can we always link a consequence to the action, or set of actions, that led to it? Causality in the world is a questionable idea, since it is a form of inductive knowledge. Causality in formal systems and in the abstract seems valid, since there it is a matter of definition, but causality in the empirical, in the observed, seems always to be a matter of correlation: if I observe first A and then B sufficiently many times, I will infer that A implies B, but I have no way of knowing that there are not also other preconditions for B happening (for instance, a hitherto invisible particle having a certain degree of flux). It seems, then, that I cannot reliably learn what causes what; and if so, how can I predict the consequences of my actions? Suddenly we end up with an epistemological question, but let us leave this aside too for the time being. Perhaps epistemological uncertainty is inevitable.
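As a toy illustration of this inductive trap (the scenario is entirely made up): suppose B in fact requires both A and a hidden condition H. An observer who only ever experiences worlds where H holds will confidently learn the rule “A implies B”, and the rule fails the moment H stops holding.

```python
import random

random.seed(0)

def observe(hidden_flux):
    """One observation: A occurs at random; B in fact requires both A
    and the hidden condition, which the observer never sees."""
    a = random.random() < 0.5
    b = a and hidden_flux
    return a, b

# Phase 1: the hidden condition happens to hold in every observation.
history = [observe(hidden_flux=True) for _ in range(10_000)]
b_given_a = sum(b for a, b in history if a) / sum(a for a, _ in history)
print(f"P(B | A) estimated from experience: {b_given_a:.2f}")  # 1.00

# Having seen A followed by B every single time, the observer adopts
# the rule "A implies B"; it breaks as soon as the unseen
# precondition changes.
a, b = observe(hidden_flux=False)  # now B never happens, A or not
```

(Python treats booleans as 0 and 1, so the sums simply count occurrences.)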

I still want to do my best to achieve the greatest happiness for the greatest number of people, and I accept that my idea of which actions cause which consequences is probabilistic in nature. I have a set of rules, A1 => B1, A2 => B2, …, An => Bn, which I trust to some extent, and I want to make the best use of them. I have now ended up with a planning problem: I must identify a sequence of actions that maximises the happiness variable. But my brain has limited computational ability, and my plan must be complete by time t in order to be executable. Even for a simple problem description, the state space that planning algorithms must search becomes enormous, and identifying the plan, or even a plan, that maximises the value is simply not feasible. Furthermore, billions of humans are planning concurrently, and their plans may interfere with each other. A true computational utilitarian system would treat all human individuals as a single system and find, in unison, the optimal sequence of actions for each one to undertake. This is an absurd notion.
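To put a number on “enormous”, here is a brute-force sketch (the actions, probabilities, and happiness values are invented): each rule Ai => Bi is trusted only with some probability, a plan is a sequence of actions, and we exhaustively score every candidate plan. The number of candidates grows exponentially with plan length, so the deadline t is blown almost immediately.

```python
from itertools import product

# Hypothetical rules A_i => B_i: action -> (probability the rule
# actually fires, happiness gained if it does).
rules = {
    "A1": (0.9, 1.0),
    "A2": (0.6, 3.0),
    "A3": (0.3, 8.0),
}

def expected_happiness(plan):
    """Expected happiness of a plan, naively assuming the rules are
    independent and their effects simply add up."""
    return sum(prob * gain for prob, gain in (rules[a] for a in plan))

def best_plan(length):
    """Exhaustive search over every action sequence of the given length."""
    return max(product(rules, repeat=length), key=expected_happiness)

print(best_plan(3))  # ('A3', 'A3', 'A3') under these made-up numbers

# Number of candidate plans: |actions| ** length, for a SINGLE agent.
for length in (5, 10, 20, 40):
    print(length, 3 ** length)  # 243, 59049, ~3.5e9, ~1.2e19
```

With effects this simple the optimum could of course be found greedily; the point is that real consequences depend on the state of the world and on what everyone else is doing, which destroys any such shortcut and leaves only the exponential search, now multiplied across billions of interfering planners.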

This thought experiment aside, if we are utilitarians, should we enlist the increased computing power that has recently come into being to help manage our lives? Can it be used to augment (presumably it cannot supplant) human intuition for making rapid choices from huge amounts of data?
