Tag: utilitarianism


Scott Aaronson has misunderstood continental philosophy

December 27th, 2013 — 12:32am

It was first with delight and then with a growing feeling of sadness that I read Luke Muehlhauser’s interview with the computer scientist Scott Aaronson at the Machine Intelligence Research Institute. Aaronson has contributed much to our understanding of complexity theory and other areas, and has even written popular science books on the field. I am happy to read that he seems to feel strongly about the links between computer science and philosophy, and I agree with him about a lot of things. Certainly, computer science and philosophy cross-fertilise each other a great deal, and my feeling is that this process is only getting started; much more can be done. Perhaps this mating of the two fields is even severely lagging behind what today’s world needs. Without a doubt, the study of formal models of rewriting and interpretation is extremely interesting and sheds light on questions about the nature of language, complexity, knowledge, understanding, communication, equipment, and the abilities of the human mind.

But then, just as I am about to call Aaronson one of my intellectual heroes, he stumbles:

By far the most important disease, I’d say, is the obsession with interpreting and reinterpreting the old masters, rather than moving beyond them.

And then he stumbles severely:

One final note: none of the positive or hopeful things that I said about philosophy apply to the postmodern or Continental kinds. As far as I can tell, the latter aren’t really “philosophy” at all, but more like pretentious brands of performance art that fancy themselves politically subversive, even as they cultivate deliberate obscurity and draw mostly on the insights of Hitler and Stalin apologists. I suspect I won’t ruffle too many feathers here at MIRI by saying this.

The unfortunate continental-analytic pseudo-divide

Who are the “Hitler and Stalin apologists”? I hope that this embarrassing epithet is not supposed to refer to Nietzsche and Marx, for example, since even a very casual reader of Nietzsche would quickly discover that he despised nationalism and anti-Semitism; rather, his thinking was twisted and selectively misused by Nazi ideologists. It is true that thinkers like Heidegger and Foucault for a time supported Nazism and the Khomeini revolution, respectively, and there are other examples of controversial association. But using this as an excuse not to read these thinkers, let alone to dismiss all of continental thought, is very superficial.

A comment like this would not normally be worth a serious reply, and it seems Aaronson is just throwing it out a bit carelessly, expecting an audience with views similar to his own. But since it comes from someone who is clearly very intelligent and who clearly wants to bridge philosophy and computer science (as do I), I felt that I should counter the position I imagine he is coming from. In doing so I will not be responding to Aaronson’s interview as a whole, which is for the most part an excellent read, full of interesting viewpoints. Instead, I will focus on these two unfortunate remarks only, and on the misguided viewpoint that I believe generated them.

The artificial 20th century split between “continental” (French, German, etc.) philosophy and “analytical” (mostly Anglo-Saxon) philosophy is extremely unfortunate, and one hopes that it can be bridged one day. Aaronson exemplifies a general theme. He is Anglo-Saxon, a scientist and logician, has a limited interest in the humanities, and is thoroughly modern in that he has lost sight of the unitary origin of the scattered, fragmented array of academic fields and disciplines that we have today. The writers he likes are great ones, but they all sit on one side of the continental-analytic divide. He is doing great work, but he could potentially be doing so much more.

On solvent abuse

I believe that I understand Aaronson’s intellectual background to some degree. I studied for my undergraduate degree at Imperial College London, which, like MIT, is a place full of people who are very technically oriented. For me that was a great education in many ways, but it would not be an overstatement to say that very little attention was (and probably still is) given to the humanities there. This was by design. A certain deep but ultimately restricted kind of vision was cultivated there. The pure rationalist perspective functions exactly like bleach in the sense that it will disinfect, killing harmful bacteria, but it might kill healthy tissue too if applied too liberally. It also removes colour. For reasons unclear to me – perhaps partly as a reaction – my interest in the humanities flickered to life during my final year there, and it intensified when I began my graduate studies here in Tokyo. I became very interested in the viewpoints that philosophy could offer me, and especially in continental writers. It is as someone who has made a difficult migration from a very restrictive logical/scientific viewpoint to a more inclusive one that I write these comments. My hope is that Aaronson will also make this leap and expand the range of his work to include the truly useful – if his funding agencies would let him, that is.

The unified root of knowledge

As Aaronson says, Einstein, Bohr, Gödel and Turing had views outside of the scientific fields they are remembered for. It even seems that they might have been so successful in part because of their breadth. Blaise Pascal is remembered in some circles as a mathematician, but we could equally well call him a philosopher who did some mathematics on the side. Francis Bacon thought not only scientifically but also meta-scientifically, imagining the limits of science and scientific method. The Pythagoreans approached mathematics not as something to be contemplated as a formal exercise at a desk with pen and paper, but as part of something esoteric and mystical. In ancient Greece, education emphasised an integrated, well-balanced body and mind, and training in a wide range of theoretical and practical fields was important for one’s stature. The Greeks preferred this kind of multiplicity, and would have been horrified at the suggestion that the focus on specialties and separate disciplines that we have today is somehow better. But today we have thoroughly rejected the idea that all knowledge and understanding is connected and stems from a single source.

In the first of the two remarks that I have singled out above, Aaronson complains that academic philosophy continually gets into the “hermeneutic trap” of reinterpreting the same passages by dead writers again and again. What is decisive here, as in so many things, is the attitude with which one carries out this interpretation. If the exercise is carried out for the sake of getting a grade at a modern university, passing a class, or winning academic promotion, then the result can be nothing but junk: artificial, forced thinking and writing, and a bad reputation for the activity as a whole. The attitude that gives this activity its true value is grounded in a desire to return to the origin – the root of the tree that many applied scientists mistakenly believe their own branch constitutes – and then to use the insights found there to bring society forward. The suggestion that this activity has no value is ridiculous. Would Aaronson also say that we don’t need to study history, that we should let every generation invent society anew? Maybe he’d recommend burning books older than 50 years? I’m very far from making some kind of blanket endorsement of conservatism, but I would certainly endorse a selective conservatism that critiques the past in order to learn from its experience and create a better future. Only the earnest interpretation of old texts can renew our connection with the origin of our thinking. (This is not to say that what goes on in humanities departments today is such an earnest interpretation, but that discussion belongs elsewhere.)

“Utility” and what is truly useful

Many of us moderns are obsessed with a particular notion of utility, which comes to dictate what is worth doing. Everybody understands that it is easy to fund computer science because it leads to applications, be they commercial, scientific or military, that can immediately be exchanged for money. (It is through luck that academics doing good work of true value are sometimes able to dress up their work as “useful” to the markets and funders. If this didn’t happen, institutional thinking would be even more diseased and withered than it already is.) It is difficult to fund a study of hermeneutics or existentialism, because the markets don’t care and consumers are not interested. But just as democracies are unable to make long-term decisions, making instead the decisions that will please voters today, what is “useful” from computer science in the short term – for fighting battles in Afghanistan, say, or for making a new iPad – is not necessarily what is needed in the long term: the furthering and evolution of culture; new, inspiring and vital visions for society and for the future; spiritual height. The suggestion by Clark Glymour that Aaronson refers to (but thankfully doesn’t endorse), that philosophy departments should be defunded unless they contribute something applicable to other disciplines, might be the single worst idea I have ever encountered.

Poetry, prose and contradiction; style as a conduit of meaning

Heidegger’s Being and Time is a very difficult text to read. Is it, to use Aaronson’s words, a pretentious brand of performance art? Is the difficulty there only for the sake of being difficult? To put it another way, is the difficulty accidental and contrived or is it essential?

Accidental difficulty should obviously be removed as much as possible from any work, so that it can be made more accessible. “As simple as possible, but no simpler”. But I contend that the difficulty in this and other, similar texts is an essential one. There is no simpler way of phrasing the argument. The arguments in mathematics, and to a large extent in computer science, can be phrased in a formal calculus and can be expressed with (apparent) elegance and simplicity. But philosophy would be severely limited if reduced to a formal calculus. The arguments made by Heidegger, for example, are in some way deeply bound up with language itself. In order to receive his teaching, it is necessary to feel and engage with his words and his phrasing. To reduce the arguments to simpler but apparently similar sentences would be to remove some of their essence. In other words, when reading this kind of work we should not insist on trying to separate “form” and “content”. This is to some degree true for all continental philosophers I’ve read, but especially clear with Heidegger – as anyone who has seriously tried to get through Being and Time would probably agree. There is also no doubt that this kind of writing sheds light on something. And who would dispute that illumination of the world and our conditions of existence is one essential aim of philosophy?

If one finds it difficult to read texts that do not present strictly logical arguments, but communicate meaning in other ways, then the only way around this difficulty would be to invest time, effort and patience into the reading process, just as one does when trying to understand or formulate a mathematical proof.

Reaching towards the extralogical

A typical reaction from someone who has spent too much time exclusively with “analytic” thinking and then encounters a continental thinker would be something like: “This makes no sense. I do not understand what facts are being stated or what propositions are being proven. The writer is even contradicting himself. How can anyone take this seriously?”

In order to move beyond this kind of hasty judgment, it is necessary to step outside the realm of the mathematical. The following points may serve to indicate where this realm lies (here it is very much the case that it lies just before our eyes — actually, in our eyes, in our nerves, in our very being — and we do not see it).

1. There are things that cannot be expressed in logic but are worth studying. The way that we approach ethics and “utility” is for the most part extralogical. One’s identity and sense of direction in life is extralogical. A logical system is not worth much without axioms or applications, i.e. without bridges into and out of it. Art is one of the most important sources of such bridges. Insisting on a fundamental separation of the artistic and the useful/valuable, in the way that Aaronson seems to do, is ridiculous.

2. Mathematics and even computer science depend vitally on artistic elements, however contrived, personal and inexpressible they might be, to receive their salience, their sense of height and gravity.

3. Do politics, world history, human society and biology move according to the rules of logic? Dubious. Should these things be enslaved to logic in an ideal world? Highly dubious!

4. Poetry can express meaning that cannot be captured in logical arguments. Poetry can circumscribe and indicate. Contradiction is one particular poetic element and as such it can carry meaning. This is one reason why it is not an argument against a philosophical text when it is self-contradicting.

5. Attitude, grasping, understanding, and vision that gives a particular kind of access to the world — these are complementary to and as important as facts that can be expressed as propositions. Questioning, having the ability to persist in uncertainty, is sometimes more valuable than definite propositions about something.

Conclusion

Computer science is now a rapidly growing scientific and cultural force, and computer scientists must be critical of their roots, their style of thinking, and their methods, to avoid making serious mistakes. Computer scientists should reach deeply into the humanities, just as the humanities should reach into computer science. One hopes that the Machine Intelligence Research Institute understands that machinery (and logic) is not an infinite space that encompasses everything intelligible. It is necessary to understand the boundaries of that space in order to work inside it and build good bridges to its exterior.

Having said all this, I feel somewhat guilty for having singled out Aaronson as a representative of a larger group of technologists who thumb their nose at the humanities (French and German humanities in particular). He is far from the worst in this category. My only excuse is that the sense of wasted potential I get is especially great here – it would be sad if Aaronson went through the rest of his career never reaching into continental thinking. I would recommend that he read Nietzsche’s writings on appearance, masks, becoming and truth, and reflect on complexity in the light of them; read Heidegger’s writings on being, to get a new idea of what meaning is, and reflect on artificial intelligence in the light of them; and read Foucault’s writings on power, visibility and control, and reflect on the overall social role of computers in the light of them. As a bridge between mathematical and continental thinking, I recommend Manuel DeLanda, whose books truly touch both of the “continents”.

Thinking does not stop where logic ends, if indeed it has begun at that point.

 


Technology and utilitarianism

March 4th, 2012 — 1:19pm

Technologists and engineers often use the ideas of utilitarianism to evaluate their solutions. If something is cheaper, or faster, or lets people live 3.2 days longer on average, or some other number can be optimised, they judge a solution to be better. In short, they use a quantitative form of judgment. This way of thinking is the appropriate way of judging engineering problems, but it is not the best way of judging design problems.

To a degree it is possible to come up with a new product by simply improving on some numbers from an old one. “Here’s a new hard drive with 1.3x more space.” However, such innovation will always be incremental.

The challenge for technology is how to create products and solutions that are not justified or evaluated from a quantitative, utilitarian perspective, but from an entirely different one, perhaps an aesthetic perspective. And this is also the challenge for social innovators and policymakers in society. Solutions that maximise numbers have value and can enable qualitative change in the long run, but in themselves they never constitute true progress.

To see how far utilitarian thinking has gone, think about how many technology products are justified with sentences along the lines of “it makes more information available”, or “it makes X cheaper”, or “it makes you more connected”. In all seriousness, there are situations when it is not desirable to have more information.


Utilitarianism and computability

September 18th, 2010 — 5:10pm

I’ve started watching Michael Sandel’s Harvard lecture series on political philosophy, “Justice”. In this series, Sandel introduces the ideas of major political and moral philosophers, such as Bentham, Locke, and Kant, as well as some libertarian thinkers I hadn’t heard of. I’m only halfway through the series, so I’m sure there are other big names coming up. The accessibility of the lectures belies their substance: what starts out with simple examples and challenges to the audience in the style of the Socratic method often ends up being very engaging and meaty. (Incidentally, it turns out that Michael Sandel has also become fairly famous in Japan, with his lectures having been aired on NHK, Japan’s biggest broadcaster.)

One of the first schools of thought he brings up is utilitarianism, whose central idea appears to be that the value of an action is placed in the consequences of that action, and not in anything else, such as the intention behind the action, or the idea that there are certain categories of actions that are definitely good or definitely evil. What causes the greatest happiness for the greatest number of people is good, simple as that. From these definitions a huge amount of difficulty follows immediately. For instance, is short-term happiness as good as long-term happiness? How long term is long term enough to be valuable? Is the pleasure of ignorant people as valuable as that of enlightened people? etc. But let’s leave all this aside and try to bring some notion of computability into the picture.

Assume that we accept that “the greatest happiness for the greatest number of people” is a good maxim, and we seek to achieve this. We must weigh the consequences of actions and choices to maximise this value. But can we always link a consequence to the action, or set of actions, that led to it? Causality in the world is a questionable idea since it is a form of inductive knowledge. Causality in formal systems and in the abstract seems valid, since it is a matter of definition, but causality in the empirical, in the observed, seems to always be a matter of correlation: if I observe first A and then B sufficiently many times, I will infer that A implies B, but I have no way of knowing that there are not also other preconditions of B happening (for instance, a hitherto invisible particle having a certain degree of flux). It seems that I cannot reliably learn what causes what, and then, how can I predict the consequences of my actions? Now, suddenly, we end up with an epistemological question, but let us leave this too aside for the time being. Perhaps epistemological uncertainty is inevitable.
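As an aside, the worry about inferring causes from observation can be made concrete with a small simulation. The following toy sketch (in Python, with invented probabilities) shows an observer who counts how often B follows A and concludes that A reliably brings about B, even though B in fact also depends on a hidden precondition H that the observer never sees.

import random

random.seed(0)

def observe_world():
    """One observation. B occurs only when both A and a hidden
    precondition H hold, but the observer records only A and B."""
    a = random.random() < 0.5   # event A, visible to the observer
    h = random.random() < 0.9   # hidden precondition, usually true
    b = a and h                 # B actually requires both
    return a, b

observations = [observe_world() for _ in range(10000)]
a_count = sum(1 for a, b in observations if a)
ab_count = sum(1 for a, b in observations if a and b)

# The observer's inferred "rule" A => B looks almost certain (about 0.9),
# and it will silently fail whenever the hidden condition H stops holding.
print("Estimated P(B | A):", round(ab_count / a_count, 2))

The inferred rule is only as good as the hidden structure of the world allows, which is exactly the epistemological gap described above.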

I still want to do my best to achieve the greatest happiness for the greatest number of people, and I accept that my idea of what actions cause what consequences is probabilistic in nature. I have a set of rules, A1 => B1, A2 => B2, …, An => Bn, which I trust to some extent, and I want to make the best use of them. I have now ended up with a planning problem. I must identify a sequence of actions that maximises that happiness variable. But my brain has limited computational ability, and my plan must be complete by time t in order to be executable. Even for a simple problem description, the state space that planning algorithms must search becomes enormous, and identifying the plan, or a plan, that maximises the value is simply not feasible. Furthermore, billions of humans are planning concurrently, and their plans may interfere with each other. A true computational utilitarian system would treat all human individuals as a single system and find, in unison, the optimal sequence of actions for each one to undertake. This is an absurd notion.
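To make the computational point concrete, here is a minimal sketch of the planning problem, with entirely invented rules and happiness values. It exhaustively searches every action sequence up to a given length and picks the one with the highest expected happiness; the number of candidate plans grows as the number of rules raised to the plan length, which is why this approach collapses long before anything like the concurrent plans of billions of people could be handled.

from itertools import product

# Hypothetical rules: action -> (probability the rule holds, happiness gained if it does)
RULES = {
    "donate": (0.9, 5.0),
    "work":   (0.8, 2.0),
    "rest":   (0.95, 1.0),
    "gamble": (0.3, 10.0),
}

def expected_happiness(plan):
    """Expected total happiness of an action sequence, assuming the rules
    act independently and their effects simply add up."""
    return sum(RULES[action][0] * RULES[action][1] for action in plan)

def best_plan(horizon):
    """Brute-force search over all len(RULES) ** horizon candidate plans."""
    return max(product(RULES, repeat=horizon), key=expected_happiness)

for horizon in range(1, 6):
    print(horizon, len(RULES) ** horizon, "candidates:", best_plan(horizon))

Even this toy version, where actions do not interact at all, already shows the combinatorial blow-up; real actions whose consequences depend on each other, and on everyone else’s plans, would only make the search harder.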

This thought experiment aside, if we are utilitarians, should we enlist the increased computing power that has recently come into being to help manage our lives? Can it be used to augment (presumably it cannot supplant) human intuition for how to make rapid choices from huge amounts of data?

