Tag: politics


Dreyfus and Bostrom. Four AI assumptions and two books.

April 23rd, 2017 — 9:09pm

At first glance, Hubert Dreyfus’ 1992 book What Computers Still Can’t Do (WCSCD, originally published in 1972 as What Computers Can’t Do) seems untimely in the current business climate, which favours massive and widespread investment in AI (these days, often understood as being synonymous with machine learning and neural networks). However, being untimely may in fact allow us to act “against our time and thus hopefully also on our time, for the benefit of a time to come” (Nietzsche). And the book’s argument might not be outdated at all, merely forgotten in the frenzy of activity that is our present AI summer.

Dreyfus outlines four assumptions that he believes were (and in many cases still are) implicitly made by AI optimists.

The biological assumption. On some level, the (human) brain functions like a digital computer, processing discrete information.

The psychological assumption. The mind, rather than the brain, functions like a digital computer, even if the brain doesn’t happen to do so.

The epistemological assumption. Even if neither minds nor brains function like digital computers, the digital-computer formalism is still sufficient to explain and generate intelligent behaviour. An analogy would be that planets moving in orbits are perhaps not solving differential equations, but differential equations are adequate tools for describing and understanding their movement (see the small numerical sketch after the fourth assumption below).

The ontological assumption. Everything essential to intelligent behaviour — such as information about the environment — can in principle be formalised as a set of discrete facts.
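To make the orbit analogy concrete, here is a minimal numerical sketch (my illustration, not Dreyfus’s, with arbitrary toy units throughout): a planet presumably computes nothing, yet a few lines of Euler integration of Newton’s law describe its motion adequately.

    # A planet does not solve differential equations, but integrating
    # Newton's law a = -GM * r_vec / |r|^3 describes its orbit well.
    G_M = 1.0           # gravitational parameter (arbitrary units)
    x, y = 1.0, 0.0     # initial position
    vx, vy = 0.0, 1.0   # initial velocity; these values give a circular orbit
    dt = 0.001          # time step

    for _ in range(10_000):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -G_M * x / r3, -G_M * y / r3
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt

    # The integrated point stays close to the unit circle, much as the
    # planet itself would.
    print(f"position after 10,000 steps: ({x:.3f}, {y:.3f})")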

These assumptions all relate to the limitations of computation (as we currently understand it) and of propositional logic.

Dreyfus is famous for interpreting thinkers such as Heidegger and Merleau-Ponty, and consistently draws upon them in his arguments. In fact, as he points out in WCSCD, the phenomenological school attacks the very long philosophical tradition that sees mind and world as strictly separate, and that assumes that the mind functions by way of a model that somehow can be reduced to logical operations (we can see why the field of AI has implicitly, and in many cases unwittingly, inherited this tradition). Historically, this tradition reached perhaps one of its purest expressions with Descartes. Indeed, Being and Time, Heidegger’s major work, is deeply anti-Cartesian. Heidegger’s account of intelligibility demands that one (Dasein) is in a world which appears primarily as meaningful, interrelated beings (and not primarily as atomic facts, or sources thereof, to be interpreted), and is historically in a situation, making projections on the basis of one’s identity. Here, calculation and correspondence-based theories of truth are derivative and secondary. There is no clear separation between world and “model”, since there is no model – just the world and our ability to relate to it.

I will hazard a guess that most neuroscientists today would not take the first two assumptions seriously. In all kinds of biology and medicine, we regularly encounter new phenomena and mechanisms that could not be captured by the simple models we originally came up with, forcing us to revise our models. Making brains (bodies) and/or minds somehow isomorphic to symbolic manipulation seems wholly inadequate. More interesting, and much harder to settle unambiguously, are the epistemological and the ontological assumptions. If the epistemological assumption is false, then we will not be able to generate “intelligent behaviour” entirely in software. If the ontological assumption is false, then we will not be able to construct meaningful (discrete and isolated) models of the world.

The latter two are indeed the stronger of the four assumptions. If the epistemological assumption turns out to be invalid, then the biological and psychological assumptions would necessarily also be invalid. The ontological assumption is closely related and similarly strong.
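Spelled out (my own rendering of the dependency, writing B, P and E for the biological, psychological and epistemological assumptions): if the brain or the mind literally is a digital computer, then the digital-computer formalism is trivially sufficient for intelligent behaviour, so denying its sufficiency denies both of the first two assumptions.

    B \Rightarrow E, \qquad P \Rightarrow E, \qquad \text{hence} \qquad \lnot E \;\Rightarrow\; \lnot B \land \lnot P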

By contrast, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies is a more recent (2014) and very different book. While they are certainly worth serious investigation, theories about a possible technological singularity can be somewhat hyperbolic in tone. But Bostrom comes across as very level-headed as he investigates how a superintelligence might be formed (as an AI, or otherwise), how it might or might not be controlled, and the political implications of such an entity coming into existence. For the most part, the book is engrossing and interesting, though clearly grounded in the “analytical” tradition of philosophy. It becomes more compelling because of the potential generality of its argument. Does a superintelligence already exist? Would we know if it did? Could it exist as a cybernetic actor, a composite of software, machines, and people? It is interesting to read the book, in parallel, as a speculation on actors (social, economic, geopolitical, technological, psychological, or composites thereof) that may already exist but that are beyond our comprehension.

Bostrom’s arguments resemble how one might think about a nuclear arms race. He argues that the first superintelligence to emerge might have a decisive strategic advantage and, once in place, prevent (or be used to prevent) the emergence of competing superintelligences. At the same time it would bestow upon those who control it (if it can be controlled) a huge tactical advantage.

Even though Bostrom’s argument is mostly very general, at times it is obvious that much of the thinking is inspired by or based on the idea of AI as software running on a digital computer. To me this seemed implicit in many of the chapters. For example, Bostrom talks about being able to inspect the state of a (software agent’s) goal model, to be able to suspend, resume, and copy agents without information loss, to measure hedonic value, and so on. Bostrom in many cases implies that we would be able to read, configure and copy an agent’s state precisely, and sometimes also that we would be able to understand this state clearly and unambiguously, for example in order to evaluate whether our control mechanisms are working. Thus many of Bostrom’s arguments seem tightly coupled to the Church-Turing model of computation (or at least to a calculus/operational substrate that allows for inspection, modification and duplication of state). Some of his other arguments are, however, sufficiently general that we do not need to assume any specific substrate.
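As a toy illustration of what that substrate buys you (my own sketch, not Bostrom’s formalism; all names are hypothetical): on a digital computer, an agent’s entire state is just a value that can be read, copied and compared exactly.

    import copy
    from dataclasses import dataclass, field

    @dataclass
    class ToyAgent:
        goals: dict = field(default_factory=dict)   # an inspectable "goal model"
        memory: list = field(default_factory=list)  # complete episodic state

        def step(self, observation: str):
            # Trivial behaviour: record everything, pursue the first goal.
            self.memory.append(observation)
            return next(iter(self.goals), None)

    agent = ToyAgent(goals={"maximise_paperclips": 1.0})
    agent.step("factory is idle")

    # Suspend/copy: a deep copy captures the complete state without loss,
    # and the goal model is directly readable for auditing.
    snapshot = copy.deepcopy(agent)
    assert snapshot == agent
    print(snapshot.goals)

Whether lossless copying and unambiguous inspection of this kind carry over to substrates that are not digital computers is precisely what remains open.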

Bostrom, it seems to me, implicitly endorses at least the epistemological assumption throughout the book (and possibly also the ontological one). Even as he rightly takes pains to avoid stating specifically how technologies such as superintelligences or whole brain emulation would be implemented, it is clear that he imagines the formalism of digital computers as “sufficient to explain and generate intelligent behaviour”. In this, but perhaps not in everything he writes, he is a representative of current mainstream AI thinking. (I would like to add that even if he has wrongly adopted these assumptions, the extreme caution he advises regarding strong AI deserves to be taken seriously – the risks in practice are sufficiently great for us to be quite worried. I do not wish to undermine his main argument.)

It is conceivable but unlikely that in the near future, through a resounding success (which could be an academic, industrial or commercial one, for example), the epistemological assumption will be proven true. What I hold to be more likely (for reasons that have been gradually developed on this blog) is that current AI work will converge on something that may well be extremely impressive and that may affect society greatly, but that we will not consider to be human-like intelligence. The exact form that this will take remains to be discovered.

Hubert Dreyfus passed away in April 2017, while I was in the middle of writing this post. Although I never had the privilege of attending his lectures in person, his podcasted lectures and writings have been extremely inspirational and valuable to me. Thank you.

Comment » | Computer science, Philosophy

Brexit and globalisation

March 30th, 2017 — 2:37pm

Two momentous events that took place last year were the election of Donald Trump to the presidency of the United States, and the UK’s referendum on EU membership that led to the “Brexit” decision to leave the union. The two are often lumped together and seen as symptoms of a single larger force, which they probably are. But in one respect they are different. The Trump presidency has an expiry date, but it is hard to see how Brexit might be reversed in the foreseeable (or even distant) future.

As a student and then an engineer in London during 2003–2007, I received one of my first vivid, intense impressions of the country: the UK was a much better integrated society than Sweden. Manifestly, people from all kinds of cultural backgrounds were – it seemed to the 19-year-old me – living and working together smoothly on many social levels. During my life in Sweden until then, I had never seen immigration work out in this way. It was mostly seen and talked about as a problem that had to be addressed (and on a much smaller scale than what we have now).

This may of course reflect the fact that London has long been – until now – one of the most global cities in the world (Tokyo has nothing on it in this respect, although it has a massive energy and dynamic of a different kind), and the place I came from was rather rural. Countryside Britain was never as well integrated as London. World cities tend to be sharply different from the surroundings that support them. But on balance, the UK came across to me as a successfully global society.

In the years since, Sweden has, it seems to me, integrated a great many people, and there are plenty of success stories. It has become a far more global society than it was in, say, 2003. At the same time, xenophobia has been on the rise, just as it has in the rest of Europe and the US, and now Swedish politics must, lamentably, reckon with a very powerful xenophobic party. Reactive (in the Nietzschean sense) forces are having a heyday. Ressentiment festers.

The global society is probably here to stay. The ways of life and work, the economic entities that now bestride the earth, are all firmly globalised. This is an ongoing process that may not end for some time. (However, this probably will never erase the importance of specific places and communities. To be rooted in something is in fact becoming ever more important.) But globalisation, to use that word, has plainly not brought prosperity to everyone. In fact, many have been torn out of prosperity by economic competition and technological advances. Witness American coal miners voting Trump. In my view, though not everyone will agree, a well-protected middle class is necessary to achieve a stable democratic society. Witness what happens when that protection is too far eroded. Neglecting this – which has been a failure of politics on a broad scale – is playing with fire. General frustration becomes directed at minorities.

Being somewhat confused ourselves, and living with weak or failing, if not xenophobic or corrupt, politicians and governments, we – western/globalised society – may need something that is utterly lacking: new ideology, new thinking, new dreams. Not a wishful return to the 90s, the 70s or some other imagined lost paradise, but something that we can strive for positively, and in the process perhaps reconfigure our societies, politics and economies. For this to happen, people may need to think more, debate more, read more books, and be more sincere. Sarcasm and general resignation lead nowhere. One needs to look sincerely at one’s own history, inward into the soul, as well as outward.

A successful form of such new politics probably will not involve a departure from the global society. But it may involve a reconfiguration of one’s relationship with it. So as Theresa May’s government proceeds to negotiate the withdrawal of the UK from the EU – which must be a bitter, gruelling task for many of those involved – I hope that what she is initiating is such a reconfiguration. I hope that Britain can draw on its past success as a highly global society and constructively be part of the future of the West.

4 comments » | Life, Philosophy

AI and the politics of perception

August 1st, 2016 — 11:36am

Elon Musk, entrepreneur of some renown, believes that the sudden eruption of a very powerful artificial intelligence is one of the greatest threats facing mankind. “Control of a super powerful AI by a small number of humans is the most proximate concern”, he tweets. He’s not alone among Silicon Valley personalities in having this concern. To reduce the risks, he has funded the OpenAI initiative, which aims to develop AI technologies in such a way that they can be distributed more evenly in society. Musk is very capable, but is he right in this case?

The idea is closely related to the notion of a technological singularity, as promoted by, for example, Kurzweil. In some forms, the idea of a singularity resembles a God complex. In C G Jung’s view, as soon as the idea of God is expelled (for example by saying that God is dead), God appears as a projection somewhere. This is because the archetype or idea of God is a basic feature of the (western, at least) psyche that is not so easily dispensed with. Jung directs this criticism at Nietzsche in his Zarathustra seminar. (Musk’s fear is somewhat more realistic and, yes, more proximate than Kurzweil’s idea, since what is feared is a constellation of humans and technology, something we already have.)

But if Kurzweil’s singularity is a God complex, then the idea of the imminent dominance of uncontrollable AI, about to creep up on us out of some dark corner, more closely resembles a demon myth.

Such a demon myth may not be useful in itself for understanding and solving social problems, but its existence may point to a real problem. Perhaps what it points to is the gradual embedding of algorithms deeply into our culture, down to our basic forms of perception and interaction. We have in effect already merged with machines. Google and Facebook are becoming standard tools for information finding, socialising, getting answers to questions, communicating, navigating. The super-AI is already here, and it has taken the form of human cognition filtered and modulated by algorithms.

It seems fair to be somewhat suspicious — as many are — of fiat currency, on the grounds that a small number of people control the money supply, and thus the value of everybody’s savings. On similar grounds, we do need to debate the hidden algorithms (generally not available for perusal, even on request, since they are trade secrets) controlled by a small number of people, and the pre-digested information through which we now interface with the world around us almost daily. Has it ever been so easy to change so many people’s perception at once?

Here again, as often is the case, nothing is truly new. Maybe we are simply seeing a tendency that started with the printing press and the monotheistic church, taken to its ultimate conclusion. In any case I would paraphrase Musk’s worry as follows: control of collective perception by a small number of humans is the most proximate concern. How we should address this concern is not immediately obvious.

Comment » | Computer science, Philosophy

Technology and utilitarianism

March 4th, 2012 — 1:19pm

Technologists and engineers often use the ideas of utilitarianism to evaluate their solutions. If something is cheaper, or faster, or lets people live 3.2 days longer on average, or some other number can be optimised, they judge a solution to be better. In short, they use a quantitative form of judgment. This way of thinking is the appropriate way of judging engineering problems, but not the best way of judging design problems.

To a degree it is possible to come up with a new product by simply improving on some numbers from an old one. “Here’s a new hard drive with 1.3x more space.” However, such innovation will always be incremental.

The challenge for technology is how to create products and solutions that are not justified or evaluated from a quantitative, utilitarian perspective, but from an entirely different one, perhaps an aesthetic perspective. And this is also the challenge for social innovators and policymakers in society. Solutions that maximise numbers have value and can enable qualitative change in the long run, but in themselves they never constitute true progress.

To see how far utilitarian thinking has gone, think about how many technology products are justified with sentences along the lines of “it makes more information available”, or “it makes X cheaper”, or “it makes you more connected”. In all seriousness, there are situations when it is not desirable to have more information.

2 comments » | Computer science, Philosophy

The limits of responsibility

December 23rd, 2011 — 2:26am

(The multi-month hiatus here on Monomorphic has been due to me working on my thesis. I am now able to, briefly, return to this and other indulgences.)

Life presupposes taking responsibility. It presupposes investing people, objects and matters around you with your concern.

In particular, democratic society presupposes that we all take, in some sense, full responsibility for society itself, its decision making and its future.

However, he who lacks information about some matter cannot take responsibility for it. And thus, in practice, we often defer to authorities. Authorities allow us to specialise our understanding, which increases our net ability to understand as a collective, assuming that we have sufficiently well-functioning interpersonal communication.

There are whole categories of problems that are routinely assigned to specific, predefined authorities and experts: legal matters, constitutional matters, whether some person is mentally ill, medical matters, nuclear and chemical hazards, and so on – fields where extensive training is generally required. (However, under the right conditions, these authorities could probably also be called into question by public opinion.) The opposite category comprises those problems that are routinely assigned to “public opinion” and all of its voices and modulating contraptions and devices: its amplifiers, dampeners, filters, switches and routing mechanisms.

Responsibility aside, in order to maximise an individual’s prospects for life, and by extension society’s prospects for life, it seems important that the individual possess just the knowledge they need in their situation. Adding more knowledge is not always a benefit; some kinds of knowledge can be entirely counterproductive. Nietzsche showed this (“On the use and abuse of history for life”), and we can easily apply the idea of computational complexity to see how having access to more information would make it harder to make decisions.
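A back-of-the-envelope version of that point (my example, with a hypothetical helper function): if deliberation weighs combinations of the facts one knows, the number of cases to consider doubles with every additional fact.

    from math import comb

    def subsets_to_weigh(n_facts: int) -> int:
        """Subsets of known facts a fully deliberate decision could weigh."""
        return sum(comb(n_facts, k) for k in range(n_facts + 1))  # = 2 ** n_facts

    for n in (5, 10, 20, 30):
        print(f"{n} facts -> {subsets_to_weigh(n):,} combinations to weigh")

Thirty facts already yield over a billion combinations; more information can make deliberation strictly harder.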

This is especially true for some kinds of knowledge: knowledge about potential grave dangers, serious threats, monumental changes threatening to take place. Once we have such knowledge we cannot unlearn it, even if it is absolutely clear that we cannot act on it and that we do not have the competence to assess the situation fully. It takes effort and an act of will to fully disregard a threat on the basis of one’s own insufficient competence.

On the other hand, knowledge about opportunities, about resources, and about problems that one is able to, or could become able to deal with, would generally be helpful and not harmful. However, even this could be harmful if the information is so massive as to turn into noise.

Even disregarding these kinds of knowledge, one of the basic assumptions of democracy – that each individual takes full responsibility for society – seems to be an imperative that is designed never to be fulfilled. An imperative designed to be satisfied by patchworks of individual decisions and “public opinion”, and whatever information fate happens to throw in one’s way. Out of a basic, healthy understanding of their own limitations, individuals generally assume that the democratic imperative to know and to take responsibility was never meant to be taken seriously anyway, but they do their best to match their peers in appearing to do so.

It seems to me that the questions we must ask and answer are about the proper extent of responsibility, and the proper extent of knowledge, for each individual. For the individual, taking on no responsibility seems detrimental to life; taking on full responsibility for all problems in the world right now, here today, would also be an impossibility. There would be such a thing as a proper extent of responsibility. One’s initial knowledge and abilities would inform this proper extent of responsibility, and the two might properly expand and shrink together, rather than expand and shrink separately.

In a democratic society, in so far as one wants to have one, we should ask: what is the proper level of responsibility that society should expect from each individual, and what level should the individual expect from himself as an ideal?

More generally, empirical studies of how public opinion functions and how democracies function in practice are needed. It is inappropriate to judge and critique democracies based on their founding ideals when the democratic practice differs sharply from those ideals – as inappropriate as it is to critique and judge economies based on the presumption that classical economic principles apply to economic practice in the large.

3 comments » | Philosophy
