Permanence and technology

November 19th, 2010 — 12:23am

1. Mt. Fuji, 3776 m high. A petrified mass of volcanic discharge, thought to have been first ascended in the year 663.

2. Skyscrapers in Ootemachi, Tokyo and the City, London. Buildings belonging mostly to banks and insurance companies. They appear, on some intuitive level, to have been there forever, though most of these buildings can now be built from the ground up in less than a year. It is hard to fathom how they could ever be destroyed, though the work could be done in a matter of months (?) with the right equipment.

3. What is permanent? Anything that we cannot perceive as changeable, we call permanent. But this is a linguistic and epistemological error. Our inability to perceive change has led us to declare its absence.

4. The earth. 5.9736 x 10^24 kg of matter, likely fused into a planet about 4.54 billion years ago. The sun will enter a red giant phase in about 5 billion years and swallow the earth, or at least cause tremendous damage to it. The sun is also currently the source of all fossil energy on earth and of the energy used by most life forms on it.

5. A certain class of mathematical proofs consists in converting facts from one basis (family of concepts) to another. Such proofs often have a hamburger-like structure: first the initial facts are rewritten into a larger, more complex formulation that suits both the assumptions and the conclusion, and then the complex formulation is collapsed in such a way that the desired results come out and the original formulation is lost. The “beef” in such a proof often consists in carrying out the correct rewriting process in the middle.
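To make the hamburger shape concrete, here is a standard example of my own choosing (not one from the post): proving a trigonometric identity by rewriting it in the exponential basis via Euler's formula, manipulating there, and collapsing back.

```latex
\begin{align*}
% Top bun: rewrite into the exponential basis using Euler's formula,
% e^{i\theta} = \cos\theta + i\sin\theta.
e^{i(a+b)} &= e^{ia} e^{ib} \\
% Beef: carry out the manipulation in the larger formulation.
&= (\cos a + i\sin a)(\cos b + i\sin b) \\
&= (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b)
\end{align*}
% Bottom bun: take real parts; the exponential scaffolding disappears,
% leaving  cos(a+b) = cos a cos b - sin a sin b.
```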

6. Facebook takes off and becomes enormously popular, in part because it facilitates, on a huge scale, something that human beings want to do naturally. Communication and the need to relate to crowds and individuals could be said to be universal among humans.

[Figure: an incomplete version of the technology lattice, as suggested in this post, with human desires at the top and the resources available in the universe at the bottom.]

7. We can imagine technology as a lattice-like system that mediates between the human being, on one hand, and the universe on the other. As a very rough sketch of fundamental human needs, we could list drives like communication, survival/expansion, power/safety and art. (In fact, an attempt to make some of these subordinate to others would constitute an ethical/philosophical system. Here we do not need such a distinction, and the one I have made is arbitrary and incomplete.) When we place our fundamental drives at one end, and the resources and conditions provided by the universe at the other – elements and particles, physical laws and constants – we can begin to guess how new technologies arise and where they can have a place. The universe is a precondition of the earth, which is a precondition of animals and plants, which we currently eat. And food is currently a precondition of our survival. But we can imagine a future in which we are not dependent on the earth for food, having spread to other planets. We can imagine a future in which oil and nuclear power are no longer necessary as energy sources, because something else has taken their place. New possibilities entering the diagram like this add more structure in the middle – more beef – but the motivating top level and the supplying bottom level do not change perceptibly. (Of course, if they did, beyond our perception, they could be made part of an even larger lattice with a new bottom and top configuration.)
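As a hedged sketch of this idea (the node names are the examples from the paragraph; everything else is invented for illustration), the lattice can be written down as a small precondition graph:

```python
# A toy model of the technology lattice as a precondition graph, with a
# human drive ("survival") at the top and the universe at the bottom.
# "Lattice" is used loosely here: this is just a directed acyclic graph.
preconditions = {
    "survival": ["food"],            # the motivating top level
    "food": ["plants", "animals"],
    "plants": ["earth"],
    "animals": ["earth"],
    "earth": ["universe"],           # the supplying bottom level
}

def foundations(node: str) -> set[str]:
    """Everything a node ultimately rests on, found by walking downward."""
    below = set()
    for dep in preconditions.get(node, []):
        below.add(dep)
        below |= foundations(dep)
    return below

print(foundations("survival"))
# {'food', 'plants', 'animals', 'earth', 'universe'}
# A new technology (say, synthetic food) would rewire the middle of the
# graph - more beef - while "survival" and "universe" stay fixed.
```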

8. Technology is a means to the establishment of permanence, and a re-encoding of human desires into reality.

9. New technologies arise constantly. But can this evolutionary process go on forever? Does the lattice converge towards a final state?


The coming politicization of mathematics and computer science

October 9th, 2010 — 7:10pm

Increasingly, ordinary people encrypt their internet communications. Some want to share files. Some are worried about the increasing surveillance, and threats of surveillance, of Internet data in many corners of the world; ACTA, Hadopi and data retention laws are a few examples. People may simply wish to keep their data private, even when the data is not objectionable. Others, hopefully not so ordinary people, have an acute need to hide from authorities of some form or another, maybe because they actually have criminal intent, or maybe because they are regime critics in repressive countries. Maybe they are submitting data to sites like Wikileaks.

Various technologies have come out of academic experiments, volunteer work and government-sponsored research to assist with encrypted communication. PGP/GnuPG and SSH are classic mainstays. Onion routing, as implemented in the Tor system, is an effective way of concealing the true origin and destination of data being sent around. Darknet systems like the I2P project aim to build a complete infrastructure for an entirely new kind of Internet, piggybacking on the old one but with anonymity and encryption as first-class features.
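As an aside, the layering trick behind onion routing is simple enough to sketch in a few lines. This is a toy illustration only, not the real Tor protocol (which negotiates keys with public-key cryptography, among much else); it assumes the Python `cryptography` package, and the relay names are invented:

```python
from cryptography.fernet import Fernet

# Each relay holds its own symmetric key. The sender wraps the message
# in one encryption layer per relay; each relay can peel off exactly one
# layer, so no single relay sees both the origin and the final content.
relays = ["entry", "middle", "exit"]
keys = {name: Fernet(Fernet.generate_key()) for name in relays}

def wrap(message: bytes, route: list[str]) -> bytes:
    """Encrypt once per relay, so the first hop's layer is outermost."""
    for name in reversed(route):
        message = keys[name].encrypt(message)
    return message

def unwrap(onion: bytes, route: list[str]) -> bytes:
    """Each relay in turn strips the single layer only it can decrypt."""
    for name in route:
        onion = keys[name].decrypt(onion)
    return onion

onion = wrap(b"hello, hidden world", relays)
assert unwrap(onion, relays) == b"hello, hidden world"
```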

I think we are only at the start of an era of political conflicts centered on communications technology, and that more and more issues will have to be ironed out in the coming years and decades. The stakes are high: on one hand, control and political stability; on the other, individual rights and democratic progress. This is not new. What I think is potentially new and interesting, though, is how mathematics and computer science are likely to become increasingly sensitive and political in the coming years.

Today, disciplines like genetics and stem cell research are considered controversial by some people, since they touch on the very foundations of what we think of as life. Weapons research of all kinds is considered controversial for obvious reasons, and the development of a weapon on the scale of nuclear bombs would completely shift the global power structure. One fundamental building block of communications control is the ability to encrypt and to decrypt. These abilities are ultimately limited by the frontiers of mathematical research. Innovations such as the Skein hash function directly affect the cryptographic power balance.

Most of the popular varieties of encryption in use today can be overcome, given that the adversary has sufficient computing power and time. In addition, human beings often compromise their keys, trust the wrong certificates, or act in ways that diminish the security that has been gained. Encryption is not absolute unless the very fact that something has been encrypted has been perfectly hidden. Rather, it is a matter of economics: of making it very cheap to encrypt data, and very expensive for unintended receivers to decrypt it.
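A back-of-the-envelope calculation illustrates the asymmetry. Encrypting a message with a 128-bit key costs a fraction of a second; brute-forcing the key, even at an assumed (and generous) rate of a trillion guesses per second, does not finish before the sun burns out:

```python
# Illustrative arithmetic only; the guessing rate is an assumption.
keyspace = 2 ** 128                 # possible 128-bit keys
rate = 10 ** 12                     # assumed guesses per second
seconds = keyspace / 2 / rate       # on average, half the keyspace is searched
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")         # about 5.4e18 years
```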

It is not possible to freeze encryption at a certain arbitrary level, or to restrict the use of it. Computers are inherently general purpose, and software designed for one purpose can almost always be used for another. If the situation is driven to its extreme, we might identify two possible outcomes: either general purpose computers are forbidden or restricted, or uncontrolled, strongly encrypted communication becomes the norm. Christopher Kullenberg has touched on this topic in Swedish.

Those who would rather not see a society where encryption is commonplace would perhaps still want what they see as the desirable effects of computerisation. In their ideal world they would pick and choose what people can do with computers, in effect giving a list of permitted and prohibited uses. But this is not how general purpose computers work. They are programmable, and people can construct software that does what they want. Even if the introduction of non-authorised software were somehow prohibited, and all applications had to be checked by some authority, applications could still usually be used for purposes they were not designed for. This generality of purpose simply cannot be removed from computers without making them useless – at least that is how it seems today. It seems that a new fundamental model of computation, one that selectively prohibits certain uses, would be needed to make this happen. (In order to make sure that this kind of discovery is not put to use by the “other camp”, those of us who believe in an open society should try to find it, or somehow establish that it cannot be constructed.)

Mathematics now stands ever more closely connected with political power. Mathematical advances can almost immediately increase or decrease the resistance to information flow (given that somebody incorporates the advances into usable software). The full consequences of this are something we have yet to see.


Utilitarianism and computability

September 18th, 2010 — 5:10pm

I’ve started watching Michael Sandel’s Harvard lecture series on political philosophy, “Justice”. In this series, Sandel introduces the ideas of major political and moral philosophers, such as Bentham, Locke, and Kant, as well as some libertarian thinkers I hadn’t heard of. I’m only halfway through the series, so I’m sure there are other big names coming up. The accessibility of the lectures belies their substance: what starts out with simple examples and challenges to the audience in the style of the Socratic method often ends up being very engaging and meaty. (Incidentally, it turns out that Michael Sandel has also become fairly famous in Japan, his lectures having been aired on NHK, Japan’s biggest broadcaster.)

One of the first schools of thought he brings up is utilitarianism, whose central idea appears to be that the value of an action is placed in the consequences of that action, and not in anything else, such as the intention behind the action, or the idea that there are certain categories of actions that are definitely good or definitely evil. What causes the greatest happiness for the greatest number of people is good, simple as that. From these definitions a huge amount of difficulty follows immediately. For instance, is short-term happiness as good as long-term happiness? How long term is long term enough to be valuable? Is the pleasure of ignorant people as valuable as that of enlightened people? etc. But let’s leave all this aside and try to bring some notion of computability into the picture.

Assume that we accept that “the greatest happiness for the greatest number of people” is a good maxim, and we seek to achieve this. We must weigh the consequences of actions and choices to maximise this value. But can we always link a consequence to the action, or set of actions, that led to it? Causality in the world is a questionable idea since it is a form of inductive knowledge. Causality in formal systems and in the abstract seems valid, since it is a matter of definition, but causality in the empirical, in the observed, seems to always be a matter of correlation: if I observe first A and then B sufficiently many times, I will infer that A implies B, but I have no way of knowing that there are not also other preconditions of B happening (for instance, a hitherto invisible particle having a certain degree of flux). It seems that I cannot reliably learn what causes what, and then, how can I predict the consequences of my actions? Now, suddenly, we end up with an epistemological question, but let us leave this too aside for the time being. Perhaps epistemological uncertainty is inevitable.

I still want to do my best to achieve the greatest happiness for the greatest number of people, and I accept that my idea of what actions cause what consequences is probabilistic in nature. I have a set of rules, A1 => B1, A2 => B2, … An => Bn, which I trust to some extent, and I want to make the best use of them. I have now ended up with a planning problem: I must identify a sequence of actions that maximises that happiness variable. But my brain has limited computational ability, and my plan must be complete by time t in order to be executable. Even for a simple problem description, the state space that planning algorithms must search becomes enormous, and identifying the plan, or a plan, that maximises the value is simply not feasible. Furthermore, billions of humans are planning concurrently, and their plans may interfere with each other. A true computational utilitarian system would treat all human individuals as a single system and find, in unison, the optimal sequence of actions for each one to undertake. This is an absurd notion.
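To see how quickly the planning problem blows up, here is a toy brute-force planner. The rules, probabilities and utilities are invented, and consequences are assumed independent and additive, which a real utilitarian could never assume:

```python
from itertools import product

# Rules Ai => Bi: taking action Ai yields a utility with some probability.
actions = {
    "A1": (0.9, 1.0),    # (probability the consequence follows, utility)
    "A2": (0.5, 3.0),
    "A3": (0.2, 10.0),
}

def expected_utility(plan: tuple[str, ...]) -> float:
    """Expected happiness of a plan, assuming independent consequences."""
    return sum(p * u for p, u in (actions[a] for a in plan))

horizon = 3  # the search space holds len(actions) ** horizon plans,
             # exponential in the planning horizon - and this is one
             # person's plan, ignoring billions of interfering planners.
best = max(product(actions, repeat=horizon), key=expected_utility)
print(best, expected_utility(best))
```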

This thought experiment aside: if we are utilitarians, should we enlist the increased computing power that has recently come into being to help manage our lives? Can it be used to augment (presumably it cannot supplant) human intuition for how to make rapid choices from huge amounts of data?


Partitioning idea spaces into containers

August 29th, 2010 — 3:55pm

Some scattered thoughts on idea flows.

The global idea space is partitioned in various ways. One example would be peoples speaking different languages. English speakers all understand each other, Japanese speakers all understand each other, but there are relatively few people who speak Japanese and English very well. We can understand this situation in an abstract way as two large containers with a narrow passage connecting them.

Similar partitionings occur whenever there are groups of people that communicate a lot among themselves and less with people in other groups. For instance, there would be a partitioning between people who use the internet frequently and people who use it rarely (to some extent similar to a partitioning between young and old people). This partitioning is in fact orthogonal to the language partitioning, i.e. there is an English internet, a Japanese internet, an English non-internet, etc.

The partitioning of the space into containers has effects on the establishment of authorities and the growth of specialised entities inside the containers. The establishment of authorities is in some ways a Darwinist selection process. There can only be one highest authority on philosophy, on history, on art, on mathematics etc. that speaks one given language or acts inside a given container. Or, for a more banal example: pop charts and TV programs. (Even though, inside the Anglosphere, each country may still have its own pop chart, they influence each other hugely.) If there are two contenders for the position of highest authority on art in a container, either they have to be isolated from each other somehow, or they must interact and resolve their conflict, either by subordination of one to the other, or by a refinement of their roles so that these do not conflict. As for the specialised entities, the larger the container is, the more space there is for highly niched ideas. This is in fact the “long tail” idea. The Internet is one of the biggest containers to date, and businesses such as Amazon have (or at least had) as their business model selling not large numbers of a few popular items, but small numbers of a great many niched items. Such long tails can be nurtured by large containers. (In fact this is a consequence of the subordination/refinement that occurs when authority contenders have a conflict.)

We may also augment this picture with a directed graph of the flows between containers. For instance, ideas probably flow into Japan from the Anglosphere more rapidly than they flow in the reverse direction. Ideas flow into Sweden from the Anglosphere and from Japan but flow back out of Sweden relatively rarely. Once an idea has flowed into a space like Sweden or Japan from a larger space like the Anglosphere, though, the smaller space can act like a kind of pressure cooker or reactor that may develop, refine, or process the imported idea and possibly send a more interesting product back. A kind of refraction occurs.
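These flows could, very roughly, be recorded as weighted edges in such a directed graph. All numbers below are invented relative rates, purely for illustration:

```python
# A toy directed graph of idea flows between containers.
flows = {
    ("Anglosphere", "Japan"): 1.0,
    ("Japan", "Anglosphere"): 0.2,
    ("Anglosphere", "Sweden"): 1.0,
    ("Japan", "Sweden"): 0.4,
    ("Sweden", "Anglosphere"): 0.1,
}

def net_flow(container: str) -> float:
    """Inflow minus outflow: positive means a net importer of ideas."""
    inflow = sum(w for (src, dst), w in flows.items() if dst == container)
    outflow = sum(w for (src, dst), w in flows.items() if src == container)
    return inflow - outflow

for c in sorted({c for edge in flows for c in edge}):
    print(c, round(net_flow(c), 2))
```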

In the early history of the internet, some people warned that the great danger of it is that everybody might eventually think the same thoughts, and that we would lose the diversity of ideas. This has turned out to be an unrealised fear, I think, at least as long as we still have different languages. But are languages not enough? Do we need to do more to create artificial partitionings? What is the optimal degree of partitioning, and can we concretely map the flows and containers with some degree of precision?


Provocation and adaptation

June 23rd, 2010 — 5:22pm

My last post, on the topic of resisting the circumstances in life, ended with a question. What choices should I make to resist maximally, given that choices make me stronger, i.e. choices have long term side effects on me?

So I would like to maximise, probabilistically, my set of skills, in order to be best able to achieve some kind of ambition I have set for myself. Cutting off my hand will probably not help me, but learning Arabic might. Being in a car crash is unlikely to be helpful, but being a marathon runner could conceivably be useful. Both involve pain, but one causes irreversible damage; the other causes an increase in strength if done properly. What is the ideal form of schooling for children (if we take the unlikely view that the purpose of schools is teaching things)? That which increases their ability the fastest, which is to say, the most difficult knowledge, taught at the fastest speed that they can possibly cope with. The maximum trajectory that they can sustain without losing their grip or their interest in the subject.

Should I do the same in life, then? Probably, but it gets tricky, because life experiences that promise to teach me a lot are often unfamiliar, or dangerous, or otherwise involve pain. As we have seen, it is not the case that pain equals learning, but pain can be strongly correlated with learning. To be more precise: if I become crippled in a car crash, or by cutting off my hand, it is because I received stimuli from directions and with intensities that I could not withstand. Provoke me at a slowly building rate, and I will learn to deal with the provocations and perhaps bite back. Provoke me really hard and really fast from the start, and I will die. And then there are provocation vectors to which individuals cannot adapt in a single generation, for instance, drowning. Species might adapt to this kind of threat over several generations. Is not life precisely that which adapts to changing circumstances, potentials and provocations, in particular potential threats or benefits? But intelligent animals, like humans, are a special form of life. We can select what experiences to undergo, and thus what training to receive. This is how we can consciously adapt in advance when we expect a difficult situation. (Young animals play in order to train themselves for adult behaviour, but this kind of training has been conditioned by evolution over many generations. Are there any animals that train selectively to face threats that they have identified during the same generation, like humans do?)

If I identify the maximum “provocation rate” that I am able to withstand concerning a particular skill, another problem I would want to solve is: do skills compete? If I learn Arabic very well, will it downgrade my Russian? If I become a marathon runner, will it disrupt my ballet dancing ability? When a skill involves a particular conditioning of the body and the muscles, it is probably easy to see that some skills conflict. When they involve a conditioning of the mind, it is less obvious. Is the mind flexible enough to support radically opposed skills and viewpoints at the same time? Is this property the same or different for different people?

Questions that lead to more questions.

