Tag: metaphors

The cryptographic-spiritual realm

December 15th, 2010 — 9:19am

Internet services and systems such as Google and Amazon usually appear to us as the visual representation of a page, as if it were taken out of some kind of printed publication. For almost all users, these visual qualities are all that will ever be seen. The services are at once present and absent, because we cannot point to the place where they really reside.

But of course they reside somewhere. Cables and machines embody the apparition that users interact with, and these cables can be found and cut. The machines can be shut off. Traceroute tells me which path the data is taking. But the thread that binds the body to the spirit is thin, and the two evolve in a largely independent manner.

With cryptographic systems such as the I2P network, it is possible to hide the exact location of a service: to disperse it across the fabric so widely that it cannot be excised without destroying the fabric itself.

The effect is the same as if the system had no physical existence at all. It now exists in a kind of spiritual realm, where it can only be touched with great difficulty.

3 comments » | Computer science, Philosophy

Partitioning idea spaces into containers

August 29th, 2010 — 3:55pm

Some scattered thoughts on idea flows.

The global idea space is partitioned in various ways. One example would be peoples speaking different languages. English speakers all understand each other, Japanese speakers all understand each other, but there are relatively few people who speak Japanese and English very well. We can understand this situation in an abstract way as two large containers with a narrow passage connecting them.

Similar partitionings occur whenever there are groups of people that communicate a lot among themselves and less with people in other groups. For instance, there would be a partitioning between people who use the internet frequently and people who use it rarely (to some extent similar to a partitioning between young and old people). This partitioning is in fact orthogonal to the language partitioning, i.e. there is an English internet, a Japanese internet, an English non-internet, etc.

The partitioning of the space into containers affects the establishment of authorities and the growth of specialised entities inside the containers. The establishment of authorities is in some ways a Darwinian selection process. There can only be one highest authority on philosophy, on history, on art, on mathematics, etc. that speaks a given language or acts inside a given container. Or, for a more banal example: pop charts and TV programs. (Even though, inside the Anglosphere, each country may still have its own pop chart, these charts influence each other hugely.) If there are two contenders for the position of highest authority on art in a container, either they have to be isolated from each other somehow, or they must interact and resolve their conflict, either by subordination of one to the other, or by a refinement of their roles so that these no longer clash.

As for the specialised entities: the larger the container, the more space there is for highly niched ideas. This is in fact the “long tail” idea. The Internet is one of the biggest containers to date, and businesses such as Amazon have (or at least had) as their business model to sell not large numbers of a few popular items, but small numbers of a great many niched items. Such long tails can be nurtured by large containers. (In fact this is a consequence of the subordination/refinement that occurs when authority contenders come into conflict.)

We may also augment this picture with a directed graph of the flows between containers. For instance, ideas probably flow into Japan from the Anglosphere more rapidly than they flow in the reverse direction. Ideas flow into Sweden from the Anglosphere and from Japan but flow back out of Sweden relatively rarely. Once an idea has flowed into a space like Sweden or Japan from a larger space like the Anglosphere, though, the smaller space can act like a kind of pressure cooker or reactor that may develop, refine, or process the imported idea and possibly send a more interesting product back. A kind of refraction occurs.
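This directed-flow picture can be sketched as a small weighted graph. A minimal sketch in Python, where the container names echo the examples above but the flow weights are entirely invented for illustration:

```python
# A toy model of idea containers and the directed flows between them.
# Edge weights (relative flow rates) are invented for illustration.
flows = {
    ("Anglosphere", "Japan"): 0.8,    # ideas flow in rapidly
    ("Japan", "Anglosphere"): 0.2,    # slower reverse flow
    ("Anglosphere", "Sweden"): 0.7,
    ("Japan", "Sweden"): 0.3,
    ("Sweden", "Anglosphere"): 0.05,  # ideas rarely flow back out
}

def net_inflow(container):
    """Total inbound minus outbound flow for a container."""
    inflow = sum(w for (src, dst), w in flows.items() if dst == container)
    outflow = sum(w for (src, dst), w in flows.items() if src == container)
    return inflow - outflow

# Sweden imports far more than it exports in this toy model:
print(net_inflow("Sweden"))  # 0.3 + 0.7 - 0.05 = 0.95
```

With real data, the same structure would support concrete questions: which containers are net importers or exporters of ideas, and where the narrow passages lie.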

In the early history of the internet, some people warned that its great danger was that everybody might eventually come to think the same thoughts, and that we would lose the diversity of ideas. This fear has so far gone unrealised, I think, at least as long as we still have different languages. But are languages enough? Do we need to do more to create artificial partitionings? What is the optimal degree of partitioning, and can we concretely map the flows and containers with some degree of precision?

Comment » | Philosophy

Searching and creating

July 20th, 2009 — 8:17pm

We distinguish between inventions and discoveries. You can own the intellectual property rights to an invention, but not to a discovery (you can’t patent the discovery of mercury or selenium, for instance). Inventions are meant to be created, and discoveries are meant to be found. But sometimes the line between invention and discovery is blurry.

We cannot own the rights to mathematical structures or theorems, since they follow directly from axioms. Anyone with a mathematical education would come to the same results within the same axiomatic system. The creation of a mathematical theorem can be said to be a search process, hence the term “discovery” and not “invention”.

We can own the rights to music and paintings, since these are considered inventions. But isn’t the process that leads to a painting or a piece of music also a search process? Doesn’t the artist search for combinations that work together, in a search space that is admittedly very large and continuous? Yet this is considered creation/synthesis rather than search.

The software developer is, at least sometimes, somewhere in between. A vision of a user interface that interacts with end users in a certain way can perhaps be said to come from the same large, continuous space that music and paintings come from. But given the constraints imposed by such a vision, by the platform on which the system is to be built, by the available libraries, the languages, and so on, I would say that the construction of much desktop/consumer software is a search problem. We look for combinations of components that fit the constraints, and once we have decided on a combination, we must connect the pieces together correctly. The space of possible solutions here, at least for someone who follows good design principles, is essentially much smaller than the music/painting search space. Of course there are considerations of taste and style, but they are irrelevant to the compiled product; they serve mainly as an aid to the programmer.
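The search view of software construction can itself be made concrete. A minimal sketch, assuming a hypothetical catalogue of components, where all component names and platform constraints are invented:

```python
from itertools import product

# Hypothetical catalogue: for each role, candidate components and the
# platforms they support. All names and constraints are invented.
candidates = {
    "gui":     [("Qt", {"linux", "windows"}), ("Cocoa", {"macos"})],
    "storage": [("SQLite", {"linux", "windows", "macos"}),
                ("Registry", {"windows"})],
    "network": [("libcurl", {"linux", "windows", "macos"})],
}

def search(platform):
    """Enumerate component combinations that satisfy the platform constraint."""
    roles = list(candidates)
    for combo in product(*(candidates[r] for r in roles)):
        if all(platform in supported for _, supported in combo):
            yield {role: name for role, (name, _) in zip(roles, combo)}

# Only one combination survives the "linux" constraint:
print(list(search("linux")))
```

The point is not the tiny catalogue but the shape of the problem: a discrete space of combinations, pruned by constraints, quite unlike the continuous space a painter searches.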

Many artificial intelligence problems are defined as search problems. But what are search problems, and what are “creational” problems, precisely? Is it merely a question of the size of the search/design space?

4 comments » | Computer science, Philosophy

Languages and automata, part 2

July 12th, 2009 — 11:55pm


Today an oppressive, passivizing heat covers Tokyoites like a massive woollen blanket. Summer is here. In a feeble attempt to defy the heat, I follow up on my previous post on languages and automata.

That post ended with the suggestion that we can apply these concepts to interactions in society. But can we? As a starting point, let’s think about stateless and stateful interactions in a system. Stateful interactions involve a change of state, in some sense. Stateless interactions involve no such change. What counts as stateful depends very much on how detailed the model is – these might be examples:

  • You make a purchase in a convenience store – the obvious changes of state are the balance in your wallet/bank account, the number of items you possess/carry with you, and the corresponding opposite changes on behalf of the store.
  • You greet somebody you know on the street and exchange some small talk. Even though no actionable information is exchanged, you both feel happier afterwards and in a better mood because you were acknowledged by someone else. This is a change of state. The precondition is that you are in a state where you know the other person – this interaction would not be possible with a random person in a random state. (On a different level, a typical such exchange goes through at least three discrete states in itself – “greeting”, “exchange of information”, “goodbye”).
  • You go to your job in an office, read some documents, write some reports and leave. We can think about the wear and tear on the furniture and the building, the carbon dioxide-oxygen exchange in the air, and the changes to your company’s total body of information as changes of state. Which to choose depends, again, on the model.

Are there any stateless interactions then? Within the context of a particular model, yes. If we only care about monetary and material transactions, the meeting on the street might be stateless. If we only care about “mood” states, the purchase in the store might be stateless, and the office job might have a negative effect on accumulated mood.
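This model-relativity of state can be sketched concretely: the same interaction is stateful through one model and stateless through another. A minimal sketch, with invented state components and amounts:

```python
# One shared world state; whether an interaction counts as "stateful"
# depends on which components a model chooses to observe.
# All state components and amounts are invented for illustration.
world = {"money": 1000, "items": 0, "mood": 0}

def purchase(s):
    """Buy something at a convenience store (monetary/material change)."""
    return {**s, "money": s["money"] - 150, "items": s["items"] + 1}

def small_talk(s):
    """Greet an acquaintance on the street (mood change only)."""
    return {**s, "mood": s["mood"] + 1}

def view(s, model):
    """Project the full state onto the components a model cares about."""
    return {k: s[k] for k in model}

monetary_model = {"money", "items"}
after = small_talk(world)

# Through the monetary model, small talk is a stateless interaction:
print(view(world, monetary_model) == view(after, monetary_model))  # True
# Through a mood model, the very same interaction is stateful:
print(view(world, {"mood"}) == view(after, {"mood"}))              # False

# The purchase, conversely, changes the monetary view but not the mood:
after2 = purchase(after)
print(view(after, {"mood"}) == view(after2, {"mood"}))             # True
```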

In software engineering, we try to hide or minimise state as much as possible, because state makes a system far harder to understand and reason about. We like immutable objects, whose state never changes. If we look at reality through abstractions, maybe such things can exist, but in the physical world I don’t believe they do (I’d have to ask a physicist to know the answer, though).
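In Python, for instance, a frozen dataclass gives a minimal sketch of such an immutable object (the account example is invented):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

acct = Account("Alice", 1000)

# "Mutation" produces a new object; the original's state never changes.
paid = replace(acct, balance=acct.balance - 150)

print(acct.balance)   # 1000 – the original is untouched
print(paid.balance)   # 850
# acct.balance = 0    # would raise FrozenInstanceError
```

Immutability here is an abstraction enforced by the language runtime; the bits in memory are of course still mutable, which is perhaps the software echo of the physical-world caveat above.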

The most complex interactions in society, I think, take place among people and organisations that have long lasting relationships. These entities can modify each other’s state over a long period of time. If I’ve known somebody for years, there’s a very large number of possible states a conversation with that person might be in, a large number of topics I might possibly bring up and discuss. But the limitations of societal norms and my own knowledge imply that a conversation with a stranger might be a very small state machine indeed. (On the other hand, maybe this is why getting to know a new person can be very satisfying – the newness of building a new structure from scratch in your head to represent this person’s states). Companies that interact with customers in short, anonymous relationships almost never present them with complex interactions (convenience stores, taxi drivers). With other companies we have more complex interactions and longer relationships (doctors, banks).

These transitions of state are, again, like words that make up sentences in formal languages. We all live these languages every day. How many states do you have?

3 comments » | Computer science, Philosophy

Languages and automata, part 1

July 6th, 2009 — 11:50am

Yoyogi, Tokyo

Computing is very new as a science. Blaise Pascal devised a mechanical calculator in 1645, but Charles Babbage’s analytical engine, widely considered the first design for a programmable computer, was not conceived until the mid-19th century. It was never constructed (unlike Babbage’s simpler “difference engine”), and even at that time there was almost no theory to go with the invention. Today, the fundamental abstractions of computing and programming are Turing machines and the lambda calculus, both described in the 1930s. So essentially, the theory has had less than a century to mature, and is viewed by many as a branch of mathematics.

The newness of computing means that we don’t know that much about its role or its applicability outside of devices built specifically for computing, nor do we know if today’s fundamental computing abstractions are the best ones.

Languages and automata are two of the most fundamental ideas in computing. In contrast to human languages, which are informal and rather unsystematic, in computing we often speak of formal languages. Something like the following is an example of a formal grammar (square brackets mark an optional part, and the vertical bar separates alternatives):

  • Sequence-list: Sequence [ Sequence-list ]
  • Sequence: Wake up Action-list Have lunch Action-list Go to sleep
  • Action-list: Action [ Action-list ]
  • Action: Work | Answer the phone | Attend meeting | Relax

Using this grammar we can model the life of an office worker. We can generate an infinite list of potentially infinitely long “sentences”. The following are examples of valid sentences in the grammar:

  • Wake up, Work, Have lunch, Attend meeting, Go to sleep
  • Wake up, Work, Have lunch, Work, Go to sleep, Wake up, Work, Have lunch, Work, Go to sleep
  • Wake up, Answer the phone, Answer the phone, Answer the phone, Have lunch, Work, Go to sleep

A grammar such as this is regular, and every regular grammar corresponds to a deterministic finite automaton (DFA) – a very simple building block of software and hardware models – that recognises the same language. A formal grammar like the above is in a sense just a more natural way of thinking about a DFA.
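The correspondence can be demonstrated by implementing the office-worker grammar directly as a finite automaton. A minimal sketch; the state names are my own invention:

```python
ACTIONS = {"Work", "Answer the phone", "Attend meeting", "Relax"}

def accepts(words):
    """Recognise sentences of the office-worker grammar with a finite
    automaton whose states track progress through one Sequence."""
    state = "start"
    for w in words:
        if state == "start" and w == "Wake up":
            state = "morning"
        elif state in ("morning", "morning_acted") and w in ACTIONS:
            state = "morning_acted"          # Action-list: one or more actions
        elif state == "morning_acted" and w == "Have lunch":
            state = "afternoon"
        elif state in ("afternoon", "afternoon_acted") and w in ACTIONS:
            state = "afternoon_acted"
        elif state == "afternoon_acted" and w == "Go to sleep":
            state = "start"                  # Sequence-list: loop to next day
        else:
            return False
    return state == "start" and len(words) > 0

print(accepts(["Wake up", "Work", "Have lunch",
               "Attend meeting", "Go to sleep"]))         # True
print(accepts(["Wake up", "Have lunch", "Go to sleep"]))  # False
```

The second sentence is rejected because Action-list requires at least one action before lunch; the automaton has no transition for “Have lunch” straight from the “morning” state.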

What is the applicability of formal languages outside computing hardware and software?

Ferns. Kyoto, Japan

For one thing, we see them in nature, not least in ferns, whose structure at the miniature level appears to follow the same rules as at the macro level. We see them in trees and flowers. In fact, the formal-language paradigm appears to be a very good fit for many natural phenomena. One reason might be that formal languages allow rich structures to be generated from a very small description.
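This observation has a well-known formalisation: Lindenmayer systems (L-systems), grammars whose rules are applied repeatedly to grow plant-like structures. A minimal sketch using the classic “fractal plant” rules; interpreting the resulting string as turtle-graphics drawing commands is omitted here:

```python
# L-system: a grammar rewritten repeatedly to grow a plant description.
# These are the classic "fractal plant" rules: F = draw forward,
# + / - = turn, [ ] = push/pop the drawing position.
RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

def grow(axiom, generations):
    """Apply all rules in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# A one-symbol description grows rapidly into a rich structure:
for n in range(3):
    print(n, len(grow("X", n)))  # 1, 18, 89 symbols
```

The grammar itself is a few dozen characters, yet a handful of generations yields a description of fern-like complexity, which is exactly the “rich structure from a very small description” property.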

One idea I find fascinating is trying to apply these models to human society: people and institutions. Can we describe the interactions in society as automata and formal languages, and if so, what can we learn about them?

2 comments » | Computer science, Philosophy
