Tag Archives: intellectual property

Partitioning idea spaces into containers

Some scattered thoughts on idea flows.

The global idea space is partitioned in various ways. One example is people speaking different languages. English speakers all understand each other, Japanese speakers all understand each other, but relatively few people speak both Japanese and English very well. We can picture this situation abstractly as two large containers connected by a narrow passage.

Similar partitionings occur whenever there are groups of people that communicate a lot among themselves and less with people in other groups. For instance, there would be a partitioning between people who use the internet frequently and people who use it rarely (to some extent similar to a partitioning between young and old people). This partitioning is in fact orthogonal to the language partitioning, i.e. there is an English internet, a Japanese internet, an English non-internet, etc.

The partitioning of the space into containers affects the establishment of authorities and the growth of specialised entities inside the containers. The establishment of authorities is in some ways a Darwinist selection process. There can only be one highest authority on philosophy, on history, on art, on mathematics etc. that speaks a given language or acts inside a given container. Or, for a more banal example: pop charts and TV programs. (Even though, inside the Anglosphere, each country may still have its own pop chart, these charts influence each other hugely.) If there are two contenders for the position of highest authority on art in a container, either they have to be isolated from each other somehow, or they must interact and resolve their conflict, either by subordination of one to the other, or by a refinement of their roles so that these no longer conflict.

As for the specialised entities, the larger the container, the more space there is for highly niched ideas. This is in fact the “long tail” idea. The Internet is one of the biggest containers to date, and businesses such as Amazon have (or at least had) as their business model selling not large numbers of a few popular items, but small numbers of a great many niche items. Such long tails can be nurtured by large containers. (This is in fact a consequence of the subordination/refinement that occurs when contenders for authority come into conflict.)

We may also augment this picture with a directed graph of the flows between containers. For instance, ideas probably flow into Japan from the Anglosphere more rapidly than they flow in the reverse direction. Ideas flow into Sweden from the Anglosphere and from Japan, but flow back out of Sweden relatively rarely. Once an idea has flowed into a smaller space like Sweden or Japan from a larger one like the Anglosphere, though, the smaller space can act as a kind of pressure cooker or reactor that develops, refines, or processes the imported idea and possibly sends a more interesting product back. A kind of refraction occurs.
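To make the picture a little more concrete, here is a minimal sketch of such a directed flow graph in Python. The container names are taken from the examples above, but the flow rates are purely illustrative assumptions, not measurements of anything.

```python
# Toy model of the container picture: nodes are idea spaces, weighted
# directed edges are relative flow rates between them. The rates below
# are invented for illustration only.
from collections import defaultdict

flows = defaultdict(dict)

def add_flow(src, dst, rate):
    """Record that ideas flow from src to dst at the given relative rate."""
    flows[src][dst] = rate

add_flow("Anglosphere", "Japan", 8)
add_flow("Japan", "Anglosphere", 3)
add_flow("Anglosphere", "Sweden", 9)
add_flow("Japan", "Sweden", 4)
add_flow("Sweden", "Anglosphere", 1)

def net_flow(a, b):
    """Net flow from a to b: positive means a exports more to b than it imports."""
    return flows[a].get(b, 0) - flows[b].get(a, 0)

print(net_flow("Anglosphere", "Japan"))   # 5: net export into Japan
print(net_flow("Sweden", "Anglosphere"))  # -8: Sweden is a net importer
```

Mapping the real flows would mean estimating such edge weights empirically, which is exactly the open question the paragraph above ends with.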

In the early history of the internet, some people warned that its great danger was that everybody might eventually come to think the same thoughts, and that we would lose the diversity of ideas. This fear has, I think, turned out to be unrealised, at least as long as we still have different languages. But are languages not enough? Do we need to do more to create artificial partitionings? What is the optimal degree of partitioning, and can we concretely map the flows and containers with some degree of precision?

The identity crisis of the internet

The architecture of the Internet is fundamentally decentralized, a fact that continues to impress to this day. The breadth and depth of the sea of applications and uses we have made of it, and its resilience, impress perhaps all the more, because much of our everyday experience tells us that some of the strongest things in society, huge companies and governments for instance, are singular and centralized. I’m not an expert on internet architecture, but my understanding is that the only truly fixed point is the DNS system, which relies on a small set of hardcoded root server IP addresses and on central coordination.

But even though the Internet is built on a decentralized architecture, it also supports applications and services that are highly centralized in their architecture and in their intended use. Google and Facebook are two very famous such applications. On the other extreme are applications that might be called P2P, including notorious file-sharing systems such as BitTorrent, as well as plain email (which was designed for decentralized use but is becoming heavily centralized through services like Gmail).

In recent days there’s been much discussion about Facebook’s role, particularly since it has been taking more and more liberties with the vast amounts of data it holds about its users, scaling back notions of privacy and integrity as it sees fit. Many people are calling for decentralized alternatives to Facebook to emerge, and I suppose people have been calling for decentralized search engines as well for some time.

Much seems to be at stake here. What’s the future direction of the internet? A few giants holding all the data, monopolising certain functions, or a distributed network of peers, creating functionality together? The debate is ideologically charged and could be mapped into a big government/small government discussion, although I think it would be fruitless to do so. What is certain is that radically different applications can be created using the centralized/decentralized models and that it is rarely a case of merely “porting” an app from one architecture to another, the way you port an application from C to Java. On an abstract level, the two models could serve as substrates for the same functionalities (such as social network services), but the concrete implementations would have very different characteristics.

Do we create centralized applications because our legal systems, property rights systems, and so on, have not evolved at the same pace as our infrastructure, so that our tendencies, habits and ideals from a brick-and-mortar world are preserved in the world of fiber and switches, appearing ever more outdated?

In Sweden this debate has been especially pronounced recently, with companies like Flattr firmly on the side of decentralized models. Flattr is trying to be a universal donation system for content on the internet, and the vision behind it is a large number of decentralized creators of “content” (who are themselves also consumers).

I’m not sure which model will win in the long run. I prefer to think that both models have a role to play and that they can coexist nicely. But lately it seems as if the centralized model has had a bit too much momentum. Let’s dig deeper into the decentralizing potential of the internet!

Searching and creating

We distinguish between inventions and discoveries. You can own the intellectual property rights to an invention, but not to a discovery (you can’t patent the discovery of mercury or selenium, for instance). Inventions are meant to be created, and discoveries are meant to be sought for. But sometimes, the line between invention and discovery is blurry.

We cannot own the rights to mathematical structures or theorems, since they follow directly from axioms. Anyone with a mathematical education would come to the same results within the same axiomatic system. The creation of a mathematical theorem can be said to be a search process, hence the term “discovery” and not “invention”.

We can own the rights to music and paintings, since these are considered to be inventions. But isn’t the process that leads to a painting or a piece of music also a search process? Doesn’t the artist search for combinations that work together, in an admittedly very large and continuous search space? Yet this is considered to be creation/synthesis rather than search.

The software developer is, at least sometimes, somewhere in between. A vision of a user interface that interacts with end users in a certain way can perhaps be said to come from the same large, continuous space that music and paintings come from. But given the constraints imposed by such a vision, and by the platform on which the system is to be built, the available libraries, the languages, etc., I would say that the construction of much desktop/consumer software is a search problem. We look for combinations of components that fit the constraints, and once we have decided on a combination, we must connect the pieces together correctly. The space of possible solutions here, at least for someone who follows good design principles, is in essence much smaller than the music/painting search space. Of course there are considerations of taste and style, but these are irrelevant to the compiled product; they are an aid to the programmer.
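The idea of component selection as search can be sketched as a brute-force enumeration over candidate components, keeping only the combinations that satisfy the constraints. All the component names and compatibility rules below are invented purely for illustration.

```python
# Sketch of "software construction as search": enumerate combinations
# of hypothetical components and keep those that satisfy the constraints
# imposed by platform and vision.
from itertools import product

languages = ["Java", "Python", "C"]
gui_toolkits = {"Swing": "Java", "Tkinter": "Python", "GTK": "C"}
databases = {"SQLite": {"Java", "Python", "C"}, "H2": {"Java"}}

def fits(lang, toolkit, db):
    # Constraint 1: the toolkit must be native to the chosen language.
    # Constraint 2: the database must have bindings for that language.
    return gui_toolkits[toolkit] == lang and lang in databases[db]

solutions = [
    (lang, tk, db)
    for lang, tk, db in product(languages, gui_toolkits, databases)
    if fits(lang, tk, db)
]
print(solutions)  # four viable combinations out of eighteen candidates
```

The search space here is small and discrete, which is the point of the contrast with the music/painting case: the constraints prune it down to a handful of workable combinations.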

Artificial intelligence problems are often defined as search problems. But what, precisely, are search problems, and what are “creational” problems? Is it merely a question of the size of the search/design space?
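As a concrete toy instance of the search formulation, here is a breadth-first search for a shortest path through a small, hand-made state space. The graph itself is arbitrary; only the shape of the problem (states, successors, goal test) matters.

```python
# Minimal AI-style search problem: breadth-first search for a shortest
# path through an explicitly given state space. The state graph is a
# made-up example.
from collections import deque

graph = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def bfs(start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("start", "goal"))  # ['start', 'b', 'goal']
```

A “creational” problem, by contrast, would be one where we cannot even enumerate the states or write down the goal test in advance, which may be one way to make the distinction precise.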