A wikipedia of algorithms

Here’s something I’ve wanted to see for some time, but probably don’t have time to work on myself.

It would be nice if there were a Wikipedia-like web site for code and algorithms. Just the common ones to start with, but perhaps more specialised ones over time. Of course, the algorithms should be available in lots of different languages. This would in fact be one of the main points: people could compare good style and see how things should be done in different languages. In addition, there should be an in-browser editor, just like on Wikipedia (but perhaps with syntax highlighting), so people can make changes easily.

Furthermore, there should be unit tests for every algorithm, and these should be user-editable in the same way as the main code. In an ideal world, the web site would automatically run the unit tests every time there’s a change to some algorithm and check in a new version of the code to a versioned repository. People could then trust with reasonable confidence that the code is valid and safe. However, if the system were to be as open as Wikipedia is, such a system wouldn’t work, since users could write unit tests with malicious code. So I suspect volunteers would have to download, inspect, and run the unit tests regularly, and perhaps there would be a meta-moderation system of some kind, allowing senior members to promote changes to the official repository. In the meantime, everybody should be allowed to see and edit changes on the wiki immediately, but they would be marked as “untested” or “unsafe”.
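To make the idea concrete, here's a minimal sketch of what a single wiki entry and its user-editable tests might look like, in Java. The class and method names are my own invention for illustration, not from any existing site:

```java
// Hypothetical wiki entry: "Binary search", Java version.
public class BinarySearch {

    // Returns the index of target in a sorted array, or -1 if it is absent.
    public static int indexOf(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // written this way to avoid int overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    // The user-editable test page: the site would run this automatically
    // after every edit and mark the entry "untested" until it passes.
    public static void main(String[] args) {
        int[] xs = {1, 3, 5, 7, 11};
        if (indexOf(xs, 7) != 3) throw new AssertionError("expected hit at index 3");
        if (indexOf(xs, 4) != -1) throw new AssertionError("expected miss");
        if (indexOf(new int[0], 4) != -1) throw new AssertionError("empty array");
        System.out.println("all tests passed");
    }
}
```

The test page being ordinary code in the same language as the entry is part of the appeal: the same in-browser editor and the same diff/history machinery would serve both.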

The user interface would be very important, since this kind of site needs to be fun and easy to use regularly.

Has this kind of project already been carried out by someone? I can find some things by googling. The Code Wiki appears to once have been a wikipedia of code, but it seems defunct, C# only, and now they’re selling a book with the contents of the site! Algorithm Wiki has many algorithms in different languages, but the user interface is awkward and littered with obstructive advertising, the code is hard to browse, and it doesn’t make for a usable quick reference. They seem to have gotten off to a good start though. Any others?

Edit: Rosetta Code seems to be the most mature and useful such site out there today.

Where is Java going?


Today, Java is one of the most popular programming languages. Introduced in 1995, it rests on a tripod of the language itself, its libraries, and the JVM. In the TIOBE programming language league charts, it has been at the top for as long as the measurements have been made (since 2002), overtaken by C only for a brief period due to measurement irregularities.

Yet not all is Sun-shine in the Java world. Sun Microsystems is about to be taken over by Oracle, pending EU approval. (The EU is really dragging its feet in this matter, but it seems unlikely that it would actually reject the merger.) Larry Ellison has voiced strong support for Java and for Sun’s way of developing software, so maybe this is not really a threat by itself. But how far can the language itself go?

The Java language was carefully designed to be relatively easy to understand and work with. James Gosling, its creator, has called it a blue-collar language, meaning it was designed for industrial, real-world use. In a world where C++ was the de facto standard for OO programming, Java was a big step forward in terms of ease of development, with its lack of pointers and its strong type system – to say nothing of its garbage collection. Many classes of common programming errors were removed altogether. However, in the interests of simplicity and clarity, some tradeoffs were made. The language’s detractors today point to problems such as excessive verbosity, the lack of closures, limited generics, and checked exceptions.
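A small, self-contained illustration of the verbosity and closures critiques: sorting strings by length in Java as it stands requires a full anonymous inner class where a language with closures would need a single expression. (The class and method names here are mine, chosen for the example.)

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortByLength {

    // Sorts the array by string length, in place, and returns it.
    public static String[] byLength(String[] words) {
        // Without closures, a one-line comparison costs six lines of ceremony.
        Arrays.sort(words, new Comparator<String>() {
            public int compare(String a, String b) {
                return a.length() - b.length();
            }
        });
        return words;
    }

    public static void main(String[] args) {
        String[] words = {"kiwi", "fig", "banana"};
        System.out.println(Arrays.toString(byLength(words)));  // [fig, kiwi, banana]
    }
}
```

Scala expresses the same thing as roughly `words.sortBy(_.length)`, which is a large part of why the alternative JVM languages discussed below have attracted so much interest.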

For some time now there have been a number of exciting alternative languages available on the JVM. Clojure is a Lisp dialect. Scala, the only non-Java JVM language I have used extensively, mixes the functional and object-oriented paradigms. Languages like Jython and JRuby exist mainly to bring popular scripting languages, and interoperability with them, to the JVM.

Today it seems as if the JVM and the standardized libraries will be Java’s most prominent legacy. The language itself will not go away for a long time either: considering that many companies still maintain or develop in languages like COBOL and Fortran, we will probably be maintaining Java code 30 years from now (what a sad thought!). But newer and more modern JVM languages will probably take turns being number one, and the JVM and the libraries guarantee that we will be able to mix them relatively easily – unless they stray too far from the standard with their custom features.

So in hindsight, developing this intermediate layer – the virtual machine – and disseminating it so widely was a stroke of genius. Will future programming models give us even more standardized middle layers, and not just one?

Meanwhile, there’s a lot of debate about the process being used to shape and define Java. For a long time, Sun has run something called the Java Community Process, JCP, which was supposed to ensure openness. Some people proclaim that the openness has ended. To take one example, very recently Sun announced that there would be support for closures in Java 7, after first announcing that there would be none. The process by which this decision was made has been described as anything but a community effort. Some aspects of Java are definitely up in the air these days.

Abundance and the culture of thrift

Tiny fish

For a long time, the level of comfort technology affords us has risen persistently, and this trend shows no signs of slowing down. In the long run, one of two things would have to happen: either we reach a point where some fundamental barrier prevents us from extracting or converting certain natural resources beyond a certain rate, and this becomes a hard constraint on humanity for all time, or physical matter ends up under our complete control. In the latter scenario, which I don’t view as unlikely, we’d be able to convert trash into useful things at a whim, for instance.

This scenario is sometimes referred to as an age of abundance. It may have a large intersection with the singularity, an idea first championed in 1993 by Vernor Vinge, or it may be a consequence or a necessary prerequisite of it. For now, let us focus on the economic aspect of abundance only.

If these things come to pass, one of the fundamental assumptions of classical economics – scarcity – would be contradicted. I would suggest that we are culturally unprepared for this kind of world.

As countries’ economic productivity increases, we are faced with the choice of whether to work less and enjoy the same standard of living, or work as much and enjoy a higher standard of living. My understanding is that people have always chosen the latter.

In The Protestant Ethic and the Spirit of Capitalism, Max Weber puts forth the view that the development of capitalism in Europe was largely shaped by Protestant values, particularly Calvinist ones. Even though many Europeans today consider themselves secular, it is clear that this Christian legacy has left a big mark on contemporary European culture. Simply put, many people only feel proud when they work and feel that they serve a useful purpose to their country. This is why they cannot choose to work less.

In an era of abundance, people would not be needed to carry out most tasks. If they insisted on carrying them out anyway, they would have to know that they were being costly and useless, which would deprive them of any enjoyment – unless we deluded them!

I see a few ways out of this situation.

  • Craftsmanship is considered a uniquely human and artistic activity, and people who turn to arts and crafts can continue to feel that they are important.
  • Some work is fundamentally centered on human interaction and human meetings – for instance caregiving, psychotherapy, hairdressing and leadership. These roles are unlikely to become useless even as technology advances (purely materially).
  • Culture would have to change, allowing people to rest and feel valuable even without contributing to their society’s affluence. Whether this is possible is an open question.

I should point out that the contribution-as-pride mindset is not just a feature of European Protestant cultures; it also seems to be one of Japanese culture – though for different reasons – and probably of many other countries as well.

Presentations: one lump of sugar, or two?

A glimpse of the monomorphic life

Recently I watched a friend give a presentation on a research topic he’s been working on for years. I found the presentation to be fascinating, and the clearest explanation of his work that I have seen to date. But I felt compelled to criticise him on one point.

In order to lighten up the speech a bit, he had chosen to include characters from a popular science fiction movie on every other slide, using them to explain the results he had attained in theoretical computer science. The link between the characters and the results was nearly non-existent; the pictures were clearly only there to lighten the presentation up a bit. I had been irritated by people’s tendency to do these things for some time, so I decided to point it out. One extreme example of this tendency gone too far occurred recently in a presentation about the database CouchDB – readers can Google for the slides to see the full controversy, though they are somewhat NSFW. (I don’t want to make moral judgments in this context, but I think the academic/professional domain can be kept free of these controversies. Save those battles for where they belong!)

So there’s a tendency for people to sugarcoat their presentation topic sometimes. The arguments in favor of doing this are that it can lighten an intrinsically heavy subject a lot, and save people from nearly falling asleep from compounded boredom (such as a conference where 30+ presentations about results in theoretical computer science are given). Essentially it mixes in some sugar with the sour stuff, yielding what might be called a sweet and sour talk. The medicine becomes easier to swallow.

But isn’t there something essentially contradictory about mixing contemporary pop culture so freely with results that, in this case, concerned essentially pure mathematical theory? For one thing, it takes the essentially perennial and debases it, linking it up with images that are hopelessly stuck in a short timeframe. For another, it can be seen as a vote of no confidence in your own ideas – as saying, “I know this is boring and useless to you, so please bear with me, and look at these amusing pictures until it’s over.” I’m not a good presenter, but in order to become one, I think I need to have sufficient confidence in my ideas to present them unsweetened unless the circumstances are extreme. I need to make my audience see the value in my ideas. It also makes a difference whether the sugar coating is the kind that helps people get into your idea, or the kind that just distracts (as in this case).

My view is therefore that one should use one’s lumps of sugar with restraint. One situation where they are called for is when the audience necessarily contains some people on the level you need to be talking to, and many others who are not on that level and cannot possibly be brought up to it. There, the sugar might keep the second group somewhat alive and alert. And this is in fact the kind of situation my friend originally wrote the presentation for. So, no scorn on him – just a word of warning to the general public!

Fact and narrative


Philosophers have long debated whether we can perceive reality in an objective manner, or if there is a multitude of subjective perceptions. I am not qualified to enter this debate on an academic level, but I will offer some thoughts from my current vantage point.

Sensory impressions can probably be said to be objective. I have no reason to contest this. Probably, there’s a certain genetic variation in how sensitive our sensory organs are, e.g. degrees of color blindness or sensitivity to high frequencies, but this can be compensated for technologically; with hearing aids, microscopes and various kinds of sensors we can expand our sensory range far beyond what we are born with.

It’s quite likely that when my friend and I look at an object, we will notice different things about it and walk away with different first impressions. If our impressions contradict each other, we can return to the object and try to establish who was right. So these contradictions can be resolved by going back to the source.

We tell ourselves narratives about what we observe. Most abstractions are such narratives. For instance, I have never seen a perfect circle or a perfect line, since such things don’t exist, but I have seen very good approximations of such things in the world. Only by going up extremely close can I see that my perception was an approximation. But even though I know this, I will remember my perceptions in terms of these approximations since it’s the only practical thing to do. However, I can still “go back to the source” and establish the validity of my impression.

So with first-hand perceptions, and with concepts built from compounded first-hand perceptions, there’s nothing really contradicting an objective reality or suggesting that such a reality doesn’t exist. But many objects of vital importance in society revolve around narratives that cannot conveniently be examined in terms of first-hand sensory impressions: impressions of people, political platforms and ideologies, the appreciation of art (which, even though it can be reduced to sensory impressions, seems supremely hard to explain in those terms), and so on. For this reason, I think the narratives most likely to be told in these fields form a subjective reality that is highly unlikely to be disproven or reduced to sensory impressions. By their very nature, precise communication between spectators is impossible, and people are likely to carry wildly contradictory stories in their heads.

And in such a world, whether or not we can agree on the objectivity of basic sensory impressions, subjective impressions (narratives that will not be deconstructed or falsified readily) will carry great importance. In fact, we have a basic drive to construct these narratives in order to deal with the complexity of everything we perceive. This might change if we in the future can create a perfect mathematical model of the human mind. In this case, maybe some problematic items such as appreciation of art or the meaning of an ideology might be reduced to an objective and verifiable-from-sensory-impressions concept.

It would be interesting to explore the gray zone between concepts that we easily perceive objectively and those that we easily perceive subjectively. Are there ideas whose validity can be reduced to sensory impressions, but only with great effort, so that people do not usually do so?

(This post is partly inspired by recent posts by Carl Svanberg, who blogs about objectivism in Swedish. My philosophical views are still in development, and I don’t want to side with one -ism camp or the other as of yet.)