Why Scala? (2) Compact syntax applied to probabilistic actions

As a little fun project, I developed some probabilistic cellular automata with Scala and very basic AWT graphics. I continue to become more proficient with Scala, and it feels increasingly natural to use. During this exercise I came up with something that I thought was particularly elegant, and that I am pretty sure would have been a lot less readable in Java. I will just reproduce the interesting bits. The basic idea is that I want cells on a 2D grid to take certain redrawing actions according to given probabilities. First, I define this utility function:

object Util {
	// sum of the probabilities (the second tuple elements) should be at most 1.0
	def multiAction(acts: Seq[(() => Unit, Float)]): Unit = {
		val r = Math.random
		var soFar = 0.0f
		var acted = false
		for ((act, prob) <- acts) {
			soFar += prob
			if (soFar > r && !acted) {
				acted = true
				act()
			}
		}
	}
}

The idea here is that we supply a list of tuples. The first element of each tuple is a function, and the second element is a float between 0 and 1 giving the probability that that function is evaluated. At most one of the supplied functions is actually evaluated; if the probabilities sum to less than 1, there is a chance that none of them is.

This is how I put it to use (excerpt from another class):

  Util.multiAction(List(
    (() => {
       cellsWrite(x, y-1) = cellsRead(x, y-1) + 0.01f;
       cellsWrite(x, y+1) = cellsRead(x, y+1) + 0.01f
      }, 0.2f),
    (() => {
       cellsWrite(x+1, y) = cellsRead(x+1, y) + 0.01f;
       cellsWrite(x-1, y) = cellsRead(x-1, y) + 0.01f
      }, 0.1f)))

Once you take in the braces, it is actually quite simple. We have two zero-parameter anonymous functions of two statements each. The first has probability 0.2 and the second 0.1, meaning there is a 70% probability that nothing happens. We can also build an arbitrarily long list of such functions on the fly.
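To convince ourselves that the probabilities come out right, we can run multiAction many times and tally how often each branch fires. This is a self-contained sketch of my own (the object and method names here are illustrative, not from the original code):

```scala
object MultiActionDemo {
	// the same utility as above, repeated so this sketch compiles on its own
	def multiAction(acts: Seq[(() => Unit, Float)]): Unit = {
		val r = Math.random
		var soFar = 0.0f
		var acted = false
		for ((act, prob) <- acts) {
			soFar += prob
			if (soFar > r && !acted) {
				acted = true
				act()
			}
		}
	}

	// run `trials` rounds of a 0.2/0.1 action list; return (firstCount, secondCount)
	def tally(trials: Int): (Int, Int) = {
		var first, second = 0
		for (_ <- 1 to trials)
			multiAction(Seq(
				(() => first += 1, 0.2f),
				(() => second += 1, 0.1f)))
		(first, second)
	}

	def main(args: Array[String]): Unit = {
		val trials = 100000
		val (first, second) = tally(trials)
		// over many trials the shares should approach 0.2, 0.1 and 0.7
		println(f"first:   ${first.toDouble / trials}%.3f")
		println(f"second:  ${second.toDouble / trials}%.3f")
		println(f"neither: ${(trials - first - second).toDouble / trials}%.3f")
	}
}
```

With 100,000 trials the printed shares land close to 0.2, 0.1 and 0.7, confirming that exactly one branch (or none) fires per call.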

To the best of my knowledge, the only way of reproducing this flexibility in Java would be to create anonymous inner classes and put them inside an array. Certainly that would be quite a bit more verbose than this.

The absurdity of flying

The first time I found myself on board an airplane, I was nine or ten years old. At the time, travelling by myself to visit my aunt, who lived on a remote island, was a big experience. In particular, I think, the sensation that the environment was managed in the extreme made a big impression on me. The temperatures and winds outside my seat window were a hostile element, but human technological achievement successfully shielded me from these dangers. I could take part in the collective human pride in this affirmation of technological ability.

Much later, when I was a student in London, I was subject to budget constraints and went for the cheapest flight whenever possible. Accordingly I found myself flying quite a lot with an Irish airline, Ryanair. This enterprise is marked by its grisly yellow and dark blue colour scheme and its continual experimentation with lowered standards of flight, comfort and safety, all for the sake of lower prices. For a one- or two-hour flight between England and Sweden it was perfectly acceptable.

Recently I have been flying between Japan and Sweden quite a bit. The intercontinental flight can last more than ten hours, and takes on quite a different character from short flights. Some of the essential absurdities of any flight journey become increasingly difficult to ignore during this time period.

Firstly, there is the fact that the airplane in which more than a hundred passengers ride is a sealed-off, highly fragile, mobile cross-section of society and a habitat for human beings. Airplanes need continuous replacement, draining and replenishment of food, waste, excrement, water, fuel and electricity. The air pressure and temperature inside the cabin are artificially maintained. The similarities with an imagined future biodome on the moon are many. What happens if an airplane has to land on a tiny island in the middle of the ocean and doesn't have enough fuel to fly back, or suffers some kind of technical problem? All of the buffered flows which the airplane must constantly replenish would be interrupted, and our very lives are hooked up to those flows.

In addition, hundreds of people are placed very close to each other for an extended period of time with minimal lateral separation (although there is some longitudinal separation in the form of seat rows). A certain neuroticism is provoked. We become hyper-aware of our neighbours and what they do, what they talk about, how they dress and what habits they have. We try our best not to notice. And this lattice, this packing of people, is surveyed from above by the panoptic eyes of the flight stewards and hostesses. Observation not only from above but also from peers becomes essential in maintaining order in a closed-off society where governmental violence cannot reach and the usual norms might easily be violated. Security breaches are to the greatest possible extent preempted by the pre-flight security theatre, and what remains of risk is contained by observation and observability effects.

This pressurised air and pressurised micro-society is spiced up, or muddled, slightly by the increasingly confused roles of the stewards and hostesses. In the jet set era, the air hostess was an object of attraction, the apple of the businessman's eye, an icon of liberty who had authority but no doubt also a certain intoxicating effect which helped to pacify. Today she is more clearly authoritarian, but the old role has not quite been erased from people's minds. Something oedipal threatens to take place. Is this person who serves me food a nurse, a security guard, a mother as well as a possible lover? The neuroticism of the family extended into international airspace. All authority figures merged into one. Male stewards only slightly less confusing.

Fortunately airlines are very happy to serve up small doses of wine and beer to take the edge off the situation. Flying is absurd, but for the moment we have no other way of getting around.

Continuous computing

Disclaimer: I haven’t checked academic sources for any of the statements made in this post – all of it is speculation which may be affirmed or rejected by existing literature.

Existing computing hardware and software are based on a discrete model: the Church-Turing model. The machinery is built on digital logic, and formalisms such as the lambda calculus and Turing machines are likewise essentially discrete. But what if we were to attempt to build some kind of continuous, or non-discrete, computer?

Digital logic gives us some unique capabilities that do not seem to exist in the real world, for instance: the ability to read a value without altering it, the ability to copy a value without altering it, the ability to test for equivalence and receive a yes or no as an answer. (The whole idea of “equality” is digital/platonic in nature.)

It will not do to simulate a continuous computer in software, not even with arbitrary precision arithmetic. It seems that some properties that a continuous computer might have would be impossible to simulate on discrete hardware. At least, we would need some kind of non-digital hardware extension that produces the continuous operations.

The discrete, digital model may seem like an abstract ideal, disjoint from reality. Yet continuous, real numbers are at least as much of an ideal. Between any two real numbers, no matter how close they are, there are by definition infinitely many intermediate real numbers. It seems implausible that we could find this infinity in the real world.

Is the real world continuous or discrete? I don’t know, and last time I asked one of my friends who knows physics, the answer I got was too complicated to be reduced to yes or no, or even to “yes, mostly” or “no, mostly”, if memory serves.

What properties might a continuous computer have? Depending on how it is designed, maybe some or all of the following:

  • If we compute a value twice, there is a level of precision at which the results appear different.
  • In fact, there is no way to establish the absolute equivalence of two values; equality is reduced to a matter of precision and generalisation (as it already is, in practice, for floating point arithmetic on today's computers).
  • The simple act of reading a value might alter it slightly.
  • The more steps a value passes through (i.e. the more times it is copied), the more it deviates from the original value.
  • The ability to truly move a value, as opposed to copying and deleting it, might become important in mitigating the above effect (digital computers cannot truly move values).
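As a toy illustration only (my own sketch, not drawn from any literature), these properties can be loosely mimicked by a cell whose stored value is perturbed every time it is read or copied. The class and method names are hypothetical:

```scala
import scala.util.Random

// A cell whose value is disturbed a little on every read, loosely mimicking
// the listed properties. The noise source is seeded for reproducibility.
class NoisyCell(private var value: Double, noise: Double, rng: Random) {
	// Reading alters the value slightly (third property).
	def read(): Double = {
		value += (rng.nextDouble() - 0.5) * noise
		value
	}
	// A copy goes through a read, so each generation drifts further (fourth property).
	def copy(): NoisyCell = new NoisyCell(read(), noise, rng)
	// A true "move" transfers the state without a disturbing read (fifth property).
	def move(): NoisyCell = {
		val moved = new NoisyCell(value, noise, rng)
		value = Double.NaN // the source no longer holds the value
		moved
	}
}

object NoisyDemo {
	def main(args: Array[String]): Unit = {
		val cell = new NoisyCell(1.0, 1e-9, new Random(42))
		// two reads of the "same" value differ at some precision (first two properties)
		println(cell.read() == cell.read())
	}
}
```

Of course, this only dramatises the properties on discrete hardware; per the argument above, it cannot capture whatever a genuinely continuous device would actually do.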

We must also ask: how do we model continuous computing mathematically? Is it enough to allow numbers with arbitrary range and precision, use standard logic, and somehow simulate the destructive effects of computation? (Probably insufficient.) Could we generalise the lambda calculus and Turing machines to abandon their inherent discreteness, ending up with a more general formalism?

If we accept the above list of properties, then even if we concede that we cannot accurately simulate a continuous computer on discrete hardware, maybe we can build a simulator that gives us an idea of how a real device might behave. But we would have no idea what we were missing.

Motivation? The main motivation is that it is interesting, i.e. it promises to point us in new and powerful directions, laden with potential discoveries. If something more concrete is needed: intuitively, we should be able to bridge computer software and the physical world much more easily with this kind of system, bringing benefits to UIs, simulation and modelling, etc.

Edit: After writing the above, I found out that people have investigated the idea of analog computers, which intersects with the idea of the (perhaps poorly named) continuous computing described in this post. The image at the start of this post is a diagram of the Norden bombsight, an optical/mechanical computer used in WW2.


A characteristic of a naive approach to the digital world is the tendency to record and store everything. JustBecauseWeCan. Every photo, every e-mail, every song, every web site ever visited, every acquaintance who ever added you as a friend on some social network, every message you ever received. Somebody, probably an author, termed this the “database complex”, I think. A projection of a certain greedy tendency to gather and collect things. This does have certain benefits when coupled with a good search function. Every now and then I find myself having to use some information that only exists in an e-mail that I received 6 months ago or so.

A more advanced approach is selective forgetfulness. Humans cannot go on with their lives if they do not forget memories and experiences that are irrelevant and useless. They become unable to set and act on new targets. I think that a slightly less naive digital life would contain a measure of deletion. Deletion of files, old e-mails that have probably become useless, “friends” on social networks who are mere acquaintances or even less, and so on. Taking away the old makes space for the new. It can be especially powerful to see the number of files in your home directory reduced from 50 to 5. A lot of confusion and ambivalence is immediately removed.

Part of taking the next step deeper into the digital age should be deciding, each for ourselves, what our personal thresholds and principles of deletion are. What should be deleted, when and why? In our brains, evolution has managed this for us. Now we must manage it ourselves.