Tag: possibly novel


Towards an understanding of will

January 11th, 2012 — 11:05am

Will has the potential to be turned into a fundamental concept through which ethics, epistemology, art, life and politics might be understood. How can we define the idea of will?

I’m sure I’ll find a lot of answers to this in the philosophical literature in time (maybe I should read Schopenhauer). But what I came up with myself, as a preliminary definition, is this:

A system can be said to have will if it makes progress towards some goal state in a wide array of circumstances, circumnavigating obstacles (including other systems with will) to some degree.

Here, progress doesn’t need to be an achievement – progress in the form of maintaining some state should also qualify.

This definition is dependent on definitions of states, progress, circumstances and systems. An intuitive conception of all of these should suffice for the time being.
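As a toy illustration of the definition: the sketch below tests whether a one-dimensional system ends up near a goal state across varied starting circumstances and under interference. Everything here (the has_will function, its thresholds, and the two example systems) is my own invention, not part of the definition:

```python
import random

random.seed(0)  # for reproducibility

def has_will(step, goal, start, trials=20, horizon=100):
    """Crude test of the definition: from a wide array of starting
    circumstances, and despite interference, does the system end up
    near its goal state? (The thresholds here are arbitrary.)"""
    successes = 0
    for _ in range(trials):
        state = start + random.uniform(-10, 10)   # varied circumstances
        for _ in range(horizon):
            state = step(state)                   # the system acts
            state += random.uniform(-0.5, 0.5)    # obstacles / interference
        if abs(state - goal) < 2.0:
            successes += 1
    return successes / trials

goal = 20.0
willful = lambda s: s + 0.3 * (goal - s)  # always nudges toward the goal
drifter = lambda s: s                     # no will: only gets pushed around

print(has_will(willful, goal, start=0.0), has_will(drifter, goal, start=0.0))
```

The thermostat-like system makes progress towards 20 from almost anywhere and in spite of the noise, so it should score near 1.0; the drifter should score near 0.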

One of my friends suggested that instead of trying to define will as an intrinsic property of something, it should instead just be understood as a human heuristic, a cognitive tool that we use as a lens through which to view the world. These two views are not incompatible, since the question here becomes: what is the minimal set of attributes that something must have for us to view it through the conceptual lens of “will”?

3 comments » | Philosophy

Continuous computing

July 10th, 2010 — 5:45pm

Disclaimer: I haven’t checked academic sources for any of the statements made in this post – all of it is speculation which may be affirmed or rejected by existing literature.

Existing computing hardware and software are based on a discrete model: the Church-Turing model. The machinery is built on digital logic, and formalisms such as the lambda calculus and Turing machines are also essentially discrete. But what if we were to attempt to build some kind of continuous, or non-discrete, computer?

Digital logic gives us some unique capabilities that do not seem to exist in the real world, for instance: the ability to read a value without altering it, the ability to copy a value without altering it, the ability to test for equivalence and receive a yes or no as an answer. (The whole idea of “equality” is digital/platonic in nature.)

It will not do to simulate a continuous computer in software, not even with arbitrary precision arithmetic. It seems that some properties that a continuous computer might have would be impossible to simulate on discrete hardware. At least, we would need some kind of non-digital hardware extension that produces the continuous operations.

The discrete, digital model may seem like an abstract ideal, disjoint from reality. Yet continuous, real numbers are at least as much of an ideal. Between any two real numbers, no matter how close they are, there are by definition infinitely many intermediate real numbers. It seems implausible that we could find this infinity in the real world.
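By contrast, the numbers a digital machine actually works with are discrete: between a floating-point value and the next representable one there is nothing at all. In Python (3.9 or later, for math.nextafter):

```python
import math
import sys

x = 1.0
next_x = math.nextafter(x, 2.0)  # the next representable double after 1.0
print(next_x - x)                # 2.220446049250313e-16
print(next_x - x == sys.float_info.epsilon)  # True: the gap is machine epsilon
```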

Is the real world continuous or discrete? I don’t know, and last time I asked one of my friends who knows physics, the answer I got was too complicated to be reduced to yes or no, or even to “yes, mostly” or “no, mostly”, if memory serves.

What properties might a continuous computer have? Depending on how it is designed, maybe some or all of the following:

  • If we compute a value twice, there would be a level of precision at which the results appear different.
  • In fact, there is no way to establish the absolute equivalence of two values; equality is reduced to a matter of precision and generalisation (as it already is, in practice, for floating-point arithmetic on today’s computers).
  • The simple act of reading a value might alter it slightly.
  • The more steps a value passes through (i.e. the greater the number of times it is copied), the more it deviates from the original.
  • The ability to truly move a value, as opposed to merely copying and deleting it, might become important in order to mitigate the above effect (digital computers cannot truly move values).
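As a caricature of these properties (with the caveat, argued above, that a discrete simulation necessarily misses something), one could sketch a "noisy cell" in ordinary software. Everything here (the class, the noise levels, the precision threshold) is invented for illustration:

```python
import random

random.seed(0)  # for reproducibility

class NoisyCell:
    """A toy 'continuous' storage cell: reading perturbs, copying drifts."""

    def __init__(self, value, noise=1e-6):
        self._value = value
        self._noise = noise

    def read(self):
        # The simple act of reading alters the value slightly.
        self._value += random.gauss(0, self._noise)
        return self._value

    def copy(self):
        # Each copy deviates a little more from the original.
        return NoisyCell(self.read() + random.gauss(0, self._noise), self._noise)

    def move(self):
        # A true move: the value leaves; no degraded copy stays behind.
        value, self._value = self._value, None
        return NoisyCell(value, self._noise)

def close(a, b, precision=1e-3):
    # Equality is reduced to a matter of precision.
    return abs(a - b) < precision

chain = NoisyCell(3.141592653589793)
for _ in range(1000):  # pass the value through a long chain of copies
    chain = chain.copy()

print(close(chain.read(), 3.141592653589793))   # True: still close, coarsely
print(chain.read() == 3.141592653589793)        # False: never exactly equal
```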

We must also ask the question: how do we model continuous computing mathematically? Is it enough to allow numbers with arbitrary range and precision, use standard logic, and somehow simulate the destructive effects of computation? (Probably not.) Could we generalise the lambda calculus and Turing machines, abandoning their inherent discreteness, to end up with a more general formalism?

If we accept the above list of properties, even if we concede that we cannot accurately simulate a continuous computer on discrete hardware, maybe we can build a simulator that gives us an idea of how a real device might behave. But we would have no idea what we’re missing.

Motivation? The main motivation is that it is interesting, i.e. it promises to point us in new and powerful directions, laden with potential discoveries. If something more concrete is needed: intuitively, we should be able to bridge computer software and the physical world much more easily with this kind of system, bringing benefits to UIs, simulation and modelling, etc.

Edit: After writing the above, I found out that people have investigated the idea of analog computers, which intersects with the idea of the (perhaps poorly named) continuous computing described in this post. The image at the start of this post is a diagram of the Norden bombsight, an optical/mechanical computer used in WW2.

3 comments » | Computer science

Making playtime useful with color filling games

February 1st, 2010 — 9:44am
Flood-it, a color filling game. This version was made by Lab Pixies for the iPhone, but many others exist.

There’s a veritable torrent of little games constantly being released for the iPhone. One of the more likable ones is Flood-It, which I’ve been playing recently. The premise is extremely simple: you start off with a grid divided into squares of different, randomized colors. You are given a tool that works a bit like the bucket fill in a picture editor. At each turn, the player chooses a color to fill the grid with, starting from the upper left corner. The monochromatic area slowly grows, and the aim is to fill the entire grid with a single color within a limited number of turns.
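For concreteness, a single Flood-It move is just a flood fill from the corner. A minimal sketch (the list-of-lists board representation is my own choice, not from any particular implementation):

```python
def flood_move(grid, new_color):
    """One Flood-It move: recolor the connected region that contains the
    upper-left corner. Adjacent cells that already have new_color are
    thereby merged into the region for subsequent moves."""
    rows, cols = len(grid), len(grid[0])
    old_color = grid[0][0]
    if new_color == old_color:
        return grid
    stack = [(0, 0)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old_color:
            grid[r][c] = new_color
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

board = [[1, 2, 2],
         [1, 1, 3],
         [3, 2, 1]]
flood_move(board, 2)  # fill the corner region with color 2
print(board[0])       # [2, 2, 2]
```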

A recent analysis showed that finding an optimal solution to games like Flood-It is an NP-hard problem. In addition, deciding whether a given board can be solved within n moves is NP-complete. The analysis relies on a reduction from the SCS problem (shortest common superstring) to Flood-It. (It’s important to note that what is NP-complete is deciding whether a particular board can be solved within a certain number of moves, not merely completing the board; completing it, without regard to optimality, can be done in polynomial time.) For those who need a summary, Communications of the ACM had an excellent review of the state of the P/NP problem in September last year.
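To illustrate the easy direction: a greedy player that always picks the color absorbing the most cells finishes any board in polynomial time, though not necessarily in the fewest moves. A toy sketch (all function names are my own):

```python
def region(grid):
    """Cells connected to the upper-left corner and sharing its color."""
    rows, cols, color = len(grid), len(grid[0]), grid[0][0]
    seen, stack = set(), [(0, 0)]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
            continue
        if grid[r][c] != color:
            continue
        seen.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return seen

def apply_color(grid, color):
    """Return a new board with the corner region recolored."""
    new = [row[:] for row in grid]
    for r, c in region(grid):
        new[r][c] = color
    return new

def greedy_solve(grid):
    """Play until the board is monochrome, always choosing the color that
    grows the corner region the most. Polynomial time, but not optimal."""
    moves, total = [], len(grid) * len(grid[0])
    while len(region(grid)) < total:
        candidates = {cell for row in grid for cell in row} - {grid[0][0]}
        best = max(candidates, key=lambda col: len(region(apply_color(grid, col))))
        grid = apply_color(grid, best)
        moves.append(best)
    return moves

board = [[1, 2, 2],
         [1, 1, 3],
         [3, 2, 1]]
print(greedy_solve(board))  # [2, 3, 1]
```

Each greedy move grows the corner region by at least one cell, so the loop terminates after at most as many moves as there are cells.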

For an NP-hard problem H, there exists a polynomial time reduction of any problem in NP to H, meaning that if we can solve H in P-time, we can solve any problem in NP in P-time. Many optimization problems in society rely on approximate solutions to difficult problems: routing traffic, assembling DNA sequences from partial subsequences, mathematical theorem proving… On the hypothesis that evolution has turned people into efficient solvers of hard problems (i.e. we have good heuristics in our brains, from birth and from experience), we ought to pay people to play these games on their phones, but map real problems into game instances, so that people effectively work while they’re playing. We ought to design games that act as front-ends for real combinatorial problems.

A computer game, as we understand it, is essentially a very smooth learning curve, and if we only “played” very tricky instances of combinatorial problems, the game would probably present too much of a barrier to new players. So maybe the best way of executing this kind of scheme would be for the majority of game instances to represent not real problems but mere training, or verification of already solved problems — but every once in a while, a real problem pops up. The player should still get paid, though.

A double benefit would be blurring the line between work time and play time, between what is useful and what is useless — I think this line is often artificially constructed. Has technology ever before given us the possibility to literally turn work into play?

Acknowledgements. I am indebted to Christian Sommer for showing me the complexity analysis of Flood-it.

The Flood-It game, easy difficulty setting, with the player having made some progress.

2 comments » | Computer science
