Continuous computing

Disclaimer: I haven’t checked academic sources for any of the statements made in this post – all of it is speculation which may be affirmed or rejected by existing literature.

Existing computing hardware and software are based on a discrete model: the Church-Turing model. The machinery is built on digital logic, and formalisms such as the lambda calculus and Turing machines are also essentially discrete. But what if we were to attempt to build some kind of continuous, or non-discrete, computer?

Digital logic gives us some unique capabilities that do not seem to exist in the real world, for instance: the ability to read a value without altering it, the ability to copy a value without altering it, and the ability to test two values for equivalence and receive a definite yes or no as an answer. (The whole idea of “equality” is digital/platonic in nature.)

It will not do to simulate a continuous computer in software, not even with arbitrary-precision arithmetic. It seems that some of the properties a continuous computer might have would be impossible to simulate on discrete hardware. At the very least, we would need some kind of non-digital hardware extension that provides the continuous operations.

The discrete, digital model may seem like an abstract ideal, disjoint from reality. Yet continuous, real numbers are at least as much of an ideal. Between any two real numbers, no matter how close they are, there are by definition infinitely many intermediate real numbers. It seems implausible that we could find this infinity in the real world.

Is the real world continuous or discrete? I don’t know, and last time I asked one of my friends who knows physics, the answer I got was too complicated to be reduced to yes or no, or even to “yes, mostly” or “no, mostly”, if memory serves.

What properties might a continuous computer have? Depending on how it is designed, maybe some or all of the following:

  • If we compute a value twice, there is some level of precision at which the two results appear different.
  • In fact, there is no way to establish the absolute equivalence of two values; equality is reduced to a matter of precision and generalisation (as it already is, in practice, for floating-point arithmetic on today’s computers; see the sketch after this list).
  • The simple act of reading a value might alter it slightly.
  • The more steps a value passes through (i.e. the greater the number of times it is copied), the more it deviates from the original value.
  • The ability to truly move a value, as opposed to merely copying and deleting it, might become important in mitigating the above effect (digital computers cannot truly move values).
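
For comparison, here is a minimal Python sketch of the second point as it already applies to floating-point arithmetic today (the tolerance is arbitrary and purely for illustration):

    import math

    a = 0.1 + 0.2
    b = 0.3
    print(a == b)                             # False: exact equality fails for floats
    print(math.isclose(a, b, rel_tol=1e-9))   # True: equality only up to a chosen precision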

We must also ask the question: how do we model continuous computing mathematically? Is it enough to allow for numbers with arbitrary range and precision and use standard logic, simulating the destructive effects of computation somehow? (Probably insufficient.) Could we generalise the lambda calculus and Turing machines, abandon their inherent discreteness, and end up with a more general formalism?

If we accept the above list of properties, even if we concede that we cannot accurately simulate a continuous computer on discrete hardware, maybe we can build a simulator that gives us an idea of how a real device might behave. But we would have no idea what we’re missing.
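
As a rough illustration only (not an accurate simulation, for the reasons above), here is a toy Python sketch in which every read or copy of a value adds a small random perturbation; the class name and noise scale are invented for this example:

    import random

    class ContinuousValue:
        """Toy model of a value that degrades when read or copied."""

        NOISE = 1e-6  # arbitrary perturbation scale, for illustration only

        def __init__(self, x):
            self._x = x

        def read(self):
            # Reading the value disturbs it slightly.
            self._x += random.gauss(0, self.NOISE)
            return self._x

        def copy(self):
            # Every copy deviates a little further from its source.
            return ContinuousValue(self._x + random.gauss(0, self.NOISE))

        def move(self):
            # A true "move" hands the value over without further degradation.
            x, self._x = self._x, None
            return ContinuousValue(x)

    v = ContinuousValue(3.141592653589793)
    copies = [v]
    for _ in range(10):
        copies.append(copies[-1].copy())
    print(copies[0].read(), copies[-1].read())  # the two readings differ slightly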

Motivation? The main motivation is that it is interesting, i.e. it promises to point us in new and powerful directions, laden with potential discoveries. If something more concrete is needed: intuitively, we should be able to bridge computer software and the physical world much more easily with this kind of system, bringing benefits to UIs, simulation and modelling, etc.

Edit: After writing the above, I found out that people have investigated the idea of analog computers, which intersects with the idea of the (perhaps poorly named) continuous computing described in this post. The image at the start of this post is a diagram of the Norden bombsight, an optical/mechanical computer used in WW2.

Comments (3)

  1. soliptic wrote:

    Firstly, I must say that I really enjoy reading your blog. I found it with a Google search for something I can’t presently remember, but I have been checking for new entries now and again for a couple of months. I even commented on your blog post exploring computer interfaces and their future.

    This entry is particularly interesting to me, since I’ve often thought about the idea of a continuous computer. I don’t have the knowledge of formal computational theory that you do, however, so it was much more interesting to read your musings than to mull over my own. As such, there are a couple of issues I’d like to get your input on.

    Whenever I envision an analogue machine (of whose existence I was also unaware until you pointed it out), I imagine its implementation could be possible by directly mapping voltages to mathematical values, much as digital machines do. For instance, an electronic component could hold a particular voltage between m and n, with m mapped to 1 and n mapped to 0, and every value in between interpreted appropriately as a value between 0 and 1. Of course, I’m assuming voltage values could be accumulated continuously, as they are assumed to be in elementary circuit calculations.
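
    For concreteness, a rough Python sketch of the mapping I have in mind (the endpoint voltages below are made up, and the interpolation is assumed to be linear):

        def voltage_to_value(v, m, n):
            """Map a voltage v linearly so that m -> 1 and n -> 0."""
            return (v - n) / (m - n)

        print(voltage_to_value(2.5, m=5.0, n=0.0))  # 0.5, halfway between the endpoints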

    I’ve also wondered whether the universe is continuous (as have many scientists and philosophers throughout the course of human thought). Can time and space be subdivided into an infinite number of fragments? I’m sure theorists have modern mathematical models that elucidate the answer far more clearly than I could comprehend. There’s a popular science book I read recently called “Programming the Universe” by Seth Lloyd, in which he proposes an idea derived from information theory: that the universe is essentially a computer in itself, and that from its humble origins as a singularity (a single binary value) it has been undergoing a process of “computational explosion,” requiring more and more precision to represent itself.

    Finally, you show that a continuous machine creates a problem for equality comparison. But is this necessarily so? It’s true that the way equality is determined in our current computers is to compare two values exhaustively, piece by discrete piece, but isn’t it possible that there’s another way to state equality? For instance, we could define pi as a particular continuous value in our continuous machine. We, as humans, certainly don’t need to compare the result of “f(x) operator g(x)” to each digit of pi forever to understand that such an equality exists; we simply define it as such and derive the equality by deduction.

    Posted 21 Jul 2010 at 7:22 am
  2. johan wrote:

    Hi Soliptic,
    (I moved your comment to this post since I assumed it was intended to be here. Somehow it ended up on the “Deletion” post instead.)
    Thanks for your comment, and for the book tip.
    The problem with your approach to equality would be identifying a particular continuous value. Maybe I misunderstand your suggestion, but I’ll just give my comments on what I think it is. My understanding is that we basically cannot measure or observe physical systems without affecting them – a device like a voltmeter changes the resistance in a circuit ever so slightly, measuring the speed of wind slows it down, and so on. This effect is sometimes very small, but the digital approach to circuits is basically a way of getting around it and getting reliable computation.
    So if you decide to define equality as “pi is this particular continuous value here”, you would run into the problem of knowing when you are observing that value and when you’re not. And attempting to identify it using, say, a voltmeter would change it slightly, unless you round off enough precision to compensate for those effects. But if you do, you have some kind of digital system again, so the whole point is lost, I would think.
    Of course we could accept that equality is inherently a “digital” operation. Two things in the real world are never said to be equal unless there is some degree of abstraction, some loss of precision.
    Another approach might be to have the computer do symbolic algebra instead of working with numbers, so that it could reason about the calculations and say “these calculations have to have resulted in the same value”. This approach, again, seems to be inherently discrete.
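
    As a sketch of what I mean by symbolic algebra (using the sympy library, assuming something like it is available), note that the equality below is established by symbolic rewriting, which is itself a discrete process:

        import sympy as sp

        x = sp.Symbol('x')
        lhs = sp.sin(x)**2 + sp.cos(x)**2
        print(sp.simplify(lhs - 1) == 0)  # True: deduced symbolically, not by comparing numbers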

    Posted 23 Jul 2010 at 6:03 pm
  3. Ingemar wrote:

    Are you saying that a continuous computer would be “closer” to the continuous nature in itself, but that our digital computers are closer to our platonic idea of knowledge?

    Posted 15 May 2013 at 7:43 pm
