Disclaimer: I haven’t checked academic sources for any of the statements made in this post – all of it is speculation which may be affirmed or rejected by existing literature.
Existing computing hardware and software are based on a discrete model: the Church-Turing model. The machinery is built on digital logic, and formalisms such as the lambda calculus and Turing machines are likewise essentially discrete. But what if we were to attempt to build some kind of continuous, or non-discrete, computer?
Digital logic gives us some unique capabilities that do not seem to exist in the real world, for instance: the ability to read a value without altering it, to copy a value without altering it, and to test two values for equivalence and receive a definite yes or no as an answer. (The whole idea of “equality” is digital/platonic in nature.)
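To make that baseline concrete, here is a trivial Python illustration of those guarantees (the specific value is, of course, arbitrary):

```python
# In a digital machine, a value can be read, copied, and compared
# any number of times without being disturbed in the slightest.
x = 0.12345678901234567

y = x              # an exact copy; x is untouched
for _ in range(1_000_000):
    _ = y          # a million reads do not degrade y either

print(x == y)      # True: equivalence is a crisp yes/no question
```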
It will not do to simulate a continuous computer in software, not even with arbitrary-precision arithmetic. It seems that some properties a continuous computer might have would be impossible to simulate on discrete hardware. At the very least, we would need some kind of non-digital hardware extension that provides the continuous operations.
The discrete, digital model may seem like an abstract ideal, disjoint from reality. Yet continuous, real numbers are at least as much of an ideal. Between any two real numbers, no matter how close they are, there are by definition infinitely many intermediate real numbers. It seems implausible that we could find this infinity in the real world.
Is the real world continuous or discrete? I don’t know, and last time I asked one of my friends who knows physics, the answer I got was too complicated to be reduced to yes or no, or even to “yes, mostly” or “no, mostly”, if memory serves.
What properties might a continuous computer have? Depending on how it is designed, maybe some or all of the following:
- If we compute the same value twice, there will be some level of precision at which the two results differ.
- In fact, there is no way to establish the absolute equivalence of two values; equality is reduced to a matter of precision and generalisation (as it in practice already is for floating-point arithmetic on today's computers; see the sketch after this list).
- The simple act of reading a value might alter it slightly.
- The more steps a value passes through (i.e. the greater the number of times it is copied), the more it deviates from the original value.
- The ability to truly move a value, as opposed to copying it and deleting the original, might become important as a way to mitigate the above effect (digital computers cannot truly move values).
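The parenthetical point about floating point is easy to demonstrate: programs that compare floats routinely fall back on a tolerance rather than exact equality. A minimal Python illustration:

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)                            # False: bit-for-bit equality fails
print(math.isclose(a, b, rel_tol=1e-9))  # True: "equal" only up to a chosen precision
```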
We must also ask the question: how do we model continuous computing mathematically? Is it enough to allow for numbers with arbitrary range and precision and use standard logic, simulating the destructive effects of computation somehow? (Probably insufficient.) Could we generalise the lambda calculus and Turing machines to abandon their inherent discreteness and end up with a more general formalism?
If we accept the above list of properties, then even if we concede that we cannot accurately simulate a continuous computer on discrete hardware, maybe we can build a simulator that gives us an idea of how a real device might behave. But we would have no idea what we're missing.
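A toy sketch of such a simulator, in Python, is easy to write; it can exhibit the properties listed above even though it tells us nothing about how a physical continuous device would actually behave. The ContinuousValue class, the Gaussian noise model, and the noise magnitudes are all assumptions invented purely for illustration.

```python
import random

# Toy model of a "continuous" storage cell: every interaction disturbs it.
# The Gaussian noise model and the noise magnitudes are arbitrary assumptions.
READ_NOISE = 1e-9   # disturbance caused by reading a cell
COPY_NOISE = 1e-6   # deviation introduced by copying a cell

class ContinuousValue:
    def __init__(self, value):
        self._value = value

    def read(self):
        # Reading returns an approximation and slightly perturbs the stored value.
        observed = self._value + random.gauss(0.0, READ_NOISE)
        self._value += random.gauss(0.0, READ_NOISE)
        return observed

    def copy(self):
        # Copying produces a new cell that deviates slightly from the original.
        return ContinuousValue(self._value + random.gauss(0.0, COPY_NOISE))

    def move(self):
        # A "true move": the value is handed over without copy noise,
        # and the emptied original must not be read again.
        v, self._value = self._value, None
        return ContinuousValue(v)

x = ContinuousValue(1.0)
y = x.copy()
print(x.read() == y.read())              # almost certainly False: no absolute equality
print(abs(x.read() - y.read()) < 1e-3)   # True: equality only up to a chosen precision

# The more times a value is copied, the further it drifts from the original.
z = x
for _ in range(1000):
    z = z.copy()
print(abs(z.read() - 1.0))               # typically much larger than one copy's noise
```

The move operation is included to contrast with copying: it hands over the stored value without adding copy noise, which is the capability the list above suggests digital machines lack.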
Motivation? The main motivation is that it is interesting, i.e. it promises to point us in new and powerful directions, laden with potential discoveries. If something more concrete is needed: intuitively, we should be able to bridge computer software and the physical world much more easily with this kind of system, bringing benefits to UIs, simulation and modelling, etc.
Edit: After writing the above, I found out that people have investigated the idea of analog computers, which intersects with the (perhaps poorly named) idea of continuous computing described in this post. The image at the start of this post is a diagram of the Norden bombsight, an optical/mechanical computer used in WW2.