Tag: state

Pointers in programming languages

August 26th, 2011 — 12:21am

Few language features cause as many problems as pointers and references in statement-oriented languages such as C, C++ and Java. They are powerful, yes, and they allow us to control quite precisely how a program represents something. We can use them to compose objects and data conveniently, without massively replicating information. In languages like C they are even more powerful than in Java, since pointer arithmetic lets just about any part of memory be viewed as if it were just about anything, which is indeed frightening.
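To make that concrete, here is a minimal sketch (the function name is invented) of viewing one piece of memory as another type – and this is the tame, well-defined end of what C-style memory access permits:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// View the bit pattern of a float as a 32-bit unsigned integer.
// std::memcpy is the well-defined way to do this; the direct cast
// *(uint32_t*)&f also compiles, but violates the aliasing rules.
uint32_t bitsOf(float f) {
    static_assert(sizeof(uint32_t) == sizeof(float), "expects 32-bit float");
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}
```

For example, `bitsOf(1.0f)` is `0x3f800000` on an IEEE 754 machine – the same four bytes, read as something else entirely.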

But they also complicate reasoning about programs enormously. Both human reasoning and automated reasoning. Pointers allow any part of the program to have side effects in any other part of the program (if we have a reference to an object that originated there), and they make it very hard to reason about the properties that an object might have at a given point in time (since we generally have no idea who might hold a reference to it – it is amazing that programmers are forced to track this in their heads, more or less). In my effort to design my own language, multiple pointers to the same objects – aliases – have come back from time to time to bite me and block elegant, attractive designs. I believe that this is a very hard problem to design around. Aliased pointers set up communication channels between arbitrary parts of a program.
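As a minimal sketch of such a channel (the module names are invented), two functions that never mention each other still communicate, simply because both were handed a pointer to the same object:

```cpp
#include <cassert>
#include <vector>

// Two "modules" that know nothing about each other.
void moduleA(std::vector<int>* v) { v->push_back(42); }       // side effect here...
int  moduleB(const std::vector<int>* v) { return v->back(); } // ...observed here

// Anyone reasoning about moduleB must know who else holds a pointer
// to the vector - and nothing in the source code tracks that for us.
```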

Nevertheless, attempts have been made, in academia and in research labs, to solve this problem. Fractional permissions track how many aliases exist and endow each alias with specific permissions to access the object referred to. Ownership analysis forces access to certain objects to go through special, “owning” objects. Unique or “unshared” pointers in some language extensions restrict whether aliases may be created at all. But so far no solution has been both attractive and convenient, and none has made it into mainstream languages. (I know that Philipp Haller made a uniqueness plugin for the Scala compiler, but I believe it is not in wide use.)
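As a point of comparison, C++ (in the C++11 standard, finalised around the time of this post) acquired std::unique_ptr, a library-level cousin of such unique pointers: it cannot be copied, only moved, so at most one name owns the object at any moment. A minimal sketch:

```cpp
#include <memory>
#include <utility>

// A unique_ptr cannot be copied, only moved. Passing one by value
// transfers ownership into the callee, leaving the caller's pointer empty.
int consume(std::unique_ptr<int> p) {
    return *p;  // sole owner now; the int is freed when p goes out of scope
}
// Given auto p = std::make_unique<int>(7):
//   consume(p);            // does not compile - copying would create an alias
//   consume(std::move(p)); // fine - ownership moves, p becomes null
```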

If we are to attempt further incremental evolution of the C-family languages, aliased pointers are, in my opinion, one of the most important issues to attack.

2 comments » | Computer science, Software development

Continuous computing

July 10th, 2010 — 5:45pm

Disclaimer: I haven’t checked academic sources for any of the statements made in this post – all of it is speculation which may be affirmed or rejected by existing literature.

Existing computing hardware and software are based on a discrete model: the Church–Turing model. The machinery is built on digital logic, and formalisms such as lambda calculus and Turing machines are also essentially discrete. But what if we were to attempt to build some kind of continuous, or non-discrete, computer?

Digital logic gives us some unique capabilities that do not seem to exist in the real world, for instance: the ability to read a value without altering it, the ability to copy a value without altering it, the ability to test for equivalence and receive a yes or no as an answer. (The whole idea of “equality” is digital/platonic in nature.)

It will not do to simulate a continuous computer in software, not even with arbitrary precision arithmetic. It seems that some properties that a continuous computer might have would be impossible to simulate on discrete hardware. At least, we would need some kind of non-digital hardware extension that produces the continuous operations.

The discrete, digital model may seem like an abstract ideal, disjoint from reality. Yet continuous, real numbers are at least as much of an ideal. Between any two real numbers, no matter how close they are, there are by definition infinitely many intermediate real numbers. It seems implausible that we could find this infinity in the real world.

Is the real world continuous or discrete? I don’t know, and last time I asked one of my friends who knows physics, the answer I got was too complicated to be reduced to yes or no, or even to “yes, mostly” or “no, mostly”, if memory serves.

What properties might a continuous computer have? Depending on how it is designed, maybe some or all of the following:

  • If we compute a value twice, there is a level of precision at which the two results appear different.
  • In fact, there is no way to establish the absolute equivalence of two values; equality is reduced to a matter of precision and generalisation (as it already is, in practice, for floating point arithmetic on today’s computers).
  • The simple act of reading a value might alter it slightly.
  • The more steps a value passes through (i.e. the more times it is copied), the more it deviates from the original.
  • The ability to truly move a value, as opposed to merely copying and deleting it, might become important in order to mitigate the above effect (digital computers cannot truly move values).

We must also ask: how do we model continuous computing mathematically? Is it enough to allow numbers of arbitrary range and precision and use standard logic, simulating the destructive effects of computation somehow? (Probably not.) Could we generalise lambda calculus and Turing machines to abandon their inherent discreteness and end up with a more general formalism?

If we accept the above list of properties, then even if we concede that we cannot accurately simulate a continuous computer on discrete hardware, maybe we can build a simulator that gives us an idea of how a real device might behave. But we would have no idea what we were missing.
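A hedged sketch of what such a simulator’s core might look like – every name here is invented, and the noise model (a small Gaussian perturbation on each read) is just one arbitrary choice among many:

```cpp
#include <cmath>
#include <random>

// A value that cannot be read without being disturbed, and whose copies
// therefore drift away from the original - per the property list above.
class ContinuousValue {
    double v_;
    static double noise() {
        static std::mt19937 rng(12345);  // fixed seed for reproducibility
        static std::normal_distribution<double> dist(0.0, 1e-9);
        return dist(rng);
    }
public:
    explicit ContinuousValue(double v) : v_(v) {}
    double read() { v_ += noise(); return v_; }                 // reading disturbs
    ContinuousValue copy() { return ContinuousValue(read()); }  // copying drifts
};

// Equality is reduced to a matter of precision.
bool roughlyEqual(double a, double b, double eps) {
    return std::fabs(a - b) < eps;
}
```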

Motivation? The main motivation is that it is interesting, i.e. it promises to point us in new and powerful directions, laden with potential discoveries. If something more concrete is needed: intuitively, we should be able to bridge computer software and the physical world much more easily with this kind of system, bringing benefits to UIs, simulation and modelling, etc.

Edit: After writing the above, I found out that people have investigated the idea of analog computers, which intersects with the idea of the (perhaps poorly named) continuous computing described in this post. The image at the start of this post is a diagram of the Norden bombsight, an optical/mechanical computer used in WW2.

3 comments » | Computer science

On statefulness

June 15th, 2010 — 9:52pm

Last year I made some attempts at free association around formal languages and state machines. But at that time, not much was said about the idea of a state itself; an idea which I think holds a lot of interesting uncharted territory.

To begin with, what is a state, really? Intuitively, the word distinguishes between different states of an object. The key here is plurality: a single state in itself is uninteresting; only as contrasted with another state does the first acquire meaning. This leads us to an interpretation: states are a way of grouping all the possible forms-of-existence, for want of a better word, of an object, which lets us make sense of such forms more easily.

To exemplify: the light switch in my apartment can be on or off. But in physical space, the plastic switch can occupy a very large number of positions between on and off. The spring mechanism, however, forces the switch into one state or the other as soon as I release my finger, giving rise to two distinct functional states. When I was a kid, I would sometimes play with the rather old light switches in my parents’ house by holding the switch in the middle between on and off. A humming sound would be emitted, and the lights would flicker on and off. Surely not a very good thing for the fittings, and potentially dangerous, but interesting, since it broke down the abstraction – the continuum behind the discrete was exposed.

So given a physical system, then, which remains the same system even as some parts move around, electrical currents flow, etc, we use states to partition all the forms of existence of that system into meaningful ideas. “The door is open/closed”, “The engine is turned on/off”, “The engine is turned on but there’s almost no fuel left”, and so on. States have probably been with us as long as we have been able to think of binary distinctions, which is to say throughout the history of mankind – opposites such as day/night and alive/dead must have been with the human mind from prelinguistic times.

Today, states are an essential way of turning the unmanageable analog realm into a finite, subjugated digital representation.

Comment » | Computer science, Philosophy

Languages and automata, part 2

July 12th, 2009 — 11:55pm


Today an oppressive, passivizing heat covers Tokyoites like a massive woollen blanket. Summer is here. In a feeble attempt to defy the heat, I follow up on my previous post on languages and automata.

That post ended with the suggestion that we can apply these concepts to interactions in society. But can we? As a starting point, let’s think about stateless and stateful interactions in a system. Stateful interactions involve a change of state, in some sense. Stateless interactions involve no such change. What counts as stateful depends very much on how detailed the model is – these might be examples:

  • You make a purchase in a convenience store – the obvious changes of state are the balance in your wallet/bank account, the amount of items you possess/carry with you, and the corresponding opposite changes on behalf of the store.
  • You greet somebody you know on the street and exchange some small talk. Even though no actionable information is exchanged, you both feel happier afterwards, because you were acknowledged by someone else. This is a change of state. The precondition is that you are in a state where you know the other person – the interaction would not be possible with a random person in a random state. (On a different level, a typical exchange of this kind goes through at least three discrete states in itself: “greeting”, “exchange of information”, “goodbye”.)
  • You go to your job in an office, read some documents, write some reports and leave. We can think about the wear and tear on the furniture and the building, the carbon dioxide-oxygen exchange in the air, and the changes to your company’s total body of information as changes of state. Which to choose depends, again, on the model.

Are there any stateless interactions then? Within the context of a particular model, yes. If we only care about monetary and material transactions, the meeting on the street might be stateless. If we only care about “mood” states, the purchase in the store might be stateless, and the office job might have a negative effect on accumulated mood.
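The three-stage small talk mentioned above can be written down as a tiny automaton. The state names follow the example; everything else is invented for illustration:

```cpp
// The states of a typical street-corner exchange, plus a terminal state.
enum class Exchange { Greeting, Information, Goodbye, Done };

// The single, deterministic transition out of each state.
Exchange next(Exchange s) {
    switch (s) {
        case Exchange::Greeting:    return Exchange::Information;
        case Exchange::Information: return Exchange::Goodbye;
        default:                    return Exchange::Done;
    }
}
```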

In software engineering, we try to hide state as much as possible. State makes a system far harder to understand and reason about. We like immutable objects, whose state never changes. If we look at reality through abstractions, maybe such things can exist, but in the physical world I don’t believe they do (I’d have to ask a physicist to know for sure).

The most complex interactions in society, I think, take place among people and organisations that have long-lasting relationships. These entities can modify each other’s state over a long period of time. If I’ve known somebody for years, there’s a very large number of possible states a conversation with that person might be in – a large number of topics I might bring up and discuss. But the limitations of societal norms and my own knowledge imply that a conversation with a stranger might be a very small state machine indeed. (On the other hand, maybe this is why getting to know a new person can be very satisfying – the novelty of building a structure from scratch in your head to represent this person’s states.) Companies that interact with customers in short, anonymous relationships almost never present them with complex interactions (convenience stores, taxi drivers). With other companies we have longer relationships and more complex interactions (doctors, banks).

These transitions of state are, again, like words that make up sentences in formal languages. We all live these languages every day. How many states do you have?

3 comments » | Computer science, Philosophy

Languages and automata, part 1

July 6th, 2009 — 11:50am

Yoyogi, Tokyo

Computing is very new as a science. Blaise Pascal devised a mechanical calculator in 1645, but Charles Babbage’s analytical engine, widely considered the first design for a programmable computer, was not conceived until the mid-19th century. It was never constructed (unlike Babbage’s simpler “difference engine”), and even then there was almost no theory to go with the invention. Today, the fundamental abstractions of computing and programming are Turing machines and lambda calculus, both described in the 1930s. So essentially, the theory has had less than a century to mature, and it is viewed by many as a branch of mathematics.

The newness of computing means that we don’t know that much about its role or its applicability outside of devices built specifically for computing, nor do we know if today’s fundamental computing abstractions are the best ones.

Languages and automata are two of the most fundamental ideas in computing. In contrast to human languages, which are informal and rather unsystematic, in computing we often speak of formal languages. Something like the following is an example of a formal grammar:

  • Sequence-list: Sequence [ Sequence-list ]
  • Sequence: Wake up Action-list Have lunch Action-list Go to sleep
  • Action-list: Action [ Action-list ]
  • Action: Work | Answer the phone | Attend meeting | Relax

Using this grammar we can model the life of an office worker. It generates an infinite set of “sentences”, each of which can be arbitrarily long. The following are examples of valid sentences in the grammar:

  • Wake up, Work, Have lunch, Attend meeting, Go to sleep
  • Wake up, Work, Have lunch, Work, Go to sleep, Wake up, Work, Have lunch, Work, Go to sleep
  • Wake up, Answer the phone, Answer the phone, Answer the phone, Have lunch, Work, Go to sleep

A grammar such as this describes a regular language, and so corresponds to what is known as a deterministic finite automaton (DFA) – a very simple building block of software and hardware models. A formal grammar like the above is, in a sense, just a more natural way of thinking about a DFA.
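A sketch of a recognizer for the office-worker grammar, assuming each word of a sentence arrives as a separate token (the function names are invented):

```cpp
#include <string>
#include <vector>

bool isAction(const std::string& t) {
    return t == "Work" || t == "Answer the phone" ||
           t == "Attend meeting" || t == "Relax";
}

// Action-list: Action [ Action-list ] - i.e. one or more Actions.
bool actionList(const std::vector<std::string>& t, size_t& i) {
    if (i >= t.size() || !isAction(t[i])) return false;
    while (i < t.size() && isAction(t[i])) ++i;
    return true;
}

// Sequence-list: one or more Sequences of the form
// Wake up, Action-list, Have lunch, Action-list, Go to sleep.
bool isSentence(const std::vector<std::string>& t) {
    size_t i = 0;
    if (t.empty()) return false;
    while (i < t.size()) {
        if (t[i++] != "Wake up") return false;
        if (!actionList(t, i)) return false;
        if (i >= t.size() || t[i++] != "Have lunch") return false;
        if (!actionList(t, i)) return false;
        if (i >= t.size() || t[i++] != "Go to sleep") return false;
    }
    return true;
}
```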

What is the applicability of formal languages outside computing hardware and software?

Ferns. Kyoto, Japan

For one thing, we see them in nature, not least in ferns, whose fronds appear to follow the same rules on a miniature level as on the macro level. We see them in trees and flowers. In fact, the formal language paradigm appears to be a very good fit for many natural phenomena. One reason for this might be that formal languages allow rich structures to be constructed from a very small description.
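One concrete formalism behind this fit, worth noting as an aside, is the Lindenmayer system (L-system), a rewriting grammar that has been used to model ferns and other plants. A minimal sketch of the classic two-rule “algae” example, where a tiny description generates ever richer strings:

```cpp
#include <string>

// One rewriting step of Lindenmayer's algae system: A -> AB, B -> A.
std::string rewrite(const std::string& s) {
    std::string out;
    for (char c : s) {
        if (c == 'A')      out += "AB";
        else if (c == 'B') out += "A";
        else               out += c;
    }
    return out;
}

// Apply the rule repeatedly; the string's length grows like the Fibonacci numbers.
std::string grow(std::string s, int steps) {
    for (int i = 0; i < steps; ++i) s = rewrite(s);
    return s;
}
```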

One idea I find fascinating is trying to apply these models to human society: people and institutions. Can we describe the interactions in society as automata and formal languages, and if so, what can we learn about them?

2 comments » | Computer science, Philosophy
