Tag: applications


Doing generality right

April 2nd, 2010 — 11:00am

Many software developers, while making a tool to solve a specific problem, heed the siren call of generality. With a few changes, they can turn the tool into a general framework for solving a larger class of problems. And then, with a few more changes, an even larger class of problems, and so on. This often turns into a trap: the end of the line is an over-generalised tool that isn’t very good at solving any problem, because the specificity it had in the first place was part of what made it powerful. In this way, constraints can equal freedom.

Sometimes, though, the generalizers get it right. These are often moments of exceptional and lasting innovation. One example of such a system is the fabulously influential (but today, not that widely used) programming language Smalltalk. Invented by the former jazz guitarist and subsequent Turing award winner Alan Kay, Smalltalk was released as one of the first true object-oriented programming languages in 1980. It is probably still ahead of its time. It runs on a virtual machine, it has reflection, everything is an object, and the separation between applications is blurred in favour of a big object box. On running Squeak, a popular Smalltalk implementation, with its default system image today, users discover that all the objects on the screen, including the IDE used to develop and debug objects, appear to follow the same rules. No objects seem to have special privileges.

Another such system is an application that used to ship on Mac computers in the distant past, HyperCard. HyperCard enabled ordinary users to create highly customized software using the idea of filing cards in a drawer as the underlying model, blurring the line between end users and developers through its accessibility. I haven’t had the privilege of using it myself, but it seems it was as powerful as it was because it served up a homogeneous and familiar model, where everything was a card, and yet the cards had considerable scope for modification and special features. Even though, in some ways, this system appears to be a database, the cards didn’t need to have the same format, for instance. (Are we seeing this particular idea being recycled in a more enterprisey form in CouchDB?)

There are more examples of successful, highly general design: the Unix file system, TCP/IP sockets and so on. What they all have in common is that they are easy to think about as a mental model, since a universal set of rules applies to all objects; they scale well in different directions when used for different purposes; and they give the user a satisfying sense of empowerment, blurring the line between work and play to draw on the user’s natural creativity. Successful general systems are the ones that can be easily applied in quite varied situations without tearing at the seams.

While not widely used by industrial programmers today, Smalltalk was incredibly influential. In the early 1980s, Objective-C was created by Brad Cox and Tom Love, directly inspired by what the Smalltalk designers had done. Objective-C was subsequently used as the language of choice for NeXTSTEP, and later for Apple’s Mac OS X after Apple bought NeXT. Today it’s seeing a big surge in popularity thanks to devices like the iPhone, on which it is also used. In 1995, Java was introduced, owing a great deal of its design to Objective-C, but also introducing features such as a universal virtual machine and garbage collection, which Objective-C didn’t have at the time. In some sense, both Objective-C and Java are blends of the C-family languages and Smalltalk. Tongue in cheek, we might say that evolution in industrial programming these days consists of finding blends that contain less of the C model and more of Smalltalk or functional programming.

Comment » | Philosophy, Software development

Standard new Mac setup routine

February 10th, 2010 — 4:49pm

I just got a new laptop, courtesy of the lab. Naturally, it’s of the fruity kind. One of the first steps: install essential software.

I thought I’d make a list of software I consider absolutely essential on any new computer, and it turned out longer than I expected.

General use:

NetNewsWire for news reading

Dropbox for file syncing

OmniFocus as a task organizer (the GTD methodology actually works — it has liberated me from reciting a long list of things to do in my head all day long)

Circus Ponies NoteBook for note taking

iStat Pro for system monitoring

If I want to develop software:

Eclipse

Fink and MacPorts so I can get various Unix tools (I can’t settle for one or the other, since some tools are only in one of them, but normally Fink is nicer since the packages are precompiled)

Apple’s developer tools

If I want to read and write papers:

TeXShop

Mendeley Desktop

So these are the “absolute essentials”. Of course web apps like Gmail count too, but they require no installation. Anything I’ve missed?

One thing I do not install, but perhaps should, is Apple’s MobileMe. Considering how fruity my environment is, there ought to be some benefit. But between Dropbox, my own DAV server for calendars, and built-in syncing of apps like OmniFocus, I can make things stay in sync anyway, so MobileMe is probably not worth the cost… I think.

2 comments » | Life

The future of the web browser

August 6th, 2009 — 12:19am

The web browser, it is safe to say, has gone from humble origins to being the single most widely used piece of desktop software (based on my own usage, but I don’t think I’m untypical). This development continues today. The battles being fought and the tactical decisions being made here reach a very large audience and have a big impact.

When exactly did the web browser make the transition from being a hypertext viewer to an application platform? In retrospect, this transition seems to have been a very fluid affair. Forms with buttons, combo boxes and lists were supported very early. JavaScript came in not too long after. When XMLHttpRequest was introduced, it wasn’t long until AJAX took off, paving the way for today’s “rich” web browser applications.
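As a rough sketch of what that enabled, here is the shape of a bare XMLHttpRequest call in TypeScript. The /api/status endpoint and the "status" element id are made up for illustration, not taken from any real application:

```typescript
// A minimal sketch of the asynchronous request pattern underlying AJAX.
// The "/api/status" endpoint and the "status" element are hypothetical.
function fetchStatus(onDone: (body: string) => void,
                     onError: (httpStatus: number) => void): void {
  const request = new XMLHttpRequest();
  request.open("GET", "/api/status", true); // true = asynchronous
  request.onreadystatechange = () => {
    if (request.readyState !== XMLHttpRequest.DONE) {
      return; // request still in flight
    }
    if (request.status === 200) {
      onDone(request.responseText);
    } else {
      onError(request.status);
    }
  };
  request.send();
}

// The page is updated in place, with no full reload: the essence of AJAX.
fetchStatus(
  (body) => { document.getElementById("status")!.textContent = body; },
  (httpStatus) => { console.error("Request failed with HTTP " + httpStatus); }
);
```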

A couple of years ago I had a personal project ongoing for some time. I had decided that web browsers weren’t designed for the kind of tasks they were being made to do (displaying applications), and I wanted to make a new kind of application platform for delivering applications over the web. Today I’m convinced that this would never have succeeded. Even if I had gotten the technology right (which I don’t think I was ever close to), I would have had no way of achieving mass adoption. Incremental developments of the web browser have, however, placed a new kind of application platform in the hands of the masses. Today the cutting edge seems to be browsers like Google’s Chrome, aggressively optimised for application delivery. But some new vegetables have been added to the browser soup.


Google’s GWT toolkit has been available for some time. This framework makes it easier to develop AJAX applications. Some hardcore AJAX developers may consider it immature, but I think frameworks like this are going to be increasingly popular, since they bridge the differences between browsers very smoothly. What’s interesting, though, is that the same company is developing both GWT and Chrome. The two sides of the browser-application equation have a common creator. This helps both: GWT can become more popular if Chrome is a popular browser, and Chrome can become more popular if GWT is a popular framework. Google can make, and has made, GWT apps run very fast in the Chrome browser (I tested this personally with some things I’ve been hacking on). The sky is the limit here; they can easily add special native features to the browser that only GWT can hook into.

Microsoft have something a little bit similar with Silverlight, which, while not playing quite the same role, has a mutually beneficial relationship with Internet Explorer.


Everyone’s favorite browser, Firefox, recently passed 1 billion downloads. Firefox doesn’t really have a web development kit of its own, as I understand it; it just tries to implement the standards well. That is fair and good, but it demotes FF from the league of agenda setters to those playing catch-up, in some sense. Though, it must be said, the rich variety of plugins available for FF might go a long way towards remedying this.

All this, and I haven’t even touched on Google’s recent foray into the OS market with “Chrome OS”…

Comment » | Uncategorized

Quantity as a success metric

July 24th, 2009 — 2:58pm

I have something of an engineering background, so I easily end up thinking of success in terms of quantity. Maximizing this variable or that. Ensuring the greatest possible reward, or the smallest possible cost. But sometimes this is fallacious thinking.

As an academic, I would like to publish prestigious articles. It would be nice to publish 10 papers at second- or third-rate conferences, but they might all be made irrelevant by a single article at a first-rate conference (or even an article in Nature or Science, say). So quality is a better measure than quantity.

I would also like to come up with new and influential ideas, but I suspect I would probably be happier if I managed to influence 10 very highly regarded people than if I managed to influence 10 000 laymen. (These exact numbers were computed using the “wild guess” algorithm and further evaluation may be needed.)

In professional life, I’ve found it dangerously easy to fall into a mode of thinking where you evaluate yourself by your income. This is true up to a point, but I’ve found that there’s a point beyond which additional income has diminishing returns in terms of how much it adds to my overall rewards from life. Beyond this point, quality is a better measure than quantity: what are my tasks, how do they force me to learn and evolve, what kind of satisfaction do I feel and why?

User satisfaction with computer software can, to some extent, be measured using response time and latency. A snappy, responsive user interface usually produces more satisfaction than a sluggish one. But this can often be compensated for, to a surprising extent, by having appropriate progress indicators, animations and design features that placate the user in some way, assuring them that something is being done. This is in a sense the opposite of the money situation: up to a certain point, quality makes up for quantity; after that point (when the slowness becomes impossible to mask), quantity becomes increasingly important.
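As a sketch of the masking idea in browser code (the "spinner" and "result" element ids and the slowOperation parameter are hypothetical, just to show the pattern):

```typescript
// Illustrative sketch: show a progress indicator while a slow operation runs.
// The latency itself is unchanged; only the perceived wait is softened.
async function runWithFeedback(slowOperation: () => Promise<string>): Promise<void> {
  const spinner = document.getElementById("spinner")!;
  const result = document.getElementById("result")!;

  spinner.style.display = "block"; // reassure the user that work is happening
  result.textContent = "Working…";

  try {
    result.textContent = await slowOperation();
  } finally {
    spinner.style.display = "none"; // hide the indicator whether we succeed or fail
  }
}
```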

What’s most interesting is perhaps the convertibility between quality and quantity. In engineering a device or a software system, quantitative metrics can be crucial tools in the construction process, but the final user experience must be qualitatively right. So quantity is a tool to construct quality. And in the real life situations where quantity is actually the best measure — bargaining, comparing, communicating, constructing, … — I think of it as a way to mask qualities. The numbers are simply easier to consider than the vast number of qualities that lie underneath.

Comment » | Philosophy

Best bibliography management systems?

July 22nd, 2009 — 7:23pm

A question for readers who happen to manage bibliographies: what, if any, bibliography management systems do you use?

I started using Aigaion for mine. Then I found out that there’s an open system called BibSonomy, which is potentially much better since it lets you tag and share bibliographies socially, and it seems to already know about all the major computer science papers.

Again (see: The problem with standards), I’m frustrated by the fact that I can’t move my data around between applications as I like without lots of manual effort. A worthy research problem would be making data truly application-independent once and for all.

3 comments » | Uncategorized
