The “Friedrich principles” for bioinformatics software

September 13th, 2012 — 12:51am

I’ve just come back from Biohackathon 2012 in Toyama, an annual event, traditionally hosted in Japan, where users of semantic web technologies (such as RDF and SPARQL) in biology and bioinformatics come together to work on projects. This was a nice event with an open and productive atmosphere, and I got a lot out of attending. I participated in a little project that is not quite ready to be released to the wider public yet. More on that in the future.

Recently I’ve also had a paper accepted at the PRIB (Pattern Recognition in Bioinformatics) conference, jointly with Gabriel Keeble-Gagnère. The paper is a slight mismatch for the conference, as it really focuses on software engineering more than on pattern recognition as such. In this paper, titled “An Open Framework for Extensible Multi-Stage Bioinformatics Software” (arxiv), we make a case for a new set of software development principles for experimental software in bioinformatics, and for big data sciences in general. We provide a software framework that supports application development with these principles – Friedrich – and illustrate its application by describing a de novo genome assembler we have developed.

The gestation of this paper actually occurred in the reverse order from the above. In 2010, we began development on the genome assembler, at the time a toy project. As it grew, it became a software framework, and eventually something of a design philosophy. We hope to keep building on these ideas and demonstrate their potential more thoroughly in the near future.

For the time being, these are the “Friedrich principles”, in no particular order.

  • Expose internal structure.
  • Conserve dimensionality maximally. (“Preserve intermediate data”)
  • Multi-stage applications. (Experimental and “production”, and moving between the two)
  • Flexibility with performance.
  • Minimal finality.
  • Ease of use.

Particularly striking here is (I think) the idea that internal structure should be exposed. This is the opposite of encapsulation, an important principle in software engineering. We believe that when the users are researchers, they are better served by transparent software, since the workflows are almost never final but subject to constant revision. But of course, the real trick is knowing what to make transparent and what to hide; a certain economy is still needed. A toy sketch of what this can look like follows below.
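To make this concrete, here is a toy sketch in TypeScript of a pipeline that keeps its stages and intermediate results visible. To be clear, this is only my illustration of the principle, not Friedrich’s actual interface (the paper describes the real framework):

    // Toy sketch only, not Friedrich's actual API.
    // A stage transforms one kind of data into another and carries a
    // name that users can refer to.
    interface Stage<A, B> {
      name: string;
      run(input: A): B;
    }

    // The pipeline runs its stages in order, but records every
    // intermediate result, keyed by stage name, instead of hiding them
    // inside one monolithic computation. Users can inspect any step, or
    // swap a stage out and re-run, as the workflow is revised.
    class Pipeline {
      readonly intermediates = new Map<string, unknown>();

      constructor(private stages: Stage<any, any>[]) {}

      run(input: unknown): unknown {
        let data = input;
        for (const stage of this.stages) {
          data = stage.run(data);
          this.intermediates.set(stage.name, data);
        }
        return data;
      }
    }

    // Two toy stages of a hypothetical read-preprocessing workflow.
    const trim: Stage<string[], string[]> = {
      name: "trim",
      run: reads => reads.map(r => r.slice(0, 50)), // crop each read to 50 bases
    };
    const dedupe: Stage<string[], string[]> = {
      name: "dedupe",
      run: reads => [...new Set(reads)], // drop exact duplicates
    };

    const prep = new Pipeline([trim, dedupe]);
    prep.run(["ACGTACGT", "ACGTACGT", "TTGACCAA"]);
    console.log(prep.intermediates.get("trim")); // peek at a mid-pipeline result

Note that the intermediates map is deliberately part of the public surface, which is exactly the kind of thing classical encapsulation would tell you to hide.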

Comment » | Bioinformatics, Computer science, Software development

The aesthetics of technology

May 18th, 2010 — 11:20pm

Different technologies have different kinds of aesthetics, and they affect us in various ways, whether we are particularly fascinated with technology or not.

The easiest technologies to understand on an intuitive-emotional basis seem to be those that involve physical processes. Objects rotating, moving, being lifted and displaced, compressed, crushed. Gases and liquids being sent around in conduits, mediating force and energy. In short, the technology that has its foundation in classical mechanics.

If these are easy to get a feel for, it is probably in part because an understanding of mechanical processes has been useful to us throughout history, and even before the advent of civilisation. An intuitive understanding of things such as momentum, acceleration and gravity has no doubt benefited mankind and its ancestors for a very long time.

It gets trickier when we get to the more recent technologies. Take electricity as an arbitrary watershed. We have no intuitive idea of what electricity is, apart from the fact that we might be afraid of thunder. Electricity has to be taught through the abstract idea of electrons flowing in conduits, a bit like water in pipes (to name one of the many images in use).

And then there’s analog and digital electronics, integrated circuits, semiconductors and so on, where intuition has long ago been left behind. We are forced to approach these things in a purely abstract domain.

Yet, when our MP3 players, game consoles, mobile phones and computers do things for us, we are left with a sense of wonder. Our minds, always looking for stories and explanations, want to associate the impressive effects produced by these devices with some stimuli. With a steam engine, it’s easy to associate the energy with pressure, heat and motion, all of which are well understood on a low level. With a mobile phone, not so much. A lot of very abstract stories have to be told in order to reach anything that resembles an explanation, and even then they don’t reach the essence of the device, which might lie in the interplay between radio transceivers, sound codec chips, a display with a user interface and software to drive it, a central CPU, and so on, together with, of course, the network of physical antennas and their connectivity with other such networks. Is it too much to suppose that the human mind often stops short of the true explanation here? That we associate the effects produced by the device with what we can touch, smell, see and hear?

This is of course the point where many computer geeks worldwide start to feel a certain affection for the materials that make up the machines. Suppose that we are in the 1980s. Green text on a black terminal background. A particular kind of fixed-width font. The clicking of the keyboard. The dull grey plastic used to make the case. All of these things can acquire a lot of meaning that they don’t really have, because the users lack a window (physical and emotional) into the essence of the machine. The ultimate “disconnected machine”, to relate to my field, is software.

This brings up questions such as: how far can we as a species proceed with technology that we cannot understand instinctively? How can we teach such technology meaningfully and include it in democratic debate? And how can we use people’s tendency to associate sensory stimuli with meaning and effects in a more deliberate way, for instance when we design hardware and software interfaces?

2 comments » | Philosophy, Software development

The future of the web browser

August 6th, 2009 — 12:19am

The web browser, it is safe to say, has gone from humble origins to being the single most widely used piece of desktop software (based on my own usage, but I don’t think I’m untypical). This development continues today. The battles being fought and the tactical decisions being made here reach a very large audience and have a big impact.

When exactly did the web browser make the transition from being a hypertext viewer to an application platform? In retrospect this transition seems to have been a very fluid affair. Forms with buttons, combo boxes and lists were supported very early. JavaScript came in not long after. When XMLHttpRequest was introduced, it wasn’t long until AJAX took off, paving the way for today’s “rich” web browser applications.
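To make the mechanism concrete, here is a minimal sketch of that pattern in TypeScript (the URL and element id are hypothetical):

    // Minimal sketch of the XMLHttpRequest pattern that "AJAX" is built on:
    // fetch data in the background and update the page without a reload.
    // The URL and element id below are made up for illustration.
    function loadHeadlines(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/headlines.json"); // asynchronous request
      xhr.onload = () => {
        if (xhr.status === 200) {
          const headlines: string[] = JSON.parse(xhr.responseText);
          const list = document.getElementById("headlines");
          if (list) {
            list.innerHTML = headlines.map(h => `<li>${h}</li>`).join("");
          }
        }
      };
      xhr.send(); // returns immediately; the page stays responsive
    }

The significance lies less in the API itself than in the decoupling it enables: the page becomes a long-running program that talks to a server on its own schedule, which is what turned the browser into an application platform.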

A couple of years ago I had a personal project ongoing for some time. I had decided that web browsers weren’t designed for the kind of tasks they were being made to do (displaying applications), and I wanted to make a new kind of application platform for delivering applications over the web. Today I’m convinced that this would never have succeeded. Even if I had gotten the technology right (which I don’t think I was ever close to), I would have had no way of achieving mass adoption. Incremental developments of the web browser have, however, placed a new kind of application platform in the hands of the masses. Today the cutting edge seems to be browsers like Google’s Chrome, aggressively optimised for application delivery. But some new vegetables have been added to the browser soup.


Google’s GWT (Google Web Toolkit) has been available for some time. This framework makes it easier to develop AJAX applications. Some hardcore AJAX developers may consider it immature, but I think frameworks like this are going to be increasingly popular, since they bridge the differences between browsers very smoothly. What’s interesting, though, is that the same company is developing both GWT and Chrome. The two sides of the browser-application equation have a common creator. This helps both: GWT can become more popular if Chrome is a popular browser, and Chrome can become more popular if GWT is a popular framework. Google can make, and has made, GWT apps run very fast in the Chrome browser (I tested this personally with some things I’ve been hacking on). The sky is the limit here; they can easily add special native features to the browser that only GWT can hook into.

Microsoft has something a little similar in Silverlight, which, while not playing quite the same role, has a mutually beneficial relationship with Internet Explorer.


Everyone’s favorite browser, Firefox, recently passed 1 billion downloads. As I understand it, Firefox doesn’t really have a web development kit of its own; it just tries to implement the standards well. That is fair and good, but in some sense it demotes FF from the league of agenda setters to those who play catch-up. Though, it must be said, the rich variety of plugins available for FF might go a long way towards remedying this.

All this, and I haven’t even touched on Google’s recent foray into the OS market with “Chrome OS”…

Comment » | Uncategorized

Two new-ish search engines

May 26th, 2009 — 7:59am

Recently, while reading about methods for manipulating RDF, I discovered the search engine PowerSet. More recently, Wolfram Research’s Wolfram Alpha launched. There’s been no shortage of new search engines in the past year or so – Cuil is one that was much publicized but ended up remarkably useless – but these two still impress me.

PowerSet impresses me because of its interface – I can easily see what a particular match is about without leaving the list of search results. Speeding up typical use cases like this is very important for usability.

Wolfram Alpha impresses me because of the quality of the results. Maybe I’m in the minority in thinking this – the press seems to have given it mostly negative reviews. Clearly WA is not intended as a Google replacement, though perhaps it was described as one at some point. Now that it is available to the public, it is something different: it lets me look at data, mostly of the quantitative sort, and make all sorts of semi-interactive charts and comparisons. Here are some searches I liked: earthquakes in Japan, 1 cup of coffee, Tokyo to Osaka. I especially like the interactive earthquake graph.

WA is not without its problems, though. Sometimes it’s hard to figure out what kinds of queries you can make. I found the above mostly by experimentation. If they exposed more details about their data model and what they know about each kind of object, maybe this would be easier. Right now I’m wondering why I can do a query like “largest cities” but not “largest cities in mexico”, for instance. I suppose this is mainly a question of maturity, both on the part of the system and of its users.

Search engines like PowerSet and WA are indicative of a broader trend towards semantics in computing and internet usage. While the semantic web isn’t here yet, in the sense that we don’t have a semantic web browser or a unified way of querying the internet, services based very heavily on semantic models are clearly becoming mainstream. More on the impact of this in a future post.

1 comment » | Uncategorized
