Tag: human condition


Assessing research quality

April 28th, 2011 — 4:48pm

Academic research is difficult to evaluate. In order to know the significance of an article, a result or an experiment, one must know a lot about the relevant field. It is probably fair to say that few people read research articles in great depth unless they work in exactly the area the article is in. PhD theses might cite hundreds of articles, but it seems natural that not all of these articles will be read with the same degree of scrutiny by the author of the thesis.

Hence the trouble with obtaining funding for research. In order to obtain funding, you have to communicate something that seems incommunicable without the full commitment of the reader. Grant dispensers want to know a number on a scale: “what’s the quality of this paper between 0 and 1?”, but this quality number cannot be communicated separately from the full substance of the paper and its environs. And thus we end up with keywords: catchphrases that become associated with quality for short periods of time, a way of bypassing this complexity and of approximately indicating that you are doing research on something worthwhile.

This reflects a broader problem in society, that of evaluating authorities. I cannot evaluate my doctor’s, or my dentist’s, or my lawyer’s work, since I don’t have the necessary competence. Accordingly, I base my trust on the person and some of their superficial attributes, instead of judging the work by itself. It seems that the same kind of thing sometimes becomes necessary in choosing which researchers to fund.

It also points to a faculty that must have evolved in human beings over millennia: the capacity for quickly evaluating important properties of things we do not understand well, with respect to danger, nutrition, and so on. It is just that this faculty does not translate well to research…

Comment » | Computer science

Values 2: Human reason is reactive

January 27th, 2011 — 9:57am

Previously I wrote about Nietzsche’s assertion that philosophers must create values, and made a distinction between scholars, scientists and philosophers. The focus now shifts to the faculty of reason and its contrast with another mode of thinking.

Reason can be understood as man’s ability to think according to precise rules. Logic is one such set of rules: by using axioms and inference rules, we are able to generate vast arrays of valid statements. For instance, we can attempt to prove mathematical truths, or we can work out how to place furniture in a room, or the quickest way of carrying out five different errands in an afternoon.

Two essential functions of reason are finding solutions and validating solutions. In finding solutions, sometimes we apply reason as a search process, that is, we work through a number of combinations until we find one that works, or until we give up. By deduction we can reduce the size of the search space, and sometimes deduction will lead to a result without any search being necessary at all. In validating solutions, we might obtain the proposed solution from anywhere, possibly from outside reason itself, and then, again it is sometimes a search process: we may attempt to find contradictions that invalidate the proposed solution, and we do not always find them immediately. This would be validation by absence of contradictions, but we might also validate a solution affirmatively by using it in a problem. For instance, we can verify that 7 is the square root of 49 by computing 7*7, and it would be useless to verify it by testing that 7*7 does not equal any of the values 1,2,3…48,50,51,52… infinity.
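
To make the contrast concrete, here is a minimal sketch in code (the function names and the restriction to integers are my own, purely for illustration): finding a square root by searching through candidates, versus validating a proposed root affirmatively with a single computation.

```python
# A minimal sketch of the two modes of reasoning described above:
# finding a solution by search, and validating a proposed solution by using it.

def find_square_root_by_search(n):
    """Search: work through candidates until one works, or give up."""
    for candidate in range(n + 1):
        if candidate * candidate == n:
            return candidate
    return None  # gave up: n has no integer square root

def validate_affirmatively(candidate, n):
    """Affirmative validation: use the proposed solution in a single computation."""
    return candidate * candidate == n

print(find_square_root_by_search(49))  # 7, found after trying 0, 1, ..., 7
print(validate_affirmatively(7, 49))   # True, no search required
```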

Reasoning is a slow, tedious process, and it can only consider so many possible solutions in a given amount of time. But it is reliable, and the results of different pieces of reasoning can often be composed to yield a larger, consistent result. But it is clear that our minds have other ways of functioning as well, with other strengths and weaknesses. In particular, it seems that reasoning is essentially a reactive process. It reacts to a given problem with given constraints and rules of inference. But it seems to be unable to create. Creativity appears to always come from extralogical, extra-reasonable places. Creativity in the spontaneous sense of a child drawing a picture with crayons, or a novelist writing a book, or an orator using a particularly persuasive combination of words that captures a fleeting feeling, or a commuter taking a different route home from work, out of curiosity. The distinction is not always clear-cut: a decision like choosing the colour of a wallpaper could be done both using “principles” with which one reasons logically, or using a spur of the moment feeling about what is good. It is clear, though, that the two can interact very productively: often a complex mental activity needs a dialogue between reason and extra-reason, and not just in the sense that extra-reason produces a suggestion that reason validates. This seems to be the danger with excessive reliance on rationality and scientific skepticism, then – it risks shutting out the essential extralogical factor and reducing decision making to searching, or from another viewpoint, it risks invalidating the most powerful search heuristic of all.

It seems as if there is a parallel, of sorts, with modern democracy in this distinction. Democracy at the national level, too, is a reactive form of decision making today. It is true that groups of a small or moderate size can sometimes create things collectively, and when they do, it seems to be the case that the form of the group enables individuals to take turns in influencing the group and being responsible towards it: the individuals make serial contributions that layer on top of each other to form the collective contribution. But voters in a national democracy do not have a format that allows this process to take place across the entire group, and the scale is too great. Those who create proposals are smaller subgroups or elites, and the voters are reduced to playing one of the roles that reason can play: affirm or reject proposals. In fact, not even this, since they are typically not asked to affirm every proposal – they are able to stage a revolution if their discontent becomes tremendously large, and otherwise they only have the ability to voice rejection every four years or so. (The exceptional case where very large groups can create something collectively would be when they share a common sentiment very well, for instance in the event of a national crisis.)

The seat of creativity is ultimately in the individual, and not in the collective. When democracies create agendas, goals, projects and proposals, they are not acting democratically, but channeling individual elements within.

2 comments » | Philosophy

Permanence and technology

November 19th, 2010 — 12:23am

1. Mt. Fuji, 3776 m high. A petrified mass of volcanic discharge, thought to have been first ascended in the year 663.

2. Skyscrapers in Ootemachi, Tokyo and the City, London. Buildings belonging mostly to banks and insurance companies. They appear, on some intuitive level, to have been there forever, though most of these buildings can now be built from the ground up in less than a year. It is hard to fathom how they could ever be destroyed, though the work could be done in a matter of months (?) with the right equipment.

3. What is permanent? Anything that we cannot perceive as changeable, we call permanent. But this is a linguistic and epistemological error. The inability to perceive something has led us to declare its absence.

4. The earth. 5.9736 x 10^24 kg of matter, likely fused into a planet about 4.54 billion years ago. The sun will enter a red giant phase in about 5 billion years and swallow or cause tremendous damage to it. The sun is also currently the source of all fossilised energy on earth and the energy used by most life forms on it.

5. A certain class of mathematical proofs consists in converting facts from one basis (family of concepts) to another. Such proofs often have a hamburger-like structure: first the initial facts are rewritten into a larger, more complex formulation that suits both the assumptions and the conclusion, and then the complex formulation is collapsed in such a way that the desired results come out and the original formulation is lost. The “beef” in such a proof often consists in carrying out the correct rewriting process in the middle.
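
One standard example of this shape (my choice of example, not one from the post): a product-to-sum identity for cosines, proved by rewriting the trigonometric statement into complex exponentials, manipulating it in that richer basis, and collapsing back so that the exponentials disappear.

```latex
% The "hamburger" structure: rewrite into a richer basis, work there, collapse back.
\begin{align*}
\cos a \cos b
  &= \frac{e^{ia} + e^{-ia}}{2} \cdot \frac{e^{ib} + e^{-ib}}{2} \\  % rewrite: trigonometry into exponentials
  &= \frac{e^{i(a+b)} + e^{-i(a+b)} + e^{i(a-b)} + e^{-i(a-b)}}{4} \\  % the "beef": expansion in the new basis
  &= \frac{\cos(a+b) + \cos(a-b)}{2}.  % collapse: the exponential formulation is discarded
\end{align*}
```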

6. Facebook takes off and becomes enormously popular, in part because it facilitates, on a huge scale, something that human beings want to do naturally. Communication and the need to relate to crowds and individuals could be said to be universal among humans.

[Figure: An incomplete version of the technology lattice, as suggested in this post, with human desires at the top and the resources available in the universe at the bottom.]

7. We can imagine technology as a lattice-like system that mediates between the human being, on one hand, and the universe on the other. As a very rough sketch of fundamental human needs, we could list drives like communication, survival/expansion, power/safety and art. (In fact, an attempt to make some of these subordinate to others would constitute an ethical/philosophical system. Here we do not need such a distinction, and the one I have made is arbitrary and incomplete.) When we place our fundamental drives at one end, and the resources and conditions provided by the universe at the other – elements and particles, physical laws and constants – we can begin to guess how new technologies arise and where they can have a place. The universe is a precondition of the earth, which is a precondition of animals and plants, which we currently eat. And food is currently a precondition of our survival. But we can imagine a future in which we are not dependent on the earth for food, having spread to other planets. We can imagine a future in which oil and nuclear power are no longer necessary as energy sources, because something else has taken their place. New possibilities entering the diagram like this add more structure in the middle – more beef – but the motivating top level and the supplying bottom level do not change perceptibly. (Of course, if they did, beyond our perception, they could be made part of an even larger lattice with a new bottom and top configuration.)
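
As a rough illustration only (the specific nodes and edges below are my own reading of this description, not the original diagram), such a lattice could be sketched as a small directed graph, with drives at the top, the universe at the bottom, and technologies rewiring the middle:

```python
# A rough sketch of the technology lattice as a directed graph.
# Human drives sit at the top, the universe's resources at the bottom,
# and technologies form the structure ("the beef") in between.
# The particular nodes and edges are illustrative only.

lattice = {
    # top layer: drives, pointing to whatever currently mediates them
    "survival/expansion": ["food", "energy"],
    "communication": ["internet"],
    # middle layer: technologies and intermediaries
    "food": ["agriculture"],
    "agriculture": ["earth"],
    "energy": ["oil", "nuclear power"],
    "oil": ["earth"],
    "nuclear power": ["elements and particles"],
    "internet": ["energy", "elements and particles"],
    # bottom layer: preconditions supplied by the universe
    "earth": ["universe"],
    "elements and particles": ["universe"],
    "universe": [],
}

def preconditions(node, graph=lattice):
    """Everything a node ultimately depends on, down to the bottom layer."""
    seen, stack = set(), [node]
    while stack:
        for dep in graph[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Replacing oil with some new energy source only rewires the middle of the
# lattice; the motivating top level and the supplying bottom level stay put.
print(preconditions("survival/expansion"))
```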

8. Technology is a means to the establishment of permanence, and a re-encoding of human desires into reality.

9. New technologies arise constantly. But can this evolutionary process go on forever? Does the lattice converge towards a final state?

Comment » | Philosophy

Utilitarianism and computability

September 18th, 2010 — 5:10pm

I’ve started watching Michael Sandel’s Harvard lecture series on political philosophy, “Justice”. In this series, Sandel introduces the ideas of major political and moral philosophers, such as Bentham, Locke, and Kant, as well as some libertarian thinkers I hadn’t heard of. I’m only halfway through the series, so I’m sure there are other big names coming up. The accessibility of the lectures belies their substance: what starts out with simple examples and challenges to the audience in the style of the Socratic method often ends up being very engaging and meaty. (Incidentally, it turns out that Michael Sandel has also become fairly famous in Japan, with his lectures having been aired on NHK, Japan’s biggest broadcaster.)

One of the first schools of thought he brings up is utilitarianism, whose central idea appears to be that the value of an action lies in its consequences, and not in anything else, such as the intention behind the action, or the idea that there are certain categories of actions that are definitely good or definitely evil. What causes the greatest happiness for the greatest number of people is good, simple as that. From these definitions a huge amount of difficulty follows immediately. For instance, is short-term happiness as good as long-term happiness? How long term is long term enough to be valuable? Is the pleasure of ignorant people as valuable as that of enlightened people? And so on. But let’s leave all this aside and try to bring some notion of computability into the picture.

Assume that we accept that “the greatest happiness for the greatest number of people” is a good maxim, and we seek to achieve this. We must weigh the consequences of actions and choices to maximise this value. But can we always link a consequence to the action, or set of actions, that led to it? Causality in the world is a questionable idea since it is a form of inductive knowledge. Causality in formal systems and in the abstract seems valid, since it is a matter of definition, but causality in the empirical, in the observed, seems to always be a matter of correlation: if I observe first A and then B sufficiently many times, I will infer that A implies B, but I have no way of knowing that there are not also other preconditions of B happening (for instance, a hitherto invisible particle having a certain degree of flux). It seems that I cannot reliably learn what causes what, and then, how can I predict the consequences of my actions? Now, suddenly, we end up with an epistemological question, but let us leave this too aside for the time being. Perhaps epistemological uncertainty is inevitable.

I still want to do my best to achieve the greatest happiness for the greatest number of people, and I accept that my idea of what actions cause what consequences is probabilistic in nature. I have a set of rules, A1 => B1, A2 => B2… An => Bn which I trust to some extent and I want to make the best use of them. I have now ended up with a planning problem. I must identify a sequence of actions that maximises that happiness variable. But my brain has limited computational ability, and my plan must be complete by time t in order to be executable. Even for a simple problem description, the state space that planning algorithms must search becomes enormous, and identifying the plan, or a plan, that maximises the value is simply not feasible. Furthermore, billions of humans are planning concurrently, and their plans may interfere with each other. A true computational utilitarian system would treat all human individuals as a single system and find, in unison, the optimal sequence of actions for each one to undertake. This is an absurd notion.
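
As a toy illustration of the computational point (every number, action name and “happiness” score below is invented for the example), a brute-force planner over probabilistic rules might look like this, and its search space grows exponentially in the planning horizon:

```python
import itertools

# Toy sketch of the planning problem described above: rules of the form
# "action => consequence" that are only trusted with some probability, and a
# planner that searches for the action sequence maximising expected happiness.
# All values are invented for the example.

actions = {
    # action: (probability the rule holds, happiness if it does)
    "donate": (0.9, 5.0),
    "build_school": (0.6, 20.0),
    "plant_trees": (0.8, 8.0),
    "do_nothing": (1.0, 0.0),
}

def expected_happiness(plan):
    """Expected happiness of a sequence of actions, assuming independent rules."""
    return sum(p * h for p, h in (actions[a] for a in plan))

def best_plan(horizon):
    """Brute force over every plan of the given length: O(|actions|^horizon)."""
    return max(itertools.product(actions, repeat=horizon), key=expected_happiness)

print(best_plan(3), expected_happiness(best_plan(3)))
# With 4 actions and a horizon of 20 steps there are already about 10^12
# candidate plans, long before billions of people planning concurrently
# enter the picture.
```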

This thought experiment aside, if we are utilitarians, should we enlist the increased computing power that has recently come into being to help manage our lives? Can it be used to augment (presumably it cannot supplant) human intuition for how to make rapid choices from huge amounts of data?

1 comment » | Computer science, Philosophy

Multiplayer protein folding game

August 10th, 2010 — 4:07pm

You read it here first – Monomorphic predicted this development in February. In a recent Nature article, researchers describe a multiplayer online graphical protein folding game, in which players collaborate, in competition with the computer, to fold a protein correctly and quickly. (Also: NYTimes article.) It turned out that the human players performed well compared to the computers, and the comparison teaches us much about the problem-solving heuristics that humans use. Which will be the next computational task to be turned into an online game?

Comment » | Computer science
