Academic research is difficult to evaluate. To know the significance of an article, a result, or an experiment, one must know a great deal about the relevant field. It is probably fair to say that few people read research articles in great depth unless they work in precisely the area the article belongs to. A PhD thesis might cite hundreds of articles, but it seems natural that not all of them will receive the same degree of scrutiny from the author of the thesis.
Hence the trouble with obtaining funding for research. To obtain funding, you have to communicate something that seems incommunicable without the full commitment of the reader. Grant dispensers want a number on a scale: "what is the quality of this paper, between 0 and 1?" But that quality number cannot be communicated separately from the full substance of the paper and its surroundings. And so we end up with keywords, catchphrases that become associated with quality for short periods of time. They bypass this complexity, serving as an approximate way of signaling that you are doing research on something worthwhile.
This reflects a broader problem in society: evaluating authorities. I cannot evaluate my doctor's, my dentist's, or my lawyer's work, since I lack the necessary competence. Accordingly, I base my trust on the person and some of their superficial attributes, rather than judging the work itself. The same kind of shortcut sometimes becomes necessary in choosing which researchers to fund.
It also points to a faculty that must have evolved in human beings over millennia: the capacity for quickly evaluating important properties of things we do not understand well, whether danger, nutrition, or the like. The trouble is that this faculty does not translate well to research…