Thursday 20 December 2007

 Is competition a problem?

Daniel Lemire and Peter Turney have had an argument about the benefits of competition versus cooperation in science.
Since my opinions are (as usual...) not too well received on other people's blogs, I will summarize here what else I see as problematic with competition.

In a subsequent post, Peter Turney mentioned that Einstein was inspired by a book by Henri Poincaré; to me this is a perfect example of what Daniel Lemire said:
"After all, being the first to solve a given scientific problem, is important.
Science is a winner-takes-all game, at least some of time."
Actually, it can be said that Einstein was "inspired" by the works of many (Poincaré, Lorentz, Minkowski, Hilbert, Grossmann, ...), though his groundbreaking publications did not include references to the work of others.
And after a few productive years (roughly 1905-1915), Einstein's findings appear remarkably bland and very much lacking in insight ("God doesn't play dice" re quantum mechanics...).
Putting aside how vexing this is for the forgotten contributors, and without detracting from the achievement of synthesizing the research trends of his time, this also breeds among the public a lot of misconceptions about science, lone geniuses and the like, which are used by a few scoundrels to sell snake oil or politically loaded agendas: "miracle cures" for this or that, creationism, toxic cults, etc...

So when Peter Turney says:
"There is a conflict between competition and cooperation in science.
I feel that conflict myself, but I strive for cooperation.
I have never regretted sharing my ideas."
maybe he is overlooking a few other nefarious effects of competition beyond the selfless/selfish antagonism.

But even the misrepresentation of science among the general public doesn't strike me as the most serious problem with science.
What seems much more dangerous is the end result of giving the "competitive naked apes" a lot of powerful gizmos, which they tend to use as carelessly as if they were stone axes.
Breaking a neighbour's skull with a stone axe is of course unfortunate but doesn't entail much damage; having Kim Jong-il, GW Bush and Ahmadinejad playing "nuke poker" is a game in a wholly different league (it seems to me...).

I do not share the optimism Peter Turney shows in his "Second Most Important Research Problem".
I see no good reason to suppose that cooperation can be agreed upon in most cases, because "private" interests, whether of groups or individuals, are likely to always conflict on one point or another.
There is no such thing as the "general good", and this is not for lack of rationality in trying to define it.
This is why, though I find AI a very valuable research goal, I am not really at ease with it.

Not because, like the silly paranoid Singularitarians (Hanson, Yudkowsky, Anissimov, etc...), I fear that the "Big Bad Autonomous AI" will take over humans and cull the masses of useless apes, but because said useless apes are very well able to turn any technical capability into "improved" means of pursuing their lethal competitive practices.

Because in the end it's not the "intelligent monkeys" who are in control but the ones with the biggest egos and the "biggest balls".

Submitted by Kevembuangga on Thursday 20 December 2007 - 10:02:39
