Are Computer Scientists Hypercritical?


Are computer scientists hypercritical, even more critical than scientists and engineers in other disciplines? Bertrand Meyer from ETH Zürich cites statistics that, in 1-to-5 evaluations of projects submitted to the NSF, the average grade of computer science projects is one full point lower than the average for other disciplines.

“I have three hypotheses,” said Jeannette M. Wing, formerly of NSF. “One is that it is in our nature. Computer scientists like to debug systems. We are trained to consider corner cases, to design for failure, and to find and fix flaws. Computers are unforgiving when faced with the smallest syntactic error in our programs; we spend research careers designing programming languages and building software tools to help us make sure we don’t make silly mistakes that could have disastrous consequences. It could even be that the very nature of the field attracts a certain kind of personality. The second hypothesis is that we are a young field. Compared to mathematics and other science and engineering disciplines, we are still asserting ourselves. Maybe as we gain more self-confidence we will be more supportive of each other and realize that ‘a rising tide lifts all boats.’ The third hypothesis is obvious: limited and finite resources. When there is only so much money to go around or only so many slots in a conference, competition is keen. When the number of researchers in the community grows faster than the budget — as it has over the past decade or so — competition is even keener.”

Read the Full Story.


  1. I’d like to propose a fourth hypothesis: that computer scientists’ behavior is affected by regular interaction with their even more hypercritical colleagues in the computer industry. Working programmers are notorious for their tendency to trash others’ work. In addition to the hypotheses you mention, common theories include the relatively high number of programmers who are socially inept or maladjusted to the point of it being a medically diagnosable condition (most often Asperger’s), or the deliberate rejection of social nicety as a form of inefficiency (related to the previous hypothesis, in my opinion). I’ll add one more. Many working programmers work in the intensely competitive startup culture. In addition to all the usual competitive behaviors, the notion of “disruption” has really taken root there. It’s not sufficient to prove that your product is good, or even that it’s better than some other; many startup folks seem to think it’s necessary to show that previous approaches are *total* dead ends with no possible value or chance of redemption now that The New Hotness has come along to disrupt that entire segment of the industry. Even attempts to adapt older technology to newer needs are derided as investments in a failed model. Academics in general don’t seem to do this. They look for value even in projects that might have failed overall, gleaning what they can from the described experience instead of rejecting it wholesale. Given the free movement of people between CS academe and industry, though, it might not be surprising that the startup crowd’s “scorched earth” tactics have not only set the tone for the rest of industry but gained a foothold in the ivory tower as well.