Science needs time to think. Science needs time to read, and time to fail. Science does not always know what it might be at right now. Science develops unsteadily, with jerky moves and unpredictable leaps forward—at the same time, however, it creeps about on a very slow time scale, for which there must be room and to which justice must be done.

Computer Science is inherently a fast discipline. Technology changes quickly, and if you don't publish quickly your research may become irrelevant. We have created a culture of conferences and deadlines that pushes us, even those of us on the more theoretical end, to finish our projects quickly and publish the best we have by the next due date.
But we need to step back and take time to consider the great challenges we face in computing. Issues like privacy, big data, the Internet of Things and cloud computing need sustained thought to find the right approaches, not just short-term projects. In computational complexity we have grand challenges in understanding the power of efficient computation, but too often we just tweak a model to eke out another publication.
Andrew Wiles proved Fermat's Last Theorem by toiling on the problem alone for several years, just before the Internet revolution. Will we see a slow science success again in this new age?
You already have several posts on your blog about various breakthrough results (which sceptical commenters often call "breakthrough results").
Wouldn't they deserve to be considered successes of slow science?
Calvin: "You know how Einstein got bad grades as a kid? Well mine are even worse!"
Grigori Perelman's slow science was done after the Internet revolution, so that is an example of a slow science success in the new age.
There are no incentives in CS for slow science. The reality is that you need X papers before you graduate to get a job, and Y papers to get tenure. If we change how people are evaluated, then people will work differently.
There are of course people who have done very well with few papers, but that is a very risky strategy.
The problem is that you need X papers, not papers of a certain quality. This forced horse race is doing more damage to science than it should. It is much easier to measure the number X of papers than the quality of the papers. Program managers and higher-ups tend to take the easier route, or to favor things that get media attention over a high-quality paper on a difficult concept. The quality of a paper is measured either by a collective vote of peers (on a short time scale) or by the impact accumulated over years (on the long time scale), but only exceptional papers get a collective vote at all, and even a positive collective vote is no guarantee of long-term impact. I think there is always a thread of slow science going on, but it is embedded in the noise of the fast "science".
In TCS (and specifically complexity) you may need a certain number of papers, say two or three per year, but their quality matters far more than their quantity. If you don't publish in first-tier conferences and journals, and if your papers are not deep enough, do not address the big questions, or are not sophisticated or innovative enough, then you're not going to make it at any reasonably good school, no matter how much you publish.