Friday, October 13, 2006

What Happened to Departmental Tech Reports?

Think back to the early 1990s, before we had a World Wide Web. You had a new result, a nice result but not so important that you would announce it by mass email. You wanted to time-stamp the result and make it available to the public, so you created a departmental technical report, basically by handing a copy of the paper to a secretary. You would get a report number, and every now and then a list of reports would be sent out to other universities, which could request a copy of any or all of the reports. Eventually the paper would go to some conference and journal, but the technical report made the paper "official" right away.

As the web developed, CS departments started putting their tech reports online. But why should you have to go to individual department web sites to track down each report? So people developed aggregators like NCSTRL that collected pointers to the individual papers and let you search across all of them. CiteSeer went a step further, automatically crawling and parsing technical reports and matching up citations.

But why divide technical reports by department at all? We each live in two communities, our university and our research field, and it's the latter that cares about the content of our papers. So now we see tech report systems organized by research area that bypass the department completely: area-specific systems like ECCC, or very broad repositories like arXiv that maintain lists for individual subareas.

What's next? Maybe I won't submit a tech report at all, letting search engines like Google Scholar or tagging systems like CiteULike organize the papers. Departmental tech reports still exist, but they don't play the role they once did, and who can predict how we will publish our new results even five or ten years down the road.


  1. I think the major issue is how to rank tech reports quickly -- in a matter of days -- so people know what's worth checking out.

    Bloggers do this to some extent, but one would expect social news and automated methods to help as well (e.g., those that take into account the reputation of the authors, whether the topic is "hot", etc.).

  2. Given that a mere tech report is easier to access than a journal publication, I would guess that the future of "scientific publication" will essentially be about evaluation. I think that results from mechanism design on peer-to-peer quality control would apply.

    One question concerning technical reports, though: it seems that sometimes people are allowed to "update" their results in *old* technical reports (i.e., without updating the date). Doesn't that contradict Lance's statement that a technical report makes a result "official"?

  3. For evaluation of tech reports, one might try an approach like this:

  4. I used to like the idea of people putting all new results out immediately on their web page. But I've since begun to think this is a bit unfair. (Maybe it's different in theory...that is not my field.) The problem is that you can put out a tech report with a half-baked idea, without going through the effort of backing it up (eg., through experiments). By avoiding the reviewing cycle, you get to time mark this much earlier than someone who did go through the effort to complete the baking process in order to make the result publishable. In addition to the annoying credit assignment problem, this means that (if the second paper can even get published, which is not a given) there are now two papers on essentially the same thing.