Thursday, July 22, 2010

CRA Snowbird Part I

I just returned home from my first trip to the CRA Snowbird Conference, the biennial meeting of CS chairs and other leaders in the CS community. I really enjoyed the short meeting: I saw many old friends who are now chairs, met some people I had previously known only by email, and talked with many others for the first time. A few theorists have become chairs and deans, and one, Jeff Vitter, is now provost at Kansas. Unlike at theory conferences, where I am usually one of the older people, most CS chairs are just about my age.

I, as I had to remind most people I met, am not a chair. I attended as a speaker on the Peer Review in Computing Research panel, giving my usual spiel on how the current publication culture hurts the community-building aspects of conferences. Jeannette Wing made a great argument about how our deadline-driven research and conservative program committees may lead our field to lose its "vibrancy, excitement and relevance".

A number of people talked about the big projects they work on, which made me almost rethink having gone into theory. Almost. The need for better algorithms shows up in many of these talks. Yoky Matsuoka from U. Washington talked about the artificial hand her group developed: it has the full range of motion of a human hand, but they lack the algorithms to make it perform even simple natural tasks. Illah Nourbakhsh from CMU talked about building electric cars and his idea of using a supercapacitor as an energy cache for the batteries, letting the batteries be smaller and cheaper, though it raises challenging cache-optimization issues. The group is running a contest: the best algorithm wins an electric car.
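The cache analogy translates directly into code. Here is a minimal toy sketch of the idea (the greedy policy, capacities, and demand trace are all illustrative assumptions, not the actual ChargeCar controller): the supercapacitor absorbs the power spikes so the battery only ever sees a smooth, rate-limited load.

```python
# Toy model of a supercapacitor as an "energy cache" in front of a battery.
# All numbers and the greedy policy are illustrative assumptions, not the
# actual ChargeCar controller.

CAP_MAX = 50.0     # supercapacitor capacity (arbitrary energy units)
BATT_RATE = 10.0   # max power the battery supplies per time step

def step(demand, cap_charge):
    """Serve one time step of demand; return (battery draw, new cap charge)."""
    from_cap = min(demand, cap_charge)             # spikes come out of the cap
    from_batt = min(demand - from_cap, BATT_RATE)  # battery covers the rest, rate-limited
    # (demand beyond cap charge + battery rate goes unmet; ignored in this toy)
    cap_charge -= from_cap
    spare = BATT_RATE - from_batt                  # idle battery capacity...
    cap_charge = min(CAP_MAX, cap_charge + spare)  # ...trickle-charges the cap
    return from_batt, cap_charge

# A driving-style trace: light load punctuated by acceleration-sized spikes.
trace = [2, 3, 40, 2, 2, 35, 1, 4, 50, 2]
cap = CAP_MAX
for t, demand in enumerate(trace):
    batt, cap = step(demand, cap)
    print(f"t={t} demand={demand:>2} battery={batt:4.1f} cap={cap:5.1f}")
```

On this trace the battery never supplies more than 10 units per step even as demand spikes to 50. The hard algorithmic question, presumably where the contest lies, is the recharge policy: when and how aggressively to refill the cap given imperfect predictions of future demand, which has exactly the flavor of caching and prefetching problems theorists know well.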

Sally Fincher from the University of Kent gave a surprisingly strong talk, "Why Can't Teaching Be More Like Research?" Our research is judged by how we compare to the global community, but teaching is much more local, and Sally talked about the implications of this distinction.

Most disappointing was the discussion on the NRC Rankings. Charlotte Kuh, who served on the NRC committee putting together the "soon"-to-be-released report, said it will not give a specific ranking of each department but rather a range, like University of Southern North Dakota is ranked somewhere between 8th and 36th. And not just one range but five ranges, based on different weightings of the various criteria. And you can create other rankings based on your own choice of weights. All based on 2005-6 data. And they used citation data from the ISI, which doesn't include most CS conferences. The CRA board talked them out of that, but now the CS data and rankings will use no citation information at all. But even outside of CS, with multiple ranking ranges and old data, the NRC report will be of little value.
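To see why weight choices turn a ranking into a range, here is a minimal sketch (departments, criteria, scores, and weight profiles are all invented for illustration; the NRC's actual methodology derives its weights from surveys) of computing a rank range per department across several weightings:

```python
# Sketch: per-criterion scores + several weight profiles => rank ranges.
# Departments, scores, and weightings are invented for illustration.

departments = {
    "A": {"pubs": 0.9, "funding": 0.4, "placement": 0.7},
    "B": {"pubs": 0.6, "funding": 0.9, "placement": 0.5},
    "C": {"pubs": 0.5, "funding": 0.6, "placement": 0.9},
}

weightings = [
    {"pubs": 0.6, "funding": 0.2, "placement": 0.2},
    {"pubs": 0.2, "funding": 0.6, "placement": 0.2},
    {"pubs": 0.2, "funding": 0.2, "placement": 0.6},
]

def ranks_under(weights):
    score = lambda d: sum(weights[c] * v for c, v in departments[d].items())
    ordered = sorted(departments, key=score, reverse=True)
    return {d: ordered.index(d) + 1 for d in departments}

ranges = {d: [] for d in departments}
for w in weightings:
    for d, r in ranks_under(w).items():
        ranges[d].append(r)

for d, rs in sorted(ranges.items()):
    print(f"Department {d}: ranked between {min(rs)} and {max(rs)}")
```

Run on these made-up numbers, every department can claim a top spot under some weighting, which is precisely what makes a range like "8th to 36th" so hard to act on.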

7 comments:

  1. Sally Fincher gave a surprisingly strong talk, "Why Can't Teaching Be More Like Research?"

    IMHO, the only thing "surprising" about this comment is the word "surprising."

    The cognate comment in medicine would be "Why can't resident education be more like medical research, teaching, and practice?"

    And the medical answer is simple: by design, every aspect of resident education encompasses, as nearly as feasible, the daily practical realities of medical research, teaching, and practice.

    It has been my observation that this natural coupling of learning to practice is the main reason why women especially (and men too) prosper in medicine ... it's because the educational methods make sense.

    Ed Wilson has a passionate description of this "kinetic" way of learning on page 124 of his new novel Anthill, beginning with the phrase "What is the best way to learn a frog?"

    My observation is that Wilson's way is the best way to learn not only frogs, but every element of naturality, whether in math, science, engineering, or medicine.

  2. Sally Fincher's slides are available at http://www.cs.kent.ac.uk/people/staff/saf/cra-talk/snowbird-presentation.camproj1.html

  3. On the NRC rankings -- or lack thereof: one is left to wonder exactly why a discipline so comfortable with handling rich and complex data sets does not let its own community create its own metrics and evaluations by making all the raw data available.

    Some of the raw data are already publicly available: peer-reviewed publications and funding figures. Staffing numbers are published and can be organized into a data set. What else is needed? Square footage of office and lab space? h-index values for faculty members? Attrition rates for PhD programs? Coffee consumption? Everything we need is quantifiable and can be measured, I believe.

    Create an open database, publish all the data, and let members of the CS community construct their own evaluation formulae. This is not difficult to do.

    If the NRC is attempting to capture subjective reputation, let it keep trying to appease egos and politics. In the meantime, please open up the data sets and let those with basic proficiency in Excel, R, perl or whatever tool they wish to use conduct their own analysis! (See the sketch after the comments for how easily one such metric falls out of raw data.)

  4. Even top departments do not want potential graduate students to have this data.

  5. "Create an open database, publish all the data, and let members of the CS community construct their own evaluation formulae. This is not difficult to do."

    What data? Good ranking does not equal bean counting, no matter how you weight the beans.

  6. @anonymous 8:48 pm:

    OK, if rankings based on data are not good rankings then

    (a) NRC's rankings are not good rankings and

    (b) pray tell what good rankings are.

  7. "Good ranking does not equal bean counting, no matter how you weight the beans."

    Then how do you rank places? Ask people to give their opinion? Isn't that how "Princeton Law School" ends up getting votes in some rankings?

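A footnote to comment 3's point about metrics falling straight out of raw data: here is a minimal h-index sketch (citation counts invented for a hypothetical faculty member), the kind of computation an open database would make nearly a one-liner in R or Excel as well.

```python
# h-index: the largest h such that h papers each have at least h citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# Invented citation counts for one hypothetical faculty member.
print(h_index([42, 17, 11, 9, 6, 3, 1, 0]))  # -> 5
```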