Friday, October 21, 2011

The Cup Holder Principle


The story goes that when Toyota engineers started designing the first cup holders in the 1980s, they went to a local 7-11 and bought every different cup 7-11 carried, to make sure their design would work with all the cups in current use. These days, cup designers have to make sure their cups will work in today's cup holders.

This is one of the starkest examples of a reversal: the technology for A was initially driven by B, and now the technology for A drives the technology for B.

How much does this happen in theoretical computer science? Do we design algorithms to use an already established data structure? Do we modify our definitions of some object to make it a group or a field? Do we create a cryptographic protocol so that some "standard assumption" makes it secure?

Are these good or bad things? Sometimes it is genuinely useful to make square pegs fit into round holes; other times we miss new opportunities.

9 comments:

  1. I'd say worst-case analysis is a glaring example

    ReplyDelete
  2. We design tomorrow's CPUs to make today's benchmarks run fast, and use languages designed to run fast on yesterday's CPUs (C designed for the PDP-11, etc.).

    ReplyDelete
  3. Originally, Google designed a search engine that would
    help you find relevant answers.

    Now companies try to design their websites so that
    Google will find them.

    ReplyDelete
  4. Perhaps complexity theory's standard-size cup-holders P/NP (specified in the 1970s) are needlessly large? Because in practice, the drinks (≈ algorithms) people put in these holders belong to a restricted class whose volume (≈ run-time exponent) is decidable.

    After all, there's not much demand for drinks whose volume is undecidable; how would one even price these drinks?

    Moreover, if we shrink both of these cup-holders, perhaps it will become more readily apparent whether the NP cup-holder is the larger of the two (as most people think).

    The preceding is an informal statement of (what I take to be) a recurring theme of Juris Hartmanis' research.

    ReplyDelete
  5. Parameterized complexity theory?

    ReplyDelete
  6. This happens all the time in society. That's how you know you've won the social battle.

    Echoing what GASARCH said. Originally, financial analysts/accountants devised metrics to determine which businesses were well run. Now businesses try to figure out ways to run themselves to match what financial analysts want to see.

    Or, turning more to the tech field: people are starting to fear robot competition, but it's not just because robots or algorithms are improving in judgment to match human performance. It's because jobs/questions are being redefined to suit robots' capabilities. Soon humans will have to learn how to think in robotic ways to fit the new definitions. You already see this happening on the web, in finance, and in retail data mining.

    This typically is not a good thing, IMO. It heralds the slow death of a field of study when it turns inward like this. The silver lining is that there is then room for another field to rise and prove itself a more accurate way to see the world.

    ReplyDelete
  7. Without pressure for constant success and for new, different, and better results, things wouldn't move in the way described. All things are correlated, and it's natural for some variables to evolve first while others simply follow that process.
    Why are all the comments here in English?

    ReplyDelete
  8. Shall we be driven by the beauty of theory or by the benefits of application?

    Who knows ...
    |=

    ReplyDelete
  9. Does this not happen in nature?
    Evolution is an adaptation to the evolution that came before.

    ReplyDelete