When we see "natural" in a computer science paper it usually reflects an informal idea of realism; for example, Clique is a natural NP-complete problem while 1-in-3 SAT is not. Sometimes, though, researchers use "natural" as a defined term. Generally this should be avoided: no definition can prevent artificial examples, and, more importantly, perfectly reasonable notions that do not fit the rule are, by definition, not natural.
I work with bits and usually take my logarithms base 2, an unnatural logarithm. I use diagonalization to prove lower bounds on Turing machines, an unnatural proof technique applied to an unnatural computing model. I have even been known to use unnatural numbers, like 1/2.
What do you expect, since I study an unnatural science?