In 1976, Juris Hartmanis and Leonard Berman posed the isomorphism conjecture: for every pair of NP-complete sets there is a reduction from one set to the other that is one-to-one, onto, polynomial-time computable and polynomial-time invertible. A corollary of the conjecture is that every NP-complete set must contain many strings. They asked whether there could be any sparse NP-complete sets, where a set is sparse if the number of strings of length n it contains is bounded by n^k for some k.
Steve Mahaney settled this question in 1982. Mahaney's theorem states that if P≠NP then there are no sparse NP-complete sets.
Before Mahaney, Piotr Berman (no relation to Leonard) showed in 1978 that there can't be NP-complete tally sets, where a tally set is a subset of 1*. Steve Fortune extended this work to show that co-NP cannot have sparse complete sets. (Both results assume P≠NP.)
To adapt Fortune's techniques to NP-complete sets, Mahaney had to find a way to know when strings were not in the sparse set. If one knew how many strings were in the set, and one found all of those strings, then one would know that every remaining string was not in the set. Mahaney then showed that one could simply try all possible sizes of the sparse set. This neat idea of finding what's not there by finding everything that is there played a role in many future results in complexity, most notably in the proof that nondeterministic space is closed under complement.
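The counting trick can be illustrated with a toy sketch (the function and names here are hypothetical illustrations, not pieces of Mahaney's actual proof): if the exact census of the set at a given length is known, then exhibiting that many distinct members certifies that every other string is not in the set.

```python
def certify_non_membership(claimed_members, census, x):
    """Certify that x is NOT in the sparse set S.

    claimed_members: strings known (and separately verifiable) to be in S.
    census: the true number of strings of this length in S.
    If we exhibit `census` distinct members, none equal to x, then x cannot
    be in S -- we have accounted for everything that IS there.
    """
    distinct = set(claimed_members)
    return len(distinct) == census and x not in distinct

# Toy sparse set: exactly two strings of length 2.
members = ["00", "11"]
print(certify_non_membership(members, 2, "01"))  # True:  "01" is provably out
print(certify_non_membership(members, 2, "00"))  # False: "00" is a member
```

Since the census isn't known in advance, Mahaney's algorithm tries every candidate census value; sparseness guarantees there are only polynomially many.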
In 1991, Mitsu Ogihara and Osamu Watanabe gave a simpler proof using left sets, where the left set is the set of pairs (φ,w) such that w is lexicographically smaller than some witness for φ. Ogihara and Watanabe's paper also extended Mahaney's theorem to show that a reduction from SAT to a sparse set that asks only a constant number of queries would imply P=NP. Whether a reduction using O(log n) queries implies P=NP remains open, even in relativized worlds.
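To see why left sets are useful, here is a minimal sketch of how an oracle for the left set lets one recover a SAT witness (helper names are hypothetical; for simplicity it uses the non-strict variant, "w is ≤ some witness", and a brute-force oracle standing in for the real reduction to the sparse set): binary search over n-bit assignments, encoded as integers, finds the lexicographically largest satisfying assignment with O(n) oracle calls.

```python
def satisfies(clauses, n, w):
    """Does the n-bit assignment w (an int, variable 1 = most significant
    bit) satisfy the CNF formula? A clause is a list of ints: i means
    variable i is true, -i means variable i is false (1-indexed)."""
    bit = lambda i: (w >> (n - i)) & 1
    return all(any(bit(abs(lit)) == (1 if lit > 0 else 0) for lit in clause)
               for clause in clauses)

def left_set_oracle(clauses, n, w):
    """Left-set membership (non-strict variant): is w <= some witness?
    Brute force here -- in the real argument this question is answered via
    the reduction to the sparse set; only how it is *used* matters."""
    return any(satisfies(clauses, n, x) for x in range(w, 2 ** n))

def find_witness(clauses, n):
    """Recover the lexicographically largest satisfying assignment with
    O(n) oracle queries by binary search over 0 .. 2^n - 1."""
    if not left_set_oracle(clauses, n, 0):
        return None  # unsatisfiable: no w is below or equal to any witness
    lo, hi = 0, 2 ** n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if left_set_oracle(clauses, n, mid):
            lo = mid        # some witness is >= mid
        else:
            hi = mid - 1    # all witnesses are < mid
    return lo

# (x1 or not x2) and (x2 or x3): the largest witness is 111, i.e. 7.
print(find_witness([[1, -2], [2, 3]], 3))   # 7
print(find_witness([[1], [-1]], 1))         # None (unsatisfiable)
```

The key structural point is that the left set is in NP and closed leftward under the lexicographic order, which is what Ogihara and Watanabe exploit when pruning candidate strings in their reduction to the sparse set.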