(Updated version of Computational Intractability: A Guide to Algorithmic Lower Bounds by Demaine-Gasarch-Hajiaghayi is here)
Any question like who first thought of X is often hard to answer. I blogged about who first came up with the Fibonacci numbers here. I've heard rumors that IBM had search engines way before Google but could not figure out how to make money off of them. There are other examples.
I had learned that Cobham defined P in the paper The intrinsic computational difficulty of functions, in 1965. It appeared in the proceedings of the Conference on Logic, Methodology, and Philosophy of Science. The paper is here. Jack Edmonds also had the notion of P, in the paper Paths, Trees, and Flowers (here), in 1965.
While it is true that Cobham defined P in that paper, and he might have been the first one to do so, was the notion somehow around earlier? I first thought the answer was no. Why? Because if you look at Joe Kruskal's paper on MST (see here) you don't see anything resembling time complexity. No O(E + V log V) or whatnot. So I thought that if the notion of "this algorithm runs in such-and-such time" was not in the air, then certainly no notion of P could have been.
Hence I was surprised when I accidentally (more on that later) came across the following:
In 1910 (really, 1910) H. C. Pocklington analyzed two algorithms for solving quadratic congruences and noticed that
one took time proportional to a power of the log of the modulus, whereas
the other took time proportional to the modulus itself or its square root.
THAT is the distinction between P and NOT-P.
The paper is titled The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity. It appeared in 1910, in the Proceedings of the Cambridge Philosophical Society, Volume 16, pages 1-5. (I could not find it online. If you know of a place online for it, leave a comment.)
ADDED LATER: Here is the article in pieces: Page 1, Pages 2-3, Pages 4-5.
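To see the contrast concretely, here is a minimal Python sketch (mine, not Pocklington's own presentation; the function names and the restriction to primes p with p = 3 (mod 4) are my choices) of two ways to find a square root of a modulo a prime p:

```python
# Two ways to find x with x^2 = a (mod p), for an odd prime p.
# A sketch of the contrast only; not Pocklington's own algorithms.

def sqrt_mod_bruteforce(a, p):
    """Try every residue: time proportional to the modulus p itself."""
    a %= p
    for x in range(p):
        if x * x % p == a:
            return x
    return None  # a is not a quadratic residue mod p

def sqrt_mod_fast(a, p):
    """For p = 3 (mod 4): a^((p+1)/4) mod p is a square root of a
    whenever one exists; repeated squaring runs in time poly(log p)."""
    assert p % 4 == 3
    x = pow(a, (p + 1) // 4, p)
    return x if x * x % p == a % p else None

print(sqrt_mod_bruteforce(7, 19))  # 8, since 8^2 = 64 = 3*19 + 7
print(sqrt_mod_fast(7, 19))        # 11, the other square root (11 = -8 mod 19)
```

For a 100-digit modulus the first loop is hopeless while the second finishes instantly; that is exactly the distinction Pocklington noticed.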
How did I come across this? And why had I NOT come across this in my roughly 40 years working in complexity theory?
I came across it while reading a blog post of Scott's, The Kolmogorov Option (see here), where Pocklington is mentioned in passing. I am surprised how arbitrary the set of things one knows can be. I have put the Pocklington story in the Demaine-Gasarch-Hajiaghayi book Computational Intractability: A Guide to Algorithmic Lower Bounds so that this knowledge gets to be better known.
ADDED LATER: That Cobham and Edmonds are known for discovering or inventing P is an example of the well-known
Columbus Principle: Things are named after the LAST person to discover them. (Note that Columbus was the last person to discover America.)
Bonus Question: For most principles that are not named after their author, the author is unknown. NOT in this case: I KNOW who coined the term "Columbus Principle." Do you? (It was not me.)
Scott is referring to a slightly different paper, with 1917 as the publication date.
Note that Don Knuth in TAOCP Vol. 2 has an exercise under
Factorization of Polynomials (Section 4.6.2); see Exercise 15
(Square Roots Modulo a Prime): Design an algorithm to calculate the square root of a given integer $u$ modulo a given prime $p$.
He mentions the paper that Scott is referring to
(i.e., H. C. Pocklington, Proc. Camb. Phil. Soc. 19 (1917), pp. 57-59).
I think that even Don only became aware of the Pocklington phenomenon later on ... in 1996 ... but that needs double-checking.
NB: There's another exercise in Section 4.5.4, suggested by H. C. Pocklington in 1914 (!), but this is related to Factoring into Primes.
Didn't Kurt Gödel consider quadratic time to be a sort of upper limit of feasibility in a letter to von Neumann in the 50s? Certainly not all of P directly, but the notion that a fixed power of n counts as feasible qualifies somewhat as a major precursor.
You are referring to Gödel's lost letter. A good article about this is Von Neumann, Gödel and Complexity Theory, by Urquhart, in the Bulletin of Symbolic Logic, Vol. 16, No. 4, pp. 516-530. And YES, there were SOME notions of time complexity in the air between Pocklington and Cobham/Edmonds, but not many.
Literally no one gives a shit
Cool findings! I mention the Pocklington paper in my lectures.
Nitpick: the "Columbus principle" is a counterexample to itself.
1) America is not named after him. 2) As for "Things are named AFTER the last person to discover them": I highly doubt that he discovered that principle at all.
A correction would be "Things are named BY the last person to discover them", but that doesn't take into account rediscoveries.
A better principle would be "Things are named by the person who popularizes them the most".
But then, we can take into account that things can have many names, and we get "things have many names; the popularity of a name is usually proportional to the popularity of the people using that name".
I propose to rename the "Columbus principle" to "Nothing should be named after Columbus except the Columbus principle".
AH, I should have said:
Credit is given to the last person to discover something.
Columbus is given credit for discovering America.
And yes, life is more complicated than that. It would be interesting to study how things get their name.
I don't know to what extent this is relevant, but the computational complexity of the Euclidean algorithm for GCD has been studied since the 19th century. See https://en.wikipedia.org/wiki/Euclidean_algorithm#Algorithmic_efficiency
Great, yes, very relevant! If my book were not already over 540 pages I might include that (I might anyway).
More generally, a history of awareness of computational issues, and of how formal that awareness was, could be... another whole book!
> I've heard rumors that IBM had search engines way before Google but could not figure out how to make money off of it.
You might be referring to IBM's early Text Analytics platform WebFountain here (Gruhl et al. 2004, IBM Sys. J.), which contained aspects of a search engine, but I wouldn't say it qualifies to be called a Web search engine. Before Google, there notably was AltaVista, though.
Hi, regarding von Neumann: the article [0] claims some notions of computational complexity appear in a paper of his from 1953. I would also add that on page 28 of [1] Gregory Chaitin claims that he first heard ideas about computational complexity from von Neumann some time around 1950, so perhaps sending him an email about this might give you a bit of history.
[0]: https://www.redalyc.org/articulo.oa?id=265219625004
[1]: https://link.springer.com/book/10.1007/978-1-4471-0185-7
From https://en.wikipedia.org/wiki/Euclidean_algorithm: "A more efficient version of the algorithm ... was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory."
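As a small illustration of Lamé's result (my sketch, with names of my choosing): the Euclidean algorithm performs at most about 5 times as many division steps as the smaller argument has decimal digits, i.e., a number of steps logarithmic in the inputs, and consecutive Fibonacci numbers achieve the worst case:

```python
# Euclid's algorithm with a step counter, illustrating Lamé's 1844 bound:
# at most about 5 * (decimal digits of the smaller input) division steps,
# with consecutive Fibonacci numbers as the worst case.

def gcd_steps(a, b):
    """Return (gcd(a, b), number of division steps performed)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# Consecutive Fibonacci numbers F(22) = 17711 and F(21) = 10946.
print(gcd_steps(17711, 10946))  # (1, 20); Lamé's bound allows 5*5 = 25
```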
FWIW, I think I learned about H. C. Pocklington from Jeff Shallit.
Regarding IBM anticipating Google, maybe you're thinking of the CLEVER project, which Jon Kleinberg led while at IBM Almaden around 1996? It was roughly contemporaneous with Brin and Page's PageRank.
In addition to Cobham and Edmonds, the third near-simultaneous (?) "introduction" of P that I knew of was Rabin, "Mathematical Theory of Automata" (Proc. Symp. Appl. Math. of the AMS, Vol. 19, 1967), see pp. 159-160; the symposium was held in 1966. Although technically (shortly) after Cobham and Edmonds, he does not cite them; I do not know whether he was aware of their work. I had always thought of the three (Cobham, Edmonds, Rabin) as near-simultaneous independent introductions of the same concept.
Also, my impression is that Cobham : Edmonds/Rabin :: Church : Turing, in the sense that Cobham introduced a definition of P but made no real argument for it, whereas Edmonds and Rabin both did. (Similarly, Church introduced the lambda calculus, but Turing was the one who really made a cogent argument that Turing machines actually captured what we intuitively mean by "computation".)
In the Symposium on the Theory of Graphs and its Applications held in Smolenice (Czechoslovakia) in June 1963, the Soviet graph theorist A. A. Zykov (then based at Novosibirsk) posed the problem of whether the Tutte polynomial of a graph --- or even its chromatic polynomial --- can be computed by a method that improves on brute force in the same way in which efficient methods for computing the determinant (e.g., Gaussian elimination) improve on direct calculation from the definition (as a sum over permutations of the row/column index set). This is in:
A. A. Zykov, Problem 31, in
Theory of Graphs and its Applications
(Proc. Sympos. Smolenice, 1963),
Publ. House Czechoslovak Acad. Sci., Prague, 1964, p. 165.
In his statement of this problem, Zykov didn't formulate a definition of polynomial time, and the problem is somewhat informal, but he did seem to be intuitively aware of the essential difference in character between polynomial- and exponential-time algorithms, and his problem expresses that difference in a way that his audience could immediately relate to.
He actually poses the question more generally, for representable matroids (though he does not use that term) given by an appropriate matrix representation, rather than just graphs.
He frames his question in terms of the coefficient array for the Whitney rank generating function rather than the Tutte polynomial. But it amounts to the same thing: the Whitney rank generating function and the Tutte polynomial differ only by a coordinate translation of 1 in each direction. His question about the chromatic polynomial is again framed in terms of its coefficients.
I discuss this briefly in section 12 of:
G. Farr, The history of Tutte–Whitney polynomials, in:
J. A. Ellis-Monaghan and I. Moffatt (eds.),
Handbook of the Tutte Polynomial and Related Topics,
Chapman and Hall/CRC, 2022, Chapter 34, pp. 623-668.
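To make the contrast Zykov appealed to concrete, here is a toy Python comparison (my sketch; the function names are mine, and it shows only the determinant side of the analogy, not Zykov's matroid formulation): the determinant computed straight from its definition as a signed sum over all n! permutations, versus Gaussian elimination, which uses on the order of n^3 arithmetic operations.

```python
# Determinant two ways: brute force over n! permutations vs. Gaussian
# elimination in O(n^3). A toy sketch of Zykov's determinant analogy.
from itertools import permutations

def det_by_definition(m):
    """Signed sum over all n! permutations (the Leibniz formula)."""
    n = len(m)
    total = 0.0
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def det_by_elimination(m):
    """Gaussian elimination with partial pivoting: ~n^3 operations."""
    a = [row[:] for row in m]
    n = len(a)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0  # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # a row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

m = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
print(det_by_definition(m), det_by_elimination(m))  # both 8.0, up to rounding
```

Already at n = 20 the permutation sum has about 2.4 * 10^18 terms while elimination needs a few thousand operations; Zykov was asking whether the Tutte (or chromatic) polynomial admits a similar shortcut.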