Comments on Computational Complexity: "A Theorem that should be better known" (Lance Fortnow)

Jrachman (2016-02-28):
This comment has been removed by the author.

GASARCH (2011-04-26):
The paper appeared in Theory of Computing Systems, Vol. 45, No. 3, 2009. They made us take out the appendix, which had an alternative proof of Higman's Lemma; however, the version on my website still has those appendices.

I doubt Steve Fenner will read this blog from 2006; you should email him directly.

Stefan (2011-04-24):
I believe that I, too, have found a simple proof of Higman's Theorem. My point of view, and the application I have in mind, seem to call for a different statement and a slightly stronger version: given a finite alphabet \Sigma, if we add permissible subsequences indefinitely, we must eventually terminate at \Sigma*. I haven't fully read anyone else's proof, so as to develop my own ideas freely (and also because I'm lazy about reading proofs), but I suspect most or all of the other proofs actually prove the same thing. I will eventually read at least one other proof, but in the meantime I have a query: am I right that the other proofs also prove the above stronger statement?

I also have a question and a recommendation for Steve specifically. I have a copy of your manuscript.
Where has it appeared, or where will it appear? My recommendation: you avoid well-quasi-orders, but you do use ordinary well-orders, at least implicitly, and perhaps you should say so in the manuscript.

Anonymous (2006-12-30):
There IS a recent textbook that contains Higman's Lemma together with a short (one-page) proof: Reinhard Diestel's "Graph Theory", where it is stated in the context of the graph minor theorem.

Anonymous (2006-10-13):
Imagine my surprise when my co-authors (Cortes and Mohri, "Learning Linearly Separable Languages") pointed out that another paper at ALT06, Fenner and Gasarch's "The Complexity of Learning SUBSEQ(A)", was using Higman's result. Imagine Steve's and my surprise when it turned out that a third paper at that same conference was also using Higman's theorem: de Brecht and Yamamoto, "Mind Change Complexity of Inferring Unbounded Unions of Pattern Languages From Positive Data". Perhaps the time was ripe for this obscure result to start getting some mileage... or perhaps it wasn't that obscure to begin with!

-Leo

Anonymous (2006-09-20):
This theorem appears on page 64 of John Conway's book "Regular Algebra and Finite Machines". I am not an expert in this area, but it seems to me that this text on finite automata contains a lot of material that is fundamental to automata theory yet has not been explored since the book was written in the early 1970s.
This is despite the book being cited in many papers in the computer science literature. I guess what I am trying to say is: if you want results that should be better known, go and read Conway's little book!

Anonymous (2006-01-26 23:07):
To clarify my words: the "form an antichain" claim does not depend on the partial ordering being a well-partial-ordering; only the finitude claim does.

Anonymous (2006-01-26 23:02):
Well, here's what I think I was reaching for.

Let G be a gate set, and let F(G) be the set of functions computable with G. Form gate-set equivalence classes [G] = {G' : F(G') = F(G)}, and partially order these classes by [G1] <= [G2] if F(G2) contains F(G1). (This is a partial order, not just a quasi-order, because it is antisymmetric.)

Suppose this order turns out to be a well-partial-ordering. Then the equivalence classes of the maximal non-universal gate sets form an antichain, so there can be only finitely many of them.

This falls short of the result you quote, because one of these function classes might have infinitely many maximal basis gate-sets.
Still, it's part-way there.

Scott (2006-01-26 18:03):
"Scott, is the second result you reference (which I agree is very cool) also naturally in the orbit of wqo/Robertson-Seymour-type results?"

I don't think so (but I could be wrong).

Andy D (2006-01-25 22:56):
OK, thanks. I actually just didn't notice that Bill had stated a k-ary generalization of what I proved (a typical CS lacuna: expecting that binary alphabets always capture the essential complexity). I'll think about k > 2.

Anonymous (2006-01-25 22:09):
"Am I missing something? I found a proof of Higman's Lemma pretty quickly. It uses Dickson's Lemma ..."

This is a good concise proof, but only of the binary case of Higman's result. It resembles a hybrid of Higman's proof and mine (see the link in my previous comment). Dickson's Lemma is essentially a restatement of the fact that (N^k, componentwise domination) is a wqo, and that part resembles Higman's proof. The question of whether SUBSEQ(L) has an excluded string is equivalent to the question of whether strings in L have unbounded 0-1 alternation. I generalize this idea to prove the general case for a k-ary alphabet.

(Higman's full proof uses the fact that (Sigma*, subseq) is a wqo for any finite alphabet Sigma.
Once this is established, the rest of the proof is easy and straightforward.)

By the way, I wasn't deliberately trying to avoid wqo's or Dickson's Lemma in my own proof; I just didn't know about them at the time (although I knew I was reproving a known result).

Finally, I can imagine a scenario where Higman's result is useful: a language L may be obviously closed downward under the subsequence relation, but not obviously regular. Higman's result says that L = SUBSEQ(L), so L is regular. For example:

L = {w in {a,b,c}* | w has at most 5 occurrences of a followed by b, and at most 3 of them have c in between}

Of course, what is obvious and what is not is in the eye of the beholder.

GASARCH (2006-01-25 21:53):
This comment has been removed by a blog administrator.

GASARCH (2006-01-25 21:57):
(I feel odd commenting on my own post.) YES, the proof given above of the SUBSEQ Theorem using Dickson's Lemma is correct. In fact, the proof of the SUBSEQ Theorem is NOT hard; I suspect that your proof and the standard one are the same proof. When I say it needs "wqo theory," I just mean that it would take some work to get to in an undergrad automata theory class, though it really could be done. It could even be in the textbooks; it would not take that many pages.
bill g.

Andy D (2006-01-25 21:35):
Scott, is the second result you reference (which I agree is very cool) also naturally in the orbit of wqo/Robertson-Seymour-type results?

Andy D (2006-01-25 21:25):
I guess Suresh alludes in his post to a proof technique that is much the same; I just want to argue that it's not arcane.

Andy D (2006-01-25 21:13):
(edited post)

Thanks for the cool pointer, Bill.

Am I missing something? I found a proof of Higman's Lemma pretty quickly. It uses Dickson's Lemma, which, now that I've looked up the terms, is I guess part of wqo theory; but that result has an easy, self-contained proof by induction (I learned about it in week 2 of an undergrad algebraic geometry course) and is beautiful discrete math. So I'm not sure why Higman's result can't be in more texts.

Dickson's Lemma: Let S be a set of k-tuples of natural numbers, and suppose S is upward closed: if v1 is in S and v2 dominates v1 coordinate-by-coordinate, then v2 is in S. Then there is a finite subset S' of S such that v is in S iff v dominates some element of S'. (The proof is by induction on k.)

Proof of Higman's Lemma: Let L be a language.
If every string is in subseq(L), then subseq(L) = {0,1}* is certainly regular; so suppose some string x is a forbidden subsequence for L.

Insert 0's and 1's into x so that the 0's and 1's alternate; the resulting string x' is also forbidden. Let k be the length of x'; then no string in L can have more than k alternations between 0 and 1.

Slice up the (k+1)-alternation-restricted strings according to how many 0-1 alternations (0 <= j < k+1) a string has and which bit (b) it begins with.

The strings in the (j, b) slice can be naturally encoded as (j+1)-tuples of natural numbers, bijectively for that slice, e.g.

000111011 -> (3, 3, 1, 2) in the (3, 0) slice;
11011 -> (2, 1, 2) in the (2, 1) slice;
11 -> (2) in the (0, 1) slice.

Then if a (j+1)-tuple v1 encodes a forbidden (j, b) string for L and v2 dominates v1, then v2 also encodes a forbidden (j, b) string. Thus, by Dickson's Lemma, the forbidden (j, b) strings are exactly those strings in the slice whose encodings dominate the encoding of one of a finite set of (j, b) strings. This is a finite disjunction of properties that are easy to test by finite automaton; using the closure of the regular languages under finite union and complement, and applying the easy check for too many 0-1 alternations, we find that subseq(L) is regular. QED

This does raise the question of how to actually produce the finitely many strings we need, given a description of L (Dickson's Lemma is nonconstructive).
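[Editor's note: the slice encoding used in this proof can be sketched in a few lines of Python; the helper names encode and dominates are illustrative, not from the comment.]

```python
from itertools import groupby

def encode(s):
    """Encode a binary string by its run lengths.

    Returns ((j, b), t): j = number of 0-1 alternations, b = first bit,
    t = the (j+1)-tuple of run lengths.  E.g. '000111011' lies in the
    (3, '0') slice and encodes to (3, 3, 1, 2), matching the example above.
    """
    runs = [(bit, sum(1 for _ in g)) for bit, g in groupby(s)]
    b = runs[0][0]
    j = len(runs) - 1
    return (j, b), tuple(n for _, n in runs)

def dominates(v2, v1):
    """Coordinatewise domination of two tuples from the same slice."""
    return len(v1) == len(v2) and all(x >= y for x, y in zip(v2, v1))
```

Within a fixed (j, b) slice, if v2 dominates v1 then each run of v1's string fits inside the corresponding same-bit run of v2's string, so v1's string is a subsequence of v2's string; that is why the forbidden strings of a slice form an upward-closed set of tuples.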
But of course it is undecidable to do this given just a machine for L; and in any case, as Bill says, who actually cares about subseq(L)?

Andy D (2006-01-25 20:48):
This comment has been removed by a blog administrator.

Scott (2006-01-25 17:36):
OK, I've got another result that ought to be better known in our community (though it IS well known in a different community).

Over a Boolean alphabet, what are the largest sets of gates that are NOT universal? Assuming the constants 0 and 1 come for free, it's easy to show that there are exactly two such sets:

(1) the monotone gates (AND, OR), and
(2) the linear gates (NOT, XOR).

But what if the alphabet has 3 or more elements? Then the problem is much more complicated, but it was solved by Ivo Rosenberg in the early 70's. In particular, Rosenberg showed that for ANY finite alphabet size, there are only finitely many "maximal but not universal" gate sets.

Scott (2006-01-25 15:25):
Bill, your challenge is awesome.

Given a sequence of real numbers a(1), ..., a(n), suppose we want to find the monotonically nondecreasing sequence that best approximates it in the least-squares norm. It turns out there's a beautiful linear-time algorithm to accomplish this.
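[Editor's note: this linear-time monotone fit is known in the statistics literature as the pool-adjacent-violators algorithm for isotonic regression. A minimal Python sketch follows; the function name and the stack-of-blocks representation are illustrative choices, not from the comment.]

```python
def isotonic_fit(a):
    """Best nondecreasing least-squares fit via pool adjacent violators.

    Maintain a stack of [value, weight] blocks; whenever the top two
    blocks violate monotonicity, merge them into their weighted average.
    Amortized O(n): each merge permanently removes one block.
    """
    stack = []  # list of [value, weight]
    for v in a:
        stack.append([v, 1.0])
        while len(stack) >= 2 and stack[-2][0] > stack[-1][0]:
            v2, w2 = stack.pop()
            v1, w1 = stack.pop()
            stack.append([(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2])
    # Expand each block back to its multiplicity.
    return [v for v, w in stack for _ in range(int(w))]
```

For example, isotonic_fit([1.0, 3.0, 2.0]) merges the violating pair 3, 2 into their average and returns [1.0, 2.5, 2.5].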
I was elated to come up with it as a summer student at Bell Labs, until I learned that Kruskal had beaten me to it by about 35 years.

(1) Create a linked list, where initially the ith element has a "value" of a(i) and a "weight" of 1.

(2) Repeatedly look for adjacent elements i and i+1 such that a(i) > a(i+1). Whenever you find such a pair, replace it by a SINGLE element of weight w(i)+w(i+1) and value equal to the weighted average [w(i)a(i)+w(i+1)a(i+1)]/[w(i)+w(i+1)]. Continue until a(i) <= a(i+1) for all i.

(3) Output a list of n elements, where a(i) in the final list occurs with multiplicity w(i).

Exercises: Why does this work? Why can it be made to run in linear time?

Suresh Venkatasubramanian (2006-01-25 14:58):
Actually, I had written about Higman's Lemma in this post: http://geomblog.blogspot.com/2005/07/word-algebras-and-strange-grammars.html. I did not use the formulation of Higman's Lemma that you state here, which is much more compelling than the variant I used. We even used it in a paper (referenced in the post).

Anonymous (2006-01-25 14:48):
"Steve Fenner has a proof of the |\Sigma|=2 case that does not need wqo theory, but is LOOOOOOOOOOOONG. He thinks he can do a proof for the |\Sigma|=3 case, but that will be LOOOOOOOOOOOOOOOOOOOOOOOOONG."

Actually, I have a proof of the general case, not using wqo's, that's 4-5 pages.
It is at http://www.cse.sc.edu/~fenner/papers/higman.pdf. The binary case is easier than the general case, but the ternary case probably isn't.

Anonymous (2006-01-25 14:00):
"Is there any reason why everyone wants to call recursive anything computable anything, beyond the hope that sticking in the word computation will make more people interested?"

Yes indeed. The word "computable" describes the objects referred to much more closely than "recursive" does; historically, "recursive" refers only to a particular model of computation, and nowadays students too often confuse it with a particular strategy for designing algorithms. It's incongruous to talk about Turing machines and call the functions they compute "recursive." In what sense do Turing machines recurse?

Anonymous (2006-01-25 13:07):
Is there any reason why everyone wants to call recursive anything computable anything, beyond the hope that sticking in the word "computation" will make more people interested?