Comments on Computational Complexity: "Competitive Analysis" (Lance Fortnow) -- 23 comments

Anonymous (2005-11-05, 09:53):

> (PCA was invented before World War II, Viterbi's algorithm was invented in the context of digital communication, etc.)

If those algorithms are created with theory tools and justified with theory techniques, we can lay claim to them; there is no need to carry a theory "union card" to create a theoretical result.

Anonymous (2005-11-04, 14:24):

Claire's list indeed contains examples of high-impact mathematical and algorithmic notions. However, Mike's question was more specific: he was asking about the practical impact of a (specific) notion originating in TCS. While we (TCS) can claim credit for some examples (e.g., fast algorithms for suffix trees), I don't think we can do that for others (PCA was invented before World War II, Viterbi's algorithm was invented in the context of digital communication, etc.).

Anonymous (2005-11-03, 18:38):

> I can understand arguments for studying the complexity of specific problems (though I reserve the right to disagree :) ). The argument Lance's original post mentions in favor of looking at algorithms with "large" competitive ratios is that it forces us to look at, or tells us something about, the combinatorial structure of the problem, which would lead to better practical algorithms. Is there evidence that that is happening here? If so, that's great -- that would make a believer out of me -- but if not, is it worth the effort? Why is everyone so willing to just say that it is?

Because there are plenty of examples where initial work that seemed flaky, or not so interesting from the standpoint of practice, turned out to have a lot of impact. Take the k-server problem or metrical task systems: tree embeddings came out of them, and those had a huge impact on our understanding of many problems. Take property testing: one could question its usefulness, but again it produced nice mathematics. Not everything we do should be driven by practice. We explore things because our current techniques have limitations, and that leads to new things and expands our toolkit.

Anonymous (2005-11-03, 17:59):

Today, in preparation for my upcoming undergrad Algorithms course, I scanned the non-theory undergraduate courses in my department to see for which topics I would be able to claim applications. Here is what I found:

B-trees in the database management course

Suffix trees in the computational biology course

Greedy scheduling algorithms and the LRU cache algorithm in the operating systems course

Viterbi's algorithm and some other dynamic programming material for Markov decision processes in intro to AI

The random walk algorithm for 3SAT in intro to AI

Distributed hash tables

Spectral clustering in learning

Matrix algorithms (SVD, PCA, ...) in intro to vision

I think this means that these are all high-impact problems and algorithms. And some are approximation algorithms.

Michael Mitzenmacher (2005-11-03, 13:57):

First, might I suggest that it might be nice if some of these anonymous disagreers made themselves nonymous. Then we could take discussions off-line, and I'd at least have the courtesy of knowing who disagreed with me.

> What about correlation clustering? This topic had a chain of results, the first with some large constant factor approximation ratios. But the work that ultimately resulted was pretty interesting. What does Monsieur Mitzenmacher say about that?

I say, I'm not French. Does Mitzenmacher sound French to you? :)

Interesting is in the eye of the beholder. I perhaps don't know enough to say very specific things about this problem, but some general thoughts: the basic problem is certainly very nice and well motivated. Results like APX-hardness in the general case seem important, in that they motivate why people should look at and use heuristic algorithms.

For competitive analysis, I have the same basic questions I've had in my other comments, which apply to any problem. Do any of these papers code up and check the performance of these algorithms? Try them on somebody's real or even fake data? Do any of these papers refer to heuristics people actually use to solve these problems? Do they compare against those heuristics? I don't know what the best current approximation constants are, but if you told me that in O(n^4) time we could get within a factor of 2, I guess I wouldn't be very excited.

I can understand arguments for studying the complexity of specific problems (though I reserve the right to disagree :) ).
The argument Lance's original post mentions in favor of looking at algorithms with "large" competitive ratios is that it forces us to look at, or tells us something about, the combinatorial structure of the problem, which would lead to better practical algorithms. Is there evidence that that is happening here? If so, that's great -- that would make a believer out of me -- but if not, is it worth the effort? Why is everyone so willing to just say that it is?

Anonymous (2005-11-03, 12:01):

What about correlation clustering? This topic had a chain of results, the first with some large constant factor approximation ratios. But the work that ultimately resulted was pretty interesting. What does Monsieur Mitzenmacher say about that?

Anonymous (2005-11-03, 09:41):

> I am thinking more of papers (or chains of papers) I have seen that take some supposed practical problem -- usually a dramatic simplification -- and get some large constant factor approximation or worse, often for a problem where there are already heuristics that work well and are already used in practice or written about.

I don't think there is anything wrong with these chains so long as (a) they do not appear in top conferences and (b) they are honest about the fact that these are early efforts on a simplified version of the model. In particular, they should list among the open questions not just "lower the competitive ratio" but also "make the model realistic".

Why do I believe these chains are important? Because science is built on the shoulders of giants... and of average folks.

A lot of this exploratory work lays down the foundation for a "giant" to formalize the questions and give new direction to an incipient field. Complexity and analysis of algorithms existed before their acknowledged founders started them. What made them the founders was their ability to formalize and solidify the research program started by others.

Alex Lopez-Ortiz

Anonymous (2005-11-03, 01:23):

Mike,

Now that you've clarified your position, let me be the first to say that this is a sentiment many of us share. People have various aesthetics: some are searching for solutions that are useful, some for solutions that are beautiful, and others for solutions that are very difficult. And this diversity in goals and visions is something that tends to benefit us all in the long run.

Beauty is hard to fake, and although it's certainly possible to make your proofs look more difficult than they actually are, this cheap trick can only work so many times. Usefulness, on the other hand, seems far too easy to sell to theoreticians.

One sees a plethora of papers which are easy and ugly but get by on the merits of "working on an actual problem that arises in the real world." Such a claim is often bogus, and almost always unjustified. In fact, there is a whole culture surrounding this notion of fake usefulness.

I have seen comments from referees that say "The ideas in this paper are very nice and the proofs are ingenious, but their techniques only apply to the case when \epsilon is very small, and these do not often arise in practice."

Forget the fact that the case of vanishing \epsilon is where all the interesting mathematics lies, where there was a hole in our previous understanding, and which relates to a host of other open problems. Just know that the "large \epsilon" domain wasn't actually applied to anything anyway; in this way, fake usefulness begins to trump creativity and beauty.

There are people working brilliantly at the interface between theory and applications, and they rock. But fake usefulness degrades both the purity and the utility of our field, in addition to making us all look foolish.

Michael Mitzenmacher (2005-11-02, 21:46):

I appear to have been greatly misunderstood. To be clear, I think there is great work going on in approximation algorithms, especially right now, as many people have commented and as I tried to say clearly in my first post. This work is mathematically beautiful and gives fundamental insight into computation; while its practical importance might not be immediate, given that it has these other properties it's certainly very worthwhile, and it's likely to be practically important in the future.

I am thinking more of papers (or chains of papers) I have seen that take some supposed practical problem -- usually a dramatic simplification -- and get some large constant factor approximation or worse, often for a problem where there are already heuristics that work well and are already used in practice or written about. This seems quite clearly to be the type of work Lance was discussing in his original post, and it was the type of work I was referring to in my comment. SAT algorithms, for example, do not fit this paradigm, for reasons alluded to in other comments. There has been a lot of this type of work in our community, which we sometimes, or even often, accept into our top conferences. Lance brought up the question of how we should think of this type of work, and I gave my opinion, which I stand by.

The defense multiple people seem to have given is that yes, perhaps some of this work is not so good after all, but that is true everywhere. Of course. (When a friend told me "95% of all theory papers are crap", I immediately responded, "Well, then we're about 4% better than systems!") But when I see what I think are poor networking or information theory papers, or, even worse, bad directions in networking or information theory, I feel free to let my opinion of the work be known. (At least now that I have tenure. :) ) Should I not do the same here?

> Since the STOC deadline is tomorrow and Michael is on the PC, are papers on approximation algorithms going to take a hit :)?

Well, if your STOC paper has a competitive ratio of 2 or higher, but you've claimed it gives insight into some practical problem, you can go ahead and blame me if it doesn't get in. :)

Anonymous (2005-11-02, 19:35):

We are in the middle of a grand classification effort for approximation problems that was restarted with the PCP theorem and the GW MaxCut SDP algorithm after it hit an impasse in the late 1970s. It has been an exciting time to work on either the hardness of approximation or on approximation algorithms, and the many successes on long-standing problems have been justification enough for this effort.
Fundamental open problems, such as the approximability of Vertex Cover, still remain, and there are extensions of these results to economic domains that are interesting as well and will keep researchers working for quite a while.

It seems foolish to view the results of this classification effort primarily through the lens of the practicality of the guarantees they yield. That is simply beside the point.

The issue of heuristic algorithms is quite separate: many times we do not (yet) possess the analytical tools to understand the behavior of these algorithms; it is not that we don't care about them.

I have spent a fair amount of time working with practical SAT solvers and related algorithms for symbolic model checking and the like. Despite the fact that these algorithms routinely solve large problems, they are typically also extraordinarily brittle. Often one has to massage the instances significantly so that the methods work, or settle for something that isn't exactly the problem one began with. Everyone celebrates their successes but tends to downplay the many practical problems where the methods fail completely. Practitioners are generally very happy when theoretical analysis lets one begin to predict the behavior of these algorithms (or how to use them). Many of the most efficient SAT solvers used in practice employ algorithms that have close parallels to those we can analyze, so this has been a fruitful connection. Analyzing other algorithms used in practice (e.g., random-walk-related algorithms) seems to be quite a bit beyond current technology.

Paul Beame

Anonymous (2005-11-02, 16:30):

Since the STOC deadline is tomorrow and Michael is on the PC, are papers on approximation algorithms going to take a hit :)?

Anonymous (2005-11-02, 16:20):

> There are many ways to measure quality in theoretical computer science, including:
>
> (1) Mathematical beauty.
> (2) Resulting insight into the nature of computation.
> (3) Practical importance and impact.
>
> I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them -- we can beat the last paper's number -- rather than any exciting insight or beauty.

I agree. It is usually (1) and (2) that motivate (most) people working in the area. But it seems to me that people well versed in the area are able to be useful when they work on, or are queried on, questions of practical importance. Opinion may be divided on whether the training and experience play any role.

> Having worked on heuristic algorithms, I would say the case for (3) is vastly overstated by the TCS community, to the point where it makes us look bad and reinforces negative stereotypes about TCS. As a corollary, I think there are too many people, and too much work, on approximation algorithms.

Too much relative to what? Papers in networking, or in information theory, outnumber those in approximation algorithms by a huge constant factor. Do they have a proportionately larger impact? Not clear to me.

Any field of research, as it matures, will have incremental papers. If the fraction approached one, that would be a problem. But that is far from true for approximation algorithms. The area is about developing new techniques for designing algorithms; the number of "incremental" papers is evidence of the generality of the toolbox we continue to enrich.

As for overstating the case for (3), I think it is regrettable, but that is true for almost all of science. It is more a statement about our society than about our community.

> The only question really is one of economics. How many people should be supported to do this kind of theoretical work? Here I believe that market forces will play a role.
>
> I agree, but I think as a community we shape the market forces quite a bit, and we are susceptible to groupthink. I think in the future we will look back and see a lot of exciting ideas that have come out of the research on approximation algorithms, and even some cases of powerful practical impact. But we will also see a lot of incremental papers that were shallow and unimportant -- and that, in the end, perhaps far more resources were devoted to these problems than they merited.

Once again, you could replace approximation algorithms with any area of science and the first part of your claim would be true; that is the nature of science. Some of the shallow or unimportant papers result from not-entirely-successful attempts to understand deep questions, and those are a part of science. Others result from people seeing an easy opportunity to write such a paper, which I agree is regrettable, but that is not unique to any one area. "Far more resources" again seems more generally true only in hindsight. But I think there are substantially worse inefficiencies in our society, and we shouldn't be worrying about this one.

Anonymous (2005-11-02, 13:31):

To add my 3 cents on this important topic:

- I think that practitioners' knowledge of "the situations they care about" varies heavily from area to area, so it is really hard to make any generalizations. On one hand, you have communications people, who seem in most cases happy with modelling noise as additive or multiplicative Gaussian noise or its variants (bursts, etc.). On the other hand, you have the examples mentioned earlier in this thread; I would also add data analysis, where the structure of the data is often so complex (is your data Gaussian, clustered, drawn from a manifold, all or none of the above?) that it is hard to converge on a well-accepted model. And then you also have crypto/security, where malicious adversaries actually do exist.

- In my opinion, perhaps the main advantage of worst-case analysis is composability: you can combine algorithms in various unexpected ways and still have some understanding of their behavior. This is not directly beneficial for applications, but it enables discovering (often surprising) connections, which often (although perhaps not as often as one would like) have practical impact. E.g., in the aforementioned context of communications, the worst-case approach of Guruswami-Sudan for Reed-Solomon codes resulted in algorithms which (after lots of massaging) work great for Gaussian channels.

- I do not know how much practical impact approximation algorithms (for NP-hard problems) have had; I am sure other people can comment on that. But there are quite a few examples of the impact of (1+eps)-approximation algorithms for poly-time problems. For example, approximate algorithms for various streaming problems, such as counting the number of distinct elements, have had quite a significant impact on databases and networking.

Piotr

Anonymous (2005-11-02, 10:33):

Mitzenmacher says,

> I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them -- we can beat the last paper's number -- rather than any exciting insight or beauty.

That might be the case, but this is true of any kind of research as we grope toward the ambitious goal of beauty, simplicity, and utility. I see the same phenomenon in complexity (see the parameters for extractor constructions), learning theory, coding theory, etc.

> As a corollary, I think there are too many people, and too much work, on approximation algorithms.

It is fine to have this opinion, but it is preferable to let the corrective forces work naturally instead of having them dictated by a few people. One reason for the activity in approximation algorithms is the fact that we are making substantial progress on problems from both the upper- and lower-bound sides. Along the way we are discovering connections to such things as embeddings and Fourier analysis. This is exciting and attracts more people, and in the process a number of shallow or incremental papers might be written. However, that is true for all growth areas. Take, for example, the junk that is written in the name of wireless networks.
Kuhn has described this process in his famous book on scientific progress. My take is that the approximation algorithms bubble has a few legs left.

Anonymous (2005-11-02, 10:25):

> The contrast between the theoreticians' view that SAT is the prototypical intractable problem and the practitioners' view that SAT solvers routinely solve problems with thousands of variables is rather amusing.

Current SAT solvers fail on the DES circuit with probability 1. (The DES CNF has fewer than 2000 variables.)

Anonymous (2005-11-02, 09:17):

> There are many ways to measure quality in theoretical computer science, including:
>
> (1) Mathematical beauty.
> (2) Resulting insight into the nature of computation.
> (3) Practical importance and impact.

I think that (1) is correlated with simplicity, which is correlated with (3). The simplest ideas seem to be the ones that catch the attention of practitioners when I give talks, and those ideas are sometimes (not always, because sometimes they are mere observations without much depth) the ones with mathematical beauty.

Anonymous (2005-11-02, 09:00):

Of course, if you are a practitioner with a heuristic that solves the problem you're dealing with to within a 2% guarantee, then you shouldn't care about worst-case guarantees.

I think what Vazirani tried to say is that

1) sometimes practitioners don't already have such a heuristic (not all the world's computational problems are already solved), and

2) sometimes the heuristic is based on ideas developed for worst-case algorithms for the same or different problems.

Provable performance is a constraint that forces you to think hard about the problem. Hopefully some interesting insight will come out of it.

Michael Mitzenmacher (2005-11-02, 08:38):

There are many ways to measure quality in theoretical computer science, including:

(1) Mathematical beauty.
(2) Resulting insight into the nature of computation.
(3) Practical importance and impact.

I think there are many cases where approximation algorithms have (1) and (2) covered, but of course not always. I often find approximation algorithms papers now have a "game" feel about them -- we can beat the last paper's number -- rather than any exciting insight or beauty.

Having worked on heuristic algorithms, I would say the case for (3) is vastly overstated by the TCS community, to the point where it makes us look bad and reinforces negative stereotypes about TCS. As a corollary, I think there are too many people, and too much work, on approximation algorithms.

> I sense some practitioners think that such heuristics can do wonders in all situations and don't understand worst-case performance.

Not at all. They just often don't care about worst-case performance; they care about solving the problems they want solved. If you can code up your solution and show that, along with a good worst-case guarantee, it's even reasonably competitive with the heuristics they use, they'll be quite happy. How many people actually code up their approximation algorithms and see how they work? It's hard to claim you're having a practical impact when you don't code up your algorithm and compare against heuristics on standard benchmarks...

> The only question really is one of economics. How many people should be supported to do this kind of theoretical work? Here I believe that market forces will play a role.

I agree, but I think as a community we shape the market forces quite a bit, and we are susceptible to groupthink. I think in the future we will look back and see a lot of exciting ideas that have come out of the research on approximation algorithms, and even some cases of powerful practical impact. But we will also see a lot of incremental papers that were shallow and unimportant -- and that, in the end, perhaps far more resources were devoted to these problems than they merited.

Anonymous (2005-11-02, 08:16):

Vazirani's quote does not suggest that heuristics that perform well in practice are not worth studying or valuing.

The relationships between algorithms, complexity, and their practical utility have many facets, and it is not fruitful to summarize them in one or two soundbites. The truth is that theoreticians often study algorithms for their mathematical and intuitive appeal and not for practical merit. This has led to a number of successes and, for the same reason, has not had a uniform impact on practice. I don't view this as a negative. The only question really is one of economics: how many people should be supported to do this kind of theoretical work? Here I believe that market forces will play a role. A number of algorithms people write grants with applied CS people and interact with them, so I think the system is working overall. Of course, we should also evaluate our impact and research directions from time to time.
We have been reasonably successful at not splitting into pure theory vs. applied theory, unlike mathematics. However, there is always the danger that this can happen, and it is in the interest of the community to have a healthy mix.

Anonymous (2005-11-02, 08:05):

I recently had a query from someone who wanted a "genetic optimization" (his words) algorithm for an NP-complete problem. I sense some practitioners think that such heuristics can do wonders in all situations and don't understand worst-case performance. I think the people who "sell" these heuristics do a better job of salesmanship than the folks who are selling algorithms with a bounded performance (or competitive) ratio!

Anonymous (2005-11-02, 06:21):

I think(?) the last two posts miss Lance's point. His point seems to be that, on the one hand, Vazirani argues that even though the best provable approximation ratio for a given algorithm is something like 2, this is OK since "in practice" the algorithm achieves an approximation ratio more like 1.05. But then, on the other hand, Vazirani rejects "heuristic" algorithms that achieve approximation ratio 1.05 "in practice", since there is no proof of their worst-case performance. So it seems that Vazirani wants it both ways...

At least, that's what I took away from Vazirani's quote.

Anonymous (2005-11-02, 00:55):

It is quite hard to define what a "typical" input is; hence an algorithm that works well on typical inputs seems not to be a well-defined object. For example, practitioners (some of them, anyway) seem to think that so-called SAT solvers work wonderfully "in practice" on "typical inputs." They rave about solving instances with thousands of variables. But we all "know" that SAT is not in polynomial time. The last point in Suresh's comment is particularly relevant here. I think the SAT solvers use heuristics that, consciously or unconsciously, exploit structures that are rather special. Even if they work well on millions of examples in an application domain, that is an infinitesimal fraction of all possible inputs, especially on thousands of variables. Hence the claim that they work on "typical" inputs is not really justified. Most of these SAT solvers also seem to fail badly on randomly generated instances. On the other hand, little research seems to have been done to offer rigorous explanations of what kinds of special structures are exploited by these SAT solvers that make them so efficient on real-world instances. The contrast between the theoreticians' view that SAT is the prototypical intractable problem and the practitioners' view that SAT solvers routinely solve problems with thousands of variables is rather amusing.

Suresh Venkatasubramanian (2005-11-01, 23:38):

> If we truly want to sell these algorithms to the practitioners, should we focus on doing well in the situations they care about and then, only secondarily, worry about the performance in the worst case?

More often than not, practitioners don't quite know which situations they care about. When they do, and can make some attempt at defining them, one can often find a specialized subproblem that has its own provable guarantees.

Other times, practitioners' insistence on specific algorithms that "work well on all inputs" actually reveals hidden structure that can be elucidated formally. Two examples of this are the smoothed analysis of the simplex method by Spielman and Teng, where they provide one explanation for its phenomenal success, and the various analyses of EM and the k-means method, where again one can determine the specific instances where the general approach tends to work well.

In either scenario, the formal approach has great strengths: it either gives a new method with a formal guarantee, or provides a rigorous explanation (independent of benchmark data) of why a particular heuristic works well.
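[Editorial note] The worst-case-versus-typical-case gap argued over in this thread shows up concretely in the paging problem, the textbook setting for competitive analysis: LRU is k-competitive in the worst case for a cache of size k, yet on unstructured traces its fault count is usually close to the offline optimum (Belady's evict-the-page-used-farthest-in-the-future rule). The following is a minimal sketch, not taken from any commenter; the trace parameters are arbitrary choices for illustration.

```python
# Sketch: compare LRU's page-fault count against the offline optimum
# (Belady's farthest-in-future rule) on a random request trace.
import random
from collections import OrderedDict

def lru_faults(requests, k):
    """Page faults incurred by LRU with a cache of size k."""
    cache = OrderedDict()              # keys kept in recency order
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)       # mark p as most recently used
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = None
    return faults

def opt_faults(requests, k):
    """Page faults of the offline optimum (Belady's rule)."""
    cache = set()
    faults = 0
    for i, p in enumerate(requests):
        if p not in cache:
            faults += 1
            if len(cache) >= k:
                # evict the cached page whose next request is farthest away
                def next_use(q):
                    try:
                        return requests.index(q, i + 1)
                    except ValueError:
                        return float("inf")   # never requested again
                cache.remove(max(cache, key=next_use))
            cache.add(p)
    return faults

if __name__ == "__main__":
    random.seed(0)
    trace = [random.randrange(20) for _ in range(2000)]  # 20 pages, uniform
    l, o = lru_faults(trace, 10), opt_faults(trace, 10)
    # The empirical ratio l/o on such traces is typically far below the
    # worst-case competitive ratio k = 10.
    print(l, o, l / o)
```

An adversarial trace that cycles through k+1 distinct pages does drive LRU toward its worst-case ratio of k, which is exactly the tension between provable guarantees and "typical" behavior that the commenters are debating.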