BREAKTHROUGH in Algorithms: Improved Algorithm for Metric TSP

(Guest Blog by Mohammad Hajiaghayi)

We all recently heard about the breakthrough complexity result by Ryan
Williams on non-uniform circuit lower bounds. Here is a breakthrough from
the algorithms side.

Many of us have heard of Christofides' algorithm from 1976 for the metric
Traveling Salesman Problem (TSP): given an undirected graph G with
nonnegative edge weights satisfying the triangle inequality, the goal is
to find a shortest possible tour that visits each city exactly once. The
algorithm and its analysis are very simple. Compute a minimum spanning
tree (MST) T of G, find a minimum-weight perfect matching M (a T-join) in
the complete graph over the vertices with odd degree in T, combine the
edges of M and T to form an Eulerian circuit, and shortcut the circuit by
skipping visited nodes. This simple algorithm is a 3/2-approximation for
metric TSP, since both the weight of T and twice the weight of M are
lower bounds for the optimum. One can write a natural LP for the problem,
known as the Held-Karp relaxation, whose integrality gap is at most 3/2,
essentially via Christofides' algorithm. On the lower bound side, we know
that the integrality gap of this LP is at least 4/3.
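For concreteness, the Held-Karp relaxation can be written as the following LP, where x_e is the fractional indicator of edge e, delta(S) denotes the set of edges crossing the cut (S, V\S), and x(F) is shorthand for the sum of x_e over e in F:

```latex
\begin{align*}
\text{minimize}\quad & \sum_{e \in E} c_e x_e \\
\text{subject to}\quad & x(\delta(v)) = 2 && \forall\, v \in V, \\
& x(\delta(S)) \ge 2 && \forall\, \emptyset \ne S \subsetneq V, \\
& 0 \le x_e \le 1 && \forall\, e \in E.
\end{align*}
```

Christofides' algorithm shows that the optimal integral tour costs at most 3/2 times the value of this relaxation.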
(ADDED BY BILL: According to Wikipedia it is known that you can never get
an approximation better than 220/219 = 1.00456... times opt, unless P = NP.
ADDED LATER: Here is the link to the Wikipedia post and here is the paper
by Papadimitriou and Vempala that contains the result. The post led me to
find it.)
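The steps above (MST, matching on the odd-degree vertices, Eulerian circuit, shortcutting) can be sketched in self-contained Python. This is an illustrative toy, not code from any paper: the matching step is done by exhaustive search, so it only works on small instances.

```python
def christofides(dist):
    """3/2-approximate metric TSP tour (Christofides, 1976).

    `dist` is a symmetric matrix of nonnegative distances satisfying the
    triangle inequality. Returns (tour, length), where tour starts and
    ends at vertex 0. Small-instance sketch: the matching is brute force.
    """
    n = len(dist)
    # 1. Minimum spanning tree T via Prim's algorithm.
    mst = []
    best = {v: (dist[0][v], 0) for v in range(1, n)}
    while best:
        v = min(best, key=lambda x: best[x][0])
        w, u = best.pop(v)
        mst.append((u, v))
        for x in best:
            if dist[v][x] < best[x][0]:
                best[x] = (dist[v][x], v)
    # 2. Vertices of odd degree in T (always an even number of them).
    deg = [0] * n
    for u, v in mst:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]
    # 3. Minimum-weight perfect matching M on the odd-degree vertices
    #    (exhaustive recursion -- exponential, fine only for a sketch).
    def min_matching(verts):
        if not verts:
            return 0.0, []
        u, rest = verts[0], verts[1:]
        best_w, best_m = float('inf'), []
        for i, v in enumerate(rest):
            w, m = min_matching(rest[:i] + rest[i + 1:])
            if dist[u][v] + w < best_w:
                best_w, best_m = dist[u][v] + w, m + [(u, v)]
        return best_w, best_m
    _, matching = min_matching(odd)
    # 4. Eulerian circuit of the multigraph T + M (Hierholzer's algorithm).
    adj = {v: [] for v in range(n)}
    for u, v in mst + matching:
        adj[u].append(v)
        adj[v].append(u)
    stack, circuit = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            circuit.append(stack.pop())
    # 5. Shortcut repeated vertices; by the triangle inequality this
    #    never increases the length.
    seen, tour = set(), []
    for v in circuit:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    tour.append(tour[0])
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
    return tour, length
```

On the unit square with four cities, for example, the MST has two odd-degree leaves, the matching adds one edge, and shortcutting the Eulerian circuit recovers the optimal tour of length 4.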

Though Christofides' algorithm seems very simple and easy to improve, so
far there has been no progress in this regard despite consistent efforts
since 1976. Today I'm happy to announce a breakthrough for this problem by
Shayan Oveis-Gharan, Amin Saberi and Mohit Singh.

They obtain a (3/2-c)-approximation algorithm, for some positive constant
c, for the graphical TSP where the metric is the shortest path distance in
an unweighted graph (a very natural case considered by many so far). Their
approach is also natural: they find the tour by sampling a random spanning
tree from a maximum entropy distribution defined by the linear programming
relaxation of the problem (similar to the approach of Asadpour, Goemans,
Madry, Oveis-Gharan, and Saberi from SODA'10 on improved algorithms for
asymmetric TSP). Since again the cost of this tree is upper bounded by the
optimum solution, it suffices to show that the cost of its Eulerian
augmentation (or T-join) is
strictly less than half of the optimum. Of course the analysis is not
simple at all. According to the authors:

``The first ingredient of the proof is a deeper study of random spanning
trees and their corresponding generating functions. We build on a
recent and very interesting study of real-stable polynomials and their
applications in discrete probability. In particular, we use these
techniques to bound the probability of events defined on the parity of
the number of edges chosen from a certain subset to be in the random
spanning tree. This is crucial for bounding the cost of the T-join.

The other essential ingredient of our proof is a more detailed
analysis of the near-minimum cuts of the LP solution. We study the
cross graph of (1+d)-near minimum cuts for some d < 1/100 and observe
several interesting properties. In particular, we show that the cross
graph corresponding to these cuts has a structure very similar to
cactuses. Our analysis generalizes to all graphs and it could be of
independent interest.''
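As a rough illustration of the sampling step only (this is not the authors' procedure, and the lambda values below are hypothetical placeholders for the ones derived from the LP solution): a distribution in which each spanning tree T has probability proportional to the product of lambda_e over its edges can be sampled with the weighted Aldous-Broder random walk, where the walk steps to a neighbor with probability proportional to the edge's lambda and the first-entrance edges form the tree.

```python
import random

def sample_lambda_tree(lam, seed=None):
    """Sample a spanning tree of the complete graph on n vertices with
    Pr[T] proportional to the product of lam[u][v] over the edges of T,
    via the weighted Aldous-Broder walk: run a random walk (transition
    probabilities proportional to lam) until every vertex is visited;
    the edges by which vertices are first entered form the tree.
    """
    rng = random.Random(seed)
    n = len(lam)
    current, visited, tree = 0, {0}, []
    while len(visited) < n:
        # Step to a neighbor with probability proportional to lam[current][v].
        others = [v for v in range(n) if v != current]
        weights = [lam[current][v] for v in others]
        nxt = rng.choices(others, weights=weights)[0]
        if nxt not in visited:
            visited.add(nxt)
            tree.append((current, nxt))  # first-entrance edge
        current = nxt
    return tree
```

Weighted Aldous-Broder is exact but can be slow, and computing the maximum-entropy lambdas from the Held-Karp solution is a separate step that this sketch does not attempt.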

Amin told me that they should post the paper on arXiv by the end of this month.

Wow! I wish the paper was available at the time of the announcement!

Such an announcement should be held off until the paper is actually out there to be read.

Please post the paper before making your announcements. This is worse than NASA. :)

Is this a STOC 2011 commercial?

What is c? If it is some infinitesimal constant, it wouldn't be a BREAKTHROUGH!!!!!!!! in any meaningful way.

Bill, please don't cite Wikipedia. Instead, cite "On The Approximability Of The Traveling Salesman Problem" by Papadimitriou and Vempala.

I agree with Anonymous 1,2,3,4. I vaguely remember the IP=PSPACE result: it was announced just before the holidays, then the author took off and was unreachable for two weeks while his mailbox probably got flooded with emails. His proof was not distributed until after he returned from his travels. Bad form!

It is a bit hard to appreciate such results when 1) the paper draft is not available and 2) there is no mention of the value of c.

It would be good to post the paper asap.

1- Thanks to Mohammad for his kind note.

2- I do understand the main point raised in the comments section. We are polishing the writing and will post the paper tomorrow.

3- c is a very small constant.

Due to the restriction that the metric is the shortest path metric of an unweighted graph, I would hesitate to call this a breakthrough. It's certainly interesting progress on a large, but still restricted, class of instances.

I'd like to express an alternative, extreme opinion. This result is expressed as a "breakthrough in algorithms". I disagree. If it's a breakthrough, it's a breakthrough in complexity.

ReplyDeleteHas anyone programmed this algorithm? Will anyone ever program this algorithm, or anything like it? Is going from a 3/2 to a 3/2-epsilon approximation what anyone in the real world was calling for in any practical setting? This seems like a prime example of what I've often said in the past, which is that people somehow are confused that approximation algorithms are primarily part of the "algorithms" side of theory. They're not, the way they're mostly studied in theory.

I'm not saying it's not an interesting result, or that it perhaps won't lead to something algorithmically interesting down the line. But I find it amusing that, in an apparent effort to give "equal breakthrough time" to algorithms on this blog, this is the example to come up. Why, it's enough to make me consider starting to openly agree with Mihai on his opinions about "complexity breakthroughs"! (Not yet.)

Speaking of Mihai, he and Mikkel Thorup have a paper (that's even out on the arxiv) on tabulation based hashing, as noted in David Eppstein's blog, that's really quite nice. I'm a bit too careful with my words to call it a breakthrough (yet), but it's certainly a very exciting algorithms paper.

If having an algorithm coded up and used is the criterion of whether it is in ALGORITHMS or COMPLEXITY (an artificial distinction?), then what fraction of papers in SODA are really papers in COMPLEXITY? I ask this non-rhetorically. (Actually some are explicitly complexity in that they show lower bounds.)

What fraction of papers in CCC are really papers in ALGORITHMS? This one I know the answer to: 0.

GASARCH

Michael, first you should understand that solving some problem that was open for 34 years despite many attempts is a breakthrough. Also if I were you I was more careful and didn't call anything that myself, Mikkel and Mihai are working on (which are esp. close in the case of hashing) algorithms and everything else complexity.

What about the recent faster max-flow algorithm? Is that not a breakthrough (in algorithms)?

Somehow too many "breakthroughs" come from Bill and Lance these days ... identifying results in structural complexity with those in circuit complexity, and those in complexity of algorithms with ones in efficient algorithms. Interesting results, no doubt, but like many others in past years. Let time decide what is a real breakthrough, and in what field. Why do you, Bill and Lance, rush to put quick (and definite) "stamps" on them? Why not call these just "worth a look" results? Why not let the readers decide?

Anon #13: You are one of those trolls that insists on ascribing statements to me that I didn't say. Please read what I said.

I didn't say the result presented wasn't a breakthrough. Just that it wasn't an algorithmic breakthrough, in the sense that (as described) it didn't seem to be an algorithm that had been implemented or would be implemented in the near future, to improve some real-world computing process (or our understanding of that process). I think that was quite clear.

The only statement I made regarding quality was "If it's a breakthrough..." Let me be clear that the "If" simply expressed my ignorance; it's not my area of study, and the paper apparently is not out yet, so I can't really judge.

You can certainly read my comment on tabulation based hashing -- which, to be clear, is not my work, and is a work that ANOTHER THEORY BLOGGER recommended before me -- in the same light. It is in my area, and I think it's quite reasonable to me to express that it's interesting, and contrasts in many ways with the work reported here. I certainly didn't express any idea that hashing was algorithms and everything else was complexity.

(Indeed, to be clear, and I think is clear in my original comment, some work on approximation algorithms is quite "algorithmic", in the sense that the algorithms developed are used in real systems, or the analysis is of algorithms used in real systems.)

Finally, to be clear, anyone who uses the phrase, "...if I were you, I was [sic] more careful" is being quite unpleasant, as they're trying to sound threatening. Be polite. Or at least stand by your comments.

Bill -- while I wouldn't say that having an algorithm coded up is either necessary or sufficient for "algorithms" work, it's a pretty good starting clue. Quite a bit of SODA (in my opinion) is complexity, not algorithms. Not that there's anything wrong with that; people should just be aware.

@Michael: where do you see a "troll", and where have I relied on your comment? Am I not able to have my own opinion? My opinion is that time and readers must decide what is a breakthrough, not Bill and Lance alone.

ReplyDeleteMichael -

Your definition of algorithms is quite narrow, and it is no coincidence that it corresponds almost exactly to the subset of algorithms in which you work: "applied algorithms," so to speak. It is clear to me, and doubtless to many others, that you are suffering from the common though understandable psychological phenomenon that can be caricatured as: "Only what I do is real algorithms. Everything else is esoteric complexity." Similarly, someone on the other end of the spectrum would say: "Only what I do is deep algorithmic theory; what Michael does is not far removed from hacking shell scripts in your mom's basement."

It's all algorithms. Some more theoretical, some more applied. Stop with the bickering.

"solving some problem that was open for 34 years despite many attempts is a breakthrough"

Exactly what was "solved" here?

That's right, I too demand that the Wikipedia reference be REMOVED. Wikipedia is not a recognized source for academia, and uses an NP process through the use of pipe-lining servers, to collapse academia and media.

Again if TSP is NP, whether n! or "2 to the power of n", there is NO BREAKTHROUGH. Pascal's Triangle did the trick for me; therefore, pop up the paper.

Still NP. Let me know when it's P. Let's get real, Google is violating the foundation of computer science ethics every second as I write this.

ReplyDeleteAm I not able to have my own opinion?Everyone's entitled to their own opinion, no matter how wrong it is.

"Its all algorithms. Some more theoretical, some more applied." There's no reason why you can't define algorithms this way if you choose, but it's a very broad definition, and likely to mislead the 90% of computer scientists who think algorithms research is generally more practical than complexity research. (I can't help suspecting that the broad definitions are designed to let super-theoretical algorithms researchers work on whatever they want while enjoying better job prospects than equally theoretical complexity researchers.)

"Wikipedia is not a recognized source for academia, and uses an NP process through the use of pipe-lining servers, to collapse academia and media." You win the comment thread!

I find this result very interesting, as I did Ryan's and Mikkel's and Mihai's. Whether they are complexity or algorithms, both or neither, is not a fruitful discussion.

I agree with Anonymous 10. The result is not for a general metric but for a shortest path metric, and the improvement "c" is so tiny that the authors don't even want to mention it. This is certainly a very interesting result. But is this a breakthrough? If this is indeed a breakthrough, then there are many others that should also be termed breakthroughs: the faster algorithm for maxflow, the better-than-2 approximation algorithm for makespan minimization on unrelated parallel machines, the n^{1/4+epsilon} approximation for densest k-subgraph, etc. Calling this result a breakthrough and writing a blog post like this appears a bit extreme to me.

Where's the paper? Not on the authors' web pages as far as I can see. "Tomorrow" was yesterday. Where is the paper posted?

ReplyDeleteTo Anonymous #21, I find it interesting that you are complaining that the new result is for the shortest path metric of an unweighted graph.

Yet for the other results that you cite, you have mis-stated the main result.

The result on max-flow is for undirected graphs, and is also approximate max flow. Certainly the real problem in this domain is directed, and exact.

You also cite the improvement to scheduling unrelated parallel machines, but this result is not for the full generalized assignment problem, and is actually for the restricted assignment problem (where each job has some absolute size and for all machines, this job either costs its intrinsic cost, or infinity).

In fact, the only result that you correctly state is the improved approximation guarantee for k-densest subgraph which is indeed for the k-densest subgraph problem in its full generality. If you are complaining about a paper not handling the fully general case, you should at least properly cite other results you mention.

Typo! I meant anonymous #23

ReplyDeleteTo Anonymous 25

Yes, I am totally aware of the restrictions of the other problems I cite. For makespan minimization on unrelated parallel machines, the author gives a proof of a better-than-2 integrality gap for a configuration LP for the restricted version, not an algorithm. But that was not my point. My point was that, like this result, those were also equally interesting results.

I am a new Anonymous..

Anonymous 25 seems to be an author of this paper (which is not up yet) :)

It's a pity the entire discussion focuses on whether the result is a breakthrough or not, or whether the result is practical or not, etc. The same happened in several blogs on Ryan's result.

I have no doubt the result is very nice and I would prefer to see some comments about the algorithm itself, techniques they use, their analysis.

BTW: the paper is available on Amin's web page (http://www.stanford.edu/~saberi/).

I doubt that anonymous 25 is an author of this paper. I am not an author of this paper, though I had already found it pretty easily before the last comment (claiming the paper is not available).

In case you haven't found it yet, here it is: http://www.stanford.edu/~saberi/tsp.pdf

The paper is now on Amin's webpage:

http://www.stanford.edu/~saberi/tsp.pdf

Some more breakthrough results:

1) http://www.eccc.uni-trier.de/report/2010/200

2) http://www.eccc.uni-trier.de/report/2010/199

3) http://www.eccc.uni-trier.de/report/2010/198

and finally, the next two are perhaps most important, will certainly be in Lance's new list of favorite theorems of this decade:

4) http://www.eccc.uni-trier.de/report/2010/197 and

5) http://www.eccc.uni-trier.de/report/2010/196

Just doing my part to make everyone aware of the new breakthroughs. By the way, can we now re-define the word "breakthrough." It does not seem to mean what it used to any more... at least in complexity and algorithms.

Yes, we need to redefine the word "breakthrough".

btw, if Lance/Gasarch/GuestBlogger can go through the 55-page paper that Amin has uploaded and give an overview of the ideas like Lipton does on his blog, that would be good!

Anonymous 30:

If 1) http://www.eccc.uni-trier.de/report/2010/200 and 2) http://www.eccc.uni-trier.de/report/2010/199 are breakthroughs, then I can list 5000 other breakthroughs from the last year.

As a matter of fact, there are no more new results. There are only breakthroughs.

I agree with Anonymous 29. But it would have been better to have a blog title "A Nice Result On Algorithms" rather than "BREAKTHROUGH in Algorithms". It looks like trying to gain too much publicity for one work, while there are several others that also deserve equal attention. It would have been appropriate if the blogger had said that it is a nice result and had discussed the algorithm, techniques, etc.; that would have been much more enlightening.

If we had a voting system for new results (either equal votes for everyone or weighted votes based on area and reputation), then identifying what is a breakthrough would probably be much easier. I think it is time to add commenting, endorsement, and voting capabilities (other suggestions?) to ECCC and arXiv.

ReplyDeleteLance/Bill, could you write a post on what new capabilities people would like to see on ECCC and arXiv. I think that can be a very fruitful discussion and if implemented can save us considerable time.

(Anonymous tried to post this fine comment on Dec 24 but our moderator thing blocked it for some reason. Here it is. Sorry about that.)

If this is a breakthrough algorithm, so also is Abhiram Ranade's less well known result: Precedence Constrained Scheduling in (2 - 7/(3p+1))*Optimal. It is a different matter that Abhiram does not trumpet his result nor does he have any friends who do it for him.

Link: //www.cse.iitb.ac.in/~ranade/pcs.pdf

I have added a LINK to the Wikipedia post and a LINK to the paper that contains the lower bound result. The Wikipedia post helped me find the paper.

My understanding is that for an awful lot of special cases, TSP has better approximation factors. The unweighted metric case seems to be considerably less interesting than other metric cases, such as the Euclidean case (for which there is a PTAS).

Looking at the paper, I see no significant new techniques introduced either. So the value of the paper is a bit doubtful.

The paper is surely of SODA standard, but for STOC/FOCS I would have given it a weak accept (and maybe even a borderline or weak reject, given the expectations raised by this blog post).

This blog post is just hype. An opinion of a single professor, who first sold how difficult the metric TSP is and then talked about a very special unweighted metric case. Hajiaghayi needs to mention the past attempts on this special case in his blog post before even talking about the result or calling it a breakthrough. Otherwise the blog post is not even credible.

I hope hype posts do not inflate the value of a paper, because that would set a bad precedent.

Hello! Yesterday I found a great new book, "Traveling Salesman Problem, Theory and Applications".

The book is free to download, or you can just read it on an online reading platform here: http://www.intechopen.com/books/show/title/traveling-salesman-problem-theory-and-applications This book is a collection of current research in the application of evolutionary algorithms and other optimal algorithms to solving the TSP problem. It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and Differential Evolution Algorithm. Hybrid systems, like Fuzzy Maps, Chaotic Maps and Parallelized TSP are also presented. Most importantly, this book presents both theoretical as well as practical applications of TSP, which will be a vital tool for researchers and graduate entry students in the field of applied Mathematics, Computing Science and Engineering.