Monday, November 07, 2022

Euclidean TSP is NP-hard but not known to be in NP. Why not known?

BILL: Lance, I was looking into TSP, metric TSP, and Euclidean TSP, since I am going to teach about P, NP, NPC, and approximating NPC problems, and I came across the following in Lipton's book The P=NP Question and Gödel's Lost Letter (I paraphrase):

Graham proved that Euclidean TSP was NP-hard. But it is open whether it is in NP. The difficulty hinges on a beautiful problem that is still open over thirty years later: Can we tell, given naturals $a_1, \ldots, a_n, k$, whether

$\sqrt{a_1} + \cdots + \sqrt{a_n} \le k$?
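(A tiny concrete instance, for illustration and not from Lipton's book: is $\sqrt{2} + \sqrt{3} + \sqrt{5} \le 5$? The sum is roughly $5.38$, so the answer is no; the catch is knowing in advance how many digits of each root must be computed before such a verdict is guaranteed to be correct.)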

What I want to know is: is it still open? Lipton's book was written in 2010, so that was, uh, uh...

LANCE: 11 years ago.

BILL: Which is a long time ago. So, has it been cracked?

LANCE: No it has not. And, by the way, I blogged on it in 2003.


This raises some questions:

1) Is the sqrt problem above in P? In NP? (I have seen it stated that the problem is in PSPACE.)

2) Where do people think the problem is?

3) Why is it still open? Some options (I am sure there are more):

a) Not that many people are working on it. But if that's the case, why aren't they?

b) The problem is just dang hard! (That's probably why P vs NP is still unsolved, why FLT took so long, and why my proof of the Riemann hypothesis was so long in coming.) I am reminded of Erdős's quote on the Collatz Conjecture: Mathematics may not be ready for such problems. And you all know what Erdős said about R(5).

c) Reasons a and b above may lead to a death spiral: people THINK it's hard, so they don't work on it; then, since nobody works on it, no progress is made, reinforcing the belief that it's hard.



9 comments:

  1. It is in ∃ℝ (see https://en.wikipedia.org/wiki/Existential_theory_of_the_reals), which implies that it is in PSPACE. It's easy to write an ∃ℝ formula expressing it as the conjunction of $r_i^2 = a_i$, $r_i \ge 0$, and $\sum_i r_i \le k$. But ∃ℝ contains NP, whereas https://en.wikipedia.org/wiki/Sum_of_radicals cites a result of Blömer that if the sum-of-radicals problem is in NP then it is also in coNP, suggesting that it is unlikely to be complete for ∃ℝ.
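    Spelled out, the ∃ℝ sentence described here would be something like (my transcription of the comment's prose):

    $\exists r_1 \cdots \exists r_n \; \bigl[ \bigwedge_{i=1}^{n} (r_i^2 = a_i \wedge r_i \ge 0) \;\wedge\; \sum_{i=1}^{n} r_i \le k \bigr]$

    Since each $a_i$ has a unique nonnegative square root, this sentence is true exactly when $\sqrt{a_1} + \cdots + \sqrt{a_n} \le k$.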

  2. I've recently seen results on related problems. For instance, this paper was published at LICS'22: https://arxiv.org/abs/2202.07961. It is about (generalizations of) the decision version of the problem, that is, testing whether a sum of square roots equals zero. The generalization is actually to test whether a polynomial vanishes when evaluated at some given radicals (not only square roots).
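    As a toy illustration of that decision version (a sketch using SymPy's symbolic radicals; it says nothing about the general complexity question), here is an instance where the sum of square roots is exactly zero even though floating point can only report "very small":

```python
import math
import sympy as sp

# sqrt(2) + sqrt(8) - sqrt(18) is exactly 0, since sqrt(8) = 2*sqrt(2)
# and sqrt(18) = 3*sqrt(2).
exact = sp.sqrt(2) + sp.sqrt(8) - sp.sqrt(18)
print(exact)                  # SymPy simplifies the radicals and prints 0

# Double-precision arithmetic can only report that the value is tiny;
# on its own it does not certify that the sum is exactly zero.
approx = math.sqrt(2) + math.sqrt(8) - math.sqrt(18)
print(abs(approx) < 1e-12)    # True, but this is evidence, not a proof
```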

  3. Allender et al. showed that this problem is in the counting hierarchy, CH (https://epubs.siam.org/doi/epdf/10.1137/070697926) and I think that Kayal and Saha (https://dl.acm.org/doi/abs/10.1145/2382559.2382560) improve this to (coRP)^PP. I am not sure if there is anything better than their bounds on precision.

    Replies
    1. Although I'm a big fan of the [Kayal, Saha] paper, they do NOT provide a better upper bound on the Sum.of.Square.Roots problem. In the paper of mine (joint with Peter Buergisser, Peter Bro Miltersen, and Johann Kjeldgaard-Pedersen) that Paul mentions, we show that the Sum.of.Square.Roots problem is in PH^PP^PP. I really wish that someone would improve this bound.

    2. Thanks Eric. Sorry that I misread Kayal and Saha.

  4. Isn't this question kind of silly though? I mean, we can check the inequality to basically any desired level of precision. I guess my question is: is there any real difficulty to the problem beyond the fact that it's hard to get a handle on real numbers in a computational model? Perhaps there is an equivalent formulation that feels more like a typical computational problem.

    Replies
    1. The inputs to this problem are integers. Thus the problem isn't really that it's hard to get a handle on real numbers. And in fact, it is quite reasonable to conjecture that one can solve the question by computing the bits of each square root out to a polynomial number of decimal digits, which would put the problem into P. We know that computing an exponential number of decimal digits is sufficient, but we don't know anything better. The problem arises in enough different settings that it isn't quite fair to dismiss it as "silly".
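      To make that concrete, here is a minimal sketch of the iterative-precision idea (in Python, using the standard decimal module; the helper name and the crude error bound are illustrative, not from any of the cited work): compute the roots to more and more digits until the comparison is decisive. The open question is whether polynomially many digits always suffice; as noted above, exponentially many are known to be enough.

```python
from decimal import Decimal, getcontext

def sum_sqrt_vs_k(a, k, digits):
    """Compare sqrt(a[0]) + ... + sqrt(a[n-1]) against the integer k,
    carrying `digits` significant decimal digits.  Returns True/False
    when the comparison is decisive at this precision, None otherwise."""
    getcontext().prec = digits
    roots = [Decimal(ai).sqrt() for ai in a]   # each correct to about 1 ulp
    total = sum(roots)
    # Crude but safe error bound: roughly 2n rounding steps, each off by at
    # most one unit in the last retained digit of a quantity <= total + 1.
    err = 2 * len(a) * (total + 1) * Decimal(10) ** (1 - digits)
    if total + err <= Decimal(k):
        return True      # the sum is certainly <= k
    if total - err > Decimal(k):
        return False     # the sum is certainly >  k
    return None          # too close to call; retry with more digits

# sqrt(2) + sqrt(3) is about 3.146, so 30 digits is already decisive.
print(sum_sqrt_vs_k([2, 3], 4, digits=30))   # -> True
```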

  5. At some point, I was thinking that you can reduce "sorting a list of numbers" to a TSP problem: if the numbers are just points on the x-axis, presumably the shortest TSP tour goes through them in order. It might be increasing or decreasing, and might well start in the middle, so there would be a linear amount of tweaking of the tour to get a sorted list.

    I was then wondering if the lower bound for sorting, of $\Omega(n \log n)$ comparisons, applied to TSP. Unfortunately, that doesn't really make sense, because I don't think the comparison model is really well-defined for TSP. (By which I mean, I can't think of a good way to phrase TSP as "a series of comparisons".)

    Replies
    1. When algorithms aren't restricted to comparisons, they can sort in $o(n \log n)$ time. One natural model is the word-RAM model, where the machine consists of $w$-bit words, the input is $n$ integers of $w$ bits each, and one can do normal operations like addition, multiplication, shifts, and indirect addressing on $w$-bit words at unit cost per operation.

      In this model, the best deterministic sorting algorithms run in $O(n \log\log n)$ time and the best randomized ones in expected $O(n \sqrt{\log\log n})$ time. (In fact, if $w$ is $O(\log n)$ then radix sort will sort in linear time, and if $w$ is at least $\log^{2+\Omega(1)} n$ then a sorting algorithm of Hagerup et al. will also do this in linear time.)
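      For concreteness, here is a minimal LSD radix sort sketch in Python for the $w = O(\log n)$ case mentioned above (the function name and parameters are illustrative, not taken from the cited papers): with digits of about $\log_2 n$ bits, only a constant number of stable distribution passes are needed.

```python
def radix_sort(keys, word_bits, digit_bits):
    """LSD radix sort of nonnegative `word_bits`-bit integers, processed
    `digit_bits` bits at a time.  With digit_bits about log2(n) and
    word_bits = O(log n), O(1) passes suffice, so the total work is O(n)."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, word_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for x in keys:                       # stable distribution pass
            buckets[(x >> shift) & mask].append(x)
        keys = [x for bucket in buckets for x in bucket]
    return keys

# 16-bit keys sorted in two passes of 8-bit digits.
print(radix_sort([40961, 7, 513, 258, 7], word_bits=16, digit_bits=8))
# -> [7, 7, 258, 513, 40961]
```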
