Sunday, November 28, 2021

Open: 4-colorability for graphs of bounded genus or bounded crossing number (has this been asked before?)

I have co-authored (with Nathan Hayes, Anthony Ostuni, and Davin Park) an open problems column on the topic of this post. It is here.

Let g(G) be the genus of a graph and cr(G) be the crossing number of a graph.

As usual chi(G) is the chromatic number of a graph. 
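To make these decision problems concrete, here is a minimal brute-force sketch of testing \chi(G) \le r (Python; exponential time, purely illustrative, and all names are mine):

```python
from itertools import product

def is_r_colorable(num_vertices, edges, r):
    # Brute-force test of chi(G) <= r: try all r^n color assignments.
    # Exponential time; only meant to make the decision problem concrete.
    for coloring in product(range(r), repeat=num_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

# K4 is 4-colorable but not 3-colorable:
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
assert is_r_colorable(4, k4, 4)
assert not is_r_colorable(4, k4, 3)
```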

KNOWN to most readers of this blog:

{G : \chi(G) \le 2} is in P

{G : \chi(G) \le 3 and g(G) \le 0} is NPC (planar graph 3-col)

{G : \chi(G) \le 4 and g(G) \le 0} is in P (trivial, since all planar graphs are 4-col)

{G : \chi(G) \le 3 and cr(G) \le 0} is NPC (planar graph 3-col)

{G : \chi(G) \le 4 and cr(G) \le 0} is in P (trivial, since all planar graphs are 4-col)

LESS WELL KNOWN BUT TRUE (and brought to my attention by my co-authors and also Jacob Fox and Marcus Schaefer) 

For all g\ge 0 and r\ge 5, {G : \chi(G) \le r and g(G) \le g} is in P

For all c\ge 0 and r\ge 5, {G : \chi(G) \le r and cr(G) \le c} is in P 

SO I asked the question: for various r, g, and c, what is the complexity of the following sets:

{G: \chi(G) \le r AND g(G) \le g} 

{G: \chi(G) \le r AND cr(G) \le c}

SO I believe the status of the following sets is open:

{G : \chi(G) \le 4 and g(G) \le 1} (replace 1 with 2, 3, 4, ...)

{G : \chi(G) \le 4 and cr(G) \le 1} (replace 1 with 2, 3, 4, ...)


QUESTIONS

1) If anyone knows the answer to these open questions, please leave comments. 

2) The column pointed to above mentions all of the places where I have seen someone ask questions like this. There are not many, and the problem does not seem to be out there. Why is that?

a) It's hard to find out who-asked-what-when. Results are published; open problems often are not. My SIGACT News open problems column gives me (and others) a chance to write down open problems; however, such venues are rare. So it's possible that someone without a blog or an open problems column raised these questions before. (I checked CS Stack Exchange: the question was not there, and when I posted it I didn't get much of a response.)

b) Proving NPC seems hard: devising a gadget with only one crossing is NOT good enough, since the gadget gets used many times. This may have discouraged people from thinking about the problem.

c) Proving that the problems are in P (for the r \ge 6 case) was a corollary of a hard theorem in graph theory from 2007. The authors themselves did not notice the algorithmic result. The first published account of the algorithmic result might be my open problems column. This may be a case of graph theorists and complexity theorists not talking to each other, though that is surprising, since there is so much overlap that I thought there was no longer a distinction.

d) While I think this is a natural question to ask, I may be wrong. See here for a blog post about a time I had a natural question and found out why I may have been wrong about the problem's naturalness.


Monday, November 22, 2021

Finding an element with nonadaptive questions

Suppose you have a non-empty subset S of {1,...,N} and want to find an element of S. You can ask arbitrary questions of the form "Does S contain an element of A?" where A is a subset of {1,...,N}. How many questions do you need?

Of course you can use binary search, using questions of the form "Is there a number greater than m in S?" This takes log N questions, and it's easy to show that's tight.
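A minimal sketch of that adaptive strategy (Python; `intersects` stands in for the question oracle, and all names are mine):

```python
def find_element_adaptive(N, intersects):
    # Find some element of a non-empty S, a subset of {1,...,N}, using
    # adaptive queries "does S contain an element of A?".
    # Takes about log2(N) queries, halving the search range each time.
    lo, hi = 1, N
    while lo < hi:
        mid = (lo + hi) // 2
        if intersects(set(range(lo, mid + 1))):
            hi = mid          # S meets the lower half
        else:
            lo = mid + 1      # so S must meet the upper half
    return lo

S = {5, 9}
print(find_element_adaptive(16, lambda A: not S.isdisjoint(A)))  # prints 5
```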

What if you have to ask all the questions ahead of time, before you get any of the answers? Now binary search won't work. If |S|=1 you can ask "Is there a number in S whose ith bit is one?" for each i. That also takes log N questions.

For arbitrary S the situation is trickier. With randomness you still don't need too many questions. Mulmuley, Vazirani and Vazirani's isolating lemma works as follows: for each i <= log N, pick a random weight wi between 1 and 2 log N. For each element m in S, let the weight of m be the sum of the weights of the bits of m that are 1. With probability at least 1/2 there will be a unique m in S of minimum weight. There's a cool proof of an isolating lemma by Noam Ta-Shma.

Once you have this lemma, you can ask questions of the form "Given a list of wi's and a value v, is there an m in S of weight v whose jth bit is 1?" Choosing the wi's and v at random, you have a 1/O(log^2 N) chance that exactly one m in S has weight v, and trying all j then spells out that m bit by bit.
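Here is a toy simulation of one random trial of this scheme (my own illustration, not from any paper's code; in a real application you would prepare many independent trials' questions up front and verify any candidate directly, say by checking a claimed satisfying assignment):

```python
import random

def one_trial(N, S):
    # One random trial of the isolation-based nonadaptive scheme.
    # S is used only to answer the prepared questions and check success.
    b = N.bit_length()                                 # bits per element
    w = [random.randint(1, 2 * b) for _ in range(b)]   # random bit weights

    def weight(m):
        return sum(w[i] for i in range(b) if (m >> i) & 1)

    v = random.randint(0, 2 * b * b)   # guess the (unique) minimum weight
    # The nonadaptive questions, one per bit position j:
    #   "is there an m in S with weight(m) = v whose jth bit is 1?"
    answers = [any(weight(m) == v and (m >> j) & 1 for m in S)
               for j in range(b)]
    candidate = sum(1 << j for j, bit in enumerate(answers) if bit)
    # If exactly one m in S had weight v, then candidate == m.
    return candidate if candidate in S else None

S = {3, 7, 12, 14}
hits = [one_trial(16, S) for _ in range(5000)]
print(sum(h is not None for h in hits), "successes out of 5000")
```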

Randomness is required. The X-search problem described by Karp, Upfal and Wigderson shows that any deterministic procedure requires essentially N queries. 

This all came up because Bill had some colleagues looking at similar problems in testing machines for errors.

I've been interested in the related question of finding satisfying assignments using non-adaptive NP queries. The results are similar to the above. In particular, you can randomly find a satisfying assignment with high probability using a polynomial number of non-adaptive NP queries. It follows from the techniques above, and even from earlier papers, but I haven't been able to track down a reference for the first paper to do so.

Wednesday, November 17, 2021

CS Slow to Change?

Back in March of 2019 I wrote

I was also going to post about Yann LeCun's Facebook rant about stodgy CS departments but then Yann goes ahead and wins a Turing award with Geoffrey Hinton and Yoshua Bengio for their work on machine learning. I knew Yann from when we worked together at NEC Research in the early 2000's and let's just congratulate him and the others and let them bask in glory for truly transforming how we think of computing today. I'll get back to his post soon enough.

So not that soon. Yann's post, from 2015, went after "stodgy" CS departments, naming Yale, Harvard, Princeton and Chicago.

CS is a quickly evolving field.  Because of excess conservatism, these departments have repeatedly missed important trends in CS and related field, such as Data Science. They seem to view CS as meaning strictly theory, crypto, systems and programming  languages, what some have called "core CS", paying lip service to graphics, vision, machine learning, AI, HCI, robotics, etc. But these areas are the ones that have been expanding the fastest in the last decades, particularly machine learning and computer vision in the last decade....It is quite common, and somewhat natural, that newer areas (eg ML) be looked down upon by members of older, more established areas (eg Theory and Systems). After all, scientists are professional skeptics. But in a fast evolving disciplines like CS and now Data Science, an excessive aversion to risk and change is a recipe for failure.

We've seen some changes since. Yale's Statistics Department is now Statistics and Data Science. The University of Chicago has a new Data Science undergrad major and institute.

I wonder if that's the future. CS doesn't really change that much, at least not quickly. Data science, and perhaps cybersecurity, evolve as separate fields that have only limited intersection with traditional CS. The CS degree itself just focuses on those interested in how the machines work and the theory behind them. We're busy trying to figure this out at Illinois Tech, as are most other schools. And what about augmented/virtual reality and the metaverse, quantum computing, fintech, social networks, human and social factors, and so on? How do you choose which bets to make?

Most of all, universities, traditionally slow-moving machines, need to be far more agile, even in fields outside computing, since the digital transformation is affecting everything. How do you plan degrees when the computing landscape students graduate into is different from the one they started in?

Sunday, November 14, 2021

When did Computer Science Theory Get so Hard?

I posted on When did Math get so hard?, and a commenter pointed out that one can also ask


When did Computer Science Theory Get so Hard?

For the Math question I could only speculate. For CS: I WAS THERE! When I was in Grad School one could learn all of Complexity theory in a year-long course (a hard one, but still!). The main tools were logic and combinatorics. No Fourier Transforms over finite fields. I am NOT going to say

Those were the good old days.

I will say that it was easier to make a contribution without knowing much. Oddly enough, it is MORE common for ugrads and grad students to publish NOW than it was THEN, so that may be a pair of ducks.

Random Thoughts on This Question

1) The Graph Minor Theorem was when P lost its innocence. Before the GMT, most (though not all) problems in P had easy-to-understand algorithms using standard algorithmic paradigms (e.g., Dynamic Programming) and maybe some combinatorics. Computational Number Theory used ... Number Theory (duh), but I don't think it was hard number theory. One exception was Miller's primality test, which needed to assume the Extended Riemann Hypothesis, but you didn't have to understand the ERH to use it.

1.5) GMT again. Not only did the GMT give hard-deep-math algorithms that put problems in P, it also pointed to how hard proving P NE NP will be: ruling out something like a GMT-type result that puts SAT in P seems rather hard.

2) Oracle constructions were fairly easy diagonalizations. I was bummed out that I never had to use an infinite injury priority argument. That is, I knew some complicated recursion theory, but it was never used.

2.5) Oracles again. Dana Angluin had a paper which used some complicated combinatorics to construct an oracle, see here. Later Andy Yao showed that there is an oracle A such that  PH^A NE  PSPACE^A. You might know that result better as

Constant depth circuits for parity must have exponential size. 

I think we now care about circuits more than oracles; see my post here about that issue. Anyway, oracle results since then have used hard combinatorial and other mathematical arguments.

3) The PCP result was a leap forward for difficulty. I don't know which paper to pick as THE Leap since there were several. And papers after that were also rather difficult.  

4) I had a blog post here where I asked if REDUCTIONS ever use hard math. Some of the comments are relevant here:

Stella Biderman: The deepest part of the original PCP theorem is the invention of the VC paradigm in the 1990's.

Eldar: Fourier Theory was introduced to CS with Hastad's Optimal Approximation results. Today it might not be considered deep, but I recall when it was.

Also there are Algebraic Geometry codes which use downright arcane mathematics...

Hermann Gruber refers to Comp Topology and Comp Geometry and points to the result that 3-manifold knot genus is NP-complete, see here.

Anonymous (they leave many comments) points to the deep math reductions in arithmetic versions of P/NP classes, and Mulmuley's work (Geometric Complexity Theory).

Timothy Chow points out that `deep' could mean several things and points to a math overflow post on the issue of depth, here.

Marzio De Biasi points out that even back in 1978 there was a poly reduction that required a good amount of number theory: the NPC of the binary quadratic Diophantine equation

ax^2 + by + c = 0 

by Manders and Adleman, see here.

(Bill Comment) I tend to think this is an outlier; for the most part, CS theory back in the 1970's did not use hard math.

5) Private Information Retrieval (PIR). k databases each hold the same n-bit string and cannot talk to each other. A user wants the ith bit and (in the info-theoretic case) wants the DBs to learn NOTHING about the index i. (A toy 2-server scheme is sketched below.)

Easy result (to understand): a 2-server scheme with communication n^{1/3}, here.

Hard result: a 2-server scheme with communication n^{O(\sqrt{\log\log n/\log n})}, here.

(I have a website on PIR, not maintained,  here.)
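For contrast with the schemes above, here is the trivial information-theoretic 2-server PIR with linear communication (a toy sketch of my own; the papers linked above do far better):

```python
import secrets

def pir_queries(n, i):
    # User: send a random subset (as a 0/1 vector) to server 1, and the
    # same subset with bit i flipped to server 2. Each server alone sees
    # a uniformly random vector, so it learns NOTHING about i.
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1[:]
    q2[i] ^= 1
    return q1, q2

def pir_answer(db, q):
    # Server: return the XOR of the database bits the query selects.
    return sum(bit for bit, sel in zip(db, q) if sel) % 2

db = [1, 0, 1, 1, 0, 0, 1, 0]   # both servers hold this n-bit string
i = 5
q1, q2 = pir_queries(len(db), i)
# The two queries differ only at index i, so the XOR of the answers is db[i]:
print(pir_answer(db, q1) ^ pir_answer(db, q2) == db[i])  # True
```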

6) Babai's algorithm for GI in quasi-polynomial time used hard math.

7) If I knew more CS theory I am sure I would have more papers listed.

But now it's your turn:

When did you realize Gee, CS theory is harder than (a) you thought, or (b) it used to be?






Thursday, November 11, 2021

20 Years of Algorithmic Game Theory

Twenty years ago DIMACS hosted a Workshop on Computational Issues in Game Theory and Mechanism Design. This wasn't the very beginning of algorithmic game theory, but it was quite the coming-out party. From the announcement:

The research agenda of computer science is undergoing significant changes due to the influence of the Internet. Together with the emergence of a host of new computational issues in mathematical economics, as well as electronic commerce, a new research agenda appears to be emerging. This area of research is collectively labeled under various titles, such as "Foundations of Electronic Commerce", "Computational Economics", or "Economic Mechanisms in Computation" and deals with various issues involving the interplay between computation, game-theory and economics.

This workshop is intended to not only summarize progress in this area and attempt to define future directions for it, but also to help the interested but uninitiated, of which there seem many, understand the language, the basic principles and the major issues.

Working at the nearby NEC Research Institute at the time, I attended as one of those "interested but uninitiated."

The workshop had talks from the current and rising stars of the field in the theoretical computer science, AI, and economics communities. The presentations included some classic early results, including Competitive Analysis of Incentive Compatible Online Auctions, How Bad is Selfish Routing? and the seminal work on Competitive Auctions.

Beyond the talks, the meeting simply gathered a powerhouse of people: established players like Noam Nisan, Vijay Vazirani, Eva Tardos and Christos Papadimitriou, alongside several newcomers who are now the established players, including Tim Roughgarden and Jason Hartline, to mention just a few from theoretical computer science.

The highlight was a panel discussion on how to overcome the methodological differences between computer scientists and economic game theorists. The panelists were an all-star collection: John Nash, Andrew Odlyzko, Christos Papadimitriou, Mark Satterthwaite, Scott Shenker and Michael Wellman. The discussion focused on things like competitive analysis, though to me, in hindsight, the real difference is between the focus on models (game theory) vs. theorems (CS).

Interest in these connections exploded after the workshop and a new field blossomed.

Sunday, November 07, 2021

Reflections on Trusting ``Trustlessness'' in the era of ``Crypto'' Blockchains (Guest Post)

 

I trust Evangelos Georgiadis to do a guest post on Trust and Blockchain. His post was written before Lance's post on trust here, but it can be viewed as a followup to it.

And now, here's E.G:

==========================================================

Trust is a funny concept, particularly in the realm of blockchains and "crypto".

Do you trust the consensus mechanism of a public blockchain?

Do you trust the architects that engineered the consensus mechanism?

Do you trust the software engineers that implemented the code for the consensus mechanism?

Do you trust the language that the software engineers used?

Do you trust the underlying hardware that the software is running on?

Theoretical Computer Science provides tools for some of this. But then the questions become:
Do you trust the program verifier?
Do you trust the proof of security?

I touch on these issues in: 

                   Reflections on Trusting ``Trustlessness'' in the era of ``Crypto''/Blockchains

which is here. It's only 3 pages, so enjoy!

Wednesday, November 03, 2021

A Complexity View of Machine Learning?

Complexity is at its best when it models new technologies so we can study them in a principled way. Quantum computing comes to mind as a good, relatively recent example. With machine learning playing an ever-growing role in computing, how can complexity play a role?

The theory community's questions about machine learning typically look for mathematical reasons to explain why the models generalize well with little overfitting, or for good definitions of privacy, fairness, and explainability to mitigate the social challenges of ML. But what about from a computational complexity point of view? I don't have a great answer yet, but here are some thoughts.

In much of structural complexity, we use relativization to understand the relative power of complexity classes. We define an oracle as a set A where a machine can ask questions about membership in A and magically get an answer. Relativization can be used to help us define classes like Σ2P = NP^NP, or to succinctly state Toda's theorem as PH ⊆ P^#P.

As I tweeted last week, machine learning feels like an oracle; after all, machine learning models and algorithms are typically accessed through APIs and Python modules. What kind of oracle? Definitely not an NP-complete problem like SAT, since machine learning fails miserably if you try to use it to break cryptography.

The real information in machine learning comes from the data. For a length parameter n, consider a string x, which might be exponential in n. Think of x as a list of labeled or unlabeled examples from some larger set S. Machine learning creates a model M from x that tries to predict membership in S. Think of M as the oracle, a compressed version of S.

Is there a computational view of M? We can appeal to Ockham's razor and consider the simplest model consistent with the data, one for which the examples in x look random in the set S that M generates. One can formalize this Minimum Description Length approach using Kolmogorov complexity. This model is too ideal: for one, it too can break cryptography, and typical deep learning models are not simple at all, sometimes with millions of parameters.
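As a crude illustration of the description-length idea (my own toy example; zlib compression stands in for the uncomputable Kolmogorov complexity):

```python
import os
import zlib

def description_length(s: bytes) -> int:
    # Upper bound on the description length of s via compression.
    # Kolmogorov complexity is uncomputable; zlib is only a stand-in.
    return len(zlib.compress(s, 9))

structured = bytes(i % 2 for i in range(1000))  # highly regular data
random_x = os.urandom(1000)                     # incompressible data
print(description_length(structured))  # small: a simple model explains it
print(description_length(random_x))    # ~1000+: nothing simpler than the data itself
```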

This is just a start. One could try time bounds on the Kolmogorov definitions, or try something completely different. Adversarial and foundational learning models might yield different kinds of oracles.

If we can figure out even a rough complexity-based way to understand learning, we can start to get a hold of learning's computational power and limitations, which is the purpose of studying computational complexity in the first place.