Here is a logic question I will ask today and answer next week. Feel free to leave comments with
the answer; you may come up with a different proof than mine, and that would be great!
Our language will have the usual logic symbols, quantification over the domain, quantification over subsets of the domain (so second order), the = sign, and the symbol +.
Examples of sentences:
(∀ x)(∀ y)[ x+y=y+x]
True over Q, R, N. False in S_n for n ≥ 3 (the group of permutations of n elements).
(∃ x)(∀ y)[ x+y=y]
True in Q and R by taking x = 0. Not true in {1,2,3,...}.
Let's assume it is true and call that x 0.
(∀ x)(∃ y)[x+y=0]
True in Q, R, Z, not true in N.
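Over a finite structure the three sample sentences can be checked mechanically by brute force. Here is a small sketch of that (mine, not from the post), taking a structure as a domain plus a + function:

```python
from itertools import product

def commutative(dom, add):
    # (forall x)(forall y)[x+y = y+x]
    return all(add(x, y) == add(y, x) for x, y in product(dom, repeat=2))

def has_identity(dom, add):
    # (exists x)(forall y)[x+y = y]
    return any(all(add(x, y) == y for y in dom) for x in dom)

def has_inverses(dom, add):
    # (forall x)(exists y)[x+y = 0], where 0 is an identity element
    zeros = [z for z in dom if all(add(z, y) == y for y in dom)]
    return bool(zeros) and all(
        any(add(x, y) == zeros[0] for y in dom) for x in dom)

# Z_5 under addition mod 5 satisfies all three sentences.
z5 = list(range(5))
mod5 = lambda x, y: (x + y) % 5
print(commutative(z5, mod5), has_identity(z5, mod5), has_inverses(z5, mod5))
```

Of course this is no help with the questions below, since Q and R are infinite.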
QUESTION ONE: Is there any sentence in the first order theory that is TRUE over (Q,+) but
FALSE over (R,+)?
QUESTION TWO: Is there any sentence in the second order theory that is TRUE over (Q,+)
but false over (R,+)?
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Thursday, September 29, 2016
Tuesday, September 27, 2016
Who's Afraid
The playwright Edward Albee passed away earlier this month at the age of 88. I had never actually seen any of his plays so I took the opportunity to watch the 1966 movie Who's Afraid of Virginia Woolf? based on the play.
The play has four characters: George, a history professor in his forties; his wife Martha, the daughter of the university's president; and, joining them for a late get-together, a young biology professor and his wife. The movie had an incredible cast: Richard Burton, Elizabeth Taylor, George Segal and Sandy Dennis, all nominated for Academy Awards. The women won.
The movie covers many themes, mostly revolving around the disintegrating relationship between George and Martha. George did not live up to his potential, as a drunk Martha did not hesitate to point out in front of their guests.
I actually fell for him. And the match seemed practical too. For a while Daddy thought George had the stuff to take over when he was ready to retire. Anyway, I married the S.O.B. I had it all planned out. First he'd take over the History Department, then the whole college. That's how it was supposed to be! That was how it was supposed to be. All very simple. Daddy thought it was a good idea too. For a while!
Until he started watching for a couple of years and started thinking it wasn't such a good idea. That maybe Georgie-boy didn't have the stuff! That he didn't have it in him! George didn't have much push. He wasn't aggressive. In fact, he was sort of a flop! A great big, fat flop!
So here I am, stuck with this flop, this bog in the History Department. Who's married to the president's daughter, who's expected to be somebody. Not just a nobody! A bookworm who's so goddamn complacent he can't make anything out of himself.

The movie reflects an earlier time in academics, when all the professors were white males and success was measured by taking charge of a department.
Nevertheless Albee captures a fear many academics have. That one day we may wake up and realize we've become that bog. Stuck in our job because we can't afford to give up tenure, but just going through the motions. It's a fear that motivates us, to make us continually try to do new things and achieve new heights. But it's also a reminder that academics is a long career and one that could bog down before we even notice.
Edward Albee writes a play incredibly uncomfortable to watch yet impossible not to. So rare to see that in mainstream plays or movies today.
Thursday, September 22, 2016
Boris Trakhtenbrot (1921-2016)
I woke up this morning to two pieces of news. Subhash Khot has just been named a MacArthur Fellow, the "genius" award, for his work on unique games. But I also just learned about Monday's passing of the great Russian theorist Boris Trakhtenbrot at the age of 95.
Trakhtenbrot has a number of important results in automata theory, model theory and logic, to name just a few areas. In computational complexity we know him best for the Gap Theorem, which he proved independently of Allan Borodin. Roughly, the gap theorem states that for any computable f there is a computable time bound t such that DTIME(t) = DTIME(f(t)): every problem solvable in time f(t) can also be solved in time t. For example, there is a time bound t such that DTIME(t) = DTIME(2^t). This doesn't violate the time hierarchy since t may not be time-constructible. There is nothing special about time here; it works for space or any abstract complexity measure.
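Stated a bit more carefully (my phrasing of the standard formulation, not a quote from either paper):

```latex
% Gap Theorem (Trakhtenbrot; Borodin), standard formulation
\begin{theorem}
For every total computable function $f$ with $f(n) \ge n$ for all $n$,
there is a total computable time bound $t$ with $t(n) \ge n$ such that
\[
  \mathrm{DTIME}(t(n)) \;=\; \mathrm{DTIME}\big(f(t(n))\big).
\]
\end{theorem}
```

The hypothesis $f(n) \ge n$ just rules out trivialities; taking $f(n) = 2^n$ gives the DTIME(t) = DTIME(2^t) example above.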
Borodin and Trakhtenbrot worked independently because they sat on different sides of the iron curtain during the cold war, with very little communication in between. Boris Trakhtenbrot wrote A Survey of Russian Approaches to Perebor (Brute-Force Searches) Algorithms (PDF), which traces this early history of Russian theory. He didn't hold back discussing some of the academic politics in Russia that stifled research into computational complexity, and gave us a window into the Russian scientific community in the mid-20th century.
Thankfully the iron curtain has been replaced by a global internet and we can do science together. Let's hope it stays that way.
Wednesday, September 14, 2016
Academic Rankings Foster Competition
This week US News and World Report released their undergraduate rankings. A time for schools to brag. US News and World Report used to publish an actual weekly news magazine; now it mostly just focuses on rankings. Besides various categories of undergrad institutions, USN&WR ranks engineering and business programs. There are many other ranking systems of varying quality, but in the US we take the USN&WR rankings the most seriously.
Computer Science does not get an undergraduate ranking. Computer engineering does--not the same. CS does get rankings as a PhD program, last time in 2014. I've posted on rankings in 2005, on the failed NRC rankings of 2010, on using metrics for rankings, and Bill had his own so-called non-controversial thoughts on rankings.
In the September CACM, Moshe Vardi wrote his editor's column entitled Academic Rankings Considered Harmful.
Academic rankings, in general, provide highly misleading ways to inform academic decision making by individuals. An academic program or unit is a highly complex entity with numerous attributes. An academic decision is typically a multi-objective optimization problem, in which the objective function is highly personal. A unidimensional ranking provides a seductively easy objective function to optimize. Yet such decision making ignores the complex interplay between individual preferences and programs' unique patterns of strengths and weaknesses. Decision making by ranking is decision making by lazy minds, I believe.

No potential grad student should decide based solely on rankings, but neither can we expect them to solve a highly complex, multi-objective, highly personal optimization problem over all 266 PhD-granting CS departments. They will find ways to narrow down their list of schools somehow, and a reasonable independent ranking of CS departments can certainly help.
More importantly rankings cause us to compete against each other. Every CS department wants to raise their rankings (or stay on top) and use that goal to work on strengthening their departments and use rankings to make the case to upper administration and alumni to get the resources needed to continue to grow. By the nature of rankings, not everyone can rise up but we all get better in the process.
Saturday, September 10, 2016
Noam Nisan wins the Knuth Prize
Noam Nisan will receive the 2016 Donald E. Knuth Prize, the highest honor in theoretical computer science, for his work in computational complexity and algorithmic game theory. He'll receive the award at the upcoming FOCS conference.
I've known Noam since we were fellow grad students in Berkeley in 1985 and we have become good friends and colleagues. Noam Nisan started his career in computational complexity and Luca posts about several of his seminal works in derandomization, interactive proofs, communication and circuit complexity. I'll focus this post on Nisan's 1992 paper with Mario Szegedy, On the degree of boolean functions as real polynomials.
Let f be a Boolean function on n variables and p an n-variate polynomial over the reals, and suppose for every x in {0,1}^n, |f(x)-p(x)| ≤ 1/3. Nisan and Szegedy show that the decision tree complexity of f is bounded by a polynomial in the degree of p. The decision tree complexity of a function is the number of bits one has to query to determine whether f(x) is 0 or 1.
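For concreteness, here is a brute-force computation of deterministic decision tree complexity (a sketch I wrote to illustrate the measure; it takes exponential time, so it is only usable for tiny n):

```python
from itertools import product

def decision_tree_complexity(f, n):
    """Worst-case number of bit queries needed to evaluate f on n bits."""
    def consistent(restriction):
        # all inputs agreeing with the partially fixed variables
        return [x for x in product((0, 1), repeat=n)
                if all(r is None or r == b for r, b in zip(restriction, x))]

    def solve(restriction):
        if len({f(x) for x in consistent(restriction)}) == 1:
            return 0  # f is constant on this subcube: no more queries needed
        # we pick the best variable to query; an adversary picks the worst answer
        return min(
            1 + max(solve(restriction[:i] + (b,) + restriction[i + 1:])
                    for b in (0, 1))
            for i in range(n) if restriction[i] is None)

    return solve((None,) * n)

# OR on 3 bits: an adversary answering 0 to every query forces all 3.
print(decision_tree_complexity(lambda x: int(any(x)), 3))
```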
The theorem didn't have an immediate application but soon afterwards I told Noam we found a result that followed from his paper. His ears perked up until I told him the actual result (if P = PSPACE then P = AWPP relative to a Cohen generic oracle).
Later on Nisan-Szegedy would have direct applications to quantum computing, showing that quantum, random and deterministic decision tree complexity for total functions are polynomially related. Just last STOC we saw two papers giving tighter bounds on these relationships.
In 1997 Noam Nisan walked away from complexity and soon thereafter became one of the founding players in algorithmic game theory, co-organizing the 2001 DIMACS workshop that would kickstart this field. He won the 2012 Gödel Prize for his early work on algorithmic mechanism design with Amir Ronen.
Nisan and Szegedy begat one of my most frustrating open questions on the relationship between rational functions and decision tree complexity. I have an application for it but I don't think you really want to know.
Wednesday, September 07, 2016
Why I Don't Believe in ET
Is there life on other planets? The naive answer is "of course, why should Earth be special." What makes a lottery winner special? We could have just won the life lottery and the losers aren't around to complain.
The more scientific approach uses variants of the Drake Equation. I'll use the variant from the recent paper A New Empirical Constraint on the Prevalence of Technological Species in the Universe by Frank and Sullivan.
We define the ‘‘A-form’’ of the Drake equation, which describes the total number of technological species that have ever evolved anywhere in the currently observable Universe:
A = N · f_p · n_p · f_l · f_i · f_t

where N is the total number of stars, f_p is the fraction of those stars that form planets, n_p is the average number of planets in the habitable zone of a star with planets, f_l is the probability that a habitable zone planet develops life, f_i is the probability that a planet with life develops intelligence, and f_t is the probability that a planet with intelligent life develops technology (of the "energy intensive" kind such as that of our own civilization).

I first encountered the Drake equation in high school from Carl Sagan's book Cosmos. The conclusion that there must be life out there from the equation never satisfied me because of our inability to measure the f values.
Frank and Sullivan's computations show that if we expect there to be life only on Earth, say A = 0.01, then f = f_l f_i f_t must be at most 2.5 × 10^-24.
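The bound is a one-line calculation. A back-of-the-envelope check (my numbers: I'm assuming N · f_p · n_p is roughly 4 × 10^21 habitable-zone planets in the observable universe, so treat that figure as an assumption rather than the paper's exact value):

```python
planets = 4e21   # assumed N * f_p * n_p: habitable-zone planets
A = 0.01         # "probably alone": expected technological species ever
f_max = A / planets
print(f"{f_max:.1e}")  # the f = f_l*f_i*f_t bound quoted above
```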
Seems small but is it really? 2.5 × 10^-24 is roughly the probability of flipping 78 coins and having them all come up heads. Maybe life requires 78 independent 50/50 events to occur. Or 1060 independent events each with 95% probability. 1060 doesn't seem that big.
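Both analogies check out numerically (my verification, not in the original):

```python
p_coins = 0.5 ** 78       # 78 fair coin flips, all heads
p_events = 0.95 ** 1060   # 1060 independent events, each 95% likely
print(f"{p_coins:.1e} {p_events:.1e}")  # both around 2.5e-24
```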
Our attempts at finding life, whether by SETI or Mars soil or UFOs have so far turned up nothing substantial. We just might be the only ones out there.
By no means am I arguing that we give up the search. I could be wrong; f could be much larger than 2.5 × 10^-24. Let's keep looking, perhaps exploring the ice caps of Mars or the moons of Jupiter, keep listening to the stars, explore new ways to probe the galaxy. It would be incredible to find extraterrestrial life, but just don't be surprised if we don't.