tag:blogger.com,1999:blog-37222332022-01-28T16:57:11.075-06:00Computational ComplexityComputational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill GasarchLance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.comBlogger2898125tag:blogger.com,1999:blog-3722233.post-82749795024560327002022-01-26T11:49:00.001-06:002022-01-26T12:00:46.560-06:00A Failure to Communicate<p>The screenwriter Aaron Sorkin wrote an <a href="https://ew.com/movies/aaron-sorkin-column-biopics-being-the-ricardos/">article</a> on prioritizing "Truth over Accuracy". He tells stories from his movies The Social Network and Being the Ricardos, where he moves away from accuracy to get to the truth of a situation.</p><blockquote><p>My friend and teacher, the late William Goldman, said of his Academy Award-winning screenplay for All the President's Men, "If I'm telling the true story of the fall of the President of the United States, the last thing I'm going to do is make anything up." I understand what he meant in context, but the fact is, as soon as he wrote "FADE IN," he'd committed to making things up. People don't speak in dialogue, and their lives don't play out in a series of scenes that form a narrative. Dramatists do that. They prioritize truth over accuracy. Paintings over photographs.</p></blockquote><p>As scientists we focus on accuracy, as we should in our scientific publications. However, being fully accurate can distract from the "truth", the underlying message you want to convey, particularly in the title, abstract and introduction of our papers.</p><p>Even more so when we promote our research to the public. 
A science writer once <a href="https://blog.computationalcomplexity.org/2004/04/view-of-science-writer.html">lamented to me</a> that scientists would focus too much on the full accuracy of the science and the names behind it, even though neither serves the reader well.</p><p>This reminds me of the recent Netflix movie <a href="https://www.netflix.com/title/81252357">Don't Look Up</a>, which satirizes scientists trying to communicate an end-of-the-world event to an untrusting society. I wish it were a better movie, but it is still worth watching just to see Leo DiCaprio and Jennifer Lawrence play scientists frustrated with their ability to communicate a true existential crisis to the government and the general public. </p><p>So how should we as scientists try to frame our messaging to get people onboard, particularly when we say things they don't want to hear? Most importantly, how do scientists regain trust in a world where trust is in short supply? Perhaps we should paint more and photograph less.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-91220755827061636192022-01-23T14:28:00.000-06:002022-01-23T14:28:00.261-06:00Personal Reflections on Dick Lipton in honor of his 1000th blog/75th bday.<p> Is it an irony that Lipton's 1000th post and 75th bday are close together? No. It's a coincidence. People use irony/paradox/coincidence interchangeably. Hearing people make that mistake makes me literally want to strangle them. </p><p>The community celebrated this milestone by having talks on Zoom in Lipton's honor. The blog post by Ken Regan that announced the event and has a list of speakers is <a href="https://rjlipton.wpcomstaging.com/2022/01/14/happy-1000th-post-and-75th-birthday-dick/">here</a>. The talks were recorded so they should be available soon. YEAH KEN for organizing the event! We may one day be celebrating his 2000th blog post/Xth bday. 
</p><p>I will celebrate this milestone by writing about how Lipton and his work have inspired and enlightened me. </p><p><br /></p><p>1) My talk at the Lipton zoom-day-of-talks was on the Chandra-Furst-Lipton (1983) paper (see <a href="http://www.cs.umd.edu/~gasarch/BLOGPAPERS/CFG-multiparty.pdf">here</a>) that sparked my interest in Ramsey Theory, led to a paper I wrote that improved their upper and lower bounds, and led to an educational open problem that I posted on this blog and that was answered. There is still more to do. An expanded version of the slide talk I gave on the zoom-day is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/liptonbdaytalk.pdf">here</a>. (Their paper also got me interested in communication complexity.) </p><p>2) I read the De Millo-Lipton-Perlis (1979) paper (see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/social.pdf">here</a>) my first year in graduate school and found it very enlightening. NOT about program verification, which I did not know much about, but about how mathematics really works. As an ugrad I was very much into THEOREM-PROOF-THEOREM-PROOF as the basis for truth. This is wrongheaded for two reasons: (1) I did not see the value of intuition, and (2) I did not realize that the PROOF is not the END of the story, but the BEGINNING of a process of checking it: many people over time have to check a result. DLP woke me up to point (2) and (to a lesser extent) point (1). A scary thought: most results in math, once published, are never looked at again. So there could be errors in the math literature. However, the important results DO get looked at quite carefully. 
Even so, I worry that an important result will depend on one that has not been looked at much... Anyway, a link to a blog post about a symposium about DLP is <a href="https://blog.computationalcomplexity.org/2021/06/i-went-to-debate-about-program-verif.html">here</a>.</p><p>3) The Karp-Lipton theorem is: <i>if SAT has poly-sized circuits then PH collapses</i> (see <a href="https://dl.acm.org/doi/10.1145/800141.804678">here</a>). It connects uniform and non-uniform complexity. This impressed me but also made me think about IF-THEN statements. In this case something we don't think is true implies something else we don't think is true. So--- do we know something? Yes! The result has been used to get results like </p><p> If GI is NPC then PH collapses.</p><p>This is evidence that GI is not NPC. </p><p>4) Lipton originally blogged by himself and a blog book came out of that. I reviewed it in <a href="https://www.cs.umd.edu/~gasarch/bookrev/41-4.pdf">this</a> column. Later it became the Lipton-Regan blog, which also gave rise to a book, which I reviewed <a href="https://blog.computationalcomplexity.org/2014/05/review-of-people-problems-and-proofs-by.html#comment-form">here</a>. Both of these books inspired my blog book. This is a shout-out to BOTH Lipton AND Regan.</p><p>5) Lipton either thinks P=NP or pretends to since he wants people to NOT all think the same thing. Perhaps someone will prove P NE NP while trying to prove P=NP. Like in <i>The Hitchhiker's Guide to the</i> <i>Galaxy </i>where they say that to fly, you throw yourself at the ground and miss. I took Lipton's advice in another context: While trying to prove that there IS a protocol for 11 muffins and 5 students where everyone gets 11/5 and the smallest piece is 11/25, I wrote down what such a protocol would have to satisfy (I was sincerely trying to find such a protocol) and ended up proving that you could not do better than 13/30 (for which I already had a protocol). 
Reminds me of a quote attributed to Erdos: when trying to prove X, spend half your time trying to prove X and half trying to prove NOT(X).</p><p>6) Lipton had a blog post (probably also a paper someplace) about using Ramsey Theory as the basis for a proof system (see <a href="https://rjlipton.wpcomstaging.com/2009/06/13/a-proof-system-based-on-ramsey-theory/">here</a>). That inspired me to propose a potential randomized n^{log n} algorithm for the CLIQUE-GAP problem (see <a href="https://blog.computationalcomplexity.org/2011/07/slightly-diff-take-on-liptons-use-or.html">here</a>). The comments showed why the idea could not work-- no surprise as my idea would have led to NP contained in RTIME(n^{log n}). Still, it was fun to think about and I learned things in the effort. </p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com7tag:blogger.com,1999:blog-3722233.post-73349710117365482212022-01-12T09:32:00.001-06:002022-01-12T10:33:20.816-06:00On the California Math Framework<p><i>Guest post by Boaz Barak
and Jelani Nelson</i></p>
<p class="MsoNormal">In a <a href="http://blog.computationalcomplexity.org/2021/12/defending-status-quo.html">recent
post</a>, Lance Fortnow critiqued our <a href="https://sites.google.com/view/k12mathmatters/home">open letter</a> on the
proposed revisions for the <a href="https://www.cde.ca.gov/ci/ma/cf/">California Mathematics Framework</a> (CMF). We disagree
with Lance’s critique, and he has kindly allowed us to post our rebuttal here
(thank you Lance!).</p><p class="MsoNormal">First, let us point out the aspects where we agree with both
Lance and the authors of the CMF. Inequality in mathematical education, and in
particular the obstacles faced by low-income students and students of color, is
a huge problem in the US at large and California in particular. As a Black
mathematician, I (Jelani) found that this portion of the CMF’s <a href="https://www.cde.ca.gov/ci/ma/cf/documents/mathfwchapter1.docx">introduction</a> particularly resonated
with me:</p><blockquote><p class="MsoNormal">Girls and Black and Brown children, notably, represent
groups that more often receive messages that they are not capable of high-level
mathematics, compared to their White and male counterparts (Shah &
Leonardo, 2017). As early as preschool and kindergarten, research and policy
documents use deficit-oriented labels to describe Black and Latinx and
low-income children’s mathematical learning and position them as already behind
their white and middle-class peers (NCSM & TODOS, 2016).</p></blockquote>
<p class="MsoNormal">We agree with the observation that bias in the public
education system can have a negative impact on students from underrepresented
groups. Where we strongly oppose the CMF, though, is in its conclusions
on how to address this concern. </p>
<p class="MsoNormal">The CMF may state that they are motivated by increasing
equity in mathematics. However, if we read past the introduction to the actual
details of the CMF revisions, then we see they suffer from fundamental flaws,
which we believe, if implemented, would <b>exacerbate</b> educational gaps, and
in particular make it <b>harder</b> for low-income students and students of
color to reach and be successful in college STEM.</p>
<p class="MsoNormal">You can read our <a href="https://gdoc.pub/doc/e/2PACX-1vQvuzlJ8MWthsqOhRLxQc5akGS0JkgThz3umqO3K-WQiXFhWiq9qw-9iYdTyC652Ftjvv5nHvgGYTEv">detailed
critique</a> of the CMF, but the revisions we take issue with are:</p>
<ol start="1" style="margin-top: 0in;" type="1"><li class="MsoNormal" style="mso-list: l0 level1 lfo1; tab-stops: list .5in;">Recommendation
to drop the option of Algebra I in the 8th grade</li><li class="MsoNormal" style="mso-list: l0 level1 lfo1; tab-stops: list .5in;">Recommendation
to offer (and in fact push and elevate above others) a “data science”
pathway for high school education as an alternative to the traditional
Algebra and Geometry curriculum. While data science can be a deep and
important field, teaching it to students without a math background will
necessarily be shallow. Indeed, the proposed data science courses focus on
tools such as spreadsheets and do not provide mathematical
foundations.</li>
</ol>
<p class="MsoNormal">1 and 2 make it all but impossible for students that follow
the recommended path to reach calculus (perhaps even pre-calculus) in the
12th grade. This means that such students will be at a disadvantage if
they want to pursue STEM majors in college. And who will these students be?
Since the CMF is only recommended, wealthier school districts are free to
reject it, and some already <a href="https://calmatters.org/education/k-12-education/2021/11/california-math/">signalled
that they will do so</a>. Within districts that do adopt the recommendations,
students with means are likely to take private Algebra I courses outside the
curriculum (as <a href="https://calmatters.org/education/k-12-education/2021/12/san-francisco-math/">already
</a><a href="https://edsource.org/2021/one-districts-faulty-data-shouldnt-drive-californias-math-policy/663374">happened</a>
in San Francisco), and reject the calculus-free “data science” pathway. Hence
this pathway will amount to a lower-tier track by another name, and worse than
now, students will be tracked based on whether their family has the financial
means to supplement the child’s public education with private coursework.</p><p class="MsoNormal">Notably, though the CMF aims to elevate data science, we’ve
had several data science faculty at the university level express disapproval of
the proposal by signing our opposition letter, including a founding faculty
member of the Data Science Institute at UCSD, and several others who are
directors of various undergraduate programs at their respective universities,
including four who direct their universities' undergraduate data science
programs (at Indiana University, Loyola University in Chicago, MIT, and the
University of Wisconsin)! </p>
<p class="MsoNormal">One could say that while the framework may hurt low-income
or students of color who want to pursue STEM in college, it might help other
students who are not interested in STEM. However, interest in STEM majors is <a href="https://www.ppic.org/blog/more-students-are-earning-stem-degrees/">rapidly
rising</a>, and with good reasons: employment in math occupations is projected
to <a href="https://www.bls.gov/ooh/math/mobile/home.htm">grow much faster </a>than
other occupations. With the increasing centrality of technology and STEM to our
society, we urgently need reforms that will diversify these professions rather
than the other way around.</p><p class="MsoNormal">As a final note, Lance claimed that by rejecting the CMF, we
are “defending the status quo”. This is not true. The CMF revisions are far
from the “only game in town” for improving the status quo in mathematics
education. In fact, unlike these largely untested proposals, there is a history
of approaches that do work for teaching mathematics to under-served
populations. We do not need to change the math itself, just invest in more
support (including extracurricular support) for students from under-resourced
communities. For example, Bob Moses’ <a href="https://www.npr.org/sections/codeswitch/2013/08/02/206813091/to-60s-civil-rights-hero-math-is-kids-formula-for-success">Algebra
Project</a> has time and again taken the least successful students according to
standardized exams, and turned them into a cohort that <a href="https://www.nytimes.com/2001/01/07/education/algebra-project-bob-moses-empowers-students.html">outperformed</a>
state averages in math. One of our letter’s contact people is Adrian Mims, an
educator with 27 years of experience, whose dissertation was on
"Improving African American Achievement in Geometry Honors" and who
went on to found <a href="https://thecalculusproject.org/">The Calculus
Project</a>, a non-profit organization creating a pathway for low-income
students and students of color to succeed in advanced mathematics. </p><p class="MsoNormal">To close, a critique of the proposed CMF revision is not a
defense of the status quo. Even if change is needed, not all change is good
change, and our letter does make some recommendations on that front, one of
which is a matter of process: if a goal is to best prepare Californian youth
for majors in data science and STEM more broadly, and ultimately careers in
these spaces, then involve college-level STEM educators and STEM professionals
in the <a href="https://www.cde.ca.gov/ci/ma/cf/mathcfccapplicants.asp">Curriculum
Framework and Evaluation Criteria Committee</a>.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com14tag:blogger.com,1999:blog-3722233.post-33576886672586537192022-01-09T19:33:00.002-06:002022-01-09T22:16:46.782-06:00Math problems involving Food<p> A few people emailed me a math article on arXiv about cutting a pizza, and since I wrote the book (literally) on cutting muffins, they thought it might interest me. It did, though perhaps not in the way they intended. I got curious about math problems that involve food. Here are some:</p><p>The Muffin Problem. See my book (<a href="https://www.amazon.com/Mathematical-Muffin-Morsels-Problem-Mathematics/dp/9811215170">here</a>), or my website (<a href="http://www.cs.umd.edu/~gasarch/MUFFINS/muffins.html">here</a>)</p><p>The Candy Sharing Game. See this paper (<a href="https://msuweb.montclair.edu/~bald/files/candy-sharing.pdf">here</a>).</p><p>Sharing a pizza. See this paper (<a href="https://arxiv.org/abs/2109.06752">here</a>)</p><p>Cake Cutting. See this book (<a href="https://www.amazon.com/Cake-Cutting-Algorithms-Fair-You-Can/dp/1568810768">here</a>) or google <i>Fair Division</i> on Amazon</p><p>Chicken McNugget Problem. See this article (<a href="https://mikebeneschan.medium.com/the-chicken-mcnugget-theorem-explained-2daca6fbbe1e">here</a>)</p><p>The Ham Sandwich Theorem. See this article (<a href="https://en.wikipedia.org/wiki/Ham_sandwich_theorem">here</a>)</p><p>Spaghetti Breaking Theorem. See this article (<a href="https://news.mit.edu/2018/mit-mathematicians-solve-age-old-spaghetti-mystery-0813">here</a>)</p><p>Perfect Head on a Beer. 
See this paper (<a href="https://www.scientificamerican.com/article/mathematics-point-the-w/">here</a>)</p><p>A smorgasbord of math-food connections, see this Pinterest posting <a href="https://www.pinterest.com/mrelementarymath/food-and-math/">(here)</a></p><p>And of course the question that has plagued mankind since the time of Stonehenge: </p><p> <i>Why do hot dogs come in packs of 10 and hot dog buns in packs of 8?</i> (<a href="https://www.quantamagazine.org/the-secret-math-of-hot-dogs-and-buns-20211118/">here</a>)</p><p><br /></p><p>All of these problems could have been stated without food (The Chicken McNugget Problem is also Frobenius's Problem) but they are easier to understand with food.</p><p>I am sure I missed some. If you know of any other food-based math problems, leave a comment.</p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com10tag:blogger.com,1999:blog-3722233.post-7280359596345434192022-01-02T23:14:00.001-06:002022-01-08T10:35:47.554-06:00Did Betty White die in 2021?/Why do people have their `end-of-the-year' lists before the end of the year?<p> I am looking at Parade Magazine's issue whose cover story is</p><p> <i>We say goodbye to the stars we lost in 2021.</i></p><p>The date on the magazine is Dec 19-26. Betty White is not in it. Neither is Bishop Tutu. Why not? Maybe they did not die in 2021. No, that's not it. They died after the magazine appeared. They also won't be in the issue a year from now, which has the cover story </p><p> <i>We say goodbye to the stars we lost in 2022.</i></p><p>Why do magazines and TV shows have their <i>end-of-the-year </i>stuff before the end of the year? Because they want to beat the competition? Because they all do it, so it's a race-to-the-bottom? Tradition? </p><p>This blog does the same--- we already posted our end-of-the-year post. Every year I worry that someone will prove P=NP or P NE NP between Dec 24 and Jan 1. 
We don't have the Betty-White-Problem since the end-of-the-year post is based on when we blogged about it, not when it happened. So if theorist X died on Dec 27 then we would do the blog obit in Jan, and would mention it in the end-of-the-year post for the next year. (This happened with Martin Kruskal, who died on Dec 26, 2006.) </p><p>Why do we have our end-of-the-year post before the end of the year? Tradition! That might not be a good reason. </p><p>ADDED LATER: Ken Regan in the comments points out that Betty White DID die in 2022 if you use AWE: Anywhere on Earth!</p><p>This leads to the following question:</p><p>a) If a celebrity dies on Dec 31 but it's Jan 1 in some time zones then do they get to be in the</p><p>WE SAY GOODBYE TO THE STARS</p><p>articles for the Jan 1 year? </p><p>b) Is there a general rule on this? I doubt it and I doubt it comes up a lot. I noticed about 20 years ago that <i>celebrities dying between Dec 24 and Dec 31 don't make those lists</i>, and I have never seen a Dec 31 case before, though I am sure there have been some.</p><p>c) Note that this `<i>bad to die in that zone'</i> really only applies to minor celebrities. Betty White and Desmond Tutu did get proper attention when they died. </p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-77816257794703505362021-12-23T08:20:00.006-06:002022-01-11T10:58:29.107-06:00Complexity Year in Review 2021<p>The pandemic hampered many activities but research flourished with a number of great results in complexity. 
Result of the year goes to</p><p><a href="https://eccc.weizmann.ac.il/report/2021/151/">Locally Testable Codes with constant rate, distance, and locality</a> by Irit Dinur, Shai Evra, Ron Livne, Alexander Lubotzky and Shahar Mozes<br />and independently in<br /><a href="https://arxiv.org/abs/2111.03654">Asymptotically Good Quantum and Locally Testable Classical LDPC Codes</a> by Pavel Panteleev and Gleb Kalachev</p><p>They achieved the seemingly impossible, a code that can be checked with constant queries, constant rate, constant distance and constant error. Here's <a href="https://www.youtube.com/watch?v=pjc6GCRFnpg">Irit's presentation</a>, a <a href="https://www.quantamagazine.org/researchers-defeat-randomness-to-create-ideal-code-20211124/">Quanta article</a> and an <a href="https://eccc.weizmann.ac.il/report/2021/175/">overview</a> by Oded Goldreich. Ryan O'Donnell <a href="https://www.youtube.com/watch?v=k7LuOiOBYyQ">presents</a> the Panteleev-Kalachev paper, which also resolved a major open question in quantum coding theory.</p><p>Other great results include <a href="https://eccc.weizmann.ac.il/report/2021/081/">Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits</a> by Nutan Limaye, Srikanth Srinivasan and Sébastien Tavenas, <a href="https://blog.computationalcomplexity.org/2021/07/intersecting-classes.html">The Complexity of Gradient Descent</a> by John Fearnley, Paul W. Goldberg, Alexandros Hollender and Rahul Savani, <a href="https://blog.computationalcomplexity.org/2021/03/slicing-hypercube.html">Slicing the Hypercube is not easy</a> by Gal Yehuda and Amir Yehudayoff, and <a href="https://eccc.weizmann.ac.il/report/2021/164/">The Acrobatics of BQP</a> by Scott Aaronson, DeVon Ingram, and William Kretschmer. 
The latter paper answers a <a href="https://blog.computationalcomplexity.org/2005/12/pulling-out-quantumness.html">16-year-old open question of mine</a>, suggesting that you cannot pull out quantumness like you can pull out randomness from a computation.</p><p>In computing overall, the story continues to be the growth in machine learning and the power of data. We're entering a phase where data-driven programming often replaces logic-based approaches to solving complex problems. This is also the year that the <a href="https://arstechnica.com/gaming/2021/11/everyone-pitching-the-metaverse-has-a-different-idea-of-what-it-is/">metaverse</a> started to gain attention. Too early to know where that will take us, but the virtual space may become as disruptive as the Internet over the next decade, and its potential effect on research and education should not be ignored. In the next pandemic, we may wonder how we survived earlier pandemics without it.</p><p>The NSF might be going through some major changes and significant increases, or not, especially with the Build Back Better bill on hold. The <a href="https://cra.org/govaffairs/blog/">Computing Research Policy Blog</a> can help you through the morass. 
</p><p>We remember <a href="https://cms.math.ca/news-item/the-canadian-mathematical-society-mourns-the-loss-of-matthew-brennan/">Matthew Brennan</a>, <a href="https://blog.computationalcomplexity.org/2021/06/benny-chor-1956-2021.html">Benny Chor</a>, <a href="https://www.informs.org/content/view/full/268792">Alan Hoffman</a>, <a href="https://www.nytimes.com/2021/02/09/science/arianna-wright-dead.html?unlocked_article_code=AAAAAAAAAAAAAAAACEIPuonUktbfqohkSlUbCibOVtkqqBaLwvHVwbU6gHa7MzKURjZeiugYCoTG-1vIYeArQeoP6AmhZY0LNq4zFrs1x_VDPkdpRk7w85fVxp4Lf2Btp9m4Gz5rh8mIDL1lqnq6MDLmdrkuz7Dk40rMeCK9DvykpH4qIApup5Znd0j7miBbg_eYTZMmn4V2zvwjBZhlRTwfZCDsvPDgCxh2OtjufQiLo0BtGLkfAWeP6Ibav7EQcwxSCUbFT2d_5As86dBfPdAWOcXvPrNpT77c47lyKR5DIrGzoGI&smid=url-share">Arianna Rosenbluth</a>, <a href="https://adminrecords.ucsd.edu/Notices/2021/2021-2-23-3.html">Walter Savitch</a>, <a href="https://blog.computationalcomplexity.org/2021/01/alan-selman-1941-2021.html">Alan Selman</a>, <a href="https://math.cornell.edu/professor-robert-strichartz-dies-78">Bob Strichartz</a> and <a href="https://scottaaronson.blog/?p=5730">Stephen Wiesner</a>. </p><p>We thank our guest posters <a href="https://blog.computationalcomplexity.org/2021/12/theoretics-new-tcs-journal.html">Paul Beame</a>, <a href="https://blog.computationalcomplexity.org/2021/08/the-long-road.html">Varsha Dani</a>, <a href="https://blog.computationalcomplexity.org/2021/11/reflections-on-trusting-trustlessness.html">Evangelos Georgiadis</a>, <a href="https://blog.computationalcomplexity.org/2021/09/guest-post-on-solving-or-trying-to-poly.html">Bogdan Grechuk</a>, <a href="https://blog.computationalcomplexity.org/2021/08/do-four-colors-suffice.html">David Marcus</a> and <a href="https://blog.computationalcomplexity.org/2021/12/yes-virginia-there-is-santa-clause-for.html">Hunter Monroe</a>.</p><p>In May I <a href="https://blog.computationalcomplexity.org/2021/05/emerging-from-pandemic.html">posted</a> on emerging from the pandemic. 
Then came Delta. Now comes Omicron pushing us back online next month. I hope Pi-day doesn't bring us the Pi-variant.</p><p>Wishing for a more normal 2022.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-13970268037850265902021-12-17T13:33:00.005-06:002021-12-18T10:03:32.526-06:00Fifty Years of P vs. NP and the Possibility of the Impossible<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://cacm.acm.org/magazines/2022/1/257448-fifty-years-of-p-vs-np-and-the-possibility-of-the-impossible/fulltext" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="250" data-original-width="250" height="250" src="https://cacm.acm.org/system/assets/0004/1861/121521_CACMpg77_Fifty-Years-PNP.large.jpg?1639518160&1639518160" width="250" /></a></div><br />I have a new article <a href="https://cacm.acm.org/magazines/2022/1/257448-fifty-years-of-p-vs-np-and-the-possibility-of-the-impossible/fulltext">Fifty Years of P vs. NP and the Possibility of the Impossible</a>, to mark the anniversary of the 1971 publication of Steve Cook's <a href="https://dl.acm.org/doi/10.1145/800157.805047">seminal paper</a>, a month too late in the <a href="https://cacm.acm.org/magazines/2022/1">January 2022 Communications of the ACM</a>. <p></p><p>Initially Moshe Vardi asked me to update my 2009 CACM survey <a href="https://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext">The Status of the P versus NP Problem</a>. The P vs NP problem hasn't changed much but computing has gone through dramatic changes in the last dozen years. 
I try to view P vs NP in the lens of modern optimization and learning, where we are heading to a seemingly impossible <a href="https://blog.computationalcomplexity.org/2020/12/optiland.html">Optiland</a> (a play on Impagliazzo's <a href="https://blog.computationalcomplexity.org/2004/06/impagliazzos-five-worlds.html">Pessiland</a>), where we can solve many of the toughest NP-complete problems in practice and yet cryptography remains unscathed.</p><p>CACM produced a slick <a href="https://vimeo.com/649645372">companion video</a> to the <a href="https://cacm.acm.org/magazines/2022/1/257448-fifty-years-of-p-vs-np-and-the-possibility-of-the-impossible/fulltext">article</a>.</p><p></p><div style="padding: 56.25% 0px 0px; position: relative;"><iframe allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" frameborder="0" src="https://player.vimeo.com/video/649645372?h=ab16c4fc61" style="height: 100%; left: 0; position: absolute; top: 0; width: 100%;"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
<p><a href="https://vimeo.com/649645372">Fifty Years of P Versus NP and the Possibility of the Impossible</a> from <a href="https://vimeo.com/user4730653">CACM</a> on <a href="https://vimeo.com">Vimeo</a>.</p><p></p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-193047421011769912021-12-12T14:54:00.004-06:002021-12-12T16:20:29.728-06:00Did Lane Hemaspaandra invent the Fib numbers? <p> (I abbreviate Fibonacci by Fib throughout. Lane Hemaspaandra helped me with this post.) </p><p>We all learned that Fib invented or discovered the Fib Numbers:</p><p>f_0=1,</p><p>f_1=1, and</p><p>for all n\ge 2, f_n = f_{n-1} + f_{n-2}.</p><p>We may have learned that they come up in nature (NOT true, see <a href="https://www.lockhaven.edu/~dsimanek/pseudo/fibonacc.htm">here</a>) or that they were important in mathematics (questionable--see this blog post <a href="https://blog.computationalcomplexity.org/2012/01/how-important-are-fib-numbers-in-math.html">here</a> which says no, but some comments give good counterarguments). You also learned that Fibonacci was the mathematician who first studied them. Also not true! This one surprised me. </p><p>1) I came across this blog post: <a href="https://mathenchant.wordpress.com/2021/11/16/why-names-matter/">here</a> that says they were invented by Hemachandra first. Wow--I then recalled that Lane Hemaspaandra's birth surname was Lane Hemachandra (he married Edith Spaan and they are now both Hemaspaandra). So naturally I emailed him to ask how a 20th-century person could invent something earlier than 1170. He told me a picture of him in the basement ages while he stays young. </p><p>2) It would be nice to say OH, let's call them Hemachandra numbers (would that be easier than convincing the world to use tau instead of pi?
See <a href="https://tauday.com/tau-manifesto">The Tau Manifesto</a>) and let students know that there were people other than Europeans who did math back then. But even that story is not as simple as it seems. Lane emailed me this <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/fibfibs.pdf">article</a>, which tells the whole muddled story. (In what follows I leave out the accents.)</p><p>Virahanka seems to be the formulator of the Fib recurrence, though not quite the numbers. His motivation was Sanskrit poetry. He did this between 600 and 800 AD.</p><p>Gopala, in work prior to 1135, was aware of Virahanka's work. In particular he knew about the inductive rule. But he also set the initial values and generated numbers, so he was the first to have the sequence we now call the Fib numbers. His motivation was Sanskrit poetry. </p><p>Hemachandra in 1150 also formulated them, independently. His motivation was Sanskrit poetry.</p><p>(I will learn some Sanskrit poetry the next time I teach Discrete Math so I can give the students an application of the material!)</p><p>So does Virahanka win? Hardly:</p><p>Acarya Pingala's writings from the 5th or 6th century BC (YES- BC!) indicate that he knew about the Fib numbers in the context of (you guessed it!) Sanskrit poetry. </p><p>3) I would support changing the name of the Fib Numbers to the Pingala numbers. This is both good and bad news for Lane:</p><p>a) Bad news in that he does not get a sequence of numbers that shares his pre-marriage name.</p><p>b) Good news in that if I settled on Hemachandra numbers then Lane and Edith would have to decide if 0 or 1 or 2 of them want to change their name to Hemachandra. I would guess not--too complicated. Plus one name change in a life is enough. </p><p>4) The story (especially the articles I pointed to) shows just how complicated history can get. Even a straightforward question like:</p><p><i>Who first formulated the Fib Numbers?</i></p><p>might not have a well-defined answer. 
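<p>(An aside for programmers: the Sanskrit-poetry connection is concrete. A line of n beats is built from short syllables of 1 beat and long syllables of 2 beats, and the number of possible rhythms satisfies exactly the Fib recurrence. The Python sketch below is my own illustration of this folklore fact, not code from any of the papers mentioned.)</p>

```python
# Count rhythmic patterns of n beats made of short (1-beat) and
# long (2-beat) syllables -- the Sanskrit prosody question that led
# Pingala, Virahanka, Gopala, and Hemachandra to these numbers.

def patterns(n):
    """Number of ways to write n as an ordered sum of 1s and 2s."""
    if n < 0:
        return 0
    count = [0] * (n + 1)
    count[0] = 1  # the empty pattern
    for beats in range(1, n + 1):
        count[beats] = count[beats - 1]       # last syllable is short
        if beats >= 2:
            count[beats] += count[beats - 2]  # last syllable is long
    return count[n]

# patterns(1), patterns(2), ... = 1, 2, 3, 5, 8, ... with the
# convention f_0 = f_1 = 1 used in this post.
print([patterns(n) for n in range(1, 8)])  # [1, 2, 3, 5, 8, 13, 21]
```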
Perhaps this is the wrong question since if people formulate the concept independently of each other, they should all get some credit. Even if the authors are 1000 years apart. </p><p><i>Side note</i>: Independent Discovery may be harder to assert now since, with the web, Alice could have seen Bob's paper so it may be hard to call Alice's discovery independent. As I have mentioned before on this blog, my students have a hard time with the notion of Cook and Levin coming up with NP-completeness independently since surely one would have posted it and the other would have seen it. An era before posting was possible! Unimaginable to them. Sometimes even to me. </p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com1tag:blogger.com,1999:blog-3722233.post-55166075929190654732021-12-08T13:22:00.001-06:002022-01-12T09:35:14.472-06:00Defending the Status Quo<p>When the Wall Street Journal's <a href="https://www.wsj.com/articles/defending-mathematics-science-stem-equity-education-california-k12-math-matters-11638728196?reflink=desktopwebshare_permalink">editorial board</a> and the <a href="https://nypost.com/2021/12/07/hundreds-of-college-professors-blast-woke-math-movement/">New York Post</a> endorse your efforts, that should ring warning bells.</p><p>Several members of the theory and mathematics community have written and endorsed an <a href="https://sites.google.com/view/k12mathmatters/home">Open Letter on K-12 Mathematics</a> that attacks the proposed revisions to the <a href="https://www.cde.ca.gov/ci/ma/cf/">California Mathematics Framework</a>. I have mixed feelings about these efforts. </p><p>Certainly the CMF has its issues, and the FAQs <a href="https://www.cde.ca.gov/ci/ma/cf/mathfwfaqs.asp">protest too much</a>. But the letter goes too far in the other direction, arguing mainly for the status quo that worked well for those who signed the letter, very few of whom have significant experience in K-12 education. 
The open letter allows for only incremental change unlikely to lead to any significant improvements.</p><p>Before you sign the letter, take a look at the CMF <a href="https://www.cde.ca.gov/ci/ma/cf/documents/mathfwchapter1.docx">introduction</a>.</p><p></p><blockquote><p>To develop learning that can lead to mathematical power for all California students, the framework has much to correct; the subject and community of mathematics has a history of exclusion and filtering, rather than inclusion and welcoming. There persists a mentality that some people are “bad in math” (or otherwise do not belong), and this mentality pervades many sources and at many levels. Girls and Black and Brown children, notably, represent groups that more often receive messages that they are not capable of high-level mathematics, compared to their White and male counterparts. As early as preschool and kindergarten, research and policy documents use deficit-oriented labels to describe Black and Latinx and low-income children’s mathematical learning and position them as already behind their white and middle-class peers. These signifiers exacerbate and are exacerbated by acceleration programs that stratify mathematics pathways for students as early as sixth grade.</p><p>Students internalize these messages to such a degree that undoing a self-identity that is “bad at math” to one that “loves math” is rare. Before students have opportunities to excel in mathematics, many often self-select out of mathematics because they see no relevance for their learning, and no longer recognize the inherent value or purpose in learning mathematics.</p></blockquote><p>You may or may not agree with the CMF approach, but it's hard to deny the real challenges they are trying to address and the students they are trying to help. If you don't agree with the CMF, work with them to come up with a good alternative that helps create a more inclusive mathematical citizenry. 
An outright rejection of the approach won't fix the problems and probably won't be taken seriously, except by the conservative press.</p><p>Update (1/12/22): Boaz Barak and Jelani Nelson <a href="https://blog.computationalcomplexity.org/2022/01/on-california-math-framework.html">respond to this post</a>.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com16tag:blogger.com,1999:blog-3722233.post-87742673755742068152021-12-05T20:25:00.030-06:002021-12-06T07:17:34.496-06:00Yes Virginia, there is a Santa Clause for Complexity Theorists, If you Only Believe(Guest Post by Hunter Monroe)
In this guest post and <a href=https://speedupblogger.wordpress.com/2021/12/05/unifying-conjectures-on-complexity-and-noncomputability/>discussion paper</a>, I present a remarkable set of structurally similar conjectures which, if you only believe them, conjure up a dream world for theorists: they assert a new form of diagonalization based on naturally nonrelativizing facts that invoke a deep linkage to underlying noncomputable languages. These conjectures, all stronger than the statements to be proved, imply that the polynomial hierarchy does not collapse because the arithmetic hierarchy does not collapse, and that P≠NP≠coNP. The diagonalizations imply the existence of hard instances, with the result that many complexity classes have speedup, including the Π side of PH, and proof speedup for tautologies stems from proof speedup for arithmetic. These conjectures do two things: (1) let us explore a hypothetical world where many open problems about uniform complexity classes are resolved, and consider steps beyond, e.g., to circuit complexity; and (2) reduce numerous open questions to a single plausible claim about how Turing machines have limited information about noncomputable languages. This would potentially allow a slew of open questions to be resolved at once with a skeleton key.<p>The following <a href=https://arxiv.org/abs/0906.3765>conjecture</a> implying $\textbf{P}\neq\textbf{NP}$ is remarkable: it hints at a deeper, unnoticed relationship between complexity and noncomputability; it is <a href=https://www2.karlin.mff.cuni.cz/~krajicek/scan10.pdf>equivalent</a> to speedup for all paddable $\textbf{coNP}$-complete languages and in proof length for tautologies; tweaked versions would separate other complexity classes; and if true it is a nonrelativizing fact.<p />
Conjecture: (*) For any deterministic TM $M$ accepting the $\textbf{coNP}$-complete language "nondeterministic TM $N$ on input $x$ does not halt within $t$ steps" ($\texttt{coBHP}$), there exists a $\langle N',x'\rangle\in\texttt{coHP}$ ($N'$ does not halt on $x'$, ever) with $M$'s running time $f(t)=T_M(N',x',1^t)$ not bounded by any polynomial.
<p />
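To make the language $\texttt{coBHP}$ concrete, here is a minimal Python sketch (my illustration, not code from the discussion paper): a brute-force test of whether any nondeterministic branch halts within $t$ steps. As a simplifying assumption, a machine is modeled by a function <i>delta</i> mapping a configuration to the list of its successor configurations; in the worst case the frontier of configurations grows on the order of $2^t$, which is exactly the kind of blow-up the conjecture asserts cannot be avoided on all instances.

```python
def halts_within(delta, start, t):
    """Does some nondeterministic branch halt within t steps?

    delta(conf) returns the list of successor configurations of conf;
    a configuration with no successors has halted.  In the worst case
    this explores on the order of 2^t configurations.
    """
    layer = {start}
    for _ in range(t + 1):
        if any(not delta(c) for c in layer):
            return True          # some branch has halted
        layer = {s for c in layer for s in delta(c)}
    return False

def in_coBHP(delta, start, t):
    # <N, x, 1^t> is in coBHP iff no branch of N on x halts within t steps
    return not halts_within(delta, start, t)
```

For example, with the toy branching counter delta = lambda n: [] if n == 0 else [n - 1, n + 1] started at 3, some branch halts within 3 steps but none within 2.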
If true, (*) is a nonrelativizing fact; there is no hard $\langle N, x\rangle$ for $M$ with an exponential time oracle. The noncomputable language $\texttt{coHP}$ potentially explains why (*) is true, by analogy with this trivial theorem:
<p />
Theorem. For any $M$ accepting $\texttt{coBHP}$, there exists some non-halting $\langle N',x'\rangle\in\texttt{coHP}$ with $f(t)=T_M(N',x',1^t)$ not bounded by a constant.
<p />
Otherwise, $M$ would accept $\texttt{coHP}$ and have too much information about a non-c.e. language; (*) is just a stronger version. In the extreme, any $M$ is completely ignorant about some $\langle N',x'\rangle$ and requires on the order of $2^t$ steps to rule out every potentially halting branch. Tweaking (*) yields conjectures implying that $\textbf{PH}$ does not collapse:
<p />
Conjectures: For any $M^{\Pi^p_i}$ that accepts $\Pi^p_{i+1}=\{\langle N^{\Pi^p_i},x,1^t\rangle|$ $N^{\Pi^p_i}$ does not halt on input $x$ in $t$ steps$\}$, there is a non-halting $\langle N'^{\Pi^p_i},x'\rangle\in \Pi_i$ with $M^{\Pi^p_i}$'s running time not bounded by any polynomial.
<p />
By invoking every level $\Pi_i$ of the arithmetic hierarchy ($\textbf{AH}$), these conjectures state that the noncollapse of $\textbf{PH}$ is due to the noncollapse of $\textbf{AH}$. The conjecture (*) can be calibrated depending on the desired separation to equip $M$ with an oracle or nondeterminism or constrain its resources, to choose a resource-bounded complete problem and underlying non-c.e. language, and to fine-tune how hard a hard instance needs to be.
<p />
Proof speedup for tautologies (equivalent to (*)) may stem from the proof speedup for arithmetic that occurs when adding undecidable statements as new axioms, allowing new theorems to be proved and shortening the proof of existing theorems. This literature translates any arithmetic theorem free of existential quantifiers into a tautology by replacing $k$-bit numbers with $k$ Boolean variables. The analogy with (*) suggests this stronger conjecture may in fact also be equivalent:
<p />
Conjecture: The following two statements are equivalent: (1) there is no optimal propositional proof system; and (2) any propositional proof system $P$ is outperformed by a sufficiently powerful conservative extension $T$ of Peano arithmetic, and $T$ can be improved further by adding any statement undecidable in $T$ as a new axiom.
<p />
So (*) is a Swiss army knife for generating conjectures that give us a vision of a world in which answering essentially one question would serve as a skeleton key that unlocks many open problems.
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js","TeX/AMSmath.js","TeX/AMSsymbols.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ]
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script><script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js" type="text/javascript"></script>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com5tag:blogger.com,1999:blog-3722233.post-4902949475978862202021-12-01T08:06:00.003-06:002021-12-02T08:34:08.340-06:00TheoretiCS: A New TCS Journal<p><i>Guest Post from Paul Beame on behalf of the TheoretiCS Foundation</i></p><p>I am writing to let you know of the launch today of <a href="https://theoretics-journal.org">TheoretiCS</a>, a new fully open-access journal dedicated to Theoretical Computer Science, developed by members of our community, that I have been involved in and gave a brief pre-announcement about at STOC.</p><p>This journal has involved an unprecedented level of cooperation of representatives of leading conferences from across the entire Theoretical Computer Science spectrum. This includes representatives from STOC, FOCS, SODA, CCC, PODC, SoCG, TCC, COLT, ITCS, ICALP, which may be more familiar to readers of your blog, as well as from LICS, CSL, CONCUR, ICDT, MFCS and a number of others.</p><p><b>Two Points of Emphasis</b></p><p></p><ul style="text-align: left;"><li>Our quality objective - TheoretiCS aims at publishing articles of a very high quality, and at becoming a reference journal on par with the leading journals in all of Theoretical Computer Science</li><li>The inclusive view of Theoretical Computer Science that this journal represents, which is evident in the choice of two excellent co-editors-in-chief, Javier Esparza and Uri Zwick, and an outstanding <a href="https://theoretics.episciences.org/page/editorial-board">inaugural editorial board</a>.</li></ul><p></p><p><b>Guiding principles and objectives</b></p><p></p><ul style="text-align: left;"><li>We believe that our field (and science in general) needs more 'virtuous' open-access journals, a whole eco-system of them, with various levels of specialization and of selectivity. 
We also believe that, along with the structuring role played by conferences in theoretical computer science, we collectively need to re-develop the practice of journal publications.</li><li>The scope of TheoretiCS is the whole of Theoretical Computer Science, understood in an inclusive sense (concretely: including, but not restricted to, the Theory of Computing and the Theory of Programming; or equivalently, the so-called TCS-A and TCS-B, reminiscent of Jan van Leeuwen et al.'s Handbook of Theoretical Computer Science).</li><li>Our aim is to rapidly become a reference journal and to contribute to the unity of the Theoretical Computer Science global community. In particular, we will seek to publish only papers that make a very significant contribution to their respective fields, that strive to be accessible to a wider audience within theoretical computer science, and that are, generally, of a quality on par with the very best journals in the field.</li><li>TheoretiCS adheres to the principles of 'virtuous' open-access: there is no charge to read the journal, nor to publish in it. The copyright of the papers remains with the authors, under a Creative Commons license.</li></ul><p></p><p><b>Organization and a bit of history</b></p><p>The project started in 2019 and underwent a long gestation. From the start, we wanted to have a thorough discussion with a wide representation of the community, on how to best implement the guiding principles sketched above. 
It was deemed essential to make sure that all fields of theoretical computer science would feel at home in this journal, and that it would be recognized as a valid venue for publication all over the world.</p><p>This resulted in the creation of an <a href="https://theoretics.episciences.org/page/advisory-board">Advisory Board</a>, composed of representatives of most of the main conferences in the field (currently APPROX, CCC, COLT, CONCUR, CSL, FOCS, FoSSaCS, FSCD, FSTTCS, ICALP, ICDT, ITCS, LICS, MFCS, PODC, SoCG, SODA, STACS, STOC, TCC) and of so-called members-at-large. </p><p><b>Logistics and answers to some natural questions</b></p><p></p><ul style="text-align: left;"><li>The journal is published by the TheoretiCS Foundation, a non-profit foundation established under German law. Thomas Schwentick, Pascal Weil, and Meena Mahajan are officers of the foundation.</li><li>TheoretiCS is based on the platform <a href="https://episciences.org/">episciences.org</a>, in the spirit of a so-called overlay journal.</li><li>The Advisory Board, together with the Editors-in-Chief and the Managing Editors, spent much of their efforts in designing and implementing an efficient 2-phase review system: efficient in terms of the added-value it brings to the published papers and their authors, and of the time it takes. Yet, as this review system relies in an essential fashion on the work and expertise of colleagues (like in all classical reputable journals), we can not guarantee a fixed duration for the evaluation of the papers submitted to TheoretiCS.</li><li>Being charge-free for authors and readers does not mean that there is no cost to publishing a journal. These costs are supported for the foreseeable future by academic institutions (at the moment, CNRS and Inria, in France; others may join).</li><li>The journal will have an ISSN, and each paper will have a DOI. 
There will be no print edition.</li></ul><p></p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-61056247134268671732021-11-28T15:47:00.000-06:002021-11-28T15:47:44.155-06:00Open: 4 colorability for graphs of bounded genus or bounded crossing number (has this been asked before?)<p> I have co-authored (with Nathan Hayes, Anthony Ostuni, Davin Park) an open problems column on the topic of this post. It is <a href="https://www.cs.umd.edu/users/gasarch/open/cross.pdf">here</a>.</p><p>Let g(G) be the genus of a graph and cr(G) be the crossing number of a graph.</p><p>As usual chi(G) is the chromatic number of a graph. </p><p>KNOWN to most readers of this blog:</p><p>{G: \chi(G) \le 2} is in P</p><p>{G: \chi(G) \le 3 and g(G)\le 0 } is NPC (planar graph 3-col)</p><p>{G : \chi(G) \le 4 and g(G) \le 0} is in P (it's trivial since all planar graphs are 4-col)</p><p>{G: \chi(G) \le 3 and cr(G) \le 0} is NPC (planar graph 3-col)</p><p>{G: \chi(G) \le 4 and cr(G) \le 0} is in P (trivial since all planar graphs are 4-col)</p><p>LESS WELL KNOWN BUT TRUE (and brought to my attention by my co-authors and also Jacob Fox and Marcus Schaefer) </p><p>For all g\ge 0 and r\ge 5, {G : \chi(G) \le r and g(G) \le g} is in P</p><p>For all c\ge 0 and r\ge 5, {G : \chi(G) \le r and cr(G) \le c} is in P </p><p>SO I asked the question: for various r,g,c what is the complexity of the following sets:</p><p>{G: \chi(G) \le r AND g(G) \le g} </p><p>{G: \chi(G) \le r AND cr(G) \le c}</p><p>SO I believe the status of the following sets is open</p><p>{G : \chi(G) \le 4 and g(G)\le 1} (replace 1 with 2,3,4,...)</p><p>{G : \chi(G) \le 4 and cr(G)\le 1} (replace 1 with 2,3,4...) </p><p><br /></p><p>QUESTIONS</p><p>1) If anyone knows the answer to these open questions, please leave comments. </p><p>2) The paper pointed to above mentions all of the times I read of someone asking questions like this. 
There are not many, and the problem does not seem to be out there. Why is that?</p><p>a) It's hard to find out who-asked-what-when. Results are published, open problems often are not. My SIGACT News open problems column gives me (and others) a chance to write down open problems; however, such venues are rare. So it's possible that someone without a blog or an open problems column raised these questions before. (I checked cs stack exchange- not there- and I posted there but didn't get much of a response.) </p><p>b) Proving NPC seems hard since devising gadgets with only one crossing is NOT good enough since you use the gadget many times. This may have discouraged people from thinking about it. </p><p>c) Proving that the problems are in P (for the r\ge 6 case) was the result of using a hard theorem in graph theory from 2007. The authors themselves did not notice the algorithmic result. The first published account of the algorithmic result might be my open problems column. This may be a case of the graph theorists and complexity theorists not talking to each other, though that is surprising since there is so much overlap that I thought there was no longer a distinction. </p><p>d) While I think this is a natural question to ask, I may be wrong. See <a href="https://blog.computationalcomplexity.org/2021/05/what-is-natural-question-who-should.html">here</a> for a blog post about when I had a natural question and found out why I may be wrong about the problem's naturalness. </p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-45839023982508062212021-11-22T14:39:00.001-06:002021-11-22T14:39:19.633-06:00Finding an element with nonadaptive questions<p>Suppose you have a non-empty subset S of {1,...N} and want to find an element of S. You can ask arbitrary questions of the form "Does S contain an element in A?" for some A a subset of {1,...N}. 
How many questions do you need?</p><p>Of course you can use binary search, using questions of the form "is there a number greater than <i>m</i> in S?". This takes log N questions and it's easy to show that's tight.</p><p>What if you have to ask all the questions ahead of time before you get any of the answers? Now binary search won't work. If |S|=1 you can ask "is there a number in S whose <i>i</i>th bit is one?" That also takes log N questions.</p><p>For arbitrary S the situation is trickier. With randomness you still don't need too many questions. <a href="https://doi.org/10.1007/BF02579206">Mulmuley, Vazirani and Vazirani</a>'s isolating lemma works as follows: For each i <= log N, pick a random weight w<sub>i</sub> between 1 and 2 log N. For each element m in S, let the weight of m be the sum of the weights of the bits of m that are 1. With probability at least 1/2 there will be an m with a unique minimum weight. There's a <a href="https://blog.computationalcomplexity.org/2015/07/new-proof-of-isolation-lemma.html">cool proof</a> of an isolating lemma by Noam Ta-Shma.</p><p>Once you have this lemma, you can ask questions of the form "Given a list of w<sub>i</sub>'s and a value v, is there an m in S of weight v whose jth bit is 1?" Choosing w<sub>i</sub> and v at random you have a 1/O(log N) chance of a single m whose weight is v, and trying all j will give you a witness. </p><p>Randomness is required. The X-search problem described by <a href="https://doi.org/10.1016/0022-0000(88)90027-X">Karp, Upfal and Wigderson</a> shows that any deterministic procedure requires essentially N queries. </p><p>This all came up because Bill had some colleagues looking at similar problems testing machines for errors. </p><p>I've been interested in the related question of finding satisfying assignments using non-adaptive NP queries. The results are similar to the above. 
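(Here is a quick Python simulation of the isolating-lemma scheme above — my sketch, not code from any of the cited papers; the particular set S, universe size N, and the final membership-query check are illustrative choices. Note every query used depends only on the weights w, the target weight v, and the bit position j, so all of them could be written down before seeing a single answer.)

```python
import random
from math import ceil, log2

def find_element(S, N, rng):
    """One nonadaptive round: random bit weights, then reconstruct."""
    b = max(1, ceil(log2(N + 1)))                  # bit positions
    w = [rng.randint(1, 2 * b) for _ in range(b)]  # random bit weights
    weight = lambda m: sum(w[i] for i in range(b) if (m >> i) & 1)

    for v in range(sum(w) + 1):
        # nonadaptive query: "is there m in S of weight v with bit j set?"
        cand = sum(1 << j for j in range(b)
                   if any(weight(m) == v and (m >> j) & 1 for m in S))
        # one membership query validates cand: if two elements share
        # weight v, cand is their bitwise OR and then has weight > v
        if cand in S and weight(cand) == v:
            return cand
    return None  # no weight class was a singleton this round
```

By the lemma each round succeeds with probability at least 1/2, so repeating a few independent rounds finds an element of S with high probability.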
In particular, you can randomly find a satisfying assignment with high probability using a polynomial number of non-adaptive NP queries. It follows from the techniques above, and even earlier papers, but I haven't been able to track down a reference for the first paper to do so.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-12084585143341197382021-11-17T12:44:00.000-06:002021-11-17T12:44:25.664-06:00CS Slow to Change?<p>Back in March of 2019 <a href="https://blog.computationalcomplexity.org/2019/03/scooped.html">I wrote</a></p><blockquote><p>I was also going to post about Yann LeCun's Facebook rant about stodgy CS departments but then Yann goes ahead and wins a Turing award with Geoffrey Hinton and Yoshua Bengio for their work on machine learning. I knew Yann from when we worked together at NEC Research in the early 2000's and let's just congratulate him and the others and let them bask in glory for truly transforming how we think of computing today. I'll get back to his post soon enough.</p></blockquote><p>So not that soon. Yann's <a href="https://www.facebook.com/story.php?story_fbid=10152719972317143&id=722677142">post</a> was from 2015 where he went after "stodgy" CS departments naming Yale, Harvard, Princeton and Chicago.</p><blockquote><p>CS is a quickly evolving field. Because of excess conservatism, these departments have repeatedly missed important trends in CS and related field, such as Data Science. They seem to view CS as meaning strictly theory, crypto, systems and programming languages, what some have called "core CS", paying lip service to graphics, vision, machine learning, AI, HCI, robotics, etc. 
But these areas are the ones that have been expanding the fastest in the last decades, particularly machine learning and computer vision in the last decade....It is quite common, and somewhat natural, that newer areas (eg ML) be looked down upon by members of older, more established areas (eg Theory and Systems). After all, scientists are professional skeptics. But in fast evolving disciplines like CS and now Data Science, an excessive aversion to risk and change is a recipe for failure.</p></blockquote><p>We've seen some changes since. Yale's Statistics Department is now <a href="https://statistics.yale.edu/">Statistics and Data Science</a>. The University of Chicago has a new Data Science <a href="https://news.uchicago.edu/story/new-college-data-science-major-foundations-insight-impact">undergrad major</a> and <a href="https://cdac.uchicago.edu/insights/introducing-the-uchicago-data-science-institute/">institute</a>.</p><p>I wonder if that's the future. CS doesn't really change that much, at least not quickly. Data science, and perhaps cybersecurity, evolve as separate fields which only have limited intersection with traditional CS. The CS degree itself just focuses on those interested in how the machines work and the theory behind them. We're busy trying to figure this out at Illinois Tech as are most other schools. And what about augmented/virtual reality and the metaverse, quantum computing, fintech, social networks, human and social factors and so on? How do you choose which bets to make? </p><p>Most of all, universities, traditionally slowly moving machines, need to be far more agile even in fields outside computing since the digital transformation is affecting everything. How do you plan degrees when the computing landscape students graduate into will differ from the one they started in? 
</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-3782727478836567072021-11-14T21:50:00.001-06:002021-11-14T21:50:57.777-06:00When did Computer Science Theory Get so Hard?<p> When I posted on <a href="https://blog.computationalcomplexity.org/2019/07/when-did-math-get-so-hard.html">When did Math get so hard?</a>, a commenter pointed out that one can also ask </p><p><br /></p><p><i>When did Computer Science Theory Get so Hard?</i></p><p>For the Math-question I could only speculate. For CS- I WAS THERE! When I was in Grad School one could learn all of Complexity theory in a year-long course (a hard one, but still!). The main tools were logic and combinatorics. No Fourier Transforms over finite fields. I am NOT going to say</p><p><i>Those were the good old days.</i></p><p>I will say that it was easier to make a contribution without knowing much. Oddly enough, it is MORE common for ugrads and grad students to publish NOW than it was THEN, so that may be a pair of ducks.</p><p><b>Random Thoughts on This Question</b></p><p>1) The Graph Minor Theorem was when P lost its innocence. Before the GMT most (though not all) problems in P had easy-to-understand algorithms using algorithmic paradigms (e.g., Dynamic Programming) and maybe some combinatorics. Computational Number Theory used.... Number Theory (duh), but I don't think it was hard number theory. One exception was Miller's Primality test which needed to assume the Extended Riemann Hypothesis- but you didn't have to understand ERH to use it. </p><p>1.5) GMT again. This not only gave hard-deep-math algorithms to get problems in P. It also pointed to how hard proving P NE NP would be--- to rule out something like a GMT-type result to get SAT in P seems rather hard. </p><p>2) Oracle Constructions were fairly easy diagonalizations. I was bummed out that I never had to use an infinite injury priority argument. 
That is, I knew some complicated recursion theory, but it was never used. </p><p>2.5) Oracles again. Dana Angluin had a paper which used some complicated combinatorics to construct an oracle, see <a href="https://www.sciencedirect.com/science/article/pii/0304397580900274?via%3Dihub">here</a>. Later Andy Yao showed that there is an oracle A such that PH^A NE PSPACE^A. You might know that result better as</p><p><i>Constant depth circuits for parity must have exponential size. </i></p><p>I think we now care about circuits more than oracles, see my post <a href="https://blog.computationalcomplexity.org/2015/04/the-new-oracle-result-new-circuit.html">here</a> about that issue. Anyway, oracle results since then have used hard combinatorial and other math arguments. </p><p>3) The PCP result was a leap forward for difficulty. I don't know which paper to pick as THE Leap since there were several. And papers after that were also rather difficult. </p><p>4) I had a blog post <a href="https://blog.computationalcomplexity.org/2021/04/do-any-np-reductions-use-deep.html#comment-form">here</a> where I asked if REDUCTIONS ever use hard math. Some of the comments are relevant here:</p><p>Stella Biderman: The deepest part of the original PCP theorem is the invention of the VC paradigm in the 1990's.</p><p>Eldar: Fourier Theory was introduced to CS with Hastad's Optimal Approximation results. 
Today it might not be considered deep, but I recall when it was.</p><p>Also there are Algebraic Geometry codes which use downright arcane mathematics...</p><p>Hermann Gruber refers to Comp Topology and Comp Geometry and points to the result that 3-manifold knot genus is NP-complete, see <a href="https://dl.acm.org/doi/10.1145/509907.510016">here</a>.</p><p>Anonymous (they leave many comments) points to the deep math reductions in arithmetic versions of P/NP classes, and Mulmuley's work (Geometric Complexity Theory).</p><p>Timothy Chow points out that 'deep' could mean several things and points to a math overflow post on the issue of depth, <a href="https://mathoverflow.net/questions/139607/what-are-some-deep-theorems-and-why-are-they-considered-deep">here</a>.</p><p>Marzio De Biasi points out that even back in 1978 there was a poly reduction that required a good amount of number theory: the NPC of the Diophantine binary quadratic equation</p><p>ax^2 + by + c = 0 </p><p>by Manders and Adleman, see <a href="https://www.sciencedirect.com/science/article/pii/0022000078900442?via%3Dihub">here</a>.</p><p>(Bill Comment) I tend to think this is an outlier- for the most part, CS theory back in the 1970's did not use hard math. </p><p>5) Private Info Retrieval (PIR). k databases each have the same n-bit string and cannot talk to each other. A user wants the ith bit and (in the info-theoretic case) wants the DBs to know NOTHING about the question i. </p><p>Easy results (to understand): 2-server, n^{1/3}, <a href="http://www.cs.umd.edu/~gasarch/TOPICS/pir/first.pdf">here</a>.</p><p>Hard results: 2-server, n^{O(\sqrt{log log n/log n})}, <a href="https://www.cs.princeton.edu/~zdvir/papers/DvirGopi14.pdf">here</a>.</p><p>(I have a website on PIR, not maintained, <a href="http://www.cs.umd.edu/~gasarch/TOPICS/pir/pir.html">here</a>.)</p><p>6) Babai's algorithm for GI in quasi-poly time used hard math. 
</p><p>7) If I knew more CS theory I am sure I would have more papers listed.</p><p>But now it's your turn: </p><p>When did you realize: <b>Gee, CS theory is harder than (a) you thought, (b) it used to be.</b></p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com8tag:blogger.com,1999:blog-3722233.post-27275373201968557232021-11-11T06:57:00.000-06:002021-11-11T06:57:09.842-06:0020 Years of Algorithmic Game Theory<p>Twenty years ago DIMACS hosted a <a href="http://dimacs.rutgers.edu/archive/Workshops/gametheory/program.html">Workshop on Computational Issues in Game Theory and Mechanism Design</a>. This wasn't the very beginning of algorithmic game theory, but it was quite the coming out party. From the <a href="http://dimacs.rutgers.edu/archive/Workshops/gametheory/announcement.html">announcement</a>:</p><p></p><blockquote><p>The research agenda of computer science is undergoing significant changes due to the influence of the Internet. Together with the emergence of a host of new computational issues in mathematical economics, as well as electronic commerce, a new research agenda appears to be emerging. 
This area of research is collectively labeled under various titles, such as "Foundations of Electronic Commerce", "Computational Economics", or "Economic Mechanisms in Computation" and deals with various issues involving the interplay between computation, game-theory and economics.</p><p>This workshop is intended to not only summarize progress in this area and attempt to define future directions for it, but also to help the interested but uninitiated, of which there seem many, understand the language, the basic principles and the major issues.</p><p></p></blockquote><p>Working at the nearby NEC Research Institute at the time, I attended as one of those "interested but uninitiated."</p><p>The workshop had talks from the current and rising stars of the field in the theoretical computer science, AI and economics communities. The presentations included some classic early results including <a href="https://doi.org/10.1016/S0304-3975(03)00391-8">Competitive Analysis of Incentive Compatible Online Auctions</a>, <a href="https://doi.org/10.1145/506147.506153">How Bad is Selfish Routing?</a> and the seminal work on <a href="https://doi.org/10.1016/j.geb.2006.02.003">Competitive Auctions</a>. </p><p>Beyond the talks, the meeting brought together a powerhouse of people: established players like Noam Nisan, Vijay Vazirani, Eva Tardos and Christos Papadimitriou, along with several newcomers who are now the established players, including Tim Roughgarden and Jason Hartline, just to mention a few from theoretical computer science. </p><p>The highlight was a panel discussion on how to overcome the methodological differences between computer scientists and economic game theorists. The panelists were an all-star collection of John Nash, Andrew Odlyzko, Christos Papadimitriou, Mark Satterthwaite, Scott Shenker and Michael Wellman. 
The discussion focused on things like competitive analysis, though to me, in hindsight, the real difference is between the focus on models (game theory) vs theorems (CS). </p><div>Interest in these connections exploded after the workshop and a new field blossomed.</div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-67996511209686143962021-11-07T14:47:00.000-06:002021-11-07T14:47:09.554-06:00Reflections on Trusting ``Trustlessness'' in the era of ``Crypto'' Blockchains (Guest Post) <p> </p><p><i>I trust Evangelos Georgiadis to do a guest post on Trust and Blockchain. </i></p><div>Today we have a guest post by Evangelos Georgiadis on Trust. It was written before Lance's post on trust <a href="https://blog.computationalcomplexity.org/2021/08/trusting-scientists.html">here</a> but it can be viewed as a follow-up to it. </div><div><br /></div><div>And now, here's E.G.:</div><div><div><br /></div><div>==========================================================</div><div><br /></div><div>Trust is a funny concept, particularly in the realm of blockchains and "crypto".</div><div><br /></div><div>Do you trust the consensus mechanism of a public blockchain?</div><div><br /></div><div>Do you trust the architects that engineered the consensus mechanism?</div><div><br /></div><div>Do you trust the software engineers that implemented the code for the consensus mechanism?</div><div><br /></div><div>Do you trust the language that the software engineers used?</div><div><br /></div><div>Do you trust the underlying hardware that the software is running on?</div><div><br /></div><div>Theoretical Computer Science provides tools for some of this. 
But then the question becomes:</div><div>Do you trust the program verifier?</div><div>Do you trust the proof of security?</div><div><br /></div><div>I touch on these issues in: </div><div><br /></div><div> <i>Reflections on Trusting ‘Trustlessness’ in the era of ”Crypto”/Blockchains</i></div><div><br /></div><div> which is <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/cbit-4-2.pdf">here</a>. It's only 3 pages so enjoy!</div></div>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com0tag:blogger.com,1999:blog-3722233.post-88629145688678877242021-11-03T09:39:00.000-05:002021-11-03T09:39:36.053-05:00A Complexity View of Machine Learning?<p>Complexity is at its best when it models new technologies so we can study them in a principled way. Quantum computing comes to mind as a good relatively recent example. With machine learning playing an ever-growing role in computing, how can complexity play a role?</p><p>The theory community questions about machine learning typically look at finding mathematical reasons to explain why the models generalize well with little overfitting, or trying to get good definitions of privacy, fairness and explainability to mitigate the social challenges of ML. But what about from a computational complexity point of view? I don't have a great answer yet but here are some thoughts.</p><p>In much of structural complexity, we use relativization to understand the relative power of complexity classes. We define an oracle as a set A where a machine can ask questions about membership in A and magically get an answer. 
Relativization can be used to help us define classes like Σ<sub>2</sub><sup>P</sup> = NP<sup>NP</sup> or allow us to succinctly state <a href="https://doi.org/10.1137/0220053">Toda's theorem</a> as PH ⊆ P<sup>#P</sup>.</p><p>As I <a href="https://twitter.com/fortnow/status/1453827400383488002">tweeted</a> last week, machine learning feels like an oracle; after all, machine learning models and algorithms are typically accessed through APIs and Python modules. What kind of oracle? Definitely not an NP-complete problem like SAT, since machine learning fails miserably if you try to use it to break cryptography. </p><p>The real information in machine learning comes from the data. For a length parameter n, consider a string x which might be exponential in n. Think of x as a list of labeled or unlabeled examples of some larger set S. Machine learning creates a model M from x that tries to predict whether a given example is in S. Think of M as the oracle, as some compressed version of S.</p><p>Is there a computational view of M? We can appeal to Ockham's razor and consider the simplest model consistent with the data, for which the examples x look random in the set S that M generates. One can formalize this Minimum Description Length approach using <a href="https://doi.org/10.1109/18.825807">Kolmogorov Complexity</a>. This model is too ideal: for one, it can also break cryptography, and typical deep learning models are not simple at all, with sometimes millions of parameters.</p><p>This is just a start. One could try time bounds on the Kolmogorov definitions or try something completely different. Adversarial and foundational learning models might yield different kinds of oracles. </p><p>If we can figure out even a rough complexity way to understand learning, we can start to get a hold of learning's computational power and limitations, which is the purpose of studying computational complexity in the first place. 
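One hedged way to make the Minimum Description Length intuition concrete: Kolmogorov complexity is uncomputable, but the length of a compressed encoding gives a computable upper bound on it. A toy sketch (the choice of zlib and the constants here are illustrative assumptions, nothing more):

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size is a computable upper bound on the
    # (uncomputable) Kolmogorov complexity, up to additive overhead.
    return len(zlib.compress(data, 9))

# Highly regular data: a short "program" (repeat '01') generates it,
# so its description length is far below its raw length.
regular = b"01" * 512
print(description_length(regular) < len(regular) // 10)
```

The point of the toy: a model that is "simple" in this sense is one whose description is much shorter than the data it explains, which is the MDL criterion in miniature.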
</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-27543442695525242932021-10-31T14:46:00.001-05:002021-10-31T14:46:57.995-05:00When did Math Get So Hard?<br />
I have been on many Math PhD thesis defenses as the Dean's Representative. This means I don't have to understand the work, just make sure the rules are followed. I've done this for a while; I used to understand some of it, but now there are times I understand literally none of it. As a result, when the student leaves the room and we talk among ourselves I ask<br />
<br />
<br />
When did Math get so hard?<br />
<br />
I mean it as a statement and maybe a joke, but I decided to email various people and ask for a serious answer. Here are some thoughts of mine and others.<div><br /></div><div>1) When you get older, math gets harder. Lance blogged on this <a href="https://blog.computationalcomplexity.org/2021/10/a-young-persons-game.html">here</a>.</div><div><br /></div><div>2) When math got more abstract it got harder. Blame Grothendieck.</div><div><br /></div><div>3) When math stopped being tied to the real world it got harder. Blame Hardy. </div><div><br /></div><div>4) Math has always been hard. We NOW understand some of the older math better so it seems easy to us, but it wasn't at the time. </div><div><br /></div><div>5) With the web and more people working in math, new results come out faster so it's harder to keep up.</div><div><br /></div><div>6) All fields of math have a period of time when they are easy, at the beginning, and then as the low-hanging fruit gets picked it gets harder and harder. So if a NEW branch was started it might initially be easy. Counterthought: even a new branch might be hard now since it can draw on so much prior math. Also, the low-hanging fruit may be picked rather quickly. </div><div><br />
<br /><br />
</div>GASARCHhttp://www.blogger.com/profile/03615736448441925334noreply@blogger.com13tag:blogger.com,1999:blog-3722233.post-31133767657069107972021-10-27T16:52:00.001-05:002021-10-27T16:52:27.888-05:00Fall 2021 Jobs Post<p>We're in the midst of a great transformation in computing, one where data takes center stage and I predict this will start to have a larger effect on hiring in computer science departments. We'll see a bigger need to grow in data science, particularly machine learning and autonomous systems. Cybersecurity and quantum computing will also grow with a push due to competition with China. Quantum winter might be coming but we're not there yet.</p><p>Harder to predict is the rest of computer science, such as traditional areas like networks, operating systems, programming languages and, yes, theory, particularly theory unrelated to quantum, learning or security. There is still a need for CS departments to grow in these areas, but we may be moving away from a rising tide raising all boats. On the other hand due to the digital transformation of just about everything, non-CS departments are hiring people who look a lot like computer scientists.</p><p>Other factors may cause US universities to be more conservative in hiring such as a <a href="https://www.wsj.com/articles/college-university-fall-higher-education-men-women-enrollment-admissions-back-to-school-11630948233?st=1oz46jkmsdq8jcq&reflink=desktopwebshare_permalink">drop in male students</a>, the upcoming <a href="https://www.cupahr.org/issue/feature/higher-ed-enrollment-cliff/">demographic cliff</a>, an unclear future for international students coming to the states, and a lingering COVID budget hangover.</p><p>So go get a job while the going is still good though I would not suggest forgoing a faculty position for a postdoc, particularly if you aren't working in data science.</p><p>I also wonder how the post-COVID world will affect the job search. 
We'll probably see more virtual interviews than in the pre-COVID days, at least in the early rounds. It's also harder for students to network and make themselves known at virtual and hybrid conferences, which will likely persist for some time.</p><p>Give yourself a good virtual face. Have a well-designed web page with access to all your job materials and papers. Maintain your Google Scholar page. Add yourself to the CRA's <a href="https://cra.org/cv-database/">CV database</a>. Find a way to stand out, perhaps a short video describing your research. </p><p>The best sources for finding jobs are the ads from the <a href="https://cra.org/ads/">CRA</a> and the <a href="https://jobs.acm.org/">ACM</a>. For theoretical computer science specific postdoc and faculty positions check out <a href="https://cstheory-jobs.org/">TCS Jobs</a> and <a href="http://dmatheorynet.blogspot.com/">Theory Announcements</a>. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad for a specific school they may still be hiring; check out their website or email someone at the department. You'll never know if you don't ask.</p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com4tag:blogger.com,1999:blog-3722233.post-84027680719913383922021-10-24T13:51:00.004-05:002021-10-31T14:34:35.937-05:00Squaring the circle is mentioned in a Gilbert and Sullivan comic Opera.<p>The problem of <i>squaring the circle</i>: Given a circle, construct (with straightedge and compass) a square with the same area. While browsing the web for more information on this problem (for the blog entry on problems that might be similar to P vs NP: <a href="https://blog.computationalcomplexity.org/2021/10/is-math-ready-for-pnp-is-alexandra.html">here</a>) I came across the following:</p><p>In the Gilbert and Sullivan comic opera <i>Princess Ida</i>, in the song <i>Gently, Gently</i> is the line:</p><p> <i> ... 
and the circle they will square it one fine day.</i></p><p>(To hear the song see <a href="https://www.youtube.com/watch?v=1r2hEcDmeIY">here</a>. The line is towards the end.) </p><p>The lyrics are <a href="https://www.gsarchive.net/princess_ida/webop/pi_12.html">here</a>. That website begins <i>gsarchive.net</i> which made me wonder<i> Did I at one time set up a website of math refs in Gilbert and Sullivan plays (gsarch is very close to gasarch)? </i>which IS the kind of thing I would do. The answer is no: <i>gsarch</i> stands for <i>Gilbert and Sullivan archive.</i> They could have called it <i>gasarch </i>if they used the <i>and </i>in <i>Gilbert and Sullivan</i> but abbreviated <i>archive </i>as arch. Then I would have been far more confused. </p><p>Moving on...</p><p><i>Princess Ida</i> opened in 1884. For more on this comic opera see <a href="https://en.wikipedia.org/wiki/Princess_Ida">here</a>.</p><p>In 1882 pi was proven transcendental, and hence one cannot square the circle. For more on pi being transcendental see <a href="https://en.wikipedia.org/wiki/Squaring_the_circle">here</a>.</p><p>Kolmogorov Random Thoughts on all of this</p><p>0) The song is sung by three men who are making fun of the notion of a women's college. The song is about all the things the women are trying to do that are absurd, such as squaring the circle. They also mention perpetual motion machines. </p><p>1) Did G and S know that squaring the circle had been proven impossible, or just that it was thought to be impossible, or just that it was thought to be hard?</p><p>2) Was it known that perpetual motion machines were impossible? Or just very hard? 
</p><p>3) G and S used Mathematics in at least one other song: <i> I am the very model of a modern major general, </i>from<i> The Pirates of Penzance </i>has the lines:</p><p><br /></p><p> <i>I'm very well acquainted too with matters mathematical</i></p><p><i> I understand equations, both the simple and quadratical,</i></p><p><i> About binomial theorem I'm teeming with a lot o' news---</i></p><p><i> With many cheerful facts about the square of the hypotenuse</i></p><p><br /></p><p>and later </p><p><i> I'm very good at integral and differential calculus</i></p><p>See <a href="https://naic.edu/~gibson/poems/gilbert1.html">here</a> for all the lyrics. The website mentioned in the next point has a pointer to a YouTube video of people singing it. </p><p>4) There are many parodies of <i>Modern Major General. </i>The earliest one I know of is Tom Lehrer's <i>The Elements. </i>Since making a website of them IS the kind of thing I would do, while writing this post I did it (Are we compelled to do things that fit our image of ourselves? Yup.) The website is <a href="http://www.cs.umd.edu/~gasarch/FUN//modmajgen.html">here</a>. It has 36 parodies (as of Oct 17, 2021 when I wrote this blog--- it may have more if you read this later.) That may seem like a lot, but it pales in comparison to the most satirized song of all time: <i>The 12 days of Christmas, </i>for which I did an ugly lyrics-only website back before html had nice tools; see <a href="http://www.cs.umd.edu/~gasarch/12days.html">here</a>. It has 143 songs on it but I am sure there are many more. (Note to self: redo that website when you have time. Maybe when I retire.) </p><p>5) I suspect that G and S knew more math, or perhaps knew of more math, than Broadway composers know now. I suspect this is a more general trend: people are more specialized now. 
Having said that, I need to mention the off-Broadway musical <a href="https://en.wikipedia.org/wiki/Fermat%27s_Last_Tango">Fermat's Last Tango</a>, which I liked more than Lance did (see his post on it <a href="https://blog.computationalcomplexity.org/2004/02/fermats-last-tango.html">here</a>). </p><p>6) How much math would you need to know in order to insert some into your play or movie? With Wikipedia and other web sources you could find out some things, but you would have to have some idea what you are looking for. And perhaps you would need some math background in order to even want to insert some math into your work in the first place. </p><p>7) Here's hoping someone will make a musical about William Rowan Hamilton using this song <a href="https://www.youtube.com/watch?v=SZXHoWwBcDc">here</a> as a starting point. I blogged rather optimistically about that possibility <a href="https://blog.computationalcomplexity.org/2017/04/william-rowan-hamilton-musical.html#comment-form">here</a>.</p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com2tag:blogger.com,1999:blog-3722233.post-20109947755175764082021-10-17T14:59:00.006-05:002021-10-18T08:11:25.705-05:00Is MATH Ready for P=NP? Is Alexandra Fahrenthold Ready for P=NP? 
<div>(This post was inspired by Harry Lewis emailing me about his granddaughter.)</div><div><br /></div><br /><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-EaezFO7EQKg/YWbi3-_1hKI/AAAAAAAB-8o/S76g-orqsg48R9MHQqPf1SZi8M7GFchswCLcBGAsYHQ/s4032/pnp.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="4032" data-original-width="3024" height="320" src="https://1.bp.blogspot.com/-EaezFO7EQKg/YWbi3-_1hKI/AAAAAAAB-8o/S76g-orqsg48R9MHQqPf1SZi8M7GFchswCLcBGAsYHQ/s320/pnp.jpg" width="240" /></a></div><a href="https://1.bp.blogspot.com/-xlbZstxaM_0/YWbibmPgZWI/AAAAAAAB-8Y/TDBhMXf7D2sQ8ZV3M3tlyWYeaJhNzC5dACLcBGAsYHQ/s4032/singing.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="4032" data-original-width="3024" height="320" src="https://1.bp.blogspot.com/-xlbZstxaM_0/YWbibmPgZWI/AAAAAAAB-8Y/TDBhMXf7D2sQ8ZV3M3tlyWYeaJhNzC5dACLcBGAsYHQ/s320/singing.jpg" width="240" /></a><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div>Harry Lewis's granddaughter Alexandra Fahrenthold (see both pictures) wants information</div><div>on how to claim the Millennium prize, so she will be ready.</div><div><br /></div><div>This raises the question: How likely is it that Alexandra will resolve P vs NP (or perhaps some other Millennium problem if she wants to rebel against her grandfather)?</div><div><br /></div><div>And more seriously:</div><div><br /></div><div>1) Have we made progress on P vs NP? 
(I think not.)</div><div>(By <i>we</i> I mean the community, not <i>Harry and I </i>or <i>Harry and I and Alexandra</i>,</div><div>for which the answer is a more definite NO.)</div><div><br /></div><div>2) If not then why not?</div><div><br /></div><div>3) How does this compare historically to other open problems in Math?</div><div><br /></div><div>We will refer to progress made in solving an open problem, though that is a tricky notion since only after a problem is solved can you look back and say what was progress. One might also count subcases (e.g., n=4 case of FLT) as progress even if they don't help lead to the final proof. I quote a letter from Harry Lewis to me upon reading a first draft of this post:</div><div><blockquote><i>The one larger point I would suggest adding is to add my operational definition of progress: Progress is being made on a problem if, when the solution is published, it will cite work being published today. Of course that is “operational” only after the fact. Demillo Lipton Perlis at the end have a nice riff on this. The alchemists thought they were making progress on turning lead to gold but they weren’t, even though we know that was actually a solvable problem. Likewise jumping off of higher and higher buildings was not making progress toward heavier than air flight.</i></blockquote></div><div><br /></div><div>---------------------------------------------------------</div><div><br /></div><div><br /></div><div><div>1) Have we made progress on P vs NP?</div><div><br /></div><div>a) I tell my students that we have made progress on ruling out certain techniques.</div><div>They laugh at that, at which point I decide to<i> not </i>tell them that my PhD thesis was about that sort of thing (oracles). 
I could say</div><div><br /></div><div><i>Once you know what's not going to work you can concentrate on what is going to work.</i></div><div><br /></div><div>But that sounds hollow since very few people are working on techniques that</div><div>might work (The Geometric Complexity Program, see <a href="https://en.wikipedia.org/wiki/Geometric_complexity_theory">here</a>, is the only exception I know of.)</div><div><br /></div><div>b) Are there any partial results? Ryan Williams showed that SAT (and also counting mod versions of it) cannot be done in time n^c and space n^{o(1)} where c is 2cos(2pi/7) (see <a href="https://link.springer.com/content/pdf/10.1007/s00037-008-0248-y.pdf">here</a>). That is the only concrete lower bound on SAT that I know of. Is it progress? Sam Buss and Ryan Williams later showed (see <a href="https://link.springer.com/content/pdf/10.1007/s00037-015-0104-9.pdf">here</a>) that, using current techniques, this cannot be improved. If that inspires new techniques that push it further, that would be great. So is it progress? Hard to know now. </div><div><br /></div><div>c) There are some circuit lower bounds. One can debate if this is progress.</div><div>It will be a much better informed debate once the problem is solved.</div><div><br /></div><div>So I would say VERY LITTLE PROGRESS.</div><div><br /></div><div>------------------------------------------------</div><div>2) If not then why not?</div><div><br /></div><div>a) It's only been open for 50 years. A drop in the mathematical bucket.</div><div><i>Counterargument</i>: 50 years of 20th and 21st century mathematics is A LOT.</div><div><br /></div><div>b) Sociology: The academic computer science conference-model induces us to get out a paper in time for the next conference deadline, and not think deeply about a problem. 
Carl Smith thought that P vs NP would be solved by the son of a member of the communist party in the USSR (when there was a USSR) who did not have the pressure to get tenure and grants and such. He may be right.</div><div><i>Counterargument</i>: there are some (1) mavericks who buck the system, and (2) people like Carl's son-of-a-party-member who are allowed to think deeply for years.</div><div><br /></div><div>c) It's just dang hard! That's the real question. Paul Erdos said of the Collatz Conjecture:</div><div> <i> Mathematics may not be ready for such problems.</i></div><div>Is that true of P vs NP as well?</div><div><br /></div></div><div><div><br /></div><div>----------------------------------</div><div>3) History and Philosophy.</div><div>(In college I once took the following four courses in one semester: History of Philosophy, Philosophy of History, Philosophy of Philosophy, History of History.)</div><div><br /></div><div>Let's look at problems that were open and then solved:</div><div><br /></div><div>a) The Three Greek Problems of Antiquity: Squaring the circle (given a circle, construct a square with the same area), doubling the cube (given a line that is the edge of a cube, construct another line that is the edge of a cube with twice the volume), trisecting an angle (given an angle, construct two lines whose angle is 1/3 of the given angle), with a straightedge and compass. (When I first heard of this problem I wondered how knowing what direction was North would help trisect an angle.) Posed in roughly 400BC. Not clear what <i>posed</i> means in this context. Did they ask for a construction OR did they ask for EITHER a construction OR a proof that there wasn't one?</div><div><br /></div><div>This might be the closest analogy to P vs NP: At the time the problem was stated
</i></div><div>It took lots of new math, a better notation, and a different way of looking at numbers, to show that they could not be done: Pierre Wantzel--doubling the cube (1837), Pierre Wantzel--trisection (1837), Lindemann-Weierstrass--squaring the circle (1882).</div><div>NOTE: Some sources list a fourth problem: constructing every regular polygon. Pierre Wantzel proved, in 1837, that a regular n-gon is constructible iff n is the product of a power of 2 and distinct Fermat primes. (Why isn't Wantzel better known?) </div><div><br /></div><div>b) Fermat's Last Theorem. Given its origin, it is not quite clear when it was posed, but the 1640's seems fair. This could not be solved when it was posed (On an episode of Dr. Who they claim that Fermat had a simple proof. Note that Dr. Who is fictional and their PhD (if they have one) is probably not in mathematics.) </div><div><i>MATHEMATICS WAS NOT READY FOR SUCH PROBLEMS</i>, </div><div>but not as much as the three Greek problems. Very steady progress on it, see <a href="https://www.cs.umd.edu/~gasarch/BLOGPAPERS/progressflt.pdf">here</a>. One of the real milestones was connecting it to other problems in Math. And then Wiles proved it in the 1990's. While the solution was a surprise when it happened it was not that much of a surprise.</div><div><br /></div><div>QUESTION: Is P vs NP more similar to Greek3 or to FLT? </div><div><br /></div><div>c) Peano Arithmetic (and similar systems) are incomplete. Hilbert's 2nd problem (1900) asked to show the axioms of PA were consistent. Godel (1931) showed this could not be done. Moreover, there are TRUE statements about numbers that PA cannot prove. I think people mostly thought PA was complete so one of the innovations was to think it was incomplete. </div><div><i>MATHEMATICS WAS READY FOR SUCH PROBLEMS </i></div><div>but it took the boldness to think PA was incomplete to solve it. The math needed was known when Hilbert posed the problem. 
But of course, how to put it together was still quite a challenge.</div></div><div><br /></div><div><div><br /></div><div>d) The Continuum Hypothesis, CH, is that there is no cardinality between N and R. Cantor in 1878 asked for a proof that CH was true. It was Hilbert's first problem in 1900.</div><div>When Hilbert posed this problem in 1900</div><div><i>MATHEMATICS WAS NOT QUITE READY FOR SUCH PROBLEMS.</i></div><div>The math to solve it wasn't quite there, but wasn't so far off (of course, that's in hindsight). Godel's model L (1940) was brilliant, though Lowenheim-Skolem had constructed models. A model of set theory that was defined by levels was, I think, thought of by Russell (though in a very different way than L). When Cohen did a model where CH is false (1963) he invented forcing for Set Theory, though forcing had already been used in Recursion theory (the Kleene-Post construction of intermediate Turing degrees).</div><div><br /></div><div>e) Hilbert's tenth problem (1900): Find an algorithm that will, given a poly in many variables over Z, determine if it has a solution in Z.</div><div><i>MATHEMATICS WAS ALMOST READY FOR SUCH PROBLEMS.</i></div><div>It turns out that there is no such algorithm. Similar to CH: Once it was thought that it was unsolvable, the proof that it was unsolvable just took a few decades. However, it did need the definition of computable to be pinned down. Davis-Putnam-Robinson outlined what was needed in the 1950's, and Matiyasevich finished it in 1970. While it required just the right combination of ideas, and lots of cleverness, the math needed was known.</div><div>CAVEAT: There are many restrictions of H10 that are still open. My favorite: is the following solvable: given k, does x^3 + y^3 + z^3 = k have a solution in Z? (See my blog post on this problem <a href="https://blog.computationalcomplexity.org/2019/04/x-3-y-3-z-3-33-has-solution-in-z-and.html">here</a>.) 
For a survey of what is known about subcases see (1) my paper <a href="http://export.arxiv.org/pdf/2104.07220">here</a>, though it has been argued that I am looking at the wrong subcases (see my blog post on this <a href="https://blog.computationalcomplexity.org/2021/05/what-is-natural-question-who-should.html">here</a>), and (2) Bogdan Grechuk's paper <a href="https://arxiv.org/abs/2108.08705">here</a>.</div><div>CAVEAT: Matiyasevich has suggested that Hilbert really meant to ask about equations and solutions over Q. That problem is still open. If it is unsolvable, that might be proven reasonably soon. If it is solvable, then</div><div><i>MATHEMATICS IS NOT READY FOR SUCH PROBLEMS.</i></div><div><br /></div><div>f) The four color theorem. Posed in 1852 by Francis Guthrie, proven in 1976. Haken, Appel, and Koch (more on that last name later) did do some very impressive math to set the problem up, and wrote the computer program to finish it off. When the problem was posed (1852) the computing power was not up to the task. So </div><div><i>COMPUTER SCIENCE WAS NOT READY FOR SUCH PROBLEMS.</i></div><div>Could the ideas to set it up have been done earlier? Maybe, but not much earlier. The result is often attributed to Haken and Appel, but actually there are two papers, and Koch is an author on the second one. Note that (1) Robertson, Sanders, Seymour, and Thomas had a simpler, though still computer-assisted, proof (1996), and (2) Georges Gonthier (with Benjamin Werner) formalized the proof inside the Coq proof assistant in 2005.</div><div>CAVEAT: An open problem that is hard to state precisely is to come up with a non-computer proof.</div></div><div>CAVEAT: There is a non-computer proof that every planar graph is 4.5-colorable, see my blog post on this <a href="https://blog.computationalcomplexity.org/search?q=chromatic+number">here</a>. (No, this is not a joke. 
If it were, I would make it funnier and claim there is a non-computer proof that every planar graph is 4 + 1/e colorable.)</div><div><br /></div><div><div>g) Poincare Conjecture. Conjectured in 1904 and solved in 2002. Too bad---if it had been solved in 2004 it would be exactly 100 years. There was some progress on this all along so I don't know which step was <i>the hard one </i>though probably they were all hard. This one is harder for me to speculate on. When it was solved and Darling wanted to know why it was worth $1,000,000 I told her that it says <i>if something</i> <i>tastes and smells and feels like a sphere, it's a sphere</i>. She was unimpressed. But back to our story: in hindsight,</div><div><i>MATH WAS READY FOR SUCH PROBLEMS</i></div><div> since there was steady progress. I think of NOT READY as meaning NO progress, NO plan.</div><div><br /></div><div>h) The Erdos Distance Problem: Show that for any n points in the plane the number of distinct distances is Omega(n/\sqrt{log n}). Not quite solved, but a big milestone was Guth and Katz's proof of Omega(n/log n). For that result</div><div><i>MATH WAS READY FOR SUCH PROBLEMS</i></div><div><i>Steady progress</i>: see the Wikipedia entry <a href="https://en.wikipedia.org/wiki/Erd%C5%91s_distinct_distances_problem#:~:text=Erd%C5%91s%20distinct%20distances%20problem%20From%20Wikipedia%2C%20the%20free,by%20Larry%20Guth%20and%20Nets%20Katz%20in%202015.">here</a>. What's of interest to us is that there was a barrier result of Omega(n^{8/9}) by Ruzsa (apparently unpublished) that said the techniques being used could not do better-- so people, in short order, found new techniques. Here is hoping that happens with P vs NP.</div><div><br /></div><div>--------------------------------------------------------------------------------</div><div>Let's look at problems that are open and unsolved.</div><div><br /></div><div>a) Collatz Conjecture (also called the 3x+1 conjecture). 
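For the uninitiated, the conjecture is easy to state: halve n if it is even, send n to 3n+1 if it is odd, and the claim is that you always reach 1. A minimal sketch of the iteration (just an illustration of the statement, not of anyone's research on it):

```python
def collatz_steps(n: int) -> int:
    # Iterate the 3x+1 map; the Collatz conjecture asserts this
    # loop terminates for every positive integer n (no proof known).
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 27 is the classic small starting point with a long trajectory.
print(collatz_steps(27))  # 111
```

Even a starting point as small as 27 wanders up to 9232 before collapsing to 1, which is part of why the problem looks so unruly.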
I asked</div><div>Jeff Lagarias, who is an expert on the problem:</div><div><br /></div><div><i>Is it true? When will it be resolved?</i> He said <i>Yes</i> and <i>Never</i>.</div><div><br /></div><div>I once heard there has been NO progress on this problem, though I later heard that Terry Tao has made some progress. In any case, not much progress has been made. Maybe Erdos was right.</div><div><br /></div><div>QUESTION: Why does my spell checker think that<i> Collatz</i> is not a word? </div><div><br /></div><div>b) Small Ramsey Numbers. I asked Stanislaw Radziszowski, who is an expert on Small Ramsey Numbers (he has a dynamic survey on small Ramsey numbers <a href="https://www.combinatorics.org/files/Surveys/ds1/ds1v15-2017.pdf">here</a>). </div><div><br /></div><div><i>What is R(5)? When will we know? </i>He said <i>43</i> and <i>Never</i>.</div><div><br /></div><div>Worse than being hard, I don't think any nice math has come out of trying to find R(5,5). Too bad. The coloring that gives the lower bound for R(4) and some (perhaps all) of the R(i,j) where i,j ≤ 4 can be derived from group theory. YEAH! But then connections to interesting math just... stopped. For now? Forever? Joel Spencer told me this is an example of the <i>law</i> <i>of small numbers</i>: patterns that hold for small numbers stop holding when the numbers get too big. (I've seen other things called <i>the law of small numbers</i> as well.) </div></div><div><div><i>MATH MAY NEVER BE READY FOR SUCH PROBLEMS</i> </div><div>If no interesting math comes out of the attempt to find the exact values of the Ramsey Numbers, then it is not a good problem. </div><div><br /></div><div><i>Note: </i>The conversations about Collatz and R(5) were within 10 minutes of each other. Depressing day!</div><div><br /></div><div>c) The Twin Primes Conjecture. Sieve methods have been used to get partial results. YEAH! 
Yitang Zhang showed that there are infinitely many x such that x and x + c are both prime for some c less than 70 million (something like that). YEAH! The gap has since been brought down to x, x+246, and with various assumptions to x, x+12 or x, x+6. YEAH! But sieve methods are known to NOT be able to prove the conjecture. Dang it!</div><div><i>DO NOT KNOW IF MATH IS READY FOR SUCH PROBLEMS</i>.</div><div>I think people are kind of stuck here. Much like P vs NP, though at least they have some partial results. By contrast, with regard to P vs NP we don't even have that (unless you count Ryan's lower bound on SAT---maybe you do).</div><div><br /></div><div><i>Note</i>: I found that information <a href="https://www.britannica.com/science/twin-prime-conjecture">here</a>, which seems to be an Encyclopedia Britannica website. I would have thought that, with the web and Wikipedia, they would be out of business. Good for them to still be in business! </div><div><br /></div><div>d) I am not qualified to write about any of the Millennium prizes except P vs NP (am I even qualified for that?) so I ask my readers to leave opinions (informed or not) about which of them fall under </div><div><i>MATH IS NOT READY FOR SUCH PROBLEMS</i></div><div>One of the people who worked on the Riemann Hypothesis said: </div><div><br /></div><div><i>I do not recommend spending half your life on the Riemann Hypothesis. </i></div><div><i><br /></i></div><div>That raises a different question: When do you give up? (A topic for a different blog post.) </div><div><i><br /></i></div><div>e) I am also not qualified to write about the Hilbert Problems which are still unsolved. Note that some of them are not well enough defined to ever be resolved (H6: Make Physics rigorous) and some are either solved or unsolved depending on who you ask (H4: Construct ALL metrics where lines are geodesics-- <i>surely,</i> he didn't mean ALL metrics. Probably right, but stop calling me <i>Shirley</i>!) 
For a byte more about Hilbert's problems, including a few paragraphs on H4, see my reviews of two books on them, <a href="http://www.cs.umd.edu/~gasarch/bookrev/44-4.pdf">here</a>. Same as the last item: if you have an opinion (informed or not) on whether math is ready for the ones thought to be still open, leave a comment. </div><div><br /></div><div>CODA: Alexandra will be working on Collatz this summer!</div><div>Let's wish her luck!</div></div><div><br /></div><div><br /></div><div><br /></div>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com18tag:blogger.com,1999:blog-3722233.post-78162273236460049612021-10-15T08:32:00.000-05:002021-10-15T08:32:10.845-05:00 A Young Person's Game?<p>When László Babai first announced his graph isomorphism in quasipolynomial time result, I <a href="https://blog.computationalcomplexity.org/2015/11/a-primer-on-graph-isomorphism.html">wrote</a></p><blockquote><p>We think of theory as a young person's game, most of the big breakthroughs coming from researchers early in their careers. Babai is 65, having just won the Knuth Prize for his lifetime work on interactive proofs, group algorithms and communication complexity. Babai uses his extensive knowledge of combinatorics and group theory to get his algorithm. No young researcher could have had the knowledge base or maturity to be able to put the pieces together the way that Babai did.</p></blockquote><p>Babai's proof is an exceptional story, but it is the exception. Most CS theorists have done their best work early in their career. I got myself into a <a href="https://twitter.com/fortnow/status/1447681436513882112">twitter discussion</a> on the topic. For me, I'm proud of the research I did through my forties, but I'll always be best known, research-wise, for my work on interactive proofs around 1990. 
It would be hard to run a scientific study to determine cause and effect, but here are some reasons, based on my own experience, why we don't see research dominated by the senior people in theory.</p><p></p><b>The field changes - </b>Computational complexity has moved from a computation-based discipline to one now dominated by combinatorics, algebra and analysis. I'm not complaining; a field should evolve over time, but it plays less to my strengths. It's hard to teach this old dog new tricks.<div><b>The fruit hung lower - </b>there were important problems with easier proofs available then that are not available now.<div><b>Responsibilities </b>- You have fewer as a PhD student, postdoc or assistant professor.</div><div><b>Family - </b>becomes more of a focus.</div><div><b>Taking on new jobs - </b>Many academics, though not all, take on administrative roles at their university, or leave academia completely. </div><div><b>The young people have the new ideas </b>- and older people get settled in their ways.</div><div><b>The thrill is gone or at least decays - </b>Your first theorem, your first talk, your first conference paper gives you a level of excitement that's hard to match.</div><div><b>Existentialism - </b>The realization that while computing has no doubt changed the world, my research, for the most part, hasn't.</div><div><b>Cognitive Decline</b> - Probably the most controversial, but I find it hard to focus on problems like I used to. 
Back in the day I prided myself on knowing all the proofs of my theorems; now I can't even remember the theorems.</div><div><br /></div><div>Honestly, there is just nothing wrong with taking on new roles, writing books, surveys and blogs, focusing on teaching and mentorship and service, and leaving the great research to the next generation.</div><div><p></p></div></div>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com6tag:blogger.com,1999:blog-3722233.post-46809871137716180162021-10-10T20:18:00.004-05:002021-10-10T20:25:58.917-05:00I have a book out on muffins (you prob already know that)<p><i>Lance</i>: How come you haven't blogged on your muffin book? You've blogged about two books by Harry Lewis (see <a href="https://blog.computationalcomplexity.org/2021/10/how-have-computers-changed-society.html">here</a> and <a href="https://blog.computationalcomplexity.org/2021/08/what-are-most-important-46-papers-in.html">here</a>), one book by the LessWrong community (see <a href="https://blog.computationalcomplexity.org/2021/09/review-of-blog-book-based-on-less-wrong.html">here</a>), and you even did a mashup of a post by two different Scott A's (see <a href="https://blog.computationalcomplexity.org/2021/08/combing-two-posts-blankface-scott-aa.html">here</a>), but not on your own work.</p><p><i>Bill</i>: I thought I did a post on my muffin book.</p><p><i>Lance</i>: No. You have blogged <i>about </i>the muffin problem, and sometimes you <i>mention</i> either the book or the problem in passing, but you haven't had a post that says</p><p><i><b>HEY, I wrote a book!</b></i></p><p>And this is all the more strange since you asked me to have the book on our blog page. </p><p><i>Bill: </i>(Searches blog with keyword muffin and finds no ref to muffin book). Well pierce my ears and call me drafty! I have not posted on the muffin book! 
Do you recall my thoughts on when to tell people you are working on a book?</p><p><i>Lance</i>: No</p><p><i>Bill</i>: I had a college roommate who was an aspiring science fiction writer who told me there are two kinds of people: those who talk about writing a book, and those who write a book. I have adapted this to:</p><p><i><b>Do not tell people you are writing a book until you are picking out the cover art.</b></i></p><p><i>Lance: </i>I posted about my book when I hadn't even decided on <a href="https://blog.computationalcomplexity.org/2012/06/name-my-book.html">the title</a>. But your cover art is picked out (see <a href="https://www.amazon.com/Mathematical-Muffin-Morsels-Problem-Mathematics/dp/9811215170">here</a>). And, by the way, it's very nice, though it makes me hungry. So I think you can begin talking about the book.</p><p><i>Bill</i>: Indeed! I will!</p><p>------------------------------------------------------------------------------------</p><p><b>Hey, I have a book! (See <a href="https://www.amazon.com/Mathematical-Muffin-Morsels-Problem-Mathematics/dp/9811215170">here</a> to buy it on Amazon.) </b></p><p><b>Title: Mathematical Muffin Morsels: Nobody Wants a Small Piece</b></p><p><b>by Gasarch, Metz, Prinz, Smolyak</b></p><p><b>(The other authors were undergraduates when we wrote the book. Prinz and Smolyak are now grad students in CS, Metz is in Finance.) </b></p><p><b>Origin: </b></p><p><a href="https://en.wikipedia.org/wiki/Martin_Gardner">Martin Gardner</a> wrote a recreational mathematics column for Scientific American for many years, starting in 1956 and ending in the early 1980s. For many STEM people of my generation (Using my <a href="https://blog.computationalcomplexity.org/2020/10/nature-vs-nurture-close-to-my-birthday.html">fake birthday</a> of Oct 1, 1960, I am 62 years old) Martin Gardner's columns were both an inspiration and an early exposure to mathematics. 
His columns also made the line between recreational mathematics and so-called serious mathematics thin or nonexistent. (See <a href="http://www.cs.umd.edu/~gasarch/bookrev/FRED/gardner.pdf">here</a> for a review of <i>Martin Gardner in the 21st Century</i>, a book about the kind of math Gardner wrote of. The book makes a mockery of the distinction between recreational and serious mathematics.) He passed away in 2010 at the age of 95.</p><p>There is a gathering in his honor, held roughly every 2 years, called <a href="https://www.gathering4gardner.org/">Gathering For Gardner</a>. (It was cancelled in Spring 2020 and Spring 2021 because of COVID--though it's in Atlanta, where the CDC is, so they could have had it as an experiment and told the CDC the results.) You have to be invited to go to it. I got an invite for 2016 from my contact at World Scientific, who published my previous book, <i>Problems with a Point: Exploring Math and Computer Science</i>, co-authored with Clyde Kruskal (I had two blog posts on it, <a href="https://blog.computationalcomplexity.org/2019/02/problems-with-point-exploring-math-and.html">here</a> and <a href="https://blog.computationalcomplexity.org/2019/04/problems-with-point-not-plug-just-some.html">here</a>, and you can buy it on Amazon <a href="https://www.amazon.com/Problems-Point-Exploring-Computer-Science/dp/9813279974">here</a>.) 
I did three posts on G4G-2016 (<a href="https://blog.computationalcomplexity.org/2016/04/some-short-bits-from-gathering-for.html">here</a>, <a href="https://blog.computationalcomplexity.org/2016/05/some-more-bits-from-gathering-for.html">here</a>, and <a href="https://blog.computationalcomplexity.org/search?q=Gathering">here</a>).</p><p>Aside from seeing some great talks that I understood and liked, I also picked up a pamphlet titled:</p><p><b>The Julia Robinson Math Festival</b></p><p><b>A Sample of Mathematical Puzzles</b></p><p><b>Compiled by Nancy Blackman</b></p><p>One of the problems, credited to Alan Frank, was</p><p>How can you divide and distribute 5 muffins for 3 students so that everyone gets 5/3 and the smallest piece is as big as possible?</p><p>They had some other values for muffins and students as well. </p><p>I solved the (5,3) problem and the other ones as well. That was fun. </p><p>When I got home I began looking at the problem for m muffins and s students. I let f(m,s) be the largest possible smallest piece when giving out m muffins to s students. I proved a general theorem, called the <i>Floor-Ceiling theorem</i>, that always gives an upper bound FC(m,s) on f(m,s). I worked out formulas for </p><p>f(m,1) (trivial), </p><p>f(m,2) (trivial), </p><p>f(m,3) (it's always FC(m,3)),</p><p> f(m,4) (it's always FC(m,4)).</p><p>While working on f(m,5) I found that f(m,5) was always FC(m,5) EXCEPT for m=11. So what's up with f(11,5)? </p><p>By the Floor-Ceiling theorem f(11,5) \le 11/25. We (at that point several ugrads and HS students had joined the project) were unable to find a protocol that would show f(11,5)\ge 11/25. Personally I thought there WAS such a protocol but perhaps it was more complicated than the ones we had found (we were finding them by hand using some easy linear algebra). Perhaps a computer program was needed. We did find a protocol for f(11,5)\ge 13/30, which surely was not optimal. 
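A quick sanity check of the (5,3) puzzle above: the known answer is f(5,3) = 5/12. The short Python sketch below (the particular cuts shown are just one optimal protocol; there are others) verifies a division achieving smallest piece 5/12, using exact fractions:

```python
from fractions import Fraction as F

# One protocol for 5 muffins, 3 students with smallest piece 5/12,
# matching the known optimal value f(5,3) = 5/12:
# cut 4 muffins into (5/12, 7/12) and 1 muffin into (1/2, 1/2).
muffins = [[F(5, 12), F(7, 12)] for _ in range(4)] + [[F(1, 2), F(1, 2)]]

# Two students each get {7/12, 7/12, 1/2}; one student gets four 5/12 pieces.
students = [
    [F(7, 12), F(7, 12), F(1, 2)],
    [F(7, 12), F(7, 12), F(1, 2)],
    [F(5, 12)] * 4,
]

pieces = [p for m in muffins for p in m]
assert sum(pieces) == 5                                          # all muffin used
assert sorted(pieces) == sorted(p for s in students for p in s)  # pieces match up
assert all(sum(s) == F(5, 3) for s in students)                  # everyone gets 5/3
print(min(pieces))  # smallest piece: 5/12
```

Checks like this, with the protocol found by hand or by a solver, are easy to automate for other small (m,s) values.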
</p><p>While on an Amtrak I began working out the following train of thought: A protocol showing f(11,5)\ge 11/25 MUST have </p><p>(1) every muffin cut into two pieces,</p><p>(2) 3 students get 4 pieces, </p><p>(3) 2 students get 5 pieces. </p><p>While working on getting a protocol for f(11,5)\ge 11/25 with these properties I found that... <i>there could be no such protocol</i>! Then by reworking what I did I found that f(11,5)\le 13/30. So it was done! And we had a new technique, which we call <i>The Half Method</i>. To see the full proof see my slides <a href="http://www.cs.umd.edu/~gasarch/MUFFINS/muffintalkGen.pdf">here</a>.</p><p>The story above is typical: We get f(m,k) for all 1\le k\le SOMETHING, we get stuck, and then we find ANOTHER technique to show upper bounds (which in this case are limits on how well we can do). This happened about 8 times depending on how you count. After a while we realized that this could not just be an article, this was a book! World Scientific agreed to publish it, and it's out now.</p><p>Misc Notes</p><p>1) I got a conference paper out of it, in the Fun with Algorithms Conference, with some of the co-authors on the book, and some other people. <a href="https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8806">Here is the conf paper</a>.</p><p>2) Early on we realized that f(m,s) = (m/s)f(s,m) so we only had to look at the m>s case.</p><p>3) The fact that f(m,s) exists and is rational is not obvious, but is true. In fact, f(m,s) can be found by a mixed-integer program. </p><p>4) Late in the process I found that there was a by-invite-only math newsgroup that had discussed the problem, and in fact was where Alan Frank first posted it. I obtained their materials and found that they had already shown f(m,s)=(m/s)f(s,m) and also that the answer is always rational and exists. 
Aside from that our results did not overlap.</p><p>5) Even later in the process Scott Huddleston emailed me (out of the blue) that he had a program that solved the muffin problem quickly. I was skeptical at first, but he did indeed have a whole new way to look at the problem and his code was very fast (I had Jacob Prinz, one of the co-authors on the book, recode it). Later Richard Chatwin (see <a href="https://arxiv.org/abs/1907.08726">here</a>) seems to have proven that Scott's method always works. The approach of Scott and Richard is where to go if you want to do serious further research on muffins. My book is where you want to go if you want to learn some easy and fun math (a HS student could read it). </p><p>6) I co-authored a column with Scott H, Erik Metz, and Jacob Prinz on muffins, featuring his technique, in Lane's complexity column, <a href="http://www.cs.umd.edu/~gasarch/papers/sigmuffins.pdf">here</a>.</p><p>7) I had an REU student, Stephanie Warman, write a muffin package based on the book.</p><p>8) I gave an invited talk on The Muffin Problem at a Joint AMS-MAA meeting. </p><p>9) I gave a talk at Gathering for Gardner 2018 on The Muffin Problem. </p><p>10) I often give talks on it to groups of High School students.</p><p>11) When I teach Discrete Math Honors I talk about it and assign problems on it- it really is part of the course. As such it's a good way to reinforce the pigeonhole principle. </p><p>12) I contacted Alan Frank about my work. We arranged to meet at an MIT combinatorics seminar where I was to give a talk on muffins. He brought 11 muffins, with 1 cut (1/2,1/2), 2 cut (14/30,16/30),</p><p>and 8 cut (13/30,17/30) so that the 5 of us could each get 11/5 with smallest piece 13/30. </p><p>13) Coda: </p><p>Why did I keep working on this problem? I kept working on it because I kept hitting barriers and (with co-authors) breaking them with new techniques that were interesting. 
If early on a barrier was not breakable then I would have stopped. If (say) Floor-Ceiling had solved everything then I might have gotten a paper out of this, but surely not a book.</p><p>Lesson for all of us: look around you! It's not clear what is going to inspire a project!</p><p>Lasting effect: I am reluctant to throw out old math magazines and pamphlets since you never know when one will lead to a book.</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>gasarchhttp://www.blogger.com/profile/03004932739846901628noreply@blogger.com3tag:blogger.com,1999:blog-3722233.post-53846532887107657562021-10-08T10:10:00.002-05:002021-10-08T10:12:46.275-05:00C++ is for Cookie and That's Good Enough for Me<p>Potbelly, a local sandwich chain, made me an offer I couldn't refuse: change my password and earn a free (and quite tasty) oatmeal chocolate chip cookie. A free cookie is a great motivator, so after checking that this wasn't some clever phishing attack, I changed my password and got my cookie. Not sure why Potbelly wanted me to change my password but happy to take their cookie.</p><p>Potbelly likely didn't make this offer to everyone, so what if you want a cookie?</p><p></p><ol style="text-align: left;"><li>Use an app to get a cookie delivered.</li><li>Visit a specialty cookie store.</li><li>Go to your local supermarket and pick up a package of <a href="https://www.marianos.com/p/chips-ahoy-original-chocolate-chip-cookies-family-size/0004400003338">Chips Ahoy!</a>.</li><li>Buy some pre-made <a href="https://www.marianos.com/p/pillsbury-ready-to-bake-chocolate-chip-cookie-dough/0001800081778">cookie dough</a> and put it in the oven.</li><li>Buy some <a href="https://www.marianos.com/p/betty-crocker-chocolate-chip-cookie-mix/0001600030650">cookie mix</a>, add ingredients and bake.</li><li>Find a <a href="https://www.bettycrocker.com/recipes/ultimate-chocolate-chip-cookies/77c14e03-d8b0-4844-846d-f19304f61c57">cookie recipe</a>, buy the ingredients and get cooking.</li><li>Get fresh 
ingredients direct from a farm stand.</li><li>Grow and gather your own ingredients, a la <a href="https://shop.scholastic.com/teachers-ecommerce/teacher/books/pancakes-pancakes-9780545653619.html">Pancakes Pancakes</a>.</li></ol><div>In machine learning we seem to be heading into a similar set of choices:</div><div><ol style="text-align: left;"><li>Not even realizing you are using machine learning, such as recommendations on Netflix or Facebook.</li><li>Using ML implicitly, like talking to Alexa.</li><li>Using pre-trained ML through an app, like Google Translate.</li><li>Using pre-trained ML through an API.</li><li>Using a model like GPT-3 with an appropriate prompt.</li><li>Using an easily trained model like <a href="https://aws.amazon.com/fraud-detector/">Amazon Fraud Detector</a>.</li><li>Using an integrated machine learning environment like <a href="https://aws.amazon.com/sagemaker/">SageMaker</a>.</li><li>Using pre-built ML tools like TensorFlow or PyTorch.</li><li>Coding up your own ML algorithms in C++.</li><li>Building your own hardware and software.</li></ol><div>and I'm probably missing a few options.</div><div><br /></div><div>When you want cookies or learning, do you buy them prepackaged or roll your own? And when people offer them to you for free, how wary should you be?</div></div><p></p>Lance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.com3
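To make the bottom rungs of the ML list above concrete, here is a minimal "code it up yourself" sketch (in Python rather than C++, purely for brevity): an ordinary least-squares line fit written from scratch, exactly the kind of thing the pre-built tools higher up the list do for you.

```python
# "Roll your own" machine learning: least-squares line fit with no libraries.
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares, computed by hand."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on the line y = 2x + 1
print(a, b)  # 2.0 1.0
```

The trade-off is the same as with cookies: the higher rungs save effort, while the lower rungs teach you how the ingredients actually work.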