Sunday, September 12, 2021

Review of A Blog Book based on the Less Wrong Blog

There is a blog called lesswrong. Many people contribute to it (how many people must contribute to a website before it stops being called a blog and starts being called... Not sure what?). The theme is rationality. They (not sure who they are) have made a best-of collection from lesswrong which is named

A Map that Reflects the Territory (available here)

Actually, it's not one book, but five mini-books. I quote the titles and paraphrase the first sentence of each:

Epistemology:  How we come to know the world.

Agency: The ability to take action in the world and control the future.

Coordination:  The ability of multiple agents to work together.

Curiosity: The desire to understand how the world works.

Alignment: The problem of aligning an AI's goals with our own.

I have written a review of the book. The book was my first exposure to the blog, except for sometimes reading about it, probably on Scott's blog.

I am posting this to both the complexity blog and to lesswrong, though with lesswrong I will have a different intro since they know lesswrong but might not know complexityblog.

My review is here.

I would appreciate intelligent comments and suggestions, which I will use to improve the review.

Thursday, September 09, 2021

The Death of Expertise

Four years ago I tried to catch up with deep learning and this summer I aimed to try to catch up again. Who would've thought 2017 would already be ancient history?

I watched the lectures in the latest MIT course, played with GPT-3 and Codex, read the new Stanford manifesto on what they call foundation models: models trained so that they can perform a wide range of tasks instead of a single goal. We've seen machine learning solve protein folding, detect cancer from x-rays better than radiologists, not to mention effectively solving many of the traditional AI problems (language translation, voice and face recognition, game playing, etc.) Watching Codex generate computer code from plain English is a game changer. Far from perfect, but this technology is in its infancy.

From what I can tell, the main technological advances focus on learning when we don't have labelled data such as new techniques to transfer knowledge to new domains and using generative models to provide more data to train ML algorithms.

The trend that should worry us all is that deep learning algorithms in the long run seem to do better if we limit or eliminate previous human knowledge from the equation. The game playing algorithms now train from just the rules alone (or even just the outcomes). We do use some knowledge: words come in some linear order, faces have hierarchical features, but not much more than that. Human expertise can help as we start solving a problem, even just to know what good solutions look like, but then it often gets in the way.

When we no longer use a skill we tend to lose it, like my ability to navigate from a good map. If we eliminate expertise we may find it very difficult to get it back.

There's a more personal issue--people spend their entire careers building their expertise in some area, and that expertise is often a source of pride and a source of income. If someone comes along and tells you that expertise is no longer needed, or even worse is irrelevant or gets in the way, you might feel protective and, under the guise of that expertise, tear down the learning algorithms. That attitude will just make us more irrelevant--better to use our expertise to guide the machine learning models to overcome their limitations.

You can't stop progress, but you can shape it.

Sunday, September 05, 2021

Guest Post on Solving (or trying to) Poly Diophantine Equations by Bogdan Grechuk

(Guest post by Bogdan Grechuk)


Motivated by a Mathoverflow question (here) I have recently become interested in solving polynomial Diophantine equations, that is, equations of the form

P(x_1,...,x_n) = 0

for some polynomial P with integer coefficients. Because there are many such equations, I have decided to ask a computer to help me. Our conversation is presented here.


Highlights of the conversation: the height of a polynomial over Z in many variables is what you get when you make all of the coefficients positive and substitute 2 for every variable. For example, the height of

x^3 - y^4 + 5

is

2^3 + 2^4 + 5 = 8 + 16 + 5 = 29.
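To make the definition concrete, here is a minimal Python sketch of the height computation (my own illustration; the list-of-monomials representation and the function name are just convenient choices, not anything from the project):

# Sketch: the height of a polynomial over Z, given as (coefficient, exponents) monomials.
# Example: x^3 - y^4 + 5 is [(1, (3, 0)), (-1, (0, 4)), (5, (0, 0))].
def height(monomials):
    # Make every coefficient positive and substitute 2 for every variable.
    return sum(abs(c) * 2 ** sum(exps) for c, exps in monomials)

print(height([(1, (3, 0)), (-1, (0, 4)), (5, (0, 0))]))  # 8 + 16 + 5 = 29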

Note that, for all h, there are only a finite number of polynomials in many variables over Z with height h. With the help of my friend the computer, we looked at the equations with h=0,1,2, and so on, and tried to determine which ones have any integer solutions. As expected, the first equations were trivial, but at about h=22 we started to meet quite interesting equations for which we needed help from Mathoverflow to solve. The project is currently at h=29, with only one remaining open equation of this height. Read the conversation with the computer, or my mathoverflow question

here, or my paper here, to find out more. I INVITE you to join me in this quest!

Thursday, September 02, 2021

The hierarchy and GapP

There is a great but little-known theorem from the early 90's by Seinosuke Toda and Mitsunori Ogihara (buried as Lemma 2.3 in their paper) that shows the polynomial-time hierarchy is randomly low for Gap-P.

Let M be a non-deterministic polynomial-time machine. #M(x) is the number of accepting paths of M on x. #P is the set of functions f such that f(x) = #M(x) for an NP machine M. 

Gap-M(x) is the difference between the number of accepting and rejecting paths. Unlike #M(x), Gap-M(x) could be negative. Gap-P is the set of functions such that f(x)=Gap-M(x) for an NP machine M. Equivalently Gap-P is the difference of two #P functions. Gap-P inherits all the nice closure properties of #P while being closed under negation and subtraction.

Theorem (Toda-Ogihara): Let q be a polynomial and f be a function in Gap-P^PH. There is a function g in Gap-P and a polynomial p such that for all x, if you choose a binary string r randomly of length p(|x|), f(x) = g(x,r) with probability at least 1 - 2^{-q(|x|)}.

In other words, with randomness, the PH oracle for Gap-P disappears.

Let me show you how the proof works for a specific f in Gap-P^PH, namely the indicator function for SAT: f(φ) = 1 if φ is satisfiable and 0 otherwise.

Recall Valiant-Vazirani showed how to randomly reduce a formula φ to a set of formulas ψ1,…,ψk such that

  1. If φ is not satisfiable then for all i, ψi will not be satisfiable
  2. If φ is satisfiable then with high probability for some i, ψi will have exactly one satisfying assignment.
Picking k > 4|x|q(|x|) will reduce the error in (2) to under 2^{-q(|x|)}. Let g(φ,r) use r to choose ψ1,…,ψk and output the Gap-P function 1-∏(1-#ψi), using the closure properties of Gap-P, where #ψi is the number of satisfying assignments to ψi.
  • If φ is not satisfiable then for all i, #ψi=0 and g(φ,r) = 0.
  • If φ is satisfiable then with high probability for some i, #ψi=1 and thus g(φ,r) = 1.
Note that if r is a bad random sequence and φ is satisfiable, g(φ,r) might be an exponentially large integer, positive or negative. It's right most of the time but when it's wrong it's terribly wrong.
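To see the arithmetic concretely, here is a toy Python illustration (mine, not from the paper) of the value 1-∏(1-#ψi) on a few made-up count vectors, including the case where a bad random string makes the value wildly wrong:

# Toy illustration: g outputs 1 - prod(1 - c_i), where c_i = #ψi.
from math import prod

def g_value(counts):
    return 1 - prod(1 - c for c in counts)

print(g_value([0, 0, 0]))  # φ unsatisfiable: every count is 0, so the value is 0
print(g_value([0, 1, 0]))  # good random string: some count is exactly 1, so the value is 1
print(g_value([3, 0, 5]))  # bad random string: 1 - (-2)(1)(-4) = -7, terribly wrong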

Tuesday, August 31, 2021

Since we will soon be back in the classroom, how was Zoom? Anything you want to maintain?

UMCP will have all classes on campus this Fall. There is a Mask Mandate. All students and faculty have to get vaccinated unless they have a health or religious exception. 92% are vaccinated, which I interpret as meaning people are NOT abusing the exceptions (though I still wish it was higher, and it may go higher). (ADDED LATER- right after I posted this I got an email saying that UMCP is now up to 97%.)  Those NOT vaccinated have to get tested - I think twice a week.

Now that we are back in the live-classroom, here are some thoughts about teaching on zoom. 

I taught on zoom:

Spring 2020: The last half of both Ramsey Theory and Automata Theory (Reg, CFG, P, NP, Dec, Undec)

Fall 2020:  Cryptography

Spring 2021: Honors Discrete Math and Automata theory


a) I taught in the usual time slot but I recorded the lecture so those who could not make it (more common during the pandemic) could still see it. Attendance was low, verbal interaction was low, but chat-interaction was very good. I am looking into whether we can do a chat in an in-person class. I was recording lectures before the pandemic and will keep doing so.

b) My exams were open-notes, open-book, open-web. That cuts down on ways they can cheat, though they can still phone-a-friend. Or ask their cat.  Unusual-but-correct answers can happen, as I discussed in this blog.

c) I gave my muffin talk a few times on zoom. In person it goes very well as my enthusiasm is contagious.  On Zoom that effect is dampened so the audience was more sedate.  I gave it as a special lecture to High School students and to my REU students. Note that it was NOT part of a class, so the usual motivation to learn it to do the HW is gone. Hence it's more important that they be excited about it.

d) In person I carefully make sure that I wear a funny T-shirt every day, and it's a different one, and usually a math one to (if possible) match the topic. On Zoom I did not bother, though I sometimes used wallpaper to match the topic.

e) I had to make up slides for Aut theory and for some of Discrete Math. For Crypto I already had slides. I like the slides I made up and will use them in the future. But see next point. 

f) In Discrete Math I went faster than usual- perhaps because it's on slides, perhaps because there were fewer questions since it was on zoom, perhaps because Emily my TA was so awesome that they had fewer questions. (She is very interested in education and did a guest post about the pandemic and education here.) As a result I actually learned and presented the proofs that (1) e is irrational (my slides are here) and (2) that Liouville numbers are transcendental (my slides are here). While I enjoyed learning those theorems and I think the students understood them on some level, I will slow down next time.

g) Ramsey Theory: It is impossible to teach the Poly VDW theorem on slides, so I had to omit that part of the course. 

h) Bottom Line: Did the students learn more? less? the same? My impression is that the students learned about the same, but really really didn't like it. And that's legit- that is NOT just students complaining.

Wednesday, August 25, 2021

The Long Road

Guest blogger Varsha Dani tells us why it's never too late.

This week, I am starting as an Assistant Professor at RIT and I am super excited about it. What's the big deal, you are probably thinking. Don't lots of people get hired in tenure track positions every year? Sure. But the difference in my case is that I got my Ph.D. in 2008. 

Why didn't I look for a position right away? There were a number of reasons. I was burned out. I was going to have a baby. The job market was not particularly good that year. I would have had a two-body problem. And most importantly, I thought it was just going to be a short break. I thought that there was no rush...

What did I do in those intervening years? A lot of things. I spent a lot of time with my kids, including part home-schooling them for some time. I found some new interests, both in Math and CS and outside. I found new collaborators and did some research in new areas, just for fun. Sometimes I was funded for it through grants, but mostly I wasn't. I wrote a lot of Computer Science and some math materials for Brilliant.org. I organized math clubs at my kids' elementary and middle schools. I hiked. I wrote poetry. I never intended nearly 13 years to go by, but somehow they did.  At some point I remembered that I had meant to take a short break, not to give up on being an academic altogether. But by then it seemed too late. At the beginning of my meant-to-be-short hiatus, I used to jokingly refer to myself as a "scholar at large" but by the time a decade had gone by I had started to feel extremely isolated and being a scholar for its own sake was not something to joke about anymore.  Each year I rolled the job-search dice, but with each passing year it seemed more and more futile to do so, and more and more of an imposition to ask people to write recommendations for me. And then, out of the blue, last year, I found a whole community of independent researchers who, like me, were pursuing their scholarly interests despite not being employed to do so, and who felt unapologetically unembarrassed, even proud of it.  And, even more out of the blue, this year I got an offer. And now, this Fall, I am actually doing it. I am actually an academic! To be honest, I feel more than a little trepidation about it, mixed in with the excitement. 

So why am I telling you this? Partly to celebrate. Partly to publicly thank my spouse, who has been extremely supportive of all my decisions (and all my actions that were born of indecision) over many years. Partly to give a shout-out to the wonderful folks at the Ronin Institute who helped me remember that we do science (and computer science) because we love it, not because it pays the bills. Ironically, I believe I needed to be reminded of that before I could get a job! But mostly, I'm writing this to reach out to anyone out there who thinks that their decisions have led to a one-way street they no longer want to be on. It may be hard, and there are, of course, no guarantees, but you won't know whether you can turn around, unless you try. 

Sunday, August 22, 2021

When Words Get Stretched Beyond Their Original meaning

STORY ONE:

 On a Jeopardy rerun with Alex Trebek the question (actually the answer, given the show's format) was (I paraphrase)


Who resigned his commision in the US Army Air Force in April 1941 after President Roosevelt publicly rebuked him for his views?

The answer (actually the question--why does Jeopardy do this answer-question thing? It drives me nuts!) was

Charles Lindbergh.

Alex Trebek then said Charles Lindbergh's views on WW II were not politically correct.

This really struck me since Politically correct means, to quote Wikipedia:

a term used to describe language, policies, or measures that are intended to avoid offense or disadvantage to members of particular groups in society.

Wikipedia also adds that the term is generally used pejoratively with an implication that these policies are excessive or unwarranted. 

 But Alex Trebek was using the term to mean incorrect, or perhaps incorrect given what we know now, or (if you think history is written by the winners) perhaps incorrect since Germany lost the war. But my point is that I really don't think the term politically incorrect makes sense here.

STORY TWO

More recently I heard an anti-masker say

We should not let some woke school board take the right to not wear a mask away from parents and children.

Independent of whether you are anti-mask-mandate or pro-mask-mandate, this seems like a strange use of the word woke, which means, to paraphrase Wikipedia:

Having an awareness of racial prejudice, gender prejudice, sexual orientation prejudice, and the past and current discrimination they have and do cause. 

I've seen it used both positively and negatively.

The anti-masker's use of the term seems odd in that mask wearing is not a woke issue. Perhaps he should have said

We should not let some Nazi school board take the right to not wear a mask away from parents and children.

The term Nazi, while not actually correct, conveys that the school board is authoritarian. However, he really could not use that term since he was a neo-Nazi and proud of it. That raises a question: what pejorative term can a neo-Nazi use when they want to say someone is authoritarian? I ask non-rhetorically.

But I am getting off topic here- my real point is that the word woke is being used to mean Authoritarian which is not even close to its original meaning. 


MY POINT

The above are examples of how a word in English may change its definition over time, which is not really news, but I found the examples interesting since I saw the origin of these words.

BILL, THIS IS A COMPLEXITY BLOG! SO TALK ABOUT COMPLEXITY. OR MATH!

In math do words change their meaning over time? Yes. Here are a few

Function: at one time `function' implicitly meant a function that occurs in nature, so only continuous and perhaps differentiable functions qualified.

Sets: probably similar.

Efficient: At one time this was an informal notion (Joe Kruskal's paper on MST (see here) is an example of that), then it seemed to be P or perhaps BPP. For some it's linear or O(n log n) with a small constant. Rather than say the notion changed, it's more like it was never that well defined in the first place, and still isn't.

Constructive: The many different definitions of this word could be a blog post of its own. In fact, I thought it was, but I could not find it. I did find lots of blog posts that use the word constructive in different ways.

Elementary: Also has many definitions, though they are closer together than for Constructive. This one I did do a post on, here.



Friday, August 20, 2021

Trusting Scientists

 A tweet that made me think.

The point here is subtle. We don't get on a plane because we "trust scientists", rather we do so because of the strong safety record of commercial aviation. I knew some physicists who won't get on a commuter plane because they worry about the science. Never stopped me.

It is science that we trust to tell us why planes fly, or that the water in our tap is (mostly) safe and healthy. I'm not a big fan of beans but not because of the science. Of course I trust the science that created the vaccines.

It's not just science, but solid engineering and lots and lots of testing. 

Science isn't always right or consistent. When I was a kid not that long ago, we had nine planets in this solar system, dinosaurs were killed off by climate change and homosexuality was a mental illness. Science is fluid, updating as we learn with new data, models and experimentation. Science is at its best when it doesn't trust itself.

Sometimes people say trust in science to reinforce their beliefs. I've seen smart people say "Trust in the science" about whether vaccinated people should wear masks with completely different conclusions.

I'm a scientist, should you trust me? Let me quote another Paul G. 

“There’s a slightly humorous stereotype about computational complexity that says what we often end up doing is taking a problem that is solved a lot of the time in practice and proving that it’s actually very difficult,” said Goldberg.

The quote comes from a recent Quanta Magazine article about Paul's recent work with John Fearnley, Alexandros Hollender and Rahul Savani on the hardness of gradient descent. Even many NP-complete problems these days can often be solved in practice.

Let's end with the quote attributed to statistician George Box, "All models are wrong, but some are useful". Science gives us ways to understand the world and we need to both trust in the science but know the limitations of what it has to say.

Sunday, August 15, 2021

What are the most important 46 papers in Computer Science? Harry Lewis has a book about them!

 (Disclosure:  Harry Lewis was my PhD advisor. For a blog post on  disclosures and bias see my post on that topic here.)

Harry Lewis has a book out: Ideas that Created the Future: Classic Papers in Computer Science

He picked out the 46 (why 46? Why not 46?) classic papers in computer science and, for each one, has a short article saying why it's important, and then has the paper itself, though perhaps shortened (leaving out the boring parts), or in some cases he has an excerpt of a book (e.g., The Mythical Man Month, which is why I blogged about that book recently here).

Harry Lewis has blogged about his book here where he points to my review which is in SIGACT News. 

OR you can use my link to my review here.

The list of 46 papers had some constraints, so if you wonder why isn't X there it might have hit one of those constraints.

1) No paper past 1980 (he had to stop somewhere).

2) He preferred short readable papers to long or unreadable ones (don't we all!). Before thinking `Gee why isn't paper X in the book' go read paper X. 

3) Some papers cost too much to get permission to reprint. My review points to one such paper that I found 5 links to on the web.

4) We don't need X papers on topic Y.

Of more interest are some papers that you had not heard of but that we can now see are important.

For more thought, read my review!

For even more information, buy the book!



Thursday, August 12, 2021

Recognizing Faces

I sometimes have trouble recognizing faces, matching faces to people I've interacted with in the past. It's not a disease like prosopagnosia; I can certainly tell the difference between faces and have no trouble with people I work with directly. But if I haven't seen someone in a while, I may not recognize them or confuse them for someone else. It's especially bad out of context, say running into a professor from my campus on the streets of Frankfurt. It's gotten worse with age but I've had challenges my whole life.

I have my coping mechanisms. I start a conversation to get enough clues to figure out who I'm talking to. I'll google an image before I'm supposed to meet someone I haven't seen in a while. Sometimes I'll just say "Remind me how to pronounce your name again". Sometimes I'll just say something embarrassing thinking the person I'm talking to is someone else.

Name tags are useful, if it isn't obvious you are looking at them. Zoom has been great--everyone's name is just there. I worry that 18 months of zoom meetings means I've lost much of my coping ability, much the way I can no longer navigate by maps the way I used to.

We have technological solutions but are mostly unable to make use of them. Through the magic of machine learning, computers have gotten extremely good at recognizing faces. Nevertheless Google Goggles actively prevented their one killer app, telling you who you were looking at, for privacy reasons. Perhaps they could limit it to people in your contacts with pictures you uploaded. It would only recognize people you already know.

I know I'm not alone, and I'm writing this post so others won't feel alone. And next time you see me and I look confused, remind me of your name.

Sunday, August 08, 2021

Combining two posts: Blankface (Scott Aa) and Is Science Slowing Down? (Scott Al)

(I also posted this to the Less Wrong Website. At least I tried to- I don't quite know if or when it will appear there as it's my first post there.)

Some papers result from taking two papers and combining them. Perhaps nobody else had read both of them so you can say something new! Or (looking over this post) it may guide people to two really good papers, or in this case two really good posts. 

This post will draw from two excellent blog posts.

Scott Aaronson blogged on his website Aug 2, 2021 about blankfaces, people who let stupid or undefined rules dictate what you can do, without apology (see his post for a better explanation). One example that struck me, which I quote:

No, I never applied for that grant. I spent two hours struggling to log in to a web portal designed by the world's top blankfaces until I finally gave up in despair.


Scott Alexander blogged on LessWrong on Nov 26, 2018 about Is science slowing down?, which answers with an emphatic yes. His point is that science-per-researcher is much less than it used to be, and he has graphs and stats to prove it (see his post for the evidence and some speculation as to why this is). One of the reasons he gave struck me, which I quote:

Certain features of the modern academic system like underpaid PhDs, interminably long postdocs, endless grant writing drudgery, and clueless funders have lowered productivity. The 1930's academic system was indeed 25x more effective at getting researchers to actually do good research.

(A commenter reminded me that Scott Alexander himself dismisses this reason. I do not.) 

(I note that he gives other reasons as well, most notably for our field that the low hanging fruit is gone. Our lack of progress on P vs NP is likely because it's a hard problem, rather than the reason above. Of course, if it's solved tomorrow by an outsider without funding, I will happily be proven wrong.)

Scott Alexander hits upon two types of blankfaces (without using the term).

Grant writing drudgery: the rules for how to submit get more and more detailed and onerous. This is what Scott Aaronson was alluding to. There are other ways it's drudgery as well.

Clueless Funders: the people deciding who gets funded might not know the area (actually, in my experience the grants I've reviewed have been quite good, and the problem is more that there is not enough money to award all that are deserving).

SO I pose the following, non-rhetorically as always:

1) How big a factor in the slowing down of science is that blankfaces get in the way?

2) What can we do about it?








Thursday, August 05, 2021

Pole Vault Live Blogging

As I write this I'm watching the women's pole vault final in the Olympics. Of the 15 women who made the finals, only four remain after two heights.

To expand on my tweet, I find the pole vault the purest of the Olympic Sports. No electronic monitors and timers, no biased judges, no video review. No points deducted for bad form or failing to stick the landing. No disqualification for a false start or stepping over a line. Either you clear the bar without knocking it down, or you don't.

The high jump has similar properties, but just not as cool looking.

All four made the third height. Now onto 4.85 meters. An American, a Greek, a Brit and a Russian (sorry I meant member of the Russian Olympic Committee).

Back in the day, the TV coverage was rather limited. We'd only see the Americans and the medal winners, with too much time spent on human interest backgrounds. Now in the streaming world I can watch every competitor. The good and the bad. Live as it happens.

The Russian Anzhelika Sidorova just cleared 4.85 on her first attempt. So did the Brit Holly Bradshaw and the American Katie Nageotte. The Greek Katerina Stefanidi missed her first attempt but decided to pass on the rest. All now go to 4.90 but Stefanidi only gets two attempts while the rest get three.

Stefanidi missed her first attempt at 4.90. She has one attempt left.

Sidorova and Bradshaw fail to even reach the bar. Nageotte can't clear the bar.

Now the moment that means everything for Stefanidi. Her last attempt. Make it or the rest get the medals. Stefanidi fails to get a good plant and doesn't get into the air at all. Her Olympics are over.

Second attempt for the others. Sidorova and Bradshaw knock down the bar. Nageotte clears the bar, putting her in prime position. Go USA!

Imagine if we judged research papers this way. Either they get into a conference or they don't. Wait, that is the way it happens, although not always without biased judging.

Sidorova is passing on her last attempt at 4.90. Bradshaw goes for it but hits the bar. She has to settle for Bronze.

Bar is now at 4.95 meters. 

Sidorova gets only one attempt at 4.95. If she makes it, she takes the lead, if she misses, she gets the silver. 

Sidorova doesn't clear and the gold goes to the American Katie Nageotte! 

Just for excitement Nageotte is going for 5.01 meters, which would be her first over five meters in competition. In the men's pole vault, the Swede Armand Duplantis (great pole vault name!) easily won the gold. He moved the bar to 6.19 meters to break his own world record. He came oh so close on his first attempt but failed to clear.

Nageotte is just too excited winning the gold to focus enough to make a serious attempt at 5.01. Can't blame her.

Thus ends the best sport in the Olympics.

Sunday, August 01, 2021

Do Four Colors Suffice?

(Guest Post by David Marcus)

Comment by Bill: Haken and Appel proved that all planar maps are 4-colorable. Or did they? David Marcus emailed me that it's not quite true and I asked him to post on it, so here it is. The meta point is that math can be very subtle.

And now David Marcus's post:

Is the Four Color Map Theorem true?

It is commonly believed that the Four Color Map Theorem says that four colors suffice to color a planar map. While this is true for any map a non-mathematician would dream up, it is not true for maps a mathematician might dream up without some restriction on the regions that are allowed. This is shown in Hud Hudson's Four Colors Do Not Suffice which appeared in the American Math Monthly, Volume 110,  No. 5, May 2003, pages 417--423. 

Hudson's article is written in a very entertaining style. I recommend that you read it. He constructs a map consisting of six regions R_1,...,R_6. Each region is bounded and path connected. There is a line segment B that is in the boundary of all six regions. So, six colors are needed, since all six regions share a common boundary. The construction is similar to the topologist's sine curve. For each i, the union of R_i and B is not path connected. Hudson also shows that for any n, there is a map that requires at least n colors.


Hudson thus disproves the following statement:

1)  Four colors are sufficient to color any map drawn in the plane or on a sphere so that no two regions with a common boundary line are colored with the same color.

Appel and Haken actually proved the following:

2) Four colors are sufficient to color any planar graph so that no two vertices connected by an edge are colored with the same color.

Thursday, July 29, 2021

Covid Stats


People take this as proof that once you're vaccinated there are no worries. But I have so many challenges with this statistic.
  • This statistic is down from 100% a year ago. Are vaccines working worse now?
  • There are likely correlations between those vaccinated and those who take precautions like mask wearing and social distancing, though I'm not sure which way those correlations go.
  • Those unvaccinated are more likely to be near others unvaccinated, so they are more likely to get infected and hospitalized.
Even though one shouldn't draw the conclusion from the statistic, that doesn't mean the conclusion is false. The gold standard is the double-blind vaccine trials, which clearly showed the vaccines to be effective and safe. So get the vaccine, not that this blog post will convince those who have been making the conscious choice not to vaccinate to change their minds.

The other stat I found odd was that the life expectancy dropped 1.5 years in 2020. This doesn't mean on average we'll live 1.5 years less. Rather it means that someone who lives their whole life in 2020 conditions, widespread Covid without vaccines, would on average live 1.5 years less than someone who lives their entire life in 2019 conditions pre-Covid.

Oddly enough if you made it to 2021 your life expectancy will probably increase (assuming you've been vaccinated). We made tremendous progress in understanding vaccine technology in 2020. Also people with conditions that could have limited their later life span were more likely to be fatal Covid victims, meaning those who are left would live longer.

Sunday, July 25, 2021

I wish problems I have with computers really were my fault

 As you know, the website was out for a few days, as Lance explained here.

When I first could not get to this blog, my thought was

OH, I must have changed some setting by accident. When Lance gets back (he was on vacation) he'll know how to fix it. Bad timing that it happened when he was gone, though prob not an accident- with him on vacation I was at the site more often and had more of a chance to screw things up. AND Lance will tell me what I did and I'll know to not do it again. And I will learn more about how this all works which will help me in the future!

When Lance got back we found out that NO Bill didn't do anything wrong. The blog site company  that we work with did an update and BLAH BLAH BLAH.  Reminds me of the theme behind the TV show Seinfeld: No Hugs, No Learning. At least no learning. I am not in the slightest more enlightened. 

Lance worked with them and YADA YADA YADA the problem is fixed, so I am very happy about that. 

On the one hand I wish it had been my fault so I would learn something.  On the other hand, if it was my fault would it have been as easy to fix? Would I really have learned something?

When something does not work my protocol is

1) Turn the machine off and on again (e.g., log out and log in again). I want to say 

this works surprisingly often

but I doubt this surprises any of my readers, or is even news to them.

2) Spend at most 5 minutes trying to fix it myself. You will soon see that 5 minutes is a good choice for me.

3) Ask staff or Lance or Darling or my TA  (depending on the problem). 

4) They tell me to log off and log on again. When I tell them I already have they do something magical and it works again. I then ask them:

a) Could I have fixed this myself? 2/3 of the time the answer is no. They don't mean intellectually. They mean that I do not have access to what I need to fix it.

b) Did I do something wrong? I want to know so I won't do it again. About 99/100 times the answer is that I did nothing wrong (I don't recall the last time that I did).

Given a and b, I think 5 minutes is all the time I want to spend to try to fix it myself. 





Friday, July 23, 2021

Technical Difficulties


After returning from vacation last weekend (hello North Dakota--my 49th state visited), all sorts of odd problems arose. This blog stopped working, a P v NP paper was published on the ACM Transactions on Computation Theory website, and my personal emails were getting marked as spam. All is better, I hope.

Years ago I donated the URL computationalcomplexity.org to the Computational Complexity Conference, coincidentally held this past week, with the condition that I could continue to use the "blog" subdomain for this blog. Organizations continue but the people in them change, and when the website was "upgraded" on Saturday the pointers to make this blog work were left out. Thanks to Ashwin Nayak for getting it all straightened out and we're back online.

For the ToCT paper, a paper claiming to reduce 3-SAT to 2-SAT, and thus show P = NP, was originally rejected by the journal but a "disposition field" got inadvertently set to accept and wasn't caught until it showed up online. Editor-in-Chief Ryan O'Donnell quickly got on the case and ACM has removed the paper. P v NP remains as open as ever.

Fixing the email required me to learn far more about SPFs than I ever wanted to know.

By the way if anyone in Idaho wants to invite me to give a talk, I might be interested.

Sunday, July 18, 2021

Political Intersections: Trump honors Antifa member who was shot dead by police

 1) Trump and other reps have said the following about the Jan 6 event at various times:

a) The Jan 6 event was freedom fighters who were fighting the noble fight to overturn a fraudulent election. Rah Rah!

b) The Jan 6 event was a peaceful protest to overturn a fraudulent election. 

c) The Jan 6 event was democrats trying to make republicans look bad.

d) The Jan 6 event was Antifa. (See here)

e) Ashli Babbitt (a protestor who was shot by police at the Jan 6 event) should have been honored by flying the flag at half-mast (see here).

If we take the intersection of d and e we find that Trump wants to honor a member of Antifa who was shot by police. 

To be fair, Trump is entitled to change his mind. But I wonder- did he ever really think it was Antifa or was that a talking point? If he really thought so, then when did he change his mind? I ask non-rhetorically--- NOT a `gotcha question'.

(NOTE: I wrote this post a while back. Since then Trump and some other republicans are tending towards the Freedom Fighters narrative.) 

2) Vaccines:

a) Some people think that the vaccines are bad to take. I suspect they would give some (incorrect) health reasons, while the real reason may be political. (One reason is that the vaccine makes you magnetic. That sounds awesome! Others think that there is a microchip in the vaccine so that Bill Gates can track our movements. Gee, Mark Zuckerberg can already do that. Some think it will rewrite our DNA. A bio major I know tells me that such people are confusing messenger RNA with DNA. Great- we can now have a nice conversation and point out where they are wrong.)

b) Some people think we should NOT give vaccines to poor countries that need them, or to people in prison, since Americans should have priority. (NOTE- from a purely health-viewpoint this is not correct since a pandemic does not respect boundaries- if there is an outbreak in a different country or in a prison it will affect people not in those countries and not in prison.)

Are there people who believe a and b? I ask non-rhetorically. 


I am not surprised when people hold contradictory thoughts in their heads, but these two cases just struck me as particularly strange. Not sure why.

 


Sunday, July 11, 2021

Would you take this bet (Part 2) ?

 Recall from my last post (here)


I offer you the following bet: 

I will flip a coin.

If  HEADS you get 1 dollar and we end there.

If TAILS I flip again


If  HEADS you get 2 dollars and we end there.

If  TAILS I flip again


If HEADS you get 4 dollars and we end there.

If TAILS I flip again


The expected value is infinity.


Would you pay $1000 to play this game?

Everyone who responded said NO. Most gave reasons similar to what I have below. 

This is called The St Petersburg Paradox. Not sure it's a paradox, but it is odd. The concrete question of would you pay $1000 to play might be a paradox since most people would say NO even though the expected value is infinity.  See here for more background.

Shapley (see here) gives a good reason why you would not  pay $1000 to play the game, and also how much you should pay to play the game (spoiler alert: not much). I will summarize his argument and then add to it. 


1) Shapley's argument: Let's say the game goes for 50 rounds. Then you are owed about 2^{50} dollars.

The amount of money in the world is, according to this article, around 1.2 quadrillion dollars, which is roughly 2^{50} dollars.

So the expected value calculation has to be capped at (say) 50 rounds. This means you expect to get about 25 dollars! So pay 24 to play.
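As a quick sanity check, here is a short Python sketch (my own, not Shapley's) of the capped expected value; each round contributes half a dollar, so capping at R rounds gives R/2:

# Sketch: expected value of the game when capped at R rounds.
# The first head on flip k pays 2^(k-1) dollars and happens with probability 2^(-k).
def capped_expectation(R):
    return sum(2 ** (k - 1) * 2 ** -k for k in range(1, R + 1))

print(capped_expectation(50))  # 25.0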


2) My angle which is very similar: at what point is more money not going to change your life at all? For me it is way less than 2^{40} dollars. Hence I would not pay 1000. Or even 20. 

Exercise: If you think the game will go at most R rounds and you only want D dollars, how much should you pay to play? You can also juggle more parameters - the bias of the coin, how much they pay out when you win.

Does Shapley's discussion resolve the paradox? It depends on what you consider paradoxical. If the paradox is that people would NOT pay 1000 even though the expected value is infinity, then Shapley resolves the paradox by contrasting the real world to the math world.




Tuesday, July 06, 2021

Would you take this bet (Part 1) ?

 I am going to present a well known paradox (I didn't know it until last week, but the source I read said it was well known) and ask your opinion in this post, and reveal my thoughts in my next post.

I don't want you to go to the web and find out about it, I want your natural thoughts. Of course I can't stop you, but note that I did not give the name of the paradox. 

Here it is:

I offer you the following bet: 

I will flip a coin.

If  HEADS you get 1 dollar and we end there.

If TAILS I flip again


If  HEADS you get 2 dollars and we end there.

If  TAILS I flip again


If HEADS you get 4 dollars and we end there.

If TAILS I flip again


etc. 

1) Expected value: 

Prob of getting 1 dollar is 1/2

Prob of getting 2 dollars is 1/2^2

Prob of getting 2^2 dollars is 1/2^3

etc

Hence the Expected Value is 

1/2 + 1/2 + 1/2 + ... = INFINITY
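(If you want to play with the game yourself before answering, here is a minimal Python simulation sketch of my own; it is not needed for the question.)

# Sketch: simulate the game and look at the average payout over many plays.
import random

def play():
    payout = 1
    while random.random() < 0.5:  # tails: double the payout and flip again
        payout *= 2
    return payout

trials = 100_000
print(sum(play() for _ in range(trials)) / trials)  # the empirical average over the trials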


QUESTION: Would you pay $1000 to play the game?

Leave your answer in the comments and you may say whatever you want as well,

but I request you don't give the name of the paradox if you know it. 






Thursday, July 01, 2021

Intersecting Classes

If you have two complexity classes that have complete sets, the intersection might not, for example NP ∩ co-NP. The world of total-function classes acts differently.

Christos Papadimitriou and others defined a number of classes based on finding solutions to problems where solutions are known to exist for some combinatorial reason. While TFNP, the set of all such problems, might not have complete sets, all the other classes are basically defined by the complete search problem for the class, such as PLS, finding a local minimum. If you have two such classes 𝒜 and ℬ with complete search problems A and B, define the search problem D as

D(x,y): Find a solution to either A(x) or B(y)

D is complete for the intersection of 𝒜 and ℬ: First, the problem D is in 𝒜 since you can reduce the problem of finding a solution to either A or B to finding a solution to A. Likewise D is in ℬ.

Suppose you have a problem Z in 𝒜 ∩ ℬ. Then since Z is in 𝒜 and A is complete for 𝒜, finding a solution to Z(u) reduces to finding a solution of A(x) where x is easily computed from u. Likewise Z(u) reduces to finding a solution of B(y). So whatever solution D(x,y) gives you, it allows you to find a solution to Z(u). Thus D is complete for 𝒜 ∩ ℬ.

Some people nerd out to helicopters on mars. I nerd out to the complexity of complete sets. 

I learned about complete sets of intersections of total function classes from the talk by one of last week's STOC best paper awardees, The Complexity of Gradient Descent by John Fearnley, Paul W. Goldberg, Alexandros Hollender and Rahul Savani. The part above was well known but the paper goes much further.

Consider PPAD famously with Nash Equilibrium as a complete problem and PLS. PPAD ∩ PLS has complete sets by the argument above. But we can go further.

The class CLS is a variation of PLS where you find a local minimum in a continuous domain under some Lipschitz conditions, and it is known to sit in the intersection of PPAD and PLS. Fearnley et al. look at finding a minimum using gradient descent (the main tool for deep learning), and show that it is not only CLS-complete but complete for PPAD ∩ PLS. As a consequence CLS = PPAD ∩ PLS. Pretty cool stuff.

Sunday, June 27, 2021

Someone thinks I am a fine artist! Why?

A while back I got an  email asking me to submit to a Fine Arts Journal. Why me? Here are some possibilities:

1) They were impressed with my play: 

Sure he created the universe, but would he get Tenure? (see here) which did get into a play-writing contest and was performed (one of the actresses  scolded me since I took a slot from a real playwright).

2) They were impressed  with my Daria Fan Fiction (see the four entries here labelled as Daria Fan Fiction).

3) They were impressed with my play JFK: The Final chapter (see here). Unlikely since this was rejected by a play writing contest and is not well known (as opposed to my other works in the fine arts which are well known?)

4) They were impressed with my collection of satires of  Nobel Laureate Bob Dylan (here) .

5) They were impressed with some subset of (a) complexityblog, (b) Problems with a Point,  (c)  Mathematical Muffin Morsels, and (d) Bounded Queries in Recursion Theory. Or maybe just having 3 books on amazon is their threshold.  If it's complexityblog then Lance and I should co-author something for them.

6) It is a vanity-journal where you pay to publish. So why email me, who (a) is not an artist, (b) is not a fine artist, and most important (c) does not think of himself as a fine artist? The PRO of emailing me or people like me is they cast a wide net. The CON is--- there is no CON! It costs nothing to email me, and emailing me does not affect their credibility. That still raises the question of how they got my name.

7) Could it be phishing? If I click on something in the email, would they get my credit card number? Their email begins Dear Professor, not Dear Professor Gasarch. So they know I am a professor. Then again, I have known of ugrads who get emails that begin Dear Professor. (The emails to HS student Naveen and ugrad Nichole in the story I tell here were addressed to Dear Professor.)

8) They mistook me for my parents who, in 1973, put together an anthology of short stories titled Fiction: The Universal Elements for a Freshman Comp course my mom taught, see here. I note that their book ranks around 18,000,000, so even that explanation is unlikely. Actually the rank changes a lot- it was 12,000,000 this morning. Still, not what one would call a best seller. It's fun to see what is doing better: Bounded Queries in Recursion Theory (currently at around rank 6,000,000) or Fiction: The Universal Elements.

 If I ever get one of these emails from a History Journal I will submit my Satirical Ramsey Theory and the History of Pre-Christian England: An Example of Interdisciplinary Research (see here) just to see what happens- but  I will stop short of paying-to-publish. Or maybe I will pay-to-publish so that the next time I try to fool a class with it I can point to a seemingly real journal which has the article. 



Wednesday, June 23, 2021

I went to the ``debate'' about Program Verif and the Lipton-Demillo-Perlis paper

On Thursday June 17 I went to (on zoom- does that need to be added anymore?)

A Debate on Program Correctness

There was no subtitle but it could have been:

Have the points made in Social Processes and Proofs of Theorems and Programs by DeMillo, Lipton, and Perlis survived the test of time? (Spoiler Alert: Yes.)

I found out about it from the Lipton-Regan blog here

The debaters were Richard DeMillo and Richard Lipton and the moderator was Harry Lewis (Alan Perlis passed away in 1990).  Calling it a debate is not correct since DeMillo and Lipton (and Lewis) all agree. (DeMillo and Lipton even have the same first name!)  The DLP paper is in Harry Lewis's collection Ideas that created the future.  The event  should have been advertised as a discussion. However, it was a good discussion so this is not a complaint.

Here are some things that came out of the discussion.

1) The main topic was the 1979 DeMillo-Lipton-Perlis paper (see here) that gave arguments why Proofs of Program correctness could not work.

An all-too-brief summary of the DLP paper: Some researchers are trying to set up frameworks for doing proofs that programs are correct, analogous to the certainty we get with a proof of a Theorem in Mathematics. But proofs in Mathematics are, in reality, NOT that rigorous. Often details are left out or left to the reader. This is fine for mathematics (more on that later) but unacceptable for programs, which need rather precise and rigorous proofs.

How do theorems in mathematics really get verified? By having enough people look at them and make sure they match intuitions, what DLP call A Social Process.  (NOTE FROM BILL: Papers that are not important do not get looked at so there may well be errors.)

2) The notion of proving-programs-correct was very seductive; however, the people who were trying to do this had a blind spot about how proving-programs-correct and proving-theorems-correct differ.  In particular, a program is rather complicated and even stating carefully what you want to prove is difficult. By contrast, for most math statements, what you want to prove is clear. Note also that a program has lots of code (far more now than when DLP was written) and so much can happen that you cannot account for.

3) The DLP paper had a large effect on the field of program verification.  Funding for it was reduced and students were discouraged from going into it.

4) When DLP appeared DeMillo and Lipton were pre-tenure. Hence it took lots of courage to publish it. Alan Perlis had tenure and had already won a Turing award.  This did give DeMillo and Lipton some cover; however, it still took courage.

5) How did the Program Verification Community deal with the objections in DLP?  DeMillo said that he looked at a large set of papers in the field, and very few even mentioned DLP. He recommends reading the book Mechanizing Proof: Computing, Risk, and Trust by Donald MacKenzie (see here).

6) So how can we be more certain that programs are correct?

a) Testing.
b) Modularize and test. Fix errors. Modularize  and test. Fix errors...
c) Try to isolate side effects.
d) More testing.

Some point to Model Checking, which could be considered very sophisticated testing, but that's used to verify circuits and perhaps low-level code, not programs. Model checking is a success story, and note that Ed Clarke, E. Allen Emerson, and Joseph Sifakis shared a (well deserved) Turing award for this work. But see the next note.

6.5) An audience member pointed out that Program Verification people have won several Turing Awards

Dijkstra 1972

Floyd 1978 

Hoare 1980

Pnueli 1996

(Are there more?) 

so the field is alive and healthy. DeMillo responded that prizes for academic research are a poor measure of  success. 

7) Can computers themselves help with proofs of correctness? That is the only hope; however, there are scaling problems.

8) When DLP was written a program with 100,000 lines of code was considered large. Now we have programs with millions of lines of code. And now we have more concurrency. So the lessons of the DLP paper are probably more relevant now then they were then.

9) Since Program Verification does not seem to be used, how come we don't have a Software crisis?

a) We do! The Q+A mechanism at the meeting was terrible. 
b) We do! FILL IN YOUR OWN FAVORITE STORY OF BAD SOFTWARE.
c) See the answer to question 6.

10) SPECS are a problem. Tony Hoare once gave a talk where he proved that a program sorted correctly and then pointed out that if the program just output 0,...,0 that would have also satisfied the SPEC, since all that was required was that the output be sorted, not the (overlooked!) requirement that it be the same numbers as the input. So one needs to be careful!
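To make the point concrete, here is a small Python sketch (my example, not Hoare's actual code): a useless "sort" that nevertheless satisfies the sorted-output-only spec, and the stronger spec that catches it.

# Sketch: a spec that only requires "output is sorted" is satisfied by a useless program;
# the spec must also require that the output is a permutation of the input.
from collections import Counter

def bad_sort(xs):
    return [0] * len(xs)  # sorted, but it's the wrong list

def weak_spec(xs, ys):
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def full_spec(xs, ys):
    return weak_spec(xs, ys) and Counter(xs) == Counter(ys)

xs = [3, 1, 2]
print(weak_spec(xs, bad_sort(xs)))  # True: bad_sort "meets" the weak spec
print(full_spec(xs, bad_sort(xs)))  # False: the full spec catches it
print(full_spec(xs, sorted(xs)))    # True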

11) Despite being a leader in the field, Tony Hoare has come to see the limitations of the proving-programs-correct approach to Software Verification. His paper An Axiomatic Basis for Computer Programming (1969) is also in Harry Lewis's collection Ideas that Created the Future. Much later, commenting on the paper, Hoare said the following:

Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalization to solve the problems of reliability that arise when programs get large and more safety-critical. Programs have now got very large and very critical--well beyond the scale which can be comfortably tackled by formal methods. There have been many problems and failures, but these have nearly always been attributable to inadequate analysis of requirements or inadequate management control. It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.

12) Richard Lipton told a story where he showed that the program in question satisfied the SPEC, but the SPEC was a tautology that any program would satisfy.  Again, one needs to be careful!

13) The test of time: Verifying large scale programs does not seem to be common in industry. Is industrial adaptation a fair measure? 

14) Harry Lewis's  book Ideas that created the future collects up, edits, and comments on 46 important papers in Computer Science (I reviewed it in the issue of SIGACT News that is in your mailbox---I will blog about it at a later time.) There are several papers in it about program verification, including DLP, Hoare's paper, and three papers by Dijkstra.

a) When Harry discussed including DLP some people said `You're going to include that! It's a polemic, not a paper!'

b) When Harry teaches a course from this book (it must be an awesome class!) and asks the students at the end which papers they learned the most from, the top two are an excerpt from Fred Brooks's The Mythical Man Month (see my post on Brooks's work here) and DLP.

c) I am hoping that this is just one of 46 talks with authors of the papers in his book.  I look forward to his interview with Aristotle, Leibniz, Boole, Turing, ...

Monday, June 21, 2021

Collusion Rings

Grigory Yaroslavtsev now maintains the Theory Jobs Spreadsheet, a crowd-sourced listing of where your favorite CS theorists have taken new positions. Read his blog post for more info.

Michael Littman wrote a CACM Viewpoint on collusion rings designed to tilt the odds of getting papers accepted into conferences. See also this post by T. N. Vijaykumar. I started to write a post on the topic but couldn't get it going, so I asked the AI algorithm GPT-3 to write a guest post. Here is its story.


I have been part of a collusion ring and I know how it feels. I was a postdoc at the University of California, Berkeley, in the early 2000s when I first learned about the practice of “paper rings.” A paper ring is a group of researchers who agree to submit each other’s papers to conferences without disclosing this fact to the conference organizers. The idea is that if you have several submissions from the same conference, your chances of getting accepted are higher. The pressure to be part of such a ring can be immense. You are told that you need to publish or perish and that you need to get papers into top conferences. If you don’t play along, you will be left behind. I was lucky enough to have a mentor who warned me against being part of such rings. He told me that if I did not want my name on a paper, then I should not submit it to a conference. He also told me that if I did not want to be part of a paper ring, then I should not submit papers for others in the ring. He said that he would help me with my own research and that he would help me get my own papers into conferences. He was right: My own papers were accepted into conferences and they were published in journals.

I am now an associate professor at the University of Massachusetts Amherst and I have been on the other side of this issue as well. In 2012, I was asked by an editor at a top computer science conference to review a paper submitted by one of my students. The student had been part of a paper ring and had submitted his own paper along with those submitted by his collaborators in the ring. The editor told me that he had rejected the student’s paper because it was too similar to other papers submitted by members of the ring. However, he wanted me to review it so that he could reconsider accepting it for publication if I recommended doing so.

What should I do? Should I tell him about the paper ring? Should I recommend accepting the paper? Should I tell him that there is no way he should accept it? Should I tell him that he should reject it but then publish it anyway? What would you do? I decided not to tell him about the paper ring and not to recommend accepting the paper for publication. Instead, I told him that he should reject it but then publish it anyway because this would send a strong message about what we expect from our students and what we think about such practices. The editor agreed with my decision and published the student’s paper along with two other papers submitted by members of the same paper ring.

The student received his PhD degree but did not get any credit for his work on these three papers because they were considered as part of his dissertation work when he defended his thesis later that year. In retrospect, I think that my decision was wrong because it allowed these students to continue their unethical behavior without any consequences for them or their advisors (who were also part of this ring). But at least in this case, there were no negative consequences for anyone involved in this incident except for these students who did not get credit for their work on these three papers because they were considered as part of their dissertation work when they defended their thesis later that year.

Thursday, June 17, 2021

Benny Chor (1956-2021)


Benny Chor passed away on June 10, 2021. Luca Trevisan had a blog post on the news.

We present a guest post on Benny Chor's life and works by Oded Goldreich. The post has many refs to papers. They will appear at the end with pointers to them.


Benny Chor was born on December 23rd 1956  and grew-up in Tel-Aviv, Israel. He studied Mathematics in the Hebrew University, receiving a B.Sc. in 1980 and an M.Sc. in 1981. He then switched to studying Computer Science at MIT, and graduated in 1985 with a PhD thesis titled

 Two Issues in Public Key Cryptography -- RSA Bit Security and a New Knapsack Type System

which received an ACM Distinguished Dissertation award. After post-doctoral periods at MIT and Harvard, he took a faculty position at the Computer Science Department of the Technion (1987-2001), and then at Tel-Aviv University, where he served as the chairman of the department for two years. He died on June 10th 2021, from a terminal disease.

Although Benny was a very articulate and verbal person, I find it impossible to describe his personality in words. The point is that words cannot capture the experience of interacting with him, which was always sheer fun. Still, I guess I should say something personal as a close friend of his. So I will just mention that he lived happily with Metsada, whom he met in the summer of 1981, and that they lovingly raised three kids: Arnon, Omer and Aya. Actually, let me also mention his life-long close relationship with his brother, Kobi.

Focusing on his contributions to science, I will confine myself to the areas of cryptography and randomized computation. Still, let me mention that in the mid 1990s, Benny's research interests gradually shifted to computational biology, but I will not review his contributions to that area, since it is very remote from my own expertise.

In my opinion, Benny's most important contribution to cryptography is the fundamental
paper on Verifiable Secret Sharing [CGMA].  Loosely speaking, Verifiable Secret Sharing
is a method to share a secret so that any majority of share-holders may reconstruct the
secret, whereas no minority may do so, and still each share-holder can verify that the
share that they got is a valid one.  This primitive had a tremendous impact on the
development of secure multi-party protocols, and almost all subsequent works in this
area utilize it.
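
To give a feel for the primitive, here is a minimal Python sketch of plain threshold secret sharing in the style of Shamir. It shows only the sharing and reconstruction; the mechanism that lets each share-holder verify their share, which is what [CGMA] introduces, is omitted, and all parameter choices are purely illustrative.

# Plain (t,n) threshold secret sharing over a prime field (Shamir-style sketch).
# Verifiable Secret Sharing adds a way for each share-holder to check that
# their share is valid; that part is not shown here.
import random

P = 2**61 - 1   # a prime modulus, chosen only for illustration

def share(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * (-xm)) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares recover the secret

In this sketch any two shares alone reveal nothing about the secret; the verifiable version additionally lets each holder check that the share they received is consistent.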

Additional research milestones of Benny, in the area of Cryptography, include the proof
of the existence of a ``hard-core'' in the RSA and Rabin functions [BCS, ACGS], the
introduction and design of Private Information Retrieval (PIR) schemes [CGKS] (as well
as their computational counterparts [CG97]), and the introduction and design of
Tracing Traitor schemes [CFNP].  Each of these works deserves more than the foregoing
brief mention, yet I'm not even mentioning other important works like [CK].

Turning to the area of Randomized Computation, Benny's contributions span diverse areas
ranging from the manipulation of randomness to the use of randomness in complexity theory
and distributed computing.  In particular, his work on weak sources of randomness [CG88]
identified min-entropy (of the source) as the key parameter for randomness extraction and
presented a simple two-source extractor.  Distilling an aspect of [ACGS], his work on
pairwise-independent sampling [CG89] demonstrates the general applicability of this method. His works in distributed computing include a randomized Byzantine Agreement protocol [CC] that beats the deterministic bound without using unproven assumptions.  Lastly, let me just mention a couple of his complexity-theoretic works: [BCGL, CCGHHRR].
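
As a toy illustration of the idea of pairwise-independent sampling (an illustration only, not the constructions of [CG88, CG89]): the points a*i + b mod p, for a single random pair (a, b), are uniform and pairwise independent, so a Chebyshev-style argument already gives concentration of the sample average while using only two random field elements.

# Toy sketch: estimate the average of f over {0,...,p-1} using the
# pairwise-independent sample points a*i + b mod p.  Only the pair (a, b)
# is random; Chebyshev's inequality needs no more than pairwise independence.
import random

p = 101                                          # a small prime
f = [1 if x % 3 == 0 else 0 for x in range(p)]   # some 0/1 function to average

a, b = random.randrange(p), random.randrange(p)
samples = [(a * i + b) % p for i in range(30)]   # 30 pairwise-independent points
estimate = sum(f[x] for x in samples) / 30
print(estimate, sum(f) / p)                      # estimate vs. true average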

Benny was a superb teacher and a devoted graduate student adviser.  It is tempting to try
listing the numerous educational projects that he initiated, and his former students, but
these lists would be too long and the risk of a crucial omission is too high.  So let me
end by expressing the feeling of loss that is shared by many in our community,
which adds to my personal sorrow.

Oded and Benny

Benny with his family's namesake (Chor = Bull)

[ACGS] Werner Alexi, Benny Chor, Oded Goldreich, Claus-Peter Schnorr:
RSA and Rabin Functions: Certain Parts are as Hard as the Whole.
SIAM J. Comput. 17(2): 194-209 (1988) LINK

[BCGL] Shai Ben-David, Benny Chor, Oded Goldreich, Michael Luby:
On the Theory of Average Case Complexity. J. Comput. Syst. Sci. 44(2): 193-219 (1992) LINK

[BCS] Michael Ben-Or, Benny Chor, Adi Shamir: On the Cryptographic Security of Single RSA Bits. STOC 1983: 421-430. LINK

[CCGHHRR] Richard Chang, Benny Chor, Oded Goldreich, Juris Hartmanis, Johan Håstad, Desh Ranjan, Pankaj Rohatgi: The Random Oracle Hypothesis Is False.
J. Comput. Syst. Sci. 49(1): 24-39 (1994) LINK

[CC] Benny Chor, Brian A. Coan:
A Simple and Efficient Randomized Byzantine Agreement Algorithm.
IEEE Trans. Software Eng. 11(6): 531-539 (1985) LINK

[CFNP] Benny Chor, Amos Fiat, Moni Naor, Benny Pinkas:
Tracing traitors. IEEE Trans. Inf. Theory 46(3): 893-910 (2000) LINK

[CG97] Benny Chor, Niv Gilboa:
Computationally Private Information Retrieval. STOC 1997: 304-313 LINK

[CG88] Benny Chor, Oded Goldreich:
Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity.
SIAM J. Comput. 17(2): 230-261 (1988) LINK

[CG89] Benny Chor, Oded Goldreich:
On the power of two-point based sampling. J. Complex. 5(1): 96-106 (1989) LINK

[CGKS] Benny Chor, Oded Goldreich, Eyal Kushilevitz, Madhu Sudan:
Private Information Retrieval. J. ACM 45(6): 965-981 (1998) LINK

[CGMA] Benny Chor, Shafi Goldwasser, Silvio Micali, Baruch Awerbuch:
Verifiable Secret Sharing and Achieving Simultaneity in the Presence of Faults.
FOCS 1985: 383-395. LINK

[CK] Benny Chor, Eyal Kushilevitz:
A Zero-One Law for Boolean Privacy.  STOC 1989: 62-72. LINK




Thursday, June 10, 2021

The Future of Faculty Hiring

Faculty hiring in computer science is a process long due for an overhaul. The pandemic certainly changed some of the dynamics, moving most of the interviews online and saving a ton of money and time. Will this be the start of a fresh approach to recruiting?

A typical search in the past few years had some schools flying in 30-40 candidates, typically costing over $1000 each and requiring a full-time staff member during the search. We'd justify the expense as small compared to the millions we'd invest in a faculty member throughout their career, but it is generally the largest discretionary expense for a CS department. It also gives advantages to rich departments over others.

During the pandemic all those interviews moved online and worked reasonably well at virtually no additional cost. There was also no need to scrounge around to find faculty willing to skip family meals to have dinner with the candidates. And if a faculty member had a conflict with a candidate on the interview day, they could meet on a different day. There really is no reason to have all the meetings on the same day.

With the pandemic mostly behind us, will we go back to in-person interviews? I suspect the airport interview, where you fly out 20 or so candidates to have hour-long interviews in a hotel near an airport with a search committee for an administrative position, will be the first to go completely virtual.

Even for regular faculty interviews, there will be great pressure to reduce the number of in-person visits, perhaps to just the top candidates, or just the ones who have offers--make the "second visit" the only visit. Richer departments may find the expense worthwhile to make a bigger impression on the candidates and that will only expand the advantage of wealthier universities.

Times like this are the perfect opportunity for CS leadership to come in and give some sanity to the hiring process but I'm not holding my breath.

Sunday, June 06, 2021

When do you use et al. as opposed to listing out the authors? First names? Middle initials? Jr?

If I were referring to the paper with this BibTeX entry:


@misc{BCDDL-2018,
  author    = {Jeffrey Bosboom and
               Spencer Congero and
               Erik D. Demaine and
               Martin L. Demaine and
               Jayson Lynch},
  title     = {Losing at Checkers is Hard},
  year      = {2018},
  note      = {\newline\url{http://arxiv.org/abs/1806.05657}},
}

(The paper is here.)

I would write 

Bosboom et al.~\cite{BCDDL-2018} proved that trying to lose at checkers (perhaps you are playing a child and want to keep up their self-esteem, or a Wookie and don't want your arms to be torn off your shoulders (see here), or a Wookie child) is hard.


Why did I use et al. ? Because it would be a pain to write out all of those names. 

How many names does it take to make you write et al. ? Are there exceptions? 

I have not seen any discussion of this point on the web. So here are my rules of thumb and some questions. (CORRECTION: a commenter points out that this IS discussed on the web. Even so, you can comment on my thoughts, give your own thoughts, or whatever you like.)

1) If there are 3 or more people then use et al. Exception: if the three are VERY WELL KNOWN as a triple. E.g., the double play was Tinker to Evers to Chance. Well, that would not really come up since in baseball nobody ever says the double play was Tinker et al. More relevant examples:

Aho, Hopcroft, and Ullman

Cormen, Leiserson, and Rivest, also known as CLR

The more recent edition is

Cormen, Leiserson, Rivest, and Stein. I have heard CLRS but I don't know if people WRITE all four names. 

Lenstra-Lenstra-Lovasz is also usually written with all three names.

2) If there is one name do not use et al.  unless that one person has a multiple personality disorder.

3) If there are 2 people it can be tricky and perhaps unfair. If the second one has a long name then I am tempted to use et al. For example

Lewis and Papadimitriou (if I misspelled Christos's name-- well, that's the point!-- to avoid spelling errors I want to use et al.)

Lampropoulos and Paraskevopoulou (the first is a new UMCP faculty member!). After typing in the first name I would not be in the mood to type in the second.

Similarly, if there are lots of accents in the name, making it hard to type in LaTeX (though I have macros for some people, like Erdos, who come up a lot), then I might use et al.
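
A macro of the kind I mean might look something like this (the particular definitions below are just an illustration):

% Macros for accented names that come up a lot, so I never mistype them.
\newcommand{\Erdos}{Erd\H{o}s}
\newcommand{\Lovasz}{Lov\'{a}sz}
% Usage: a classic result of \Erdos{} and \Lovasz{} ...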

(ADDED LATER- some of the commenters object to my `rule' of leaving out the last name if it's complicated. That is not really my rule- the point of this post was to get a discussion going about the issue, which I am happy to say has happened.)

----------

There are other issues along these lines: when to include the first name (when there is more than one person with that last name, e.g., Ryan Williams and Virginia Vassilevska Williams), and when to use middle initials (in the rare case where there is someone with the same first and last name- Michael J. Fox added the J and uses it since there was already an actor named Michael Fox.)

I will soon give a quote from a math paper that amused me, but first some context.  The problem of determining if a polynomial in n variables over Z has an integer solution is called E(n). By the solution to Hilbert's 10th problem we know that there exists an n such that E(n) is undecidable. E(9) is undecidable, but the status of E(8) is unknown (as of May 2021) and has been since the early 1980's.
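
In symbols, the convention here (just a restatement of the definition above) is

E(n) = \{\, p \in \mathbb{Z}[x_1,\ldots,x_n] \;:\; (\exists a_1,\ldots,a_n \in \mathbb{Z})\ p(a_1,\ldots,a_n) = 0 \,\},

and saying E(n) is decidable means that membership in this set is decidable.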

Here is the quote (from here).

Can we replace 9 by a smaller number? It is believed so. In fact, A. Baker, Matiyasevich and J.Robinson  even conjectured that E(3) is undecidable over N.

Note that Baker and Robinson get their first initial but Matiyasevich does not.

I suspect that they use J. Robinson since there is another mathematician with last name Robinson: Raphael Robinson, who was Julia Robinson's husband (to my younger readers--- there was a time when a woman who got married took her husband's last name). There is at least one other Math-Robinson: Robert W. Robinson. I do not think he is closely related.

Baker: I do not know of another mathematician named Baker. I tried Google, but the Bakers I found were not in the right time frame. I also kept finding hits to an anecdote about Poincare and a man whose profession was a baker (see here, though it's later in that blog post). However, I suspect there was another mathematician named Baker, which is why the author uses the first initial. It's possible the author did not want to confuse Alan Baker with Theodore Baker, one of the authors of Baker-Gill-Solovay, which showed there are oracles that make P = NP and others that make P NE NP. But somehow, that just doesn't seem right to me. I suspect there is only one mathematician with last name Matiyasevich.

Thomas Alva Edison named his son Thomas Alva Edison Jr.  This was a bad idea but not for reasons of authorship, see here.


Thursday, June 03, 2021

What happened to self-driving cars?

In 2014, I wrote a blog post about a fake company Elfdrive.

With a near record-setting investment announced last week, the self-driving car service Elfdrive is the hottest, most valuable technology start-up on the planet. It is also one of the most controversial.

The company, which has been the target of protests across Europe this week, has been accused of a reckless attitude toward safety, of price-gouging its customers, of putting existing cabbies out of work and of evading regulation. And it has been called trivial. In The New Yorker last year, George Packer huffed that Elfdrive typified Silicon Valley’s newfound focus on “solving all the problems of being 20 years old, with cash on hand.”

It is impossible to say whether Elfdrive is worth the $117 billion its investors believe it to be; like any start-up, it could fail. But for all its flaws, Elfdrive is anything but trivial. It could well transform transportation the way Amazon has altered shopping — by using slick, user-friendly software and mountains of data to completely reshape an existing market, ultimately making many modes of urban transportation cheaper, more flexible and more widely accessible to people across the income spectrum.

It was a spoof on Uber but now it looks more like Tesla, except that Tesla's market value is over half a trillion, about six times that of General Motors.

The post was really about self-driving cars, which I thought at the time would be commonplace by 2020. We are mostly there, but there are issues of silent communication between drivers, or between a driver and a pedestrian, about who goes first that are hard to duplicate for a self-driving car. There is the paradox that if we make a car that will always stop if someone runs in front of it, then some people will run in front of it.

There is also the man-bites-dog problem. Any person killed by a self-driving car will be a major news item while the person killed by a human-driven car while you've been reading this post will never be reported.

We'll get to self-driving cars eventually, it just won't be all at once. We already have basically self-driving cars on highways and on many other roads as well. As the technology improves and people see that it's safe, at some point people will say, "So why do we even need the steering wheel anymore?"

Sunday, May 30, 2021

What is a natural question? Who should decide?


(Thanks to Timothy Chow for inspiring this post.)

My survey on Hilbert's Tenth Problem (see here) is about variants of the problem. One of them is as follows: 

For which degrees d and number-of-vars n, is Hilbert's tenth problem decidable? undecidable? unknown? 

 I wondered why there was not a website with this information. More generally, the problem didn't seem to be getting much attention. (My survey does report on the attention it has gotten.) 

I got several emails telling me it was the wrong question. I didn't quite know what they meant until Timothy Chow emailed me the following eloquent explanation:

-----------------------------------

One reason there isn't already a website of the type you envision is that from a number-theoretic (or decidability) point of view, parameterization by  degree and number of variables is not as natural as it might seem at first glance. The most fruitful lines of research have been geometric, and so geometric concepts such as smoothness, dimension, and genus are more natural than, say, degree. A nice survey by a number theorist is the book Rational Points on Varieties by Bjorn Poonen. Much of it is highly technical; however, reading the preface is very enlightening. Roughly speaking, the current state of the art is that there is really only one known way to prove that a system of Diophantine equations has no rational solution.

----------------------------------

AGAINST THE NUMBER THEORISTS VIEWPOINT:

1) ALICE: Why are you looking for your keys under the lamppost instead of where you dropped them?

   BOB: The light is better here.

2) I can imagine the following conversation:

BILL: I want to know about what happens with degree 3, and number of variables 3.

MATHPERSON: That's the wrong question you moron. The real question is what happens for fixed length of cohomology subchains.

BILL: Why is that more natural?

MATHPERSON: Because that is what we can solve. And besides, I've had 10 papers on it.


FOR THE NUMBER THEORISTS VIEWPOINT

1) They are working on really hard problems so it is natural to gravitate towards those that can be solved.

2) I suspect that the math that comes out of studying classes of equations based on smoothness, dimension, genus is more interesting than what comes out of degree and number of vars. Or at least it has been so far. 

META QUESTION

Who gets to decide what problems are natural?

People outside the field (me in this case) are asking the kind of questions that a layperson would ask and there is some merit to that.

People inside the field KNOW STUFF and hence their opinion of what's interesting to study has some merit. But they can also mistake `I cannot solve X' for `X is not interesting'.


Tuesday, May 25, 2021

Does the university matter?

As we come out of a pandemic with online teaching and research collaborations, how much do we actually need the university?

Theoretical research in computer science, math, and elsewhere hardly slowed down with everyone hunkered down at home, and when it did, it was more because of supervising kids with their on-line learning than because of being away from the university.

Many courses, especially the larger ones, worked about as well on-line as they do in person.

The pandemic is accelerating changes already in place. Before, say, 1960, the fastest travel for the masses was by train and boat, and fast communication was limited to expensive phone calls. You needed strong colleagues, a strong library and a strong support staff to be a successful academic. Traveling to meet colleagues and attend conferences was a luxury that few could do often.

The 60's gave us air travel, though it wasn't until the 90's that we could readily send academic papers electronically. It's really only recently that we have the infrastructure to allow high-quality teaching and research collaboration online.

Suppose universities now just disappeared. Professors would be free agents, supported by grants, tuition from students taking their on-line courses, and consulting. Students would pick and choose courses from the best instructors, their "transcript" being recorded on a blockchain-like database. A PhD would be more of an apprenticeship: a low salary in exchange for personalized mentoring from a professor.

For the experimental sciences, there would be a set of national labs that professors could join.

Startups would create apps to enable all these things, and professors would just become yet another piece of the gig economy. Superstar academics could pull in large salaries; the rest would struggle to make a decent wage--not that different from actors and musicians.

Don't worry, universities aren't going anywhere soon. Or are they?

Thursday, May 20, 2021

Emerging from the Pandemic

The City of Chicago yesterday agreed with the latest CDC guidelines that those of us fully vaccinated no longer have to wear masks in most settings. Lollapalooza, the big Chicago music festival, will be held at full capacity this summer. There are still some restrictions, but barring any surprise setbacks they should be mostly gone by the Fourth of July.

It's not appropriate to call the pandemic over. Too many countries are suffering and lack good access to vaccines or strong health care. But in most of the US vaccinations are readily available and it really feels like we are putting the pandemic in the past.

Variants that beat the vaccines could emerge. Vaccines could become ineffective over time. Too many still need to be vaccinated and too many don't want to do so. Typically problems start small and quickly get big through exponential growth, but likely with enough warning to tell us if we need boosters or extra precautions. And all those who cut in line to get shots early are now my canaries for the effects wearing off.

What will this post-pandemic world look like? Continued virtual meetings from people who don't want to spend the 10-15 minutes to get across campus or to come to campus at all. A bigger move to casual dress, even in the corporate world. No more snow days. Changes in ways we won't expect. 

We all have our different attitudes but I'm ready to move on. Time to start the "after times".

Sunday, May 16, 2021

Why do countries and companies invest their own money (or is it?) in Quantum Computing (Non-Rhetorical)

 There have been some recent blog posts by Scott (see here) and Lance (see here)  about the hype for SHORT TERM APPLICATIONS of Quantum Computing, which they both object to. 

I have a question that has been touched on but I want to get it more out there.

PREAMBLE TO QUESTION:  The following scenarios, while distasteful, do make sense: 

a) A researcher, in their grant proposals, exaggerates or even outright lies about the applications of their work. 

b) A journalist, in their articles, exaggerates or even outright lies about the applications of the science they are covering.

c) A company exaggerates or even outright lies about the applications of their project to a venture capitalist or other kind of investor. 

QUESTION: 

Why does a company or country invest THEIR OWN MONEY in Quantum Computing, which is unlikely to yield a short-term profit or benefit? Presumably they hire honest scientists to tell them the limits of the applications in the short term. 

ANSWERS I HAVE GOTTEN FROM ASKING THIS

1) QC might be able to do something cool and profitable, like factoring, or simulating physics quantum experiments or something else, in the short term. Quantum Crypto is already happening, and that's a close cousin of Quantum Computing. 

2) QC might be able to do something cool and profitable (like in (1)) in the long term, and both companies and countries think they will be around for a long time. (For a list of America's 10 oldest companies see here.) 

3) The company or country is in this for the long term, not for a practical project, but because they realize that doing GOOD SCIENCE is of a general benefit (this might make more sense for a country than a company). And funding Quantum Computing is great for science. 

4) Bait and Switch: The company claims they are doing Quantum to attract very smart people to work with them, and then have those smart people do something else.

5) (this is a variant of 1) While Quantum Computing may not have any short term applications, there will be classical applications INSPIRED by it (this has already happened, see here).

6) Some of these companies make money by applying for GRANTS to do QC, so it's NOT their money. In fact, they are using QC to GET money.

7) For a country it's not the leaders' money, it's the taxpayers' money- though that still leaves the question of why spend taxpayer money on this and not on something else.

8) For a company it's not their money- it's venture capitalists' and others' money (though for a big company like Google I would think it IS their money). 

9) The scientists advising the company or country are giving them bad (or at least self-serving) advice so that those scientists can profit- and do good science. So this is a variant of (3) but without the company or country knowing it. 

10) In some countries and companies group-think sets in, so if the leader (who perhaps is not a scientist) thinks intuitively that QC is good, the scientists who work for them, who know better, choose to not speak up, or else they would lose their jobs...or worse. 

11) For countries this could be like going to the moon: Country A wants to beat Country B to the moon for bragging rights. Scientists get to do good research even if they don't care about bragging rights. 

12) (similar to 11 but for a company) If a company does great work on QC then it is good publicity for that company. 

13) Some variant of the greater fool theory. If so, will there be a bubble? A bail-out?