Monday, September 30, 2019

Richard Guy is 103 years old today

Richard Guy is a mathematician. He co-authored the classic book Winning Ways for Your Mathematical Plays with Elwyn Berlekamp and John Conway.

On Sept 30 (today) he turned 103. According to this list he is the oldest living mathematician, and he would need to live to 110 to be the oldest mathematician ever.

I have met him twice. He was at the Gathering for Gardner conference as a young 98-year-old. I told him that his book Winning Ways had a great influence on me. He asked if it was positive or negative. I later saw him at a math conference where he went to my talk on The Muffin Problem. So he is still active.

His Wikipedia page says that he regards himself as an amateur mathematician. While it is awkward to disagree with how someone sees himself, I'll point out that he is an author or co-author of 11 books, has many papers, and has solved Erdős problems. He has taught some, but I couldn't really find out what his source of income is or was. This takes us back to the word 'amateur', which has several meanings:

Amateur: Someone who does X for the love of X (amor is Latin for love), and not for money. This could be true of Richard Guy. This notion of amateur may be lost on my younger readers, since it used to be a thing to NOT take money because taking money somehow soils what you do. In those days Olympic athletes could not have played professionally beforehand. We can't imagine that now.

Amateur: Someone who dabbles in something but is not really that good at it. This could NOT be true of Richard Guy.




Aside from games he has also worked in Number Theory. His book Unsolved Problems in Number Theory has inspired many (including me).

So happy birthday Richard Guy!

He is also the oldest living person we have honored on this blog. The second oldest was Katherine Johnson, who is still alive.

ADDED LATER: Some people emailed me asking if Richard Guy is still actively doing mathematics. Here is a recent paper of his: here

Thursday, September 26, 2019

Quantum Supremacy

By now you've probably heard the rumors of Google achieving quantum supremacy. I don't have inside information outside of Scott's blog post but it looks like the news should be embargoed until the release of a Science or Nature paper. These things usually happen on a Tuesday and you'd think they would avoid the news of the Nobel Prize announcements October 7-14.

Since for now the Google paper doesn't officially exist, we live in an era of Classical Dominance. Any problem that can be solved on a quantum computer today can be solved just as fast or faster on a traditional computer. Quantum Supremacy, despite its lofty name, is just the negation of Classical Dominance: that there is some problem that a current quantum machine can solve that all our regular machines would require considerably longer to solve. This isn't a formal mathematical or scientific definition, so one can debate when or if we cross this threshold, and I'm sure people will.

Quantum Supremacy might not even be a monotone concept. Future classical algorithms might solve the problem quickly, leading us back to Classical Dominance but leaving open the possibility of returning to Quantum Supremacy with another problem.

Quantum Supremacy is a long way from Quantum Usefulness, where quantum machines can solve problems we care about faster than traditional machines. Quantum computing will truly reach its potential when we can run general quantum algorithms like Shor's algorithm to factor products of large primes. We'll probably never see Quantum Dominance, where classical transistors go the way of vacuum tubes.

Nevertheless, quantum supremacy is an important step and whether or not you think Google has gotten there, I'm sure it's an incredible achievement of science and engineering.

Monday, September 23, 2019

Applicants to Grad School are too good. Here is why this might be a problem.

Sitting around with three faculty members, we had the following conversation:

ALICE: When I applied to grad school in 1980 they saw a strong math major (that is, I had good grades in hard math courses) but very little programming or any sort of computer science. That kind of person would NOT be admitted today since there are plenty of strong math majors who ALSO have the Comp Sci chops.

BOB: When I applied to grad school I was a comp sci major but my grades were not that good- A's in systems courses, B's and even some C's in math. But I did a security project that led to a paper that got into a (minor) systems workshop. Two of my letters bragged a lot about that. (How do I know that? Don't ask.) So I got into grad school in 1989. That kind of person would NOT be admitted today since there are plenty of people who have papers in minor conferences whose grades ARE good in stuff other than their area.

CAROL: In 1975 I was an English major at Harvard. The summer between my junior and senior year I took a programming course, did very well, and liked it. I then took some math. Then I worked in industry as a computer scientist for 5 years. Then I applied to grad school and they liked my unusual background. Plus I did well on the GREs. Letters from my boss at work helped me; I don't think they would count letters from industry now. They took a chance on me, and it paid off (I got a PhD), but I don't think they would let someone like me in now since they don't have to take a chance. They can admit people who have done research, have solid backgrounds, and hence are not taking a chance. The irony is that some of those don't finish anyway.


1) Are Alice, Bob, and Carol right that they wouldn't be admitted to grad school now? I think they are, with a caveat: they might end up in a lower-tier grad school than the one they did end up in. Also, I am more certain that Alice and Bob would not get into the top-tier grad schools they did, since they can be compared DIRECTLY to other applicants, whereas Carol is more orthogonal.

2) I have a sense (backed up by no data) that we are accepting fewer unusual cases than we used to (not just UMCP but across the country) because too many of the applicants are the standard very-good-on-paper applicants. Even 'on paper' is not quite fair- they ARE very good for real.

3) Assume we are taking fewer unusual cases. Is that bad? I think so, as people with different backgrounds (Carol especially) add to the diversity of trains of thought in a program, and that is surely a good thing. If EVERY student is a strong comp sci major who has done some research, there is a blandness to that.

4) What to do about this? First off, determine if this is really a problem. If it is, then perhaps those looking at grad school applications should have some sensitivity to this issue.

5) For grad school admissions I am speculating. For REU admissions (I have run an REU program for the last 7 years and do all of the admissions myself) I can speak with more experience. The students who apply have gotten better over time and this IS cause for celebration; however, it has made taking unusual cases harder.

Monday, September 16, 2019

This paper from 2015 cracks Diffie-Hellman. What to tell the students?

I am teaching cryptography this semester (Fall 2019) for the second time and will soon tell the students about the paper from 2015:
Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice. It has 14 authors.

The upshot is that, as Diffie-Hellman was implemented in 2015, many cases were crackable. In summary (and probably oversimplified; a toy sketch of the protocol itself appears after the two statements below):

DH in a 512-bit group can be cracked by the authors

DH in a 1024-bit group they speculate can be cracked with nation-state resources.
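
To make those parameter sizes concrete, here is a toy sketch of textbook Diffie-Hellman over a prime field, written in Python. This is my own illustration, not code from the paper, and the modulus below is absurdly small compared to real deployments; the point is only that everything an eavesdropper sees is public except the exponents, so the security of the exchange rests entirely on the discrete log problem in the chosen group.

import secrets

# Public parameters: a prime p and a base g. These are TOY values; the
# deployments the paper studied used 512- and 1024-bit primes, and current
# advice is 2048 bits or more.
p = 2**127 - 1    # a Mersenne prime, far too small for real use
g = 3             # illustrative base (not necessarily a generator of Z_p^*)

# Each side picks a secret exponent and publishes g^secret mod p.
a = secrets.randbelow(p - 2) + 1    # Alice's secret
b = secrets.randbelow(p - 2) + 1    # Bob's secret
A = pow(g, a, p)                    # Alice sends A
B = pow(g, b, p)                    # Bob sends B

# Both sides derive the same shared secret. An eavesdropper who sees only
# p, g, A, B must solve a discrete log to recover it -- the problem the
# number field sieve attacks.
assert pow(B, a, p) == pow(A, b, p)

One reason the paper's attack was so effective: the expensive part of the number field sieve depends only on the prime p, and in practice a huge fraction of servers shared a handful of standardized primes, so one precomputation pays off against many connections.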



Is this a big deal? If YES then what is being done, and if NOT then why not?

I have come up with some statements that I DO NOT KNOW to be true, but I am ASKING you about them, to shed some light on the BIG DEAL or NO BIG DEAL question. (Note- Idea for a game show: BIG DEAL or NO BIG DEAL, where contestants are asked if a news story is a BIG DEAL or not.)

So, please comment on the following statements:

1) Since 2015 the people who use DH have upped their game and are now using bigger parameters. (I doubt this is true)

2) DH is mostly used on things that hackers are not interested in, so this is not a big deal.

3) The expertise required to crack DH via this paper is considerable, so hackers don't have the skills.

4) This paper is not a problem for a bad reason: hackers don't need to use the number field sieve discrete log algorithm when all they need to do is (1) guess that the PIN is 1234 or the year the user was born (or close to it), (2) put on a uniform from Geek Squad or some such organization and claim they are there to help, or (3) exploit a known security flaw that the company has not bothered to fix.

5) The 14 authors have mysteriously disappeared. (I doubt this is true.)


(Misc: My spell checker thinks that Diffie and crackable are not words, but Hellman is.)

Monday, September 09, 2019

Are there any natural problems complete for NP INTER TALLY? NP INTER SPARSE?


Recall:

A is a tally set if A ⊆ 1*.


A is a sparse set if there is a polynomial p such that the number of strings in A of length n is ≤ p(n).
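
Some examples (mine, just to fix intuition): { 1^p : p is prime } is a tally set. Every tally set is sparse, since a tally set has at most one string of each length (take p(n) = 1). The set { 0^n : n ≥ 0 } ∪ { 1^n : n ≥ 0 } is sparse but not tally. And SAT is not sparse: under any reasonable encoding the number of satisfiable formulas of length n grows exponentially with n.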


If there exists a sparse set A that is NP-hard under m-reductions (even btt-reductions) then P=NP. (See this post.)

If there exists a sparse set A that is NP-hard under T-reductions then PH collapses. (See this post.)

Okay then!

I have sometimes had a tally set or a sparse set that is in NP and that I think is not in P. I would like to prove, or at least conjecture, that it's NP-complete. But alas, I cannot, since then P=NP. (Clarification: If my set is NP-complete then P=NP. I do not mean that the very act of conjecturing it would make P=NP. That would be an awesome superpower.)

So what to do?

A is NPSPARSE-complete if A is in NP, A is sparse, and for all B that are in NP and sparse, B ≤m A.

Similar for NPTALLY and one can also look at other types of reductions.

So, can I show that my set is NPSPARSE-complete? Are there any NPSPARSE-complete sets? Are there NATURAL ones? (Natural is a slippery notion- see this post by Lance.)

Here is what I was able to find out (if more is known then please leave comments with pointers.)

1) It was observed by Buhrman, Fenner, Fortnow, and van Melkebeek that the following set is NPTALLY-complete:

Let M_1, M_2, ... be a standard list of NP-machines. Let

A = { 1^⟨i,n,t⟩ : M_i(1^n) accepts on some path within t steps }

The set involves Turing machines so it's not quite what I want.
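
Here is a sketch (my reconstruction, not their write-up) of why A works. A is tally by construction, and A is in NP: on input 1^m, decode m as ⟨i,n,t⟩ (with a standard polynomially bounded tupling function this takes time poly(m) and guarantees n, t ≤ poly(m)), then guess and verify an accepting path of M_i on 1^n of at most t steps, which also takes time poly(m). For hardness, let B be a tally set in NP, accepted by machine M_i in time p(n). Then 1^n ∈ B iff M_i(1^n) accepts within p(n) steps iff 1^⟨i,n,p(n)⟩ ∈ A, and ⟨i,n,p(n)⟩ is polynomial in n, so the map 1^n → 1^⟨i,n,p(n)⟩ (sending strings not of the form 1^n to some fixed string outside of A) is a poly-time m-reduction from B to A.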


2) Messner and Torán show that, under an unlikely assumption about proof systems, there exists an NPSPARSE-complete set. The set involves Turing machines. Plus it uses an unlikely assumption. Interesting, but not quite what I want.


3) Buhrman, Fenner, Fortnow, and van Melkebeek also showed that there are relativized worlds where there are no NPSPARSE-complete sets (this was their main result). Interesting, but not quite what I want.

4) If A is NE-complete then the tally version { 1^x : x ∈ A } is likely NPTALLY-complete. This may help me get what I want.
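
A sketch of the easy half (my reasoning, which also shows where the 'likely' comes from): identify a number x with its binary representation. If A is in NE, decided nondeterministically in time 2^{cn} on inputs of length n, then on input 1^x we can write x in binary (about log x bits) and run the NE machine on it; that takes 2^{O(log x)} = poly(x) steps, which is polynomial in the length of 1^x. So { 1^x : x ∈ A } is a tally set in NP. For hardness, take a tally set B in NP, note that its binary compression { x : 1^x ∈ B } is in NE, and reduce it to A using NE-completeness; translating that reduction back to the unary world keeps things polynomial only if the reduction does not stretch lengths by more than a constant factor, which is why 'likely' (rather than 'is') seems to be the right word.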

Okay then!

Are there any other sets that are NPTALLY-complete or NPSPARSE-complete? The obnoxious answer is to take finite variants of A. What I really want is a set of such problems so that we can prove other problems NPTALLY-complete or NPSPARSE-complete with the ease with which we now prove problems NP-complete.



Thursday, September 05, 2019

Transitioning

You may have noticed, or not, that I haven't posted or tweeted much in the last month. I've had a busy time moving back to Chicago and starting my new position as Dean of the College of Science at Illinois Tech.

Part of that trip involved driving my electric Chevy Bolt from Atlanta to Chicago. You can do it, but it got a bit nerve-wracking. There is only one high-speed charging station between Indianapolis and Chicago, and you pray the charger outside the Lafayette Walmart actually works (it did). We passed many Tesla Supercharger stations; I'll have to admit they have the better network.

Theoremwise, Ryan Alweiss, Shachar Lovett, Kewen Wu and Jiapeng Zhang made significant progress on the sunflower conjecture. I posted on the sunflower theorem for Richard Rado's centenary. Nice to see there is still some give in it.

I probably will post less often in this new position. Bill asked me, "Why does being a dean (or maybe it's just the move) take more time than being a chair? Were you this busy when you moved and first became chair?"

When I moved to Atlanta, I moved a year ahead of the rest of the family and rented a condo. We had plenty of time to search for a house in Atlanta and plan the move. Here it all happened in a much more compressed time schedule and, since we've bought a condo, into a much more compressed space.

Being a chair certainly ate up plenty of time but it feels different as dean with a more external focus. I won't give up the blog but you'll probably hear a lot more from Bill than from me at least in the near future.

Tuesday, September 03, 2019

Can Mathematicians Fix iPhones? Can anyone?


In my last post I noted that, when I am asked (since I am a CS prof)

Can you fix my iPhone?

my answer is

No, I work on the math side of CS.

Some readers emailed me (I told them to comment instead, but they were worried that other readers would argue with them) to say that NO, this is a tired and incorrect stereotype. Here are some samples:

1) People in mathematics are no better or worse at fixing iPhones, fixing cars, programming their VCRs, etc. than the general public.

2) For that matter, people in academia, even in practical-sounding fields like Systems, are no better.

3) Is your nephew Jason, who used to fix forklifts for a living, better at these things than the general public? I asked him. Answer: No, though he is better at fixing forklifts.

I think something else is going on here. Let's look at fixing your own car. This is the sort of thing that some people used to be able to do, but now very few can, since cars have gotten more complicated.

iPhones are not quite there yet, but they are getting that way.

Of course some things have gotten easier--- programming a DVR is much easier than programming a VCR. And people can easily use WORD or write programs without having to know any hardware.

OKAY, after all these random thoughts, here is the question: What do you think?

Are people in CS or Math or CS theory better at X than the general public, where X is NOT CS, Math, or CS theory, but something like fixing their cars?

And

What has gotten harder? What has gotten easier?