Wednesday, December 20, 2023

2023 Complexity Year in Review

Result of the year goes to 

Polynomial-Time Pseudodeterministic Construction of Primes
Lijie Chen, Zhenjian Lu, Igor C. Oliveira, Hanlin Ren and Rahul Santhanam

It was an impressive year for complexity, and we wrote several theorem posts, including the one above as well as posts on new Ramsey bounds, exponential circuit bounds for \(\Sigma_2^E\), inverting compression functions and advances in group isomorphism.

It's a year dominated by wars, Taylor Swift, ChatGPT and mathematicians Leaning in. The blog word of the year is meta-complexity.

We remember Jimmy Buffett, Martin Davis, Neil Jones, Abraham Lempel, Gordon Moore, Roger Schank, Vera Sós, William Wulf, Jacob Ziv and Dilbert. Happy to forget Ted Kaczynski.

Thanks to our guest posters Eric Allender, Ravi Boppana, Josh Grochow, and Youming Qiao.

Enjoy the holidays and we'll see you in 2024.

Sunday, December 17, 2023

Archimedes, Pi, and Pickleball

I typed 

3,1,4,1,5,9 

into OEIS and,  as expected, I found that these are the first few digits of \(\pi\). See here.

I then read on the page:

\(\pi\) is sometimes referred to as Archimedes' constant, because the Greek mathematician computed lower and upper bounds of \(\pi\) by drawing regular polygons inside and outside a circle.

In my 40 or so years being involved with math, I have never heard \(\pi\) called Archimedes' constant. What's going on here?

(a) The term is STILL in use; I just happened to never hear it. I noted in a blog here that I didn't have a single book on trig in my house, so this is plausible.

(b) The term is NOT in use but the person who wrote that entry is either rather old, or THOUGHT it was in use, or heard it once when he was 10 years old, or...

(c) Some weighted sum of (a) and (b) 

I suspect the term is NOT in use anymore. So when was it in use, and when did it fall out of favor? 

More random thoughts

1) I googled why is it called pi but it was autofilled to ask why is it called pickleball. I'll report on both:

pi: It was first called pi in 1706 by [The Welsh Mathematician] William Jones because pi is the first letter in the Greek word perimetros, which means perimeter. (ADDED LATER- the comments say that the question-and-answer are not as simple as this quote would imply. I am sure they are right.)

Pickleball: This website here says the following. Joel Pritchard invented the game (the story is more complicated than that). With that in mind here is a direct quote:

Okay, now you know how it all started but the question we’re all thinking still remains: why do they call pickleball, well, pickleball? According to the U.S.A. Pickleball Association, the origins of the name differs between different accounts.

Joel Pritchard’s wife, Joan, started to call their game pickleball because “the combination of different sports reminded me of the pickle boat in crew where oarsmen were chosen from the leftovers of other boats.” But according to Barney McCallum, they named the game after Pritchard’s dog, who was (as you might’ve guessed) named Pickles! Despite the sour taste of actual pickles, their dog was sweet and known to run off with the ball while it was still being played!

Depending on who you ask, both accounts of the game’s name may actually be true. At the start, there wasn’t any name for pickleball until an official one was needed when the game started to gain traction. With the laid-back nature of pickleball, it’s only appropriate that it was named in a similar fashion.


Wednesday, December 13, 2023

Respect

I've generally avoided talking about all the events at college campuses in the last few months that came to a head at the congressional hearings last week that led to the resignation of Penn's president Liz Magill. It's not that I don't have any thoughts to share, rather I have too many that I'm still trying to process. 

I had planned to put them in writing this week but I can't. Not yet.

So one word for this season to hopefully recapture what we seem to have lost: Respect.

Sunday, December 10, 2023

Where do Journals go to Die?

In 2011 I had a post, here, about a real journal called The Antarctica Journal of Mathematics. Note that I put the link in the last sentence; however, if you go there you will get to a page that IS labelled with the name of that journal, but all of the links on it are either broken or go to a place to sell domain names. I assume the journal is dead, but I can't find any evidence that it was ever alive. When I googled it I got the link above for the first hit and my 2011 blog post for the fourth hit.

On Wikipedia I found a debate about deleting the Wikipedia page for it, which I assume succeeded since there is no Wikipedia page for it now. Here is that deletion discussion: here. That page also debates whether the journal was a joke, though they conclude that it was not a joke.

 

If you try to go to The Journal of Combinatorics and Number Theory using that link you will find

a) There is a page, but it says that the  journal is discontinued.

b)  You can click to see the table of contents from Vol 11, No 1 to Vol 12 No. 3.

c)  There is a mechanism to click on stuff and BUY articles from those issues but I do not know if it works, and I am not going to find out.

d) For issues before Vol 11, there is a place to click for prior issues. When you click on it you get to the NOVA publishers webpage but no indication of where to go for prior issues of this particular journal.

e) There is no Wikipedia site for the journal. That might not mean much; there is no Wikipedia site for Raegan Revord either, and she deserves one (I linked to her IMDB page).

f) Why do I care? Actually, I don't care much. However, I DO have an article in that journal, co-authored with Larry Washington and Sam Zbarsky. It was refereed seriously and we were never asked for money, so (1) it's probably not a scam, and (2) there was some quality control (or the referee didn't get the memo). It was in Volume 10, No. 1. There is no way for me to verify that it was ever published. Fortunately I will never have to. Also, the paper does not need to be published to be known since (1) I blogged about it here and (2) it's on arXiv here. Someone read it (probably on arXiv) and wrote a followup paper, see here.

 

QUESTIONS

1) Why did The Antarctica Journal of Mathematics die?

2) Why did the Journal of Combinatorics and Number Theory die?

If an author publishes in a journal that dies, and the journal does not maintain a good website, how can the author prove that they published there, if they need to? For a paper journal they may have a paper copy of the journal, but for an e-journal they may be SOL.

If the paper is not on arxiv (or a similar site) it may be lost forever. Note that even if you wanted to pay for an article in either of those journals, it might be impossible. 

3) I wonder- are journals losing money since people are using arxiv and hence not buying the journals?

I only use journals to get credit from my school for publishing, and to get a good proofread from the referee. NOT to get an article out there. That's what arXiv is for! Also, in my case I can email my article to the few people who care. And I can blog about it.


Wednesday, December 06, 2023

Do We Need to Formalize?

Back in the 1990s I explored the system Nuprl for formalizing mathematical proofs. We had a theoretical paper on quickly checking a proof and I wanted to see if we could code it up and make it work on some simple theorems. It never went anywhere.

Automated proof systems have come a long way. Yesterday Terry Tao announced that he has fully formalized his proof with Tim Gowers, Ben Green and Freddie Manners of the Polynomial Freiman-Ruzsa (PFR) conjecture over \({\mathbb F}_2\) in the Lean proof system. The process from paper to formalization took about three weeks.
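For readers who haven't seen Lean, here is a toy example of what a machine-checked statement and proof look like; this is just a trivial illustration of the style, nothing to do with the PFR statement.

    -- A trivial Lean 4 theorem: addition on the natural numbers is commutative.
    -- The proof term Nat.add_comm is checked by the kernel, just as every step
    -- of the PFR formalization is, only at a vastly larger scale.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b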

It's an impressive feat--major new theorem to full formalization in weeks. There's a nice Quanta article from 2021 about the Lean system formalizing a different result.

Is this just a novelty or do we gain something from the formalization?

Mathematics has done quite well without full formalization of theorems--we trust papers that give enough details of a proof that we know a full formalization is possible but fully formalizing a proof just didn't seem an efficient use of a mathematician's time. Should we not trust the proof of the Green-Tao theorem that there are arbitrarily long arithmetic progressions in the primes if we don't have a full formalization?

It would be interesting if formalizing in Lean led to new theorems or found serious bugs in "proofs" of old theorems. 

Maybe AI combined with strong proof systems in the future will be able to take a well-written math paper and find a fully formalized proof or output a logical gap. That would be great--a reviewer can focus on the value of a result instead of its correctness. But we are a long way from unassisted formalization.

Concepts like Turing Machines are hard to fully formalize but somebody did it and there's a lengthy discussion of formalizing computational complexity. So perhaps we can require that anyone who claims a solution to P vs NP needs to formalize their proof before they get to post it to arXiv. For me, that would be the killer app for Lean.

Sunday, December 03, 2023

Where do Non-Primitive Recursive Functions come up NATURALLY?

The following is a conversation between Clyde Kruskal and Bill Gasarch.

CLYDE:  Bill, a student, Ian Roberts,  asked me if there are any non-primitive recursive functions that people actually want to compute. (See here for a definition of Primitive Recursive. Also see here for Ackermann's function which is computable but not primitive recursive.) 

BILL: Off hand I would say no, but non-prim rec functions DO come up in natural ways.

CLYDE: That's not what he asked.

BILL: Even so, that's what I will blog about. OH, one more thing, why does he want to know?

CLYDE: Ask him directly. (BILL then emailed Ian)

BILL: Good question about NATURAL problems that are not prim-rec, but why do you want to know?

IAN: Meyer and Ritchie proved (see here) that if you limit the control flow to IF statements and FOR loops with finite iterators, then the class of functions you can implement is exactly the primitive recursive functions. So I was wondering if I could avoid ever using WHILE loops since they are harder to reason about.
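(A minimal Python sketch of the contrast Ian is describing, with made-up example functions: a function built only from bounded FOR loops, like factorial below, is primitive recursive, while Ackermann's function needs unbounded recursion or a WHILE loop.)

    # Primitive recursive style: only bounded for-loops, where the number of
    # iterations is fixed before the loop starts.
    def factorial(n):
        result = 1
        for i in range(1, n + 1):   # bounded iteration
            result *= i
        return result

    # Ackermann's function: computable, but provably NOT primitive recursive,
    # so no program built only from bounded for-loops computes it.
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(factorial(5))      # 120
    print(ackermann(2, 3))   # 9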

BILL: YES you COULD avoid ever using WHILE LOOPS; however, there are times when using them is the best way to go.

CLYDE: Bill, when is the last time you wrote a program? And LaTex does not count. 

BILL: Good point. Rather than take my word for it, let's ASK my readers. I'll add that to my list of RANDOM THOUGHTS ABOUT non-prim rec functions. 

SO, random thoughts on non-prim rec functions

0) Are there problems for which writing a WHILE loop is the way to go even though one is not needed?

1) HALT is not prim recursive and we want to compute it. Oh well. All future examples will be computable.

2) QUESTION: Are there simple programming languages so that HALT restricted to them is decidable but not primitive recursive? I suspect one could contrive such a language so I ask for both natural and contrived examples. 

3) The Paris-Harrington Numbers from Ramsey Theory are computable and  grow MUCH faster than prim rec. Indeed- they grow much faster than Ackermann's function. See Wikipedia Entry.

4) The Kanamori-McAloon numbers from Ramsey theory are computable and grow MUCH faster than prim rec. Indeed, they grow much faster than Ackermann's function. See Wikipedia Entry. They are not as well known as the Paris-Harrington numbers. Hopefully this blog post will help that.

5) Goodstein's Theorem yields numbers that are computable and grow MUCH faster than prim rec. Indeed, they grow much faster than Ackermann's function. See Wikipedia Entry and/or my blog post on them.

6) QUESTION: Of PH, KM, GOOD, which grows fastest? Second fastest? Third fastest? Perhaps some are tied.

7) QUESTION: We know that GO and CHESS have very high complexity, but are still prim recursive. We know that there are some math games (e.g., the Hydra game) that are not prim recursive. Are there any FUN games whose complexity is NOT prim recursive?

8) Tarjan's UNION-FIND data structure has amortized complexity roughly \(O(n\alpha(n))\) where \(\alpha(n)\) is the inverse of Ackermann's function. This is also a lower bound. See Wikipedia entry on disjoint-set data structure. QUESTION: Is Tarjan's UNION-FIND data structure actually used? It can be used to speed up Kruskal's MST algorithm, but that just takes the question back one step: Is MST a problem people really want to solve? I asked Lance and he asked chatty. For the results of that see here. The answer seems to be YES, though I wonder if the speedup that UNION-FIND gives is important. Union-Find is also used in the Hoshen-Kopelman Algorithm for (to quote Wikipedia) labeling clusters on a grid, where the grid is a regular network of cells, with the cells being either occupied or unoccupied. Other issues: (a) Is UNION-FIND hard to code up? Lance tells me that it is easy to code up (see the sketch below). (b) Is the constant reasonable?
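Here is a minimal Python sketch of union-find with union by rank and path halving (one standard presentation; the exact variant whose amortized bound is \(O(\alpha(n))\) per operation can differ in details):

    class UnionFind:
        """Disjoint-set forest with union by rank and path halving."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path halving: make nodes point to their grandparent as we walk up.
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return False
            # Union by rank: attach the shallower tree under the deeper one.
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1
            return True

    uf = UnionFind(5)
    uf.union(0, 1)
    uf.union(1, 2)
    print(uf.find(0) == uf.find(2))  # True
    print(uf.find(0) == uf.find(4))  # False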

9) Is the Ackerman Security Company called that since they claim that breaking their security is as hard as computing Ackermann's function? Unlikely- they spell it with only one n at the end. Even so, my class believed me when I told them that.

10) The finite version of Kruskal's Tree Theorem YADA YADA YADA not prim rec. Wikipedia Entry

CLYDE: You can't YADA YADA YADA my Uncle Joe!

BILL: It's my party and you'll cry if you want to, cry if you want to, cry if you want to. (See here for Lesley Gore's song It's My Party (and I'll Cry if I Want To), which is not about non primitive recursive functions. Also see her sequel Judy's Turn to Cry. A much better song with a better message for teenagers is her You Don't Own Me.)

CLYDE: Oh well. However, I'll make sure to tell that example to my class.

(ADDED LATER: Quanta had a nice article on non-primrec functions recently here. The problem they are talking about is discussed in the comments.) 


Wednesday, November 29, 2023

The Engineer and The Computer Scientist

What is the difference between engineering and computer science? CS is not an engineering field, though there is some overlap. It's not the digital versus physical. To capture the difference we can look at two tech CEOs dominating the news recently, Elon Musk and Sam Altman.

Engineers are problem solvers creating or improving technologies to reach a goal. Tesla has led the way for cars to become electric, connected and autonomous. SpaceX develops highly capable rocket technology at lower cost. Ultimately, though, Tesla makes cars that go from A to B and SpaceX sends stuff into space. When Musk bought Twitter he focused on the engineering, but Twitter's challenges were not in its engineering and this is why Twitter/X has been floundering.

Computer scientists make platforms. The Internet, cloud computing, programming languages, mobile computing, cryptography, databases, social networks, even NP-completeness don't focus on individual problems; rather they create environments that users and developers can apply to a variety of new applications and challenges.

AI is no exception. OpenAI has no specific goal or purpose it is trying to solve. There's AGI but that's more of a vague benchmark that may (or may not) be passed as ChatGPT and its successors continue to improve.

AWS, Python, Apple and OpenAI all have developers conferences. Tesla and SpaceX do not. Elon Musk has actually made Twitter/X harder for developers to build on. I don't hold high hopes for Grok.

It's not a perfect division: many engineers create platforms and computer scientists tackle specific problems. Nevertheless it's a good way to see the distinction between the fields.

Don't think the CS way is always the better way. You have less control of platforms, they can act in unexpected ways and people can use them unintentionally or intentionally to cause harm. Mitigating those harms is a challenge we must continuously address.

Sunday, November 26, 2023

Can I make money in the betting markets based on what I ``know'' about the Republican VP choice

The betting markets for the Republican VP are here.

Assume that I am very confident that the VP won't be Nikki Haley. I can buy a NO share for 90 cents (that may change by the time you read this). If I buy 1 share of NO on Haley for 90 cents then

IF she is NOT the nominee I get 100 cents, so a profit of 100-90= 10 cents. 

IF she IS the nominee I lose the 90 cents.

More generally, if I buy x shares then 

IF she is NOT the nominee I get 100x cents, so a profit of 100x-90x=10x cents.

IF she IS the nominee then I lose the 90x cents. 

SO, if I am so confident, why not buy x shares of NO on Haley where x is large?

That depends on how confident I am. 

I am VERY CONFIDENT that if Trump is the Nominee then Haley is NOT the VP. Say I think the probability of this is \(p_1\) (very close to 1; \(p_1\) could be 0.999).

I am CONFIDENT that Trump will be the nominee.  Note that I didn't say ``VERY''. I think the probability of this is \(p_2\) (not as confident- say 0.95). 

So my expected value is (ignoring some other cases)

\(10xp_1p_2 - 90(1-p_1p_2)x = x(10p_1p_2 - 90(1-p_1p_2))\).

So to make a profit I  need \(p_1p_2 > \frac{9}{10} \). 
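(To see where the \(\frac{9}{10}\) comes from: the expected profit per share is \(10p_1p_2 - 90(1-p_1p_2) = 100p_1p_2 - 90\) cents, which is positive exactly when \(p_1p_2 > \frac{9}{10}\). With \(p_1p_2 = 0.999 \times 0.95 \approx 0.949\), that's roughly \(100(0.949)-90 \approx 4.9\) cents of expected profit per share.)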

With my estimates \(p_1=0.999\) and \(p_2=0.95\) I should make the bet. But I won't.

1) I can imagine a world where Trump does not get the nomination- health or legal problems. Legal might not stop him--- recall that Eugene Debs ran for president while in jail, as the socialist candidate (a third party), and got 3.4% of the vote. When I googled list all of the people who ran for president from jail, Eugene Debs is the only person I found. I suspect there are others but they were fourth party candidates. Trump might be ineligible to run based on some interpretation of the 14th amendment, though I doubt that will happen.

2) I cannot imagine a world where Trump gets the nomination and picks Haley as his VP. So if I could make an IF-THEN bet that it WON'T be the case that Trump is the nominee and Haley is VP, I might do that. But again, weird things can happen--- what if (say) at the convention Kristi Noem is picked for VP (she is on some lists of possible running mates for Trump) but then after that Trump has a health problem and withdraws, and after the dust settles, Haley is VP. So I need to word the bet carefully so that I don't lose in that case.

3) NOT-Haley is now at 88 cents so I may make more of a profit!

4) Some of my thoughts on Trump's possible running mates come from this article on Nate Silver's website: here. (ADDED LATER: One of my astute readers left a comment that informed me that it should not be called Nate Silver's website.)

5) NO-PENCE is selling for 99 cents. I am sure Pence won't be VP in any world, but to make any real money on that you would need to buy A LOT of shares.

6) Tim Scott and Nikki Haley both said they are not interested in the VP job. People running for prez always say that. It has no information value since often they end up on the ticket anyway. Well--- not that often, since lately it's rare for a primary opponent to end up as VP. (Kamala Harris was one, but before that you need to go all the way back to 2004 Kerry-Edwards. One of my astute readers points out that Biden ran for prez in 2008, though I didn't count that since he dropped out after the Iowa Caucus where he got less than 1% of the vote. So this depends on what you mean by `running for prez'.)

7) Winning X dollars will make me feel good. Losing Y dollars will make me feel VERY BAD. So emotionally I shouldn't be betting. And I don't. My non-math friends (I have some!) ask me why I don't go to Las Vegas and clean up, using my knowledge of Ramsey Theory. Um.... The odd thing is that even if I was really good at Probability it's still hard to make money in Vegas. As the saying goes: the money you gamble in Vegas, stays in Vegas.

8) I have blogged before about gambling. One theme is that there is no such thing as a sure thing. Here are the links:

Betting on who will win the Superb Owl if one team is undefeated here. (Superb Owl is not a typo, see next link for why I am calling that big football game the Superb Owl.)

Betting on weird things in the Superb Owl: see here

Betting on long shots in the Kentucky Derby: see  here

Betting on VP in 2008: see here

 Betting on Cash Cab: Part of this post here



Monday, November 20, 2023

War Games

With the self-destruction of OpenAI well underway, Friday night I watched the movie War Games for the first time since the 80s. Quick synopsis (spoilers): NORAD can't trust humans to launch nuclear weapons so it creates an AI system to replace them. A pre-Ferris Matthew Broderick both breaks into the system, triggering a potential nuclear war, and saves the day in the end.

The movie had great influence for a rather ridiculous story. What Broderick's character did was not a federal crime at the time and the movie led to legislation to fix that. The DEF CON security conference was named after the DEFCON alert levels highlighted in War Games. The movie, which celebrated the male hacker, came out in 1983, a time of near gender parity in computer science that would quickly dissolve.

War Games played up the doomsday scenario of AI, along with earlier films like Colossus: The Forbin Project and Hal in 2001: A Space Odyssey. The dangers of AI might have been at the center of Sam Altman's firing:

The departure of Mr. Altman, 38, also drew attention to a rift in the A.I. community between people who believe A.I. is the most important new technology since web browsers and others who worry that moving too fast to develop it could be dangerous. Mr. Sutskever, in particular, was worried that Mr. Altman was too focused on building OpenAI’s business while not paying enough attention to the dangers of A.I.

In the movie a 4-star general states what every moviegoer was thinking.

Just unplug the goddamn thing!

The response:

That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.

So I'd unplug the computers in the silos, but that wouldn't have kept the movie going. 

Thursday, November 16, 2023

Inverting a Function

With the STOC deadline this last Monday, a number of complexity papers have appeared on arXiv and ECCC. Two caught my eye because they seem to independently prove the same interesting result.

Beating Brute Force for Compression Problems by Shuichi Hirahara, Rahul Ilango and Ryan Williams

The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity is False by Noam Mazor and Rafael Pass

Suppose you have a function f that maps from \(\Sigma^*\) to \(\Sigma^n\) and runs in time poly(n). Suppose for some x you knew there was a y of length m such that f(y) = x. How hard is it to find y? Standard brute force search would take time \(poly(n)2^m\). With a quantum computer you could find the y in time \(poly(n)2^\frac{m}{2}\) using Grover's algorithm. How about classically?

These papers show that you can solve this problem non-uniformly in time \(poly(n)2^\frac{4m}{5}.\) A bit surprising that you can avoid searching the whole space of solutions. The result has applications to the minimum circuit-size problem and time-bounded Kolmogorov complexity. Both papers build on earlier work of Amos Fiat and Moni Naor.
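For contrast, here is a minimal Python sketch of the \(poly(n)2^m\) brute-force baseline that these results improve on; the function f below is just a made-up stand-in.

    from itertools import product

    def brute_force_invert(f, x, m, alphabet="01"):
        """Try every string y of length m and return one with f(y) == x.
        This is the poly(n) * 2^m baseline the new results beat."""
        for candidate in product(alphabet, repeat=m):
            y = "".join(candidate)
            if f(y) == x:
                return y
        return None

    # Toy stand-in for f: an 8-bit "compression" of its input.
    def f(y):
        return format(sum(map(ord, y)) % 256, "08b")

    x = f("0110")
    print(brute_force_invert(f, x, m=4))  # prints some y of length 4 with f(y) == x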

I have no inside knowledge whether these papers were either or both submitted to STOC, but if they were and the program committee finds the results worthy, how will it be handled? Merge or no-merge. We'll find out in mid-February when the committee announces the acceptances.

Sunday, November 12, 2023

Forgetting your password is usually not a big deal. Unless....

When I forget my password I can usually reset it and get back to where I was. Of course, before I get access I am nervous that it's not going to work. And there may be times (it's never happened to me) where one CANNOT get back access and is locked out permanently. There ARE times when getting back access is REALLY IMPORTANT.

So the perfect storm would be: 

a) You forget your password, and

b) You CANNOT get it back so you are permanently locked out, and

c) It's REALLY important to get access.

I don't know how common it is, but here is an example of the perfect storm

a) Prime Trust, a fintech startup company specializing in cryptocurrency, lost the encryption key to its hardware wallet. 

b) They also lost the recovery key so they CANNOT get it back.

c) Is it REALLY important? Let's just say they are now singing Buddy can you spare $38.9 million? (For the original click on Buddy can you spare a dime. For a parody of it click on Buddy Can you Spare a Couple Billion?)

There is an article about Prime Trust losing their encryption key here.

Bruce Schneier has comments on it here.

SIDE NOTE: I've heard the following.

a) Having a 15-letter password with at least 2 different lowercase letters, 2 different capital letters, 2 different numbers, and 2 different symbols is NOT good for security, since you end up having to write it down, AND hackers guessing passwords is not the main problem anyway.

b) Some say you are better off taking 4 English words (or whatever language you speak) that have nothing in common and putting them together as a password (not sure if some should be in caps) like

elephant Ramsey Rockford Ezra

which is easier to memorize. Such an approach might have helped Prime Trust. Oh well.

(xkcd had this to say: here)
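For a rough sense of the xkcd-style argument, here is a back-of-the-envelope entropy count in Python; the word-list size and symbol set are assumptions, and this is not security advice.

    import math

    # Entropy in bits, assuming each choice is made uniformly at random.
    words_in_list = 2048          # assumed size of the word list
    passphrase_bits = 4 * math.log2(words_in_list)

    symbols = 26 + 26 + 10 + 32   # lower, upper, digits, assumed symbol set
    password_bits = 15 * math.log2(symbols)

    print(f"4 random words:       ~{passphrase_bits:.0f} bits")
    print(f"15 random characters: ~{password_bits:.0f} bits")
    # The catch: humans do not pick 15 characters uniformly at random, and the
    # 15-character version is the one that ends up written on a sticky note.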

WARNING- I DO NOT know if points a,b above are really true.

WARNING- Despite the WARNING above one of my readers emailed me that I should NOT be spreading false rumors about passwords. 


 Note that there are three kinds of statements

Those that are true

Those that are false

Those that you hear


Thursday, November 09, 2023

Othello. Solved.

The perfect game
Back in the 1980s, a high school friend and I created Excalibur, a computer Othello program that I've posted about before. Even on an OG IBM PC we could play a perfect endgame from 14 moves out. There's been a lot of Moore's law since then and I've wondered if it could be possible to determine for sure the winner of an Othello game with perfect play. And now we know the answer: nobody, because with perfect play it is a draw.

Recently Hiroki Takizawa announced that he had solved Othello, originally known as Reversi. It's a simple game where two players, black and white, alternately play pieces of their color. When the black player moves, all the white pieces between the played piece and other black pieces flip to black. The reverse when white plays. Whoever has the most pieces of their color at the end wins.

Takizawa solved Othello with massive computing power and considerable alpha-beta pruning. The basic idea of alpha-beta pruning is that if you have a move that guarantees a win (or a draw in this case), you don't need to look at other moves. He builds his algorithm on a variation of Edax, a program that could do perfect play from 36 moves out. I should acknowledge that this work has not yet been independently verified or refereed.
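For readers who haven't seen it, here is a minimal Python sketch of generic minimax with alpha-beta pruning; this is just the textbook idea, not Takizawa's solver, and the moves/apply/value interface is made up for illustration.

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, value):
        """Minimax with alpha-beta pruning over a generic two-player game.
        moves(state) -> legal moves, apply(state, move) -> next state,
        value(state) -> score from the maximizing player's point of view."""
        legal = moves(state)
        if depth == 0 or not legal:
            return value(state)
        if maximizing:
            best = float("-inf")
            for m in legal:
                best = max(best, alphabeta(apply(state, m), depth - 1,
                                           alpha, beta, False, moves, apply, value))
                alpha = max(alpha, best)
                if alpha >= beta:   # pruning: the opponent will never allow this line
                    break
            return best
        else:
            best = float("inf")
            for m in legal:
                best = min(best, alphabeta(apply(state, m), depth - 1,
                                           alpha, beta, True, moves, apply, value))
                beta = min(beta, best)
                if alpha >= beta:
                    break
            return best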

The resulting perfect play is in the diagram above. Takizawa checked that any deviation from this play would lead to either a draw or loss for the deviating player. Technically this is called "weakly solving" Othello where "strongly solving" means giving an algorithm that plays perfectly from any reachable position. Weakly solving is strong enough for me. 

I'm surprised it is a draw. There are so many possible values for the difference between white and black pieces at the end and Othello is not symmetric--it really matters who plays first. So it seems quite a coincidence that perfect play ends up perfectly even. But now we know.

Thanks to Jon Katz for pointing us to this paper.

Update (11/12/23): I uploaded a video playing out the game.

Sunday, November 05, 2023

In the bad old days we had Punchcards. How did people deal with that?

In the fall of 1976 I started as a Freshman at SUNY Stony Brook intending to major in Math and Computer Science.  I took Honors Calculus I and CS 1. The CS course was in Pascal (which I still think is a fine teaching language; however, since I have rarely taught a programming course I am not qualified to debate this point). We had to write programs using

Punch cards.

(From Sept 1, 1976 until Nov 5, 2023 I thought punch cards was one word. I also thought Stony Brook was one word. Wow- the things you learn from blogging!)

It was awful. It took a long time to just write a simple program and mistakes were hard to avoid. The turn-around time for submitting the program and getting the output was 1 hour if you didn't do the project the day before it was due but a lot longer if you did. I recall spending 6 hours debugging a program only to find that I had a 1 (the numeral) instead of a capital I (the letter). Very bad for time-put-in vs knowledge-gotten-out. 

I DID finish the course but decided to NOT major in Computer Science (I majored in Math and Applied Math instead). I had the following thought, though I could not possibly have expressed it this well back then:

I'll return to computer science once they have a better interface.

But I wondered, why didn't everyone think that? The ratio of time-put-in to knowledge-gotten-out was so bad that I would think people would all realize they should wait for a better interface. Of course, it's a good thing that others did NOT give up. If everyone had given up then we wouldn't have our current wonderful electronic society!!

More generally, what did people in that era think? I asked several people who dealt with punch cards and here are some of their responses:

1) Paper Tape was Worse: 

 a) In 1960 I used a Bendix G15. It had paper tape, and I was constantly ripping them. I never imagined I would end up in this field but here I am! So actually punch cards were a major improvement. 

b) In 1976, in High School, we used paper tape. So cards did not seem like a major inconvenience. A bigger problem was the lack of resources (hole-punch machines, card readers, printers, etc) which meant long lines leading up to due dates.

c) In 1980, to get to and from the computer room we had to walk uphill, 5 miles, in the snow. Abishola was lucky, she had shoes. Bidal  was less lucky, they had socks but no shoes,  Carol was even less lucky, she had no shoes or socks. As for me, I was the least lucky of all since I had no feet. 

Bill Meta comment: Note that the rate at which places went from paper tape to punch cards varied. We will see more variation later. 

2) An Upside to Punch Cards

a) Some of us, however, designed programs in advance of hitting the 029 (Bill Add: 029 and 026 were the most widely used keypunches of that era.) A reasonable case can be made that the opportunity to reason about what we do first (rather than doing the monkey-typewriter thing on fast interfaces, then debugging by friction) resulted in higher quality code and way more polished critical thinking skills. Thinking. I recommend it. And in point of fact, that is one of the truisms about software engineering writ large. Some of the most successful practices we employ are ones which magically involve freeing developers to think about what the heck they do. Quality improves accordingly. This is not advocating that we return to cards. It is however the polite push-back against your assertion that the appropriate technology of that era was awful. In fact, it was just appropriate for the computing of the day. When leveraged correctly, good things happened.

 b) One of the best things that came from it was convincing you to go prove theorems for a living. Glad for that! You work your side of the street and we will work ours.

c) While entering / running code was a pain, especially compared to today, there was a hidden benefit:  if your program didn't work you had to think carefully about why, because the number of run / debug / re-run loops you could engage in was very limited.  This made me a better programmer, because I was less prone to "shooting from hip" to get my programs to work.

d) With punch cards, you got a lot more done more quickly if you were disciplined, and your description of yourself does not sound very disciplined. (I wasn’t either.) Dijkstra preaches the importance of discipline for formal reasons, but there is no doubt that his programming experience was that the way to get working code was to think it all through carefully ahead of time, not to write hastily and iterate by trial and error. And Knuth, who shares none of Dijkstra’s sternness, describes his experience learning to program in much the same terms.


3) Did people think that there would one day be a better interface? 

a) Did not think much about it. Compared to what came before this (no access to computers) it was fantastic. I learned my first programming language (Fortran) from a text without access to a real computer at the time, so punch cards or anything was a big plus. In retrospect it's of course painful, but at the time it was just the way it was.  

b) Sure, but people can only see so far into the future. I could foresee the day when we would all have access to a CRT-screen monitor with a text-based interface (with both lower and upper-case characters, wow) all connected to a mainframe. We might even have a light pen that we could use to point at the screen! Of course, we were all looking forward with anticipation to when we would be living under giant glass-domed cities and flying around with jet packs.

4) Why didn't more people do what I do and say I'll wait for a better interface? 

This was somewhat covered by the answers to question (2) (What do you mean better?) and question (3) (Hey, it was what it was). I also got the following:

a) We wanted to be programmers or computer scientists though we might not have said it that way. YOU Bill could opt out and do Math which was your first love anyway. WE couldn't. And there were scientists (usually physicists) who HAD to use the machine to calculate their electrons-and-whatnot so they HAD NO CHOICE but to deal with paper tape or punch cards.

b)  I was the first class at Cornell not to use punch cards. But I would have used them. Otherwise you are always waiting for the next technology (assembler, high-level language, disk drives, USB drives, dropbox, etc). What if I said in 1981, I'm going to wait until AI writes programs for me. Well, who's laughing now!

c) If you keep waiting for the next version, you never buy anything. Bill, you  still don't own a camera (see blog post here). BILL RESPONSE: I have one in my cell phone, but your point is correct, I never bought a camera that was just a camera.

5) Did you think that this was tedious but that they were paving the way for a better tomorrow? (I doubt you could have expressed it that well.)

a) Yes. And I could not possibly have expressed it that well. Hindsight is 20-20.

b) Not at all. Along with our pocket protectors and calculators strapped to our belts, it was all part of our proud nerdy identity.

6) Anything else you want to add?

a) As an undergraduate (Brandeis) we had PDP-11's, but my graduate school (NYU) had punch cards. Weird!

b) More nostalgia:


 - Classic prank: Get a bag full of punch card "holes", go back to the dorm and stuff them in your roommate's folded-up umbrella.

 - Using a thick magic marker to write the program's name along the top of the deck.

 - Status was determined by the size of your card deck. You knew that you were hot stuff when you had a card deck so large that a rubber band wouldn't hold it---you needed a box to put them in.

 - Heated debates with classmates on the best way to cleanly tear a computer printout along the perforation (fast snap or slow unzip?).

- The physicality of machines (see here). 

c) You are only talking about punch cards. There were other issues: lack of machines and too much noise. Plus, of course, we didn't know what we were doing. Unlike now :-)

d) I am not the best person to ask, but I think the answer is that you are correct — using punch cards turned people away from using computers as anything but a tool, as opposed to something fun or interesting in itself, much less beautiful. Those happier experiences were ones I had a decade before your punch card experience, because I sat down by myself at the console of a DEC PDP-4 in 1966 and never looked back. I actually HAD done some punch card programming a year or two earlier on an IBM 1620 and was left wondering why anyone would want to go into accounting — which was the application domain of that summer job. It never occurred to me while I was spilling those decks on the floor that there was something called computer science. I kind of realized that only when I took a computer graphics course in 1967, again using a standalone minicomputer (a PDP-1). However, to your point, yes I suspect that few people got into the field (such as it was) with just a batch-processing experience.



 

 


Thursday, November 02, 2023

The Loop Graph

 


Locals refer to downtown Chicago as "The Loop" because of a rectangle of elevated (or "El") train tracks that create a loop through the area first built in the 1890s. As empty nesters we moved back into Chicago in the "South Loop" and I find myself riding those rails often and thinking about how these trains traverse the circle.

There are three ways to enter and escape the loop, South through the southeast corner, North through the northwest corner and West through the northwest corner. There are five train lines that run through the loop.
  • Orange: Enters from the south, circles clockwise and out back south. Goes to Midway Airport.
  • Pink: Enters from the west, circles clockwise and out back west.
  • Purple: Enters from the north, circles clockwise and out back north. Goes to Evanston near Northwestern.
  • Brown: Enters from the north, circles counterclockwise and out back north.
  • Green: The only one that goes out a different way than in. It goes both directions from the west to/from the south.
Two more lines run underground through the loop.
  • Blue: You can transfer to Blue by taking escalators down from Clark/Lake station. Goes to O'Hare.
  • Red: You can't get to a loop station from Red without some walking outside. The Red line does connect with the orange, green, brown and purple lines at various stations outside the loop. The Red hits both Chicago major league baseball stadiums.
A few observations:
  • No train takes the Northwest corner counterclockwise. If you want to get from Clark/Lake to Washington/Wells you need to take the whole loop in the other direction. On the other hand it is just a couple of blocks walk.
  • If it wasn't for the Green Line you could get away with one track around the loop, though it might get backed up in rush hour. 
  • The whole idea of the loop was for a time when people would all work in downtown, the trains would come in drop people off at several stops and then head back out with no downtown terminus. Where people work has changed but the loop remains.
An elevated train traveling on the west side of the loop.


Sunday, October 29, 2023

Theory that really DOES apply: Security.

I recently read and wrote a review of

                               Math for Security by Daniel Reilly.

(For the review see here. It will appear in SIGACT News at some later point. For the amazon link see here. Disclosure: Lance and I are Amazon Affiliates.)

The book had great examples of using THEORY for PRACTICAL problems of security (NOT what you think, as you will see later). Since I am always surprised (possibly because of my ignorance) when theory REALLY applies to PRACTICE, I asked the author some questions which he answered. Our conversation is below.

BILL: I was surprised that a book called Math for Security didn't have crypto in it. Why was that?

DANIEL:
There's a plethora of excellent material that already covers the topic very well. Cracking codes and analyzing ciphers is the first thing most people think of when they think of the relationship between math and security. There are so many other topics where a little creativity and math can open up new avenues and approaches to unique problems we face in security. As an analyst and consultant, a very small percentage of my time goes into cryptographic systems, but I'm regularly asked other questions.  I'm hoping to show that anyone can apply a little math to start making better (or at least more informed) decisions in many of these areas, not just cryptography.

BILL: Many theorists dream of the day when someone comes along to REALLY use their stuff. For example---

DANIEL: (Cuts Bill off) That's funny. As an analyst I'm always hoping to define a theory which explains some system I've been analyzing. Something like Kim Rossmo's formula in Forensics
(see here). Understanding what your data is telling you is good, but discovering why, that's the pinnacle of accomplishment! I guess the grass is always greener on the other side of the fence!

BILL: The first time I saw the problem of determining the Betweenness Centrality of a node, it was in a paper showing that it was APSP-hard (so likely not in subcubic time). Hence I was AMAZED that you use it FOR REAL. How hard was it to take the THEORY that you found in the literature and APPLY it?

DANIEL: For the graph theory in the book, and in general, I think it comes down to some basic statistics and understanding the underlying system being modeled. Betweenness centrality is a great example. It's really easy to calculate, but what it means in practice is wholly dependent on what generated the data. A high Betweenness score in a social network is the result of a different underlying structure than in a computer network. This is one reason graph theory made sense as the place to start. There are so many problems that can be represented as a graph with very little change to the methodology needed, but it does show the need to be flexible in your interpretation of theory.
For the programming side, NetworkX is really well designed so I didn't have to do much to implement the algorithms. That made it easy to focus on showing off how the library could be used when analyzing a real data set. Of course, whenever you're taking a general theory and applying it to a specific problem, you'll always have some pieces you need to build. The pieces that glue the theory and the data together in a useful way. For me, that is the fun part. The art behind the science. You can give 5 analysts the same problem and get back 10 possible approaches!
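Since Daniel mentions NetworkX, here is a minimal sketch of the kind of call involved; the toy graph is made up.

    import networkx as nx

    # A tiny made-up "social network": two triangles bridged by the edge C-D.
    G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
                  ("C", "D"), ("D", "E"), ("E", "F"), ("D", "F")])

    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))
    # C and D score highest: every shortest path between the two triangles
    # has to pass through them. What that score *means* depends on what
    # generated the data, which is Daniel's point.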

BILL: I recently looked at the Complexity of the Art Gallery Problem- it is ER-complete (see the Wikipedia entry on the Existential Theory of the Reals), which puts it between NP and PSPACE. Hence I was AMAZED that you use it for REAL. How hard was it to take the THEORY that you found in the literature and APPLY it? For example, the Guards, unlike the Who (see here), cannot see for miles and miles and miles and miles and miles. (NOTE: Spell check thinks that NP is a word, but PSPACE is not a word. Odd!)

DANIEL: You sound like a broken record. But still a good question. The Art Gallery Project was the one that kicked off the idea to write the book in the first place. I was looking for a method to analyze physical security layouts and I came across a paper describing the problem as it related to security camera placement. That led me to the original paper and then Fisk's paper using the greedy coloring algorithm. I think I read one or two more pieces about how greedy coloring is implemented in NetworkX, but that was it. Once you understand the basic problem formulation, and Fisk's method of solving for n-vertex polygons, you quickly move out of what the theory was designed for. The Art Gallery project therefore represents what I think is a more typical scenario. I'll often start with a theoretical solution that makes a lot of simplifying assumptions and then remove the simplifications a little at a time by adding in other theoretical bits (such as adding a function to compute a more realistic guess at what area a guard can protect). Sometimes I will find complementary research on modeling something like walking speed. Other times I rely on my experiences and best guess (like assigning the space for people at an event). At its core though, the algorithm solving the layout is the same one suggested by Fisk and implemented in NetworkX.
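A stripped-down sketch of the Fisk-style step Daniel describes, assuming you already have the triangulation of the polygon as a NetworkX graph (producing that triangulation is the part left out here):

    import networkx as nx
    from collections import Counter

    # Assumed triangulation of a hexagon 0..5: the boundary edges plus the
    # diagonals (0,2), (0,3), (3,5).
    T = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
                  (0, 2), (0, 3), (3, 5)])

    # Greedy coloring; with the smallest_last strategy a polygon triangulation
    # gets at most 3 colors.
    coloring = nx.greedy_color(T, strategy="smallest_last")

    # Fisk's observation: every triangle gets all three colors, so guards on
    # the least-used color class (at most n/3 vertices) see the whole polygon.
    counts = Counter(coloring.values())
    guard_color = min(counts, key=counts.get)
    guards = [v for v, c in coloring.items() if c == guard_color]
    print("colors used:", len(counts))
    print("guards at vertices:", guards)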

BILL: Anything else you want to add?

DANIEL: There is a quote I love that gets thrown around in System Dynamics a lot: It's better for a model to be useful than accurate. I think that idea applies really well when you're building proof-of-concept systems. For example, a model that is very accurate at predicting an adversary's behavior may not be very useful if it is too complex and expensive to use. Ultimately, the best model is the one that is most useful for solving the problem at hand. This means that it is important to consider other factors than accuracy, such as interpretability, cost, and ease of use, when developing your proofs. It's important to remember the goal is not to perfectly model the system you're studying, but to model enough of the key components to get a useful response.

Thursday, October 26, 2023

Saving Grace

The Grace Hopper Conference has grown into one of the largest in computer science, pushing past 25,000 attendees. Most women in computing, whether students, faculty or working in industry, are usually in the minority. Grace Hopper gives them the chance to see and be part of a strong community of women in our field in a safe and comforting environment.

Or so it was supposed to be. At the Expo part of last month's conference, where many companies come to recruit, men made up about 40% of the attendees according to an NPR report. Now Grace Hopper welcomes male allyship but these were no allies. Rather they acted, often aggressively, to reach recruiters, making the experience uncomfortable for the attendees who came for the conference's purpose.

The conference organizers released a statement and a blog post noting they can't legally limit registration based on gender but will work on other approaches to ensure a good conference in the future.

I know there are, in particular, graduating international students desperate to find a job so they can remain in the country. And I've argued that CS needs a full annual meeting, which like Grace Hopper exists to bring people together not to focus on research but to focus on each other. But none of this justifies ruining the experience of those who attend a conference for what it is. 

If you don't have respect, don't come to the party.

Monday, October 23, 2023

When did Math Get So Hard- Part 2

Click here for When did Math Get so Hard-Part 1, though it was not called Part 1 at the time. 

This post is not so much about WHEN math got so hard but an example of math BEING hard. The main issue is that so much is known that the PREREQUISITE knowledge can be overwhelming.

My interest in Hilbert's tenth problem (see here) and an email from Timothy Chow (reproduced in that article) led me to the book

                               Rational Points on Varieties 

                                  by Bjorn Poonen

(see here for amazon link. Disclosure: Lance and I are amazon affiliates).

Here are the prerequisites for the book as stated in the preface:

------------------------------------------

A person interested in reading this book should have the following background:

1) Algebraic Geometry (e.g. [Har77]: up to Chapter II, Section 8 as a minimum, but familiarity with later chapters is also needed at times)--- this is not needed so much in our Chapter 1.

2) Algebraic Number Theory (e.g., [Cas67], [Fro67] or [Lan94, Part One] or [Neu99, Chapters I and II]).

3) Some Group Cohomology (e.g. [AW67] or [Mil13, Chapter 2]).

[AW67] M.F. Atiyah and C.T.C. Wall, Cohomology of groups, Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), 1967.

[Cas67] J.W.S. Cassels, Global Fields, Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), 1967, 42-84.

[Fro67] A. Fröhlich, Local Fields, Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), 1967, 1-41.

[Har77] Robin Hartshorne, Algebraic Geometry, Springer-Verlag, 1977, Graduate Texts in Mathematics, No. 52.

[Lan94] Serge Lang, Algebraic Number Theory, 2nd ed., Graduate Texts in Mathematics, Springer-Verlag, 1994.

[Mil13] J.S. Milne, Class field theory (v4.02), March 23, 2013. Available at here

[Neu99] Jurgen Neukirch. Algebraic Number Theory,  Fundamental Principles of Mathematical Sciences Vol 332. 1999.

-------------------------------------------

These seem like quite steep prerequisites. I don't have them, so perhaps they are easier than they look.

But in any case, some parts of math are hard because, over time, so much math is known that builds on earlier math, that just getting through the background material is hard. Comp Sci hasn't been around as long, but it's been around in the 20th and 21st centuries when more was being produced, so it's also gotten hard, as I discussed here. Note also that computer science uses some of that hard math, and is also an inspiration for some hard math.





Thursday, October 19, 2023

Fall Jobs Post 2023

In the 2022 Fall Jobs Post I talked about the effect of generative AI and that was two weeks before OpenAI released ChatGPT to the public. A year later, how will AI change CS faculty hiring? Not much this year but change will come soon enough.

CS enrollment remains strong and computer science departments have not been able to keep up with the demand. Many see programming, rightly or wrongly, as one of the first careers that AI will displace, which may reduce enrollment in the future, as offshoring fears drove CS enrollment down 20 years ago. There will be newish majors, whether Data Science, Artificial Intelligence, Machine Learning or something else that will draw students away from traditional CS degrees. But for now many CS departments still need to grow their faculty. 

Having some knowledge and being willing to teach ML will definitely help in the job search but I expect we'll see demand in all areas, including theoretical computer science. There will also be more people on the market, especially as the major tech leaders haven't yet brought back hiring in CS to its previous levels.

Should you use AI to help you apply? I wouldn't use AI to write your personal statement or other materials--its style is just too recognizable. But do use AI to read over what you wrote and give suggestions. You might be surprised at what it recommends.

CS departments are not yet using AI to screen faculty job applications. So you are writing for humans. There are many applicants so focus on what makes you stand out.

As always, have a well-designed web page with all your job materials. Make sure your Google Scholar and LinkedIn pages are accurate and up to date. Add yourself to the CRA's CV Database.

Some departments are starting the search earlier, so don't delay your applications.

Most CS faculty jobs are posted to the CRA and ACM. The CRA focuses on jobs for PhDs, the ACM mixes it up with general industry jobs. For theorists, check out TCS Jobs and Theory Announcements

If you have a job to announce, please post to the above and/or feel free to leave a comment on this post. 

Sunday, October 15, 2023

Paper is a tech-free way to preserve writing. Is there a tech-free way to preserve sound (e.g., music)?

I blogged about ACM going mostly paper-free, and had some PROS and CONS about paper-free, in this blog here. One of my many astute readers named Abigail pointed out that paper does not go obsolete: we can still read books written many years ago without having to use some technology. (The first paper in Harry Lewis's book Ideas that Created the Future: Classic Papers in Computer Science was by Aristotle. See here for amazon link to the book and here for my review of the book). By contrast, there are stories of material being lost forever since they are on floppy disks. I wonder if pdf will suffer the same fate. 

However, that is not the theme of this post (do my posts have coherent themes?)

The point is 

PAPER is TECH-FREE and is good at preserving WRITING.

What about SOUND? Is there a Tech-Free way to preserve sound? I am thinking about music, though one can also wonder how old poetry was supposed to sound when read out loud. But back to music:

1) The Bible Psalms- we know the words, but not the melody. Psalm 45 has the following right before it: For the director of music. To the Tune of ``Lilies'' Of the Sons of Korah. A maskil. A wedding song. In my bible there is a footnote saying that maskil is Probably a literary or music term. Not helpful to a 21st century singer.

2) The first Rap Song is from the Bible, in 1 Samuel 18:7. The words are

Saul has slain his thousands and David his tens of thousands.

And again, we don't know the melody or the cadence or the rhythm. I have done a rendition of it for my Bible study group but they complained it was not authentic. They also told me to not quit my day job. 

3) When music went from 

Wax Cylinder to Vinyl and audio cassettes to CD to MP3 to Spotify (and similar systems)

some music was lost in each transition. Indeed, the inspiration for this post is the following personal story:


One of my hobbies is collecting and listening to novelty songs (this has been mentioned in the following posts: here, here, here, here, here, here, here, here). Some are on audio tape, some are on CDs. A subgenre of novelty songs is Filk Music, which consists of folk songs with a science fiction (it's been expanded to science) theme. They are often sung at science fiction conventions. It is filklore that an early science fiction convention mistyped folk as filk and they decided to keep it.

I was thinking of a GREAT  filk song titled

Carl Sagan Ronald Reagan San Diegan Pagan (lyrics are here)

and I wondered

a) Do I have it in my collection? (Almost surely yes.)

b) If so can I listen to it? (If on CD then yes. If on audio tape, not sure.)  

c) In any case is it on Spotify or YouTube or...I have found obscure things on both  Spotify and YouTube  so this was plausible. Spellcheck insists I spell it YouTube not Youtube and I will of course obey the Spellcheck God.

ANSWERS

a) It is on Bayfilk Crazies, an AUDIO TAPE that I have. YEAH!

b) I have one audio tape player in my house that I had not used in years. It didn't work. BOO!

(ADDED LATER- I found a tape recorder where PLAY worked, though neither FF nor REWIND worked. So I got to hear my song! And I was ``forced'' to hear other songs I had not heard for a while. Some were excellent gems I had forgotten about. Others... not so much. But I am happy for now.)

(ADDED MUCH LATER (July 1, 2024) I FOUND Bayfilk Crazies on mp3, here. Yeah!) 

c) So far I cannot find it to listen to ANYWHERE on the web. BOO!

(If you find such a place please leave a comment!)

d) It does not appear to be on CD. BOO! (Again, if you can find a place to buy it on CD let me know.) 

SO, is this great song LOST TO HUMANITY? I know the tune, so I could sing it on YouTube, but there are enough badly sung songs on YouTube and I do not want to add to that. 

But my more important point is MANY SONGS ARE BEING LOST TO HUMANITY!

4) For many old songs we DO have sheet music and lyrics so someone COULD reproduce it. That's great. Is it important to have the authentic real Elvis recordings, or is a really good 21st century Elvis Impersonator good enough? That depends what you want. And if the sheet music is only online we may have the same problem we are pondering about paper.

Famous songs are re-recorded a lot (To see what the most recorded song of all time is, see here. It's not my version of Muffin Math, see here.) But for songs that are not quite famous, or only appeal to certain tastes, we are losing songs!

5) For the written word there is PAPER which does not go obsolete with technology (though there are fires, see the burning of the library at Alexandria). For music there seems to be NO such analog.

6) Video has the same problem. I blogged about that here




Thursday, October 12, 2023

Measuring Quantum Progress

In August the Google Quantum AI Team posted a blog post How to compare a noisy quantum processor to a classical computer to measure progress in building quantum computers. 

So far quantum advantage (a term that thankfully seems to have replaced quantum supremacy) has focused on approximating quantum distributions like Boson Sampling or random quantum circuits as described in the blog post. These results are messy: it's hard to measure success, hard to know the classical complexity of these problems, hard to explain to the public, and they seem to have little to no practical value.

The poster child for quantum computing has always been integer factoring. Factoring has lots of great properties.

  1. Factoring is theoretically easy to solve on a quantum computer via Shor's algorithm.
  2. While we don't know the classical hardness of factoring, decades of considerable effort have produced nothing faster than subexponential-time algorithms (like the number field sieve), and nothing remotely close to polynomial time.
  3. It is classically easy to check the factors.
  4. It is classically easy to generate hard-to-factor numbers.
  5. You can explain factoring to anyone with even a moderate interest in mathematics.
  6. Factoring is critical to a number of cryptographic protocols most notably RSA.

We will achieve true quantum advantage when a quantum machine can factor numbers we cannot then factor on traditional computers alone.

So why don't we measure progress towards quantum advantage by the size of the numbers that quantum machines can factor? More precisely, factoring via Shor's algorithm: quantum order finding followed by some classical computation to recover the factors.
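To make "some classical computation" concrete, here is a minimal sketch of the classical half of Shor's algorithm (my own illustration, not from the Google post; the function names are made up, and the order-finding step, which is the quantum computer's job, is done by brute force here just so the code runs on small numbers):

import math
import random

def find_order_classically(a, N):
    # Stand-in for the quantum subroutine: find the multiplicative order r
    # of a mod N by brute force (exponential time in general).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order_finding(N, trials=20):
    # Classical post-processing: turn an order-finding oracle into a
    # factoring algorithm (assumes N is odd and composite).
    for _ in range(trials):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                      # lucky: a already shares a factor with N
        r = find_order_classically(a, N)  # on a real device, this is Shor's quantum part
        if r % 2 == 1:
            continue                      # need an even order
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                      # a^(r/2) = -1 (mod N): try another a
        return math.gcd(y - 1, N)         # guaranteed nontrivial factor in this branch
    return None

print(factor_via_order_finding(21))       # prints 3 or 7

The point is that once you know the order \(r\) of a random \(a\) modulo \(N\), a nontrivial factor falls out of a gcd computation; all of the hard work is hidden in the order-finding step.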

Likely because we don't do very well. As far as I can tell, the largest number factored on a quantum computer via Shor's algorithm is 21. Shor's algorithm requires a level of entanglement beyond what today's quantum machines can handle, even using error-correcting techniques.

That doesn't mean factoring is the wrong measure of the success of physical quantum computers; we're just not ready to use it yet.

Sunday, October 08, 2023

Young Sheldon gets two things spot-on/Am I more famous than.../Might YS become TS?

Young Sheldon is a TV show that I used to watch only on airplanes, but then I got into it and am now up to date. The wonders of technology! Note that catching up on a show would have been harder when I was a kid. (This is NOT a we had it rough in my day thing.)

1) There is an episode where Sheldon, his professor, and his Meemaw (grandmother) collaborate. When it's done, Meemaw is disappointed to hear that all they have is a prototype, and that the real experiment will need a machine as big as the building they are in and won't be done for 30 years. Contrast this to when a TV show shows people proving P=NP on a Monday and using it to do stuff on Tuesday (I blogged about this here). And there are other examples where a basic science discovery is useful in far less time than it would be in the real world.

2) A young Sheldon Cooper has the idea for bitcoin and explains it to his brother. His brother is not as smart as Sheldon in math and science but DOES understand business. As such, he gave the best description of bitcoin I have ever heard; see here.

(ADDED LATER: A commenter left a link to a blog post about why bitcoin is NOT a scam. In the comment the link appears only as text, so here is a clickable version: HERE.)

3) I looked up the actress who plays Missy (Sheldon's twin sister) on Young Sheldon. I got the name off of the Young Sheldon page, but she does not have a Wikipedia entry. I do have a Wikipedia page. That doesn't seem right, since I am sure that more people say

I want to know more about the actress who plays Missy on Young Sheldon.

than say

I want to know more about that guy who coblogs with Lance.

The set of people who have a Wikipedia page seems somewhat arbitrary.

4)  Iain Armitage plays young Sheldon.

In Season 6 Sheldon is 13 (see the timeline of Young Sheldon).

In Season 6 Iain Armitage is 15 (see the Wikipedia page for Iain Armitage).

If the actors' strike goes on for 2 more years, then Iain will be a 17-year-old playing a 14-year-old. That might not work. They may need to change the name of the show to Teen Sheldon.

The show itself joked about this (intentionally?). When Sheldon is watching Beverly Hills 90210 he asks his father George, do you know why this character is depressed? George answers, because he's a teenager who looks 30 years old.

But the real question is: might the strike cause child actors to age out of roles faster than they otherwise would have? I know this is a minor problem compared to the other problems actors have, but I thought it was worth noting.







Tuesday, October 03, 2023

The Lyadov Lesson

I've seen many brilliant students, those who flew through high school and undergrad with great grades and little effort. As PhD students, they often feel they still don't need to try, that success will continue to come to them easily. They would be wrong. Some figure it out; others don't live up to their potential.

Igor Stravinsky
I went to a Chicago Symphony concert last week where the first two pieces were "The Enchanted Lake," a short piece by Anatoly Lyadov, followed by Igor Stravinsky's "The Firebird." You have likely never heard of Lyadov. There's a reason for that.

From the program book 

Anatoly Lyadov is often mentioned in music histories, not primarily for his own beautifully crafted orchestral pieces like "The Enchanted Lake," but as the man who missed the opportunity to compose "The Firebird." This ballet, which Igor Stravinsky eventually wrote, became a cornerstone of his career and is frequently featured in programs, often following Lyadov's own works.

The common but unverified narrative is that Lyadov had been so slow to start the project that he had only just bought his manuscript paper when the first part of the score was due. This led Sergei Diaghilev, who was in charge of staging the ballet, to dismiss him. Lyadov had developed a reputation for laziness early in his career. He was known to skip classes at the Saint Petersburg Conservatory, earning the ire of his teacher, Rimsky-Korsakov, who called him "irresponsible." Even Sergei Prokofiev, who studied with Lyadov and held him in high regard, noted in his memoirs that Lyadov's "most remarkable feature" was his laziness.

Anatoly Lyadov

Yet despite these shortcomings, Lyadov had always attracted attention for the audacity and brilliance of his orchestral work. As far back as 1873, when he published his first songs as his opus 1, Mussorgsky described his talent as "new, unmistakable, original." Stravinsky, who benefited from Lyadov's withdrawal from "The Firebird" project, later remarked that although he enjoyed Lyadov's music, he couldn't imagine Lyadov composing a ballet as long and raucous as "The Firebird."

Thursday, September 28, 2023

Half-Exponential No More

I've mentioned Kannan's proof that \(\Sigma_2^p\) does not have \(n^2\)-size circuits before. A similar proof shows that \(\Sigma_2^E = \mathrm{NTIME}^\mathrm{NP}(2^{O(n)})\) does not have polynomial-size circuits. You can create a language in the third level of the exponential-time hierarchy that has maximum circuit complexity. Now if SAT doesn't have poly-size circuits you are done. Otherwise the polynomial-time hierarchy collapses to \(\Sigma_2^p\), which means the exponential-time hierarchy collapses to \(\Sigma_2^E\).
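Spelling out the case analysis (my own compression of the same argument):
\[
\begin{array}{ll}
\textbf{Case 1:} & \mathrm{SAT} \notin \mathrm{P/poly}.\ \text{Then SAT itself, which lies in } \mathrm{NP} \subseteq \Sigma_2^E,\ \text{witnesses the lower bound.}\\[3pt]
\textbf{Case 2:} & \mathrm{SAT} \in \mathrm{P/poly}.\ \text{Then } \mathrm{PH} = \Sigma_2^p,\ \text{so by padding } \mathrm{EH} = \Sigma_2^E,\\
& \text{and the maximum-complexity language from the third level of EH already lives in } \Sigma_2^E.
\end{array}
\]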

Can you show \(\Sigma_2^E\) has near-exponential circuit complexity? The above proof doesn't quite work. The problem is that while a polynomial of a polynomial is a polynomial, a subexponential function of a subexponential function could be superexponential. You can only make the proof work for circuit sizes up to roughly half-exponential, i.e., functions \(f\) such that \(f(f(n)) = 2^{o(n)}\). I don't know of any natural half-exponential functions, but they are much larger than quasi-polynomial and much smaller than exponential.
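To see the composition issue concretely, here is a small worked example (my own, purely for illustration): take the subexponential bound \(f(n) = 2^{\sqrt{n}}\). Then
\[ f(f(n)) = 2^{\sqrt{2^{\sqrt{n}}}} = 2^{2^{\sqrt{n}/2}} \gg 2^{n}, \]
so the second application of the circuit bound already blows past exponential, while the polynomial case is safe since \((n^c)^c = n^{c^2}\) is still a polynomial.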

Previously, the smallest class known to require exponential-size circuits was \(\Delta_3^E=E^{\Sigma_2^P}\), due to Miltersen, Vinodchandran and Watanabe from the last millennium.

Lijie Chen, Shuichi Hirahara and Hanlin Ren have a new paper showing that in fact \(\Sigma_2^E\) does require exponential-size circuits. They have the same bound for smaller classes if you allow a bit of advice. 

You've seen these authors' names before in this blog. The future of computational complexity is in good hands.

Wednesday, September 20, 2023

We Must Be Doing Something Right

The Chicago Tribune ran an editorial Monday that started

What’s the best four-year college in Illinois? Not the University of Chicago, Northwestern University or the University of Illinois at Urbana-Champaign.

No, the best college in the state is the Illinois Institute of Technology, of course!

The editorial was referring to the Wall Street Journal, which ranked Illinois Tech 23rd in the nation, tops in Illinois, up from 117th last year. Illinois Tech also cracked the top 100 in the latest US News rankings, up from 127th.

Did we just get that much better? Yes, yes we did! 

Or maybe it had to do with changes in methodology. The Wall Street Journal's rankings this year put a heavy emphasis on "how much will it improve the salaries they earn after receiving their diplomas". As Illinois Tech caters to students who are often the first in their families to go to college, and focuses on technical degrees, we can really raise up students who might not otherwise have had such an education. It's one of the things that brought me to the university in the first place.

US News also "increased the emphasis on how often schools' students from all socioeconomic backgrounds earned degrees and took advantage of information on graduate outcomes that was not available until recently". 

All rankings should be taken with a grain of salt, and universities will always tout rankings where they do well while conveniently ignoring others. It's impossible to linearly order colleges; there are just too many different factors that make different schools better for different people.

But as people start to question the value of college, the rankings are starting to address their concerns. And if that bumps up my university, so be it.

Sunday, September 17, 2023

ACM to go paper-free! Good? Bad?

The ACM (Association for Computing Machinery) will soon stop producing print versions of most of its publications. Rather than list which ones are going paper-free, I list all those that are not going paper-free:
Communications of the ACM
ACM Inroads
Interactions
XRDS: Crossroads

What are the PROS and CONS of this? What are the PROS and CONS of any publication or book being paper free?


1) I like getting SIGACT News on paper since
a) It reminds me to read it.
b) Reading on a screen means either my phone, which is awkward (especially for math), or my desktop (so I have to be AT my desktop).
I DO NOT think this is my inner-Luddite talking.
 
2) QUESTION: Will SIGACT News, JACM, and other ACM publications continue to have a page limit for articles? When I was the SIGACT News Book Review Column editor, and now as Open Problems Column editor, I have often had to ask the editor Can I have X pages this time? The answer was always yes, so perhaps there never really was a page limit. But is having no page limit good? Not necessarily. Having a limit may force you to write down only the important parts.

3) PRO: It's good for the ecology to not make so much paper. While this is certainly true, I think the world needs to rethink its entire consumer society to really make a difference for the ecology. In fact, I wonder if e-cars, carbon offsets, and paper-free products make us feel good without really helping much.

4) CON, but good timing: I recently had an open problems column with two co-authors. One of them is not in the ACM and is not in the community, but wanted to see a copy of the article. I have arranged to have a paper copy of that one issue sent to him. If I had published this column in 2024, I could not have done this. And saying Just go to link BLAH does not have the same impact as PAPER. I could have printed it out for him, but that just does not seem the same as having an official copy.
I DO think this is my inner-Luddite talking. Or his.

5) For quite some time computer science conference proceedings have not been on paper (there have been a variety of ways this is handled). Before that time the following happened a lot: I am in Dave Mount's office talking about something (e.g., who should teach what). He gets a phone call but motions that it will be short, so I should still hang out. While hanging out I pick up a RANDOM proceedings of the conference RANDOM and find the one or two articles in it about Ramsey theory and read them, or at least note them and read them later. That kind of RANDOM knowledge SEEMS less common in a paper-free age. But maybe not. I HAVE clicked around the web and accidentally learned things, like the facts I learned for my post on simulation theory here.

6) Similar to point 5: I used to go to the math library and RANDOMLY look at a volume of the American Mathematical Monthly or some other similar journal and look at some articles in it. But now that's harder, since they have stopped getting journals on paper and only get them electronically. To be fair, paper versions of the journals are EXPENSIVE.

7) In 1999 my grad student Evan Golub got his PhD, and he had to bring a PAPER copy of it to some office where they measured the margins of EVERY PAGE to make sure it was within university specs. Why? Because in an earlier era this was important for when the thesis was put on microfilm. Were they still doing that in 1999? I doubt it. Some of my younger readers are thinking OH, they didn't have LaTeX packages that take care of margins for you. Actually they DID have such packages, but even so, the requirement that the university literally measure the margins on EVERY PAGE was completely idiotic. I am happy to say that in 2007, when my student Carl Anderson got his PhD, nobody needed a paper version. I do not know when the rules changed, but I am glad they did.

8) The ACM should promote this paper-free change by doing a rap song like Progressive Insurance did here.

9) Recently I had a paper with two co-authors; all three of us, and some others, proofread it (I thought) very carefully. The referee accepted it but with a somewhat nebulous this paper needs a better proofreading. I then PRINTED IT OUT and read it AWAY FROM MY COMPUTER (the paper is on Overleaf) with a RED PEN, and I found LOTS of stuff to fix that we all missed before. So there are some advantages to getting OFF of the computer, though that may require PRINTING. (I also blogged about this topic here.)

Thursday, September 14, 2023

Mr. Jaeger and The Scroll

There were three major influencers in my educational journey: Mike Sipser, my PhD advisor at Berkeley and MIT; Juris Hartmanis, who founded the field of computational complexity and led me to it at Cornell; and Philip Jaeger, a math teacher at Millburn High School in New Jersey.

Mr. Jaeger
Perhaps your "scroll" will outlast both of us.

I took two math courses with Philip Jaeger: Algebra II and Probability. Mr. Jaeger (I still can't call him Phil) also ran the computer room, a narrow room with three teletype machines where we saved programs on paper tape and where, among other programming, we simulated poker hands for probability class. He truly set me up for my future career.

Mr. Jaeger was also the advisor of the computer club when I was president. 

That's me in the middle of the first photo, with Mr. Jaeger on my right.

Sometime during my high school years I took one of the rolls used in the teletype machine and wrote down a lengthy formula. According to Mr. Jaeger, the formula is for the area of a random triangle. I'm sure it made sense to me in high school.

The Scroll


The Scroll partially unscrolled

Another student, Cheryl, took the entire formula and fit it onto a single index card. Mr. Jaeger saved both the roll and the card.

The index card (actual size is 3x5 in)

Fast forward to 2023. Mr. Jaeger found the roll and the card. His son, an academic in his own right, suggested that they track Cheryl and me down. I'm not hard to find. After some emails, they invited me to visit and pick up the roll. Last weekend I was in New Jersey visiting family, so I did just that.

What great fun to talk to my old teacher, reminiscing about the high school people and times, and catching up after four decades. It was Mr. Jaeger who showed me the computer dating form that inspired me to create a version for our high school.

I gave Mr. Jaeger a copy of my book and he gave me the scroll and the index card. (Cheryl, now a lawyer, wasn't interested in them.) I safely brought it all home to Chicago along with all the memories. I dug up my old yearbook for the first two photos above. The scroll will indeed outlast both of us.

Mr. Jaeger, myself and the scroll.