Sunday, April 25, 2021

Ferrers Diagrams can be used to prove X theorems about partitions. What is X?

1978: I took an excellent ugrad course in combinatorics from James C Frauenthal (he sometimes wrote his name as the binomial coefficient (J choose F)) and he covered Ferrers diagrams. They are a nice way to prove equalities about types of partitions. See here for a definition and a few examples. I have this (possibly false) memory that there were LOTS of partition theorems proven nicely with Ferrers diagrams.

Fast forward to 2021:

2021: My TA Emily needs a topic to cover in Honors Discrete Math. I have this memory that there were LOTS of theorems about partitions proven with Ferrers diagrams. We look at many websites on Ferrers diagrams and find only TWO examples:

The number of partitions of n into k parts is the number of partitions of n into parts the largest of which is k.


The number of partitions of n into \le k parts is the number of partitions of n into parts the largest of which is \le k.
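Both identities follow from the same picture: transposing the Ferrers diagram (reading columns as rows) swaps "number of parts" with "size of the largest part." Here is a brute-force check of the first identity in Python; the code and the choice n = 10, k = 3 are mine, just for illustration:

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(p):
    """Transpose the Ferrers diagram: row i of the conjugate
    counts the parts of p that exceed i."""
    if not p:
        return ()
    return tuple(sum(1 for part in p if part > i) for i in range(p[0]))

n, k = 10, 3
into_k_parts = [p for p in partitions(n) if len(p) == k]
largest_is_k = [p for p in partitions(n) if p[0] == k]

# Conjugation is a bijection between the two sets.
assert sorted(conjugate(p) for p in into_k_parts) == sorted(largest_is_k)
print(len(into_k_parts), len(largest_is_k))  # both 8
```

Since conjugation is its own inverse, it is a bijection, and that one observation proves both identities at once.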

We DO find many theorems about partitions such as this corollary to the Rogers-Ramanujan theorem:

The number of partitions of n such that adjacent parts differ by at least 2 is the number of partitions of n such that each part is \equiv 1 mod 5 or \equiv 4 mod 5.

This is a HARD theorem and there is no Ferrers-diagram or other elementary proof.
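Though no elementary proof is known, the identity itself is easy to verify numerically for small n. A quick brute-force sketch in Python (the cutoff n < 30 is arbitrary):

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def gap_at_least_2(p):
    # Parts are non-increasing, so compare adjacent entries.
    return all(p[i] - p[i + 1] >= 2 for i in range(len(p) - 1))

def parts_1_or_4_mod_5(p):
    return all(part % 5 in (1, 4) for part in p)

for n in range(1, 30):
    a = sum(1 for p in partitions(n) if gap_at_least_2(p))
    b = sum(1 for p in partitions(n) if parts_1_or_4_mod_5(p))
    assert a == b, (n, a, b)
print("identity verified for n < 30")
```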

SO, I have one MEMORY but the reality seems different. Possibilities:

1) My memory is wrong. There really are only 2 examples (or some very small number).

2) There are other examples but I can't find them on the web. I HOPE this is true---if someone knows of other ways to use Ferrers diagrams to get partition results, please comment.


Thursday, April 22, 2021

The Million Dollar Sermon

Illinois Tech has one of the greatest origin stories for a university. In 1890 Frank Gunsaulus, a pastor on the south side of Chicago, gave a sermon where he said "If I had a million dollars I would build a school to provide students from all backgrounds meaningful roles in a changing industrial society". Philip Armour, a wealthy industrialist in the congregation, went up to Gunsaulus after the sermon and said, "if you give me five years for this school, I'll give you the million dollars". Thus started the Armour Institute of Technology which after some mergers became the Illinois Institute of Technology.

The "million dollar sermon" really happened, though the exact wording and even the exact year are lost to posterity or, as speculated, hidden in a cornerstone of one of the original campus buildings. 

In 1890 we were in the beginnings of the second industrial revolution, a revolution of communication, transportation, electrification and soon the assembly line. The first revolution happened a century earlier with mechanization and the steam engine. The third revolution was computers and automation, and we are now in the early parts of the fourth industrial revolution, one based on data and information. 

There are many parallels between 1890 and today. Like 1890, the private economy is dominated by a small number of large companies that have an outsized influence in our society. Like 1890, technology is changing faster than we can manage it. Like 1890, many workers are finding their skills quickly outdated.

Today Gunsaulus's words ring truer than ever. We more than ever need to provide students of all backgrounds meaningful roles in a changing technological society. 

Sunday, April 18, 2021

An investment puzzle and speculation as to why some think it's hard

 This is a Guest Post by David Marcus. He gives a puzzle and its solution, which is interesting, and then speculates as to why some people get it wrong. 

---------------------------------------------------------------------------

THE PROBLEM:

Investing Puzzle or Arithmetic Can Be Useful


The following is an example of investment results that I saw in an investment newsletter. There are two portfolios that use different strategies. Both portfolios start with $1 million twenty years ago and withdraw 5% each year. The idea is that you are retired and withdrawing money to spend. Not all years are shown in the tables.

Portfolio A

Year   Return  Withdrawal    Balance
2000   15.31%      57,655  1,095,445
2005    1.81%      59,962  1,139,273
2008  -12.65%      51,000    969,004
2009   34.26%      65,049  1,235,936
2010   11.94%      69,175  1,314,331
2015   -2.48%      64,935  1,233,764
2020   10.27%      66,935  1,271,765

Total Withdrawal: 1,685,190
Change in Balance: 27.18%

Portfolio B

Year   Return  Withdrawal    Balance
2000   -0.95%      49,524    940,956
2005    3.80%      44,534    846,154
2008  -20.11%      35,922    682,523
2009   18.27%      40,360    766,833
2010   11.57%      42,777    812,764
2015    0.99%      50,767    964,567
2020   13.35%      65,602  1,246,433

Total Withdrawal: 1,425,573
Change in Balance: 24.64%

Portfolio A has a final balance that is about 25,000 more than Portfolio B's and had about 260,000 more in withdrawals. Does the example lend credence to the Portfolio A strategy being better than the Portfolio B strategy?

---------------------------------------------------------------------------

THE ANSWER:

Investing Puzzle or Arithmetic Can Be Useful: Analysis


Summary: The two portfolios have about the same performance over the 20 years. The difference is mainly due to Portfolio A having a good year or years near the beginning before much money was withdrawn. The example merely shows that it is better to withdraw money after a gain rather than before.


Detailed Analysis:

The scenario is: Start with X = $1 million. Withdraw 5% a year.

Define "gain factor" to be 1 plus the percentage return. For example, if a portfolio returns 5%, then the gain factor is 1.05.

Let A_j, j = 1, ..., 20, be the gain factors each year for portfolio A. Let B_j, j = 1, ..., 20, be the gain factors each year for portfolio B.

The final amount in portfolio A is

   F = X * A_1 * 0.95 * A_2 * 0.95 * ... * A_20 * 0.95.

The final amount in portfolio B is

   G = X * B_1 * 0.95 * B_2 * 0.95 * ... * B_20 * 0.95.


From the "Change in Balance" values or the balances for year 2020, we see that F and G are almost the same:

   F = 1.271865 * X,
   G = 1.246433 * X.

But, as we learned in elementary school, multiplication is commutative, so

   F = X * 0.95^20 * \prod_{j=1}^{20} A_j,
   G = X * 0.95^20 * \prod_{j=1}^{20} B_j.

Since F and G are almost the same, the total gains (product of the gain factors) for the two portfolios are almost the same, i.e.,

   \prod_{j=1}^{20} A_j \approx \prod_{j=1}^{20} B_j.

Then what accounts for the big difference in the amounts withdrawn? Portfolio A must have had some good years near the beginning. (We see in the tables that Portfolio A did better in 2000 than Portfolio B.) So, all the example shows is that it is better to withdraw your money after your gains rather than before.

To take an extreme example, suppose an investment is going to go up 100% this year. It is better to take your money out at the end of the year (after the gain) than at the beginning of the year (before the gain). This is a triviality.

The example tells us nothing useful about the two strategies.


Note: The total gains aren't exactly the same, but the timing of the yearly gains is what is driving the results. We have (rounding off)

   (F - G) / 0.95^20 = 70942.81.

So, if there had been no withdrawals, the difference in the portfolio balances would have been about $71,000, much less than the $260,000 + $25,000 difference with the withdrawals.
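David's commutativity point is easy to see in a toy simulation: two portfolios with the same gain factors in a different order end with identical balances but different total withdrawals. The gain factors below are invented purely for illustration:

```python
# Each year: apply the gain factor, then withdraw 5% of the balance,
# matching the formula F = X * A_1 * 0.95 * ... * A_20 * 0.95.
gains = [1.40, 1.00, 1.00, 0.80]  # hypothetical yearly gain factors

def run(gain_factors, start=1_000_000):
    balance, withdrawn = start, 0.0
    for g in gain_factors:
        balance *= g
        w = 0.05 * balance
        withdrawn += w
        balance -= w
    return balance, withdrawn

final_a, wd_a = run(gains)                  # big gain first
final_b, wd_b = run(list(reversed(gains)))  # big gain last

# Same product of gains => same final balance (commutativity)...
assert abs(final_a - final_b) < 1e-6
# ...but the early-gain portfolio withdrew more along the way.
assert wd_a > wd_b
print(round(final_a), round(wd_a), round(wd_b))
```

The early-gain portfolio withdraws more in dollars even though its total gain is identical, which is exactly the effect driving the newsletter's example.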

---------------------------------------------------------

WHY IS THIS HARD FOR PEOPLE?

Many people have trouble with this puzzle. The difficulty may be that such people don't make a mental model (or a written model) of the process that is producing the balance. If you write down (or have in your head) a formula for the balance, then you see that the gain factors are independent of the withdrawal factors. That is, we could withdraw more or less money, or even deposit money, without affecting the gain factors we would use in the model. This then leads us to consider the gain factors on their own, and to recognize that the gain factors are the true measures of how the strategies perform.




Thursday, April 15, 2021

Ordering Beauty

First, congratulations to fellow complexity theorist and blogger Scott Aaronson for receiving the 2020 ACM Prize in Computing for "groundbreaking contributions to quantum computing". The prize is ACM's highest honor for mid-career researchers. Well deserved! 

Now back to our regularly scheduled post...

Every freshman at Cornell back in 1981 had to take two seminar courses, basically one-shot courses, usually in a humanities area, that required no prerequisites but lots of writing. I took my first course in philosophy. The instructor, a PhD student, at one point described his research, a philosophical argument that there is an intrinsic total ordering of beauty, say that Beethoven would always sit above the Beatles, no matter the beholder. I didn't believe him then and still don't today. A few months ago the Washington Post ran a story with the same theme entitled Maradona was great, and maybe the greatest. Can we make similar claims about artists?

Somehow we have this belief when it comes to conference submissions, that there is some perfect ordering of the submissions and a good PC can suss it out. That's not really how it works. Let's say a conference has an accept rate of 30%. Typically 10% of the submissions are strong and will be accepted by any committee. About half the submissions are just okay or worse and would be rejected. The other 40% of the submissions will be chosen seemingly randomly based on the tastes of the specific members of the program committee. Experiments in the NeurIPS and ESA conferences have borne this out.

Why not make the randomness explicit instead of implicit? Have the PC divide the papers into three piles: definite accepts, definite rejects and the middle. Take the third group and randomly choose which ones to accept. It will create a more interesting program. Also randomness removes biases; randomness doesn't care about gender, race and nationality, or whether the authors are senior professors at MIT or first-year grad students at Southern North Dakota.
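The proposed procedure is mechanical enough to sketch. The paper names below are placeholders; the split into piles would of course come from the PC:

```python
import random

def select_program(definite_accepts, middle, target_size):
    """Accept the clear winners, then fill the remaining slots
    uniformly at random from the middle pile."""
    slots = max(0, target_size - len(definite_accepts))
    lucky = random.sample(middle, k=min(slots, len(middle)))
    return definite_accepts + lucky

program = select_program(
    definite_accepts=["paperA", "paperB"],
    middle=["paperC", "paperD", "paperE", "paperF", "paperG"],
    target_size=4)
assert len(program) == 4 and "paperA" in program
```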

We put far too much weight on getting accepted into a conference given the implicit randomness of a PC. If we make the randomness explicit that would devalue that weight. We would have to judge researchers on the quality of their research instead of their luck in conferences.

Given that conferences, especially the virtual ones, have no real limits on the number of papers and talks, you might say why not just accept all the papers in the middle. Works for me.

Sunday, April 11, 2021

Is the following reaction to getting the first COVID shot logical?

 Alice works at a charity that puts together bag and box lunches for children.


They all wear masks and they are 12 feet apart and very careful, and nobody there has gotten COVID.

Then Alice gets her first COVID shot and says:


I am not going to work for that charity until I have had my second shot and waited 4 weeks so I am immune.


She is really scared of getting COVID NOW that she is on the verge of being immune.


Is that logical? She was not scared before. So does it make sense to be scared now? I see where she is coming from emotionally, but is there a logical argument for her viewpoint? I ask nonrhetorically.


bill g. 

Thursday, April 08, 2021

Quantum Stories

Scott Aaronson wrote last month about the hype over quantum computing. I thought I'd drop a few stories.

I was once asked to review a grant proposal (outside the US) that claimed it would find a quantum algorithm for NP-hard problems. I wrote a scathing review but the grant was funded because I failed to prove that it was impossible. I replied that they should fund my research to teleport people from Chicago to Paris because they couldn't prove I couldn't do it. I never got a response.

I was at an NSF sponsored meeting on quantum computing. I suggested, as a complexity theorist, that we need to explore the limits of quantum computing. A senior researcher said we shouldn't mention that in the report or it might hurt our chances of funding the field if they think quantum computing might not be a complete success.

I went to a Microsoft Faculty Research Summit which had a big focus on quantum computing. I complained of the quantum computing hype. My friends in the field denied the hype. Later at the summit a research head said that Microsoft will solve world hunger with quantum computing.

I was meeting with a congressional staffer who had worked on the National Quantum Initiative which coincidentally was being announced that day. I said something about high risk, high reward. He looked shocked--nobody had told him before that quantum computing is a speculative technology.

Quantum computing has generated a large number of beautiful and challenging scientific questions. Thinking about quantum has helped generate classical complexity and algorithmic results. But quantum computing having a real-world impact in the near or mid-term is unlikely. Most scientists I know working directly in quantum research are honest about the limitations and challenges in quantum computing. But somehow that message is not often getting to the next layers up, the policy makers, the research managers, the university administrators, the media and the venture capitalists. 

But who knows, maybe some quantum heuristic that doesn't need much entanglement will change the world tomorrow. I can't prove it's impossible.

Sunday, April 04, 2021

Do any NP-reductions use deep mathematics? Non-rhetorically

BILL: Let's say we didn't know that FACTORING NPC --> NP = coNP. Then which direction would we lean: FACTORING in P, or FACTORING NPC?

STUDENT: In P. After all, Number Theory is a deep subject and I can imagine some deep theorem in it leading to FACTORING in P.

BILL: That cuts both ways. I can imagine some deep theorem in NT being the key to showing FACTORING poly-reduces to SAT.

STUDENT: Deep mathematics HAS been used for algorithms. Factoring algs are one example. The graph minor theorem yielding MANY problems in P is another. Can you give me an example where deep mathematics has been used for an NPC reduction?

BILL: Oh. Gee. I do not know of any. 

STUDENT: If only you had some mechanism to ask the theory community. Maybe you could call it a web log, or weblog.

BILL: If only...

QUESTIONS
1) Are there any NPC reductions that use deep math? (I realize that the phrase `deep math' is not well defined.)

2) Are there other reductions that use deep math?

3) If P NE NP then:
For all epsilon > 0 there is no approx-alg for MAX3SAT which yields \ge (7/8 + epsilon)OPT.

For all delta > 0 there is no approx-alg for CLIQUE which yields \ge OPT/n^{1-delta}.

There is no approx-alg for SET COVER which yields a cover of size \le ((1-o(1)) ln n)OPT.

All of these proofs use  the PCP machinery or something similar. My impression is that the original PCP theorem, while long, hard, and impressive, did not use deep math. I have a vague memory of some paper or PhD thesis stating that the ONLY theorem needed was that a poly of degree d over a finite field has \le d roots. 

However, getting the optimal lower bounds seemed to use some deep math. But I am more ASKING than telling.
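One sanity check on where the 7/8 for MAX3SAT comes from: a uniform random assignment satisfies a 3-clause on three distinct variables unless all three literals come out false, which happens with probability 1/8, so 7/8 of the clauses are satisfied in expectation. Enumerating one clause in Python:

```python
from itertools import product

# Count satisfying assignments of the clause (x OR NOT y OR z)
# over all 2^3 assignments to its three distinct variables.
satisfied = sum(1 for x, y, z in product([False, True], repeat=3)
                if x or (not y) or z)
assert satisfied == 7   # only x=False, y=True, z=False fails
print(satisfied / 8)    # 0.875
```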

Thursday, April 01, 2021

Want to Buy a Theorem?

This is embarrassing to admit but after a few badly timed trades on GameStop options I find myself a bit tight on money. To raise some cash, I reluctantly decided to sell one of my prized possessions, one of my theorems.

For Sale: Boolean formula satisfiability cannot be solved in both logarithmic space and quasilinear time. For a more formal and slightly more general statement and a proof, see this paper.

Bidding starts at 12 BTC (about $705,000). 

The winning bid, upon verified payment, will receive:

  • The ability to give the theorem the name of your choice such as your own name, your best friend's mother's name or "Teddy McTheoremface".
  • A non-fungible token (NFT) attesting ownership of the theorem and the name you have chosen for it.
  • Anyone citing this result will be required to note that you own it and use the name you chose above. You cannot, however, limit the use of the theorem or receive compensation for its use. 
  • By virtue of owning this theorem you will have a Fortnow number of zero. This immediately gives you an Erdős number of 2. If you have previously written a paper with Paul Erdős then both of us will now have an Erdős number of 1.
Frequently Asked Questions

Q: Why this theorem?

A: The theorem is in one of my few solely authored papers and I can't afford to share the proceeds of the sale. 

Q: Don't Ryan Williams and others have stronger theorems?

A: The results are incomparable. Ryan gives bounds on a single algorithm with low time and space. My theorem allows different machines for the time and space bounds.

Also, to the best of my knowledge, Ryan's theorem is not for sale.

Q: Doesn't the proof of the theorem rely on other people's theorems, such as Nepomnjascii's? Shouldn't he get some of the value from this sale?

A: I'm not selling the proof of the theorem, just the theorem itself.

Q: If I purchase this theorem will I get to write next year's April Fools post?

A: No.