Sunday, October 30, 2022

What was the recent Nobel Prize in Physics really about? (Guest Post)

David Marcus was a Math major a year ahead of me at SUNY Stony Brook (he graduated in 1979, I graduated in 1980). He then got a PhD in Math from MIT, and he is a reader of this blog. Recently he emailed me that he thinks the current Nobel Prize winners in Physics do not understand their own work. Is it true? Let's find out!

------------------------

(Guest blog from David Marcus)

2022 Nobel Prize in Physics Awarded for Experiments that Demonstrate Nonlocality

The 2022 Nobel Prize in Physics was recently awarded to experimenters who demonstrated that the world is nonlocal. The curious thing is that neither the writers of the Nobel Prize press release nor the recipients seem to understand that this is what they demonstrated.


For example, the press release (see here) says: "John Clauser developed John Bell's ideas, leading to a practical experiment. When he took the measurements, they supported quantum mechanics by clearly violating a Bell inequality. This means that quantum mechanics cannot be replaced by a theory that uses hidden variables." That is not what the experiments mean, and the statement is false.

The word "locality" means that doing something here cannot instantly change something over there.

The experimental setup is the following: You prepare two particles, A and B, and send them in opposite directions so that they are far apart. You do an experiment on particle A while, at the same time, a faraway colleague does an experiment on particle B. If you and your colleague perform the same experiment, then, from your experiment on A, you can predict with certainty the result of your colleague's experiment on B (and vice versa).
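A concrete realization (the standard textbook example, not spelled out in the post): a pair of spin-1/2 particles prepared in the singlet state

    |psi> = (1/\sqrt{2}) ( |up>_A |down>_B - |down>_A |up>_B ).

If both sides measure spin along the same axis, the outcomes are always opposite, so the result at A determines the result at B with certainty.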

In a paper in 1935, Einstein, Podolsky, and Rosen pointed out that, assuming locality, the experimental results at A and B must be determined by the source that prepared the particles. They didn't actually say, "assuming locality", but they implicitly assumed it. (If you disagree with them, please offer an alternative.)

In 1964, John Bell published his paper. In it, he considered three of the experiments that could be done on the particles A and B. Assuming the results are determined by the source (which follows from Einstein, Podolsky, and Rosen's argument), he derived an inequality on the correlations between the results of the three experiments on the two particles. The math is simple; for details, see here.
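To give the flavor of the math (a sketch of one standard simplified form of the argument; the note linked above may present it differently): suppose the source fixes answers v(a), v(b), v(c) in {-1,+1} for particle A, with particle B carrying the negations (this reproduces the perfect anti-correlation seen when both sides run the same experiment). Among any three +/-1 values, at least two must be equal, so over many runs

    P(v(a)=v(b)) + P(v(b)=v(c)) + P(v(a)=v(c)) >= 1.

Each term equals the observed probability that the results at A and B disagree when the two sides use those two settings. For spin measurements on the singlet state with three settings 120 degrees apart, quantum mechanics predicts each of those probabilities is 1/4, for a total of 3/4 < 1. No source-determined assignment can reproduce that.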

The Nobel Prize winners did experiments, and their results violated Bell's inequality (or similar inequalities). Hence, the world is nonlocal.

The simplest theory that agrees with experiment is Bohmian Mechanics. This is a deterministic theory of particles whose motion is governed by a wave (the wave function being the solution of the Schrödinger equation). Of course, Bohmian Mechanics is nonlocal, as is the world.
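For those who want to see the formula (the standard form of the theory, added here for concreteness): the configuration Q = (Q_1, ..., Q_N) of the N particles evolves by the guiding equation

    dQ_k/dt = (\hbar / m_k) Im( \nabla_k \psi / \psi ), evaluated at (Q_1, ..., Q_N),

while \psi itself evolves by the Schrödinger equation. The nonlocality is visible in the formula: the velocity of particle k depends on the instantaneous positions of all the other particles through \psi.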

Thursday, October 27, 2022

The Media Coverage of the Matrix result is Terrible (though not worse than usual)

BILL: A computer program (or an AI or an ML or whatever) found a BETTER way to do matrix mult! It's in the same spirit as Strassen. I've always wondered if Strassen is practical, since it is simple and computers have come a long way since 1969, though I suspected not (I WAS WRONG ABOUT THAT). So I blogged about it and asked if Strassen will ever be used/practical (that post is here).

READERS: Uh, Bill, (1) Strassen IS used and practical, and (2) the new algorithm only works in GF(2). (Lance did a post about the new algorithm where he makes this explicit, here.) Some readers claimed it was GF(2^k) and some that it was fields of characteristic 2. In any case, NO, it is not a general algorithm.

BILL: There is good news, and there is what others might consider bad news, though I do not.

GOOD NEWS: I learned that Strassen IS practical and used, which I did not know. 

GOOD NEWS: I learned that I was WRONG about the new algorithm, since I had just assumed it worked in general, and I updated the post so others would not be deceived.

BAD NEWS: Darling asked if I was embarrassed to be wrong. If I were embarrassed that easily, I would have quit blogging in 2009.

DARLING: So Bill, how did you get it so wrong?

BILL: Well, obviously my bad for not doing my due diligence. But that's not what's interesting. What's interesting is that if you read the popular-press articles about it, you would have NO IDEA that it only works mod 2. It's like reading that quantum computing will solve world hunger.

DARLING: It won't?

BILL: No it won't. 

DARLING: I was being sarcastic. 

BILL: Anyway, the coverage pushed two points:

a) IMPRESSIVE that a computer could FIND these things that humans could not. This is TRUE. (Gee, how do I know that? The Gell-Mann Effect: people who are disgusted by the mistakes in a newspaper article on a topic they know about will later assume that the other articles are fine. SHOUT OUT to Jim Hefferon, who told me the name Gell-Mann Effect and left a comment with a pointer; the original version of this post had a BLANK there.)

b) The algorithm is practical! They did not quite say that, but it was implied. And certainly there was NO mention of it only working in GF(2). I was fooled into thinking that it might be competitive with Strassen.

READERS (of this blog entry, I predict): Uh, Bill, the popular press getting science news wrong and saying it's more practical than it is probably predates the Bible. I can imagine the Cairo Times in 2000 BC writing: Scientists discover that in any right triangle with sides a, b, c we have a^2 + b^2 = c^2, and this will enable us to build food silos and cure hunger. In reality they knew that the 3-4-5 triangle was a right triangle, were nowhere near a proof of the general theorem, and I doubt it would have helped cure hunger.

BILL: This time the news was REALLY CLOSE to what I do (if R(5) is ever found by a computer and the media claims it's practical, I'll either write a very angry blog post or repost my April Fools' Day article on Ramsey Theory's application to History), AND I posted incorrectly about it. So, to quote many a bad movie:

THIS TIME IT'S PERSONAL!

Monday, October 24, 2022

Cheating in Chess and in Class

On the 24th move of the second game of the 1978 World Chess Championship, a cup of blueberry yogurt was delivered to the defending champion Anatoly Karpov, who offered a draw shortly thereafter. The challenger Viktor Korchnoi claimed the flavor of the yogurt was a coded message to Karpov, and later in the match all food deliveries had to be decided on in advance. The good old days.

With computer chess programs now far more powerful than humans, chess cheating has become far more common, and it came to a head last month with the controversy between Magnus Carlsen and Hans Niemann. Did Niemann cheat in his win over Carlsen in St. Louis, or was it just a rare upset? How can we tell?

This brings up cheating by students in class. Reports and statistics show that cheating has increased over the last few years. The pandemic played a role, but a good rule of thumb is that the pandemic didn't change behaviors so much as accelerate changes already in progress. Technology has made it easier to cheat. It's difficult, if not impossible, to create a homework problem whose answer someone couldn't just look up. Sites like Chegg provide solutions to all sorts of problems, and there are many sites where you can hire someone to write a paper or do a project for you. Advances in generative AI, like GPT-3 and GitHub Copilot, will soon make cheating as easy as clicking a button.

But it's more than technology. As students view a university education less as learning and more as getting credentials for a job, the inhibitions against cheating disappear. And while the vast majority of students don't cheat significantly, it's hard for anyone to avoid using Google when they get stuck on a problem.

We can continue to use technology to fight technology in an ever-growing arms race to catch cheaters, but it can feel like a losing war. We should take solace that the students who work hard solving problems and projects will be the ones who succeed in life.

Thursday, October 20, 2022

Alpha Tensor

In a recent post, Bill used the announcement of a new AI-discovered matrix multiplication algorithm to discuss the practicality of Strassen's famous algorithm. For this post I'd like to focus on Alpha Tensor, the algorithm behind the new algorithm: what it has actually accomplished and what it means for us theorists.

To multiply two 2x2 matrices in the usual way you need eight multiplication steps. In 1969 Strassen surprised the world by showing how to multiply those matrices using only seven multiplications.

You can recurse on larger matrices. For 4x4 matrices you can use 7^2 = 49 multiplications instead of the naïve 64. In general, for nxn matrices you need roughly n^{\log_2 7} ≈ n^{2.81} multiplications.
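For readers who haven't seen them, here are Strassen's seven products for the 2x2 case, written as a Python sketch (scalar entries for readability; in the recursion the same formulas are applied to matrix blocks; the variable names are mine):

    def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
        # Seven multiplications instead of the naive eight.
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        # Recombine using only additions and subtractions.
        c11 = m1 + m4 - m5 + m7
        c12 = m3 + m5
        c21 = m2 + m4
        c22 = m1 - m2 + m3 + m6
        return c11, c12, c21, c22

A quick check: strassen_2x2(1, 2, 3, 4, 5, 6, 7, 8) returns (19, 22, 43, 50), the same answer the usual eight-multiplication method gives.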

No one has found an algorithm for 4x4 matrices that uses fewer than the 49 multiplications you get from recursing on Strassen. Alpha Tensor does so for the special case of working over GF(2), where addition and subtraction are interchangeable (1 + 1 = 0, so -x = x). Their algorithm does not work for general fields such as the real numbers.

Here's the full table of Alpha Tensor results from the Nature paper for multiplying an nxm matrix by an mxp matrix. The Modular column is for GF(2) and the Standard column is for general fields. Alpha Tensor does improve on the best known algorithms for general fields for specific sizes, like multiplying a 3x4 matrix by a 4x5 matrix. Much of the press failed to make this distinction for 4x4 multiplication, leading to some confusion.


What does this mean for theory? Recursing on 4x4 matrices now reduces the time for matrix multiplication to roughly n^{2.78}, nowhere close to the best theoretical upper bound of about n^{2.37}. The Alpha Tensor result may be more practical, though time will tell.

Shortly after the Alpha Tensor announcement, Manuel Kauers and Jakob Moosbauer reduced the 5x5 case over GF(2) to 95 multiplications. Nice to see the last word isn't by machine (yet!), but that shouldn't reduce the excitement over Alpha Tensor. Often we see a breakthrough followed by a small improvement. Note that 95 multiplications for 5x5 matrices won't give a faster asymptotic algorithm for nxn multiplication than Strassen.
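A quick sanity check on the exponents (a few lines of Python; the numbers are easy to verify by hand):

    import math
    print(math.log(7, 2))    # Strassen's 2x2 base case: ~2.807
    print(math.log(47, 4))   # Alpha Tensor's 4x4 base case over GF(2): ~2.777
    print(math.log(95, 5))   # Kauers-Moosbauer's 5x5 base case: ~2.829, still above Strassen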

What excites me the most is not the algorithm, but the algorithm to find the algorithm. Alpha Tensor uses the tools that AlphaZero used to play Chess and Go to search the large space of potential algorithms using Monte Carlo tree search: roughly, searching at random while learning and updating the probabilities that guide the search. Before machine learning, we had few good approaches to searching large combinatorial spaces of this nature.
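For readers who haven't seen Monte Carlo tree search, here is a toy sketch of the generic loop (plain UCT on a made-up problem: build a bit string, scoring a finished string by its number of ones). Everything here is illustrative only; AlphaTensor's real version searches over tensor decompositions and replaces the random playouts and uniform choices below with a learned neural network.

    import math, random

    N = 20  # toy problem: choose 20 bits, one per tree level

    class Node:
        def __init__(self, bits):
            self.bits = bits      # choices made so far
            self.children = {}    # next bit (0 or 1) -> child Node
            self.visits = 0
            self.total = 0.0      # sum of playout scores seen through this node

    def ucb(parent, child, c=1.4):
        # UCB1: average score (exploitation) plus an exploration bonus.
        if child.visits == 0:
            return float('inf')
        return child.total / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

    def rollout(bits):
        # Simulation: finish the string at random and score it.
        return sum(bits) + sum(random.randint(0, 1) for _ in range(N - len(bits)))

    def mcts(iterations=2000):
        root = Node([])
        for _ in range(iterations):
            node, path = root, [root]
            # 1. Selection: descend while the current node is fully expanded.
            while len(node.bits) < N and len(node.children) == 2:
                node = max(node.children.values(), key=lambda ch: ucb(node, ch))
                path.append(node)
            # 2. Expansion: add one untried child (unless the string is complete).
            if len(node.bits) < N:
                bit = random.choice([b for b in (0, 1) if b not in node.children])
                node.children[bit] = Node(node.bits + [bit])
                node = node.children[bit]
                path.append(node)
            # 3. Simulation: random playout from the new node.
            value = rollout(node.bits)
            # 4. Backpropagation: update statistics along the path.
            for n in path:
                n.visits += 1
                n.total += value
        return root

The four steps (selection, expansion, simulation, backpropagation) are the skeleton that AlphaZero-style systems build on; the learning part replaces the uniform randomness with trained probabilities.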

In general, most new algorithms come from new approaches, not just from the kind of structural decomposition we see in matrix multiplication, so theorists won't be out of a job anytime soon. Nevertheless, this is another lesson that machine learning has dramatically improved our ability to search through a large number of possibilities for a specific solution.

The Alpha Tensor Nature paper was submitted in October of 2021. A year is an eternity in the ML world. I wonder what is happening now that we don't know about.

Tuesday, October 18, 2022

BYTE LYNX: an awesome video game / Am I an influencer?


Tucker Bane is a friend of mine who has an AWESOME video game available called BYTE LYNX.

I am curious: can I be an INFLUENCER?

Let's find out!


At the link below there are:


a) Reviews of the game.

b) Videos of the game being played.

c) The ability to purchase and download the game.


Link is here



Thursday, October 13, 2022

How Not to Pass a Polygraph Test


Many years ago I was asked to serve on an advisory board for an organization that did confidential research. To be on the board I had to have US top-secret clearance. The first step was filling out a lengthy form that asked for deep details about every aspect of my life. Then there were interviews with me and with many people I had interacted with, especially internationally, and I have many international colleagues. Eventually I got past these steps.

The final step required taking a polygraph (lie-detector) test. So I flew to Baltimore to visit a nondescript office building near the airport. I failed the test. Twice more I went to Baltimore and failed those tests as well.

Just to be clear, I never tried to falsify or hide information on these tests. In one case I was asked, "Do you live in Atlanta?" I said no. The person administering the test stopped and said I had put my address down as Atlanta. I said my mailing address was Atlanta, but (at the time) I lived just north of the border in Sandy Springs. She said I should use Atlanta for the test; in other words, I should lie. The test didn't go well after that.

In another case, I was asked if I had ever been arrested. For the record, I have never been arrested, but the answer came up as inconclusive. The administrator, a different one than before, trusted the machine more than me, and the rest of the day didn't go well.

Perhaps the test wasn't meant just to check whether I was telling the truth, but also to test my ability to keep a secret. At least that would better explain why I failed three times. More likely I took the questions too literally, a product of a mathematician's mind.

I never joined the advisory board, but that wasn't the worst of it. In 2014 the Chinese hacked into the US Office of Personnel Management, taking information from, among others, those who had applied for security clearance. It's the main reason I keep security freezes with the credit bureaus.

Sunday, October 09, 2022

Will Strassen's Matrix Mult Alg ever be practical?

All time bounds are asymptotic and really big-O.

Recall that Strassen found a clever way to multiply two 2x2 matrices with 7 mults (and lots of adds), leading to a matrix mult algorithm that runs in time n^{\log_2 7} = n^{2.807...}.


Recently (see here) a DeepMind AI found a way to multiply two 4x4 matrices with 47 mults (and lots of adds), leading to a matrix mult algorithm that runs in time n^{\log_4 47} = n^{2.777...}. NOTE ADDED: The new algorithm only works over GF(2) for 4x4 matrices.

Much better is known, see our blog posts here and here.


The more advanced algorithms are complicated and have large constants, so they will never be practical. But Strassen's result, and now the new algorithm, SEEM to me like they could be practical.

(ADDED LATER: Many of the comments inform me that Strassen IS practical and IS being used. Great! Now we know!)

Thoughts about Strassen that also apply to the  new algorithm. 

1) n has to be large for Strassen to give an improvement. But as we deal with larger data sets, the value of n is getting larger.

2) People are mostly interested in sparse matrices, for which there are better methods. I've heard that for a while, but is it still true? I thought ML used dense matrices.

3) Strassen is hard to code up. Actually, it doesn't look that hard to code up; however, I have never tried, so maybe there are subtle points.

4) Strassen only works directly on matrices of size 2^n x 2^n. You can pad matrices out, but that might kill whatever time advantage you get. (The new alg only works on 4^n x 4^n.)

5) Strassen uses recursion, and there is the hidden cost of recursion. I think that is a thing of the past, and our younger readers may not know what I am talking about.

6) (This is obvious.) The recursion would only go down to a certain level, and THEN you would use ordinary matrix mult. This may also add time. (Points 4 and 6 are illustrated in the sketch after this list.)
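Here is the sketch promised in point 6 (mine, and only a sketch: it assumes two square matrices of the same size, and the cutoff value is a made-up placeholder that you would tune in practice). It illustrates points 4 and 6: pad odd dimensions with a row and column of zeros rather than demanding a power of 2, and fall back to ordinary matrix mult below the cutoff.

    import numpy as np

    CUTOFF = 128  # placeholder: below this size, naive multiplication wins

    def strassen(A, B):
        n = A.shape[0]
        if n <= CUTOFF:
            return A @ B  # point 6: switch to ordinary matrix mult at the base
        if n % 2 == 1:    # point 4: pad odd sizes rather than requiring 2^n x 2^n
            A = np.pad(A, ((0, 1), (0, 1)))
            B = np.pad(B, ((0, 1), (0, 1)))
        m = A.shape[0] // 2
        A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
        B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
        M1 = strassen(A11 + A22, B11 + B22)
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        C = np.empty_like(A)
        C[:m, :m] = M1 + M4 - M5 + M7
        C[:m, m:] = M3 + M5
        C[m:, :m] = M2 + M4
        C[m:, m:] = M1 - M2 + M3 + M6
        return C[:n, :n]  # strip any padding we added

A correctness check: for random square A and B, np.allclose(strassen(A, B), A @ B) should hold.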


I suspect that 2 and 4 are the most important reasons Strassen (or the new algorithm) is not practical, BUT I would like to hear your thoughts.

Does any package NOW use Strassen's Algorithm?

Side Note: I like to ask students if they think there is a better-than-cubic algorithm for matrix mult. They do not. Then I show it to them and tell them THIS is why LOWER BOUNDS are hard: you have to show that NO clever person will ever find a trick you hadn't thought of.


Thursday, October 06, 2022

Art and Technology

Last weekend I went to one of Chicago's jewels, the Art Institute, and saw the opening of a new exhibit by Berlin-based artist Josephine Pryde entitled The Vibrating Slab, referring mainly to the phone that constantly tries to gain our attention. The exhibit used photographs and sculptures to tie smartphones to prehistoric rocks. No pictures here because, ironically, they didn't allow us to take photos with our "slabs".

After I saw the Pryde exhibit, I went to see the artist herself give a presentation. She related a story about going to the movies and seeing Top Gun: Maverick without knowing it was a new release. Tom Cruise, who controlled a computer with hand movements in Minority Report, goes old-school in Maverick. Cruise and several young actors, through various plot contrivances, fly 20th-century planes in a movie that could have taken place in the 90s. According to IMDb, at the insistence of Tom Cruise, minimal green-screen and CGI aerial shots exist in the film, and even the close-up cockpit shots were taken during real in-flight sequences. Old school indeed. Kind of the opposite of, say, the Disney series The Mandalorian, filmed on a soundstage with everything generated by CGI.

Pryde's exhibit looked at our interaction with technology as art. Upstairs from Pryde's exhibit was art from technology: a series of prints that David Hockney created on another slab, the iPad, in Normandy during the early days of the Covid pandemic.

No. 340, 21st May 2020 - David Hockney

On the way from Pryde's exhibit to the lecture, I passed through the Art Institute's Impressionism collection and compared real Monets with the fake one I created with Dall-E. Monet manages to capture a detailed scene with broad brush strokes; if you zoom in, the detail disappears. Dall-E can't quite pull that off.

Vétheuil by Monet

Monet Dagstuhl by Dall-E


Tuesday, October 04, 2022

Is it okay for a paper or book to say "for more on this topic see Wikipedia entry BLAH"?

One of the proofreaders for Computational Intractability: A Guide to Algorithmic Lower Bounds (available here) made the following objection, which raises some questions.

I object to telling readers to see a Wikipedia entry. Wikipedia is marvelous, but it is unstable. I have been led astray by short-lived editorial changes made by trolls.

The proofreader is surely correct that "See Wikipedia entry X" should be minimized. And indeed, I have gone through all of the cases we had of such things and tried to minimize them. But there are times when there seems to be no way around it. Or maybe there is, but I can't see it.

a) I want to refer to the set of problems that are (exists R)-complete. The ONLY list I know of is on Wikipedia here.

b) I want to discuss the complexity of the video game Braid. There is a nice Wikipedia page about the game, here. There are some sites that have videos of the game being played, but not really an explanation of it. I DID find a site that looks pretty good, here, but is that site more stable than the Wikipedia entry? There did not seem to be an official site. (I had the same issue with the 15-puzzle and some other puzzles that do not seem to have a natural official site.)

c) I want to refer the reader to a list of algorithms for discrete log. Wikipedia has a great page on this, here. Is there a good article that does the same? Is it behind a paywall?


I tend to think that the Wikipedia pages above are stable and accurate. It helps that they are not on controversial topics. They should be fine. Articles behind paywalls are much worse. As for articles on authors' websites: are they more or less stable than Wikipedia?