Sunday, July 30, 2023

Another Problem with ChatGPT

 I was giving a recruiting talk for my REU program and I had some slides with testimonials from students:


-------------------------------------------------------------------------

TESTIMONIAL ONE

This REU experience was greatly beneficial in expanding my knowledge and experience with machine learning. Dr. Gasarch, the mentors, my team, and the professors were all very supportive and encouraging, and I learned so much from them over the course of the program. The program was a perfect way to explore different research aspects and allow me to get a better idea of how research is conducted. I am very thankful for this experience.


TESTIMONIAL TWO

The experience of REU CAAR was excellent. I participated in some research before, yet this is the first time for me to do research in a group, which was great!

Though auction design as a topic was not familiar to me before, I learned it by reading several papers. Our program includes both mathematical and computer science components. That is nice as I am interested in both, and our group members divided the work so we all worked on stuff we cared about.

Aside from the research, the lunches and talks were interesting. Thanks to Professor Gasarch, his helper Auguste,  and all the mentors. I would recommend it to anyone interested in computer science or mathematics.
------------------------------------------------------------------

The students laughed and all said 

Those are obviously produced by ChatGPT.

They weren't. In fact, they were emailed to me by students in August 2022, before ChatGPT was a thing. I told the audience that; however, I doubt they believed me, and I didn't want to dwell on the point. The more you say it's not ChatGPT, the more it sounds like it is.

This incident is minor. But it portends a bigger problem: Will everything be suspect? How do you prove something was not produced by ChatGPT? For some things that might not matter, but for testimonials it matters that they come from a human being. For what other writings is it important that they came from a human being? Here are some examples, counterexamples, and questions:

1) A witness to a crime has his written statement done by ChatGPT. IF it's just helping him write up what he really saw, that's fine(?). IAMNAL (I am not a lawyer).

2) You write a novel but it's later found out that it's by ChatGPT. IF you are up front about this on the cover of the book AND the novel is good, I see no problem with this. But what about the other way around: it really IS written by you, but nobody believes you. And if nobody cares, then it will be even harder to convince people that it was written by you. Imagine the following:

AUDIENCE: The novel was clearly written by ChatGPT, but that's fine since it was a great novel and you provided the input.

YOU: No, it really was written by me.

AUDIENCE: (annoyed) Whatever you say.

3) When a celebrity writes a book we just assume it was ghostwritten (I do anyway). And I don't care: all I care about is whether the book is good or bad. But ghostwriters cost money. ChatGPT will allow us all to have ghostwriters. But I am thinking of the other direction: if a writer (celeb or not) actually writes a book without a ghostwriter or ChatGPT, it would be hard to convince us of that.

4) I wonder if the number of times it matters that it was not ChatGPT is small and getting smaller.

Wednesday, July 26, 2023

Paper Writing Before Overleaf

A Twitter question from a young researcher.

Reminds me of the time one of my children asked me how I drove places before GPS.

I'm not old enough to remember paper writing before computers. Collaboration mostly happened when people were in the same physical location. There were more single-authored and single-institution papers back then. You'd often work on a writeup when you were at the same conference, or even travel just to finish up a paper. 

By the late 80s people started to get comfortable sending LaTeX files by email. BibTeX wasn't popular yet, so LaTeX files tended to be self-contained. Mail programs back then added a ">" in front of any line beginning with "From", and since ">" in LaTeX's default text fonts is typeset as an inverted question mark, you'd often see "¿From" in papers from that time.

We'd just take turns working on the paper. If we were really sophisticated, we'd work on different sections at the same time using a master file with \include statements. We added notes directly into the LaTeX using comment lines beginning with %LANCE: Fix this!
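For concreteness, a hypothetical master file might have looked like this (the filenames and the note are made up for illustration):

\documentclass{article}
\begin{document}
% master file: each coauthor edits his own section file
\include{intro}      % one author works on intro.tex ...
\include{mainproof}  % ... while another works on mainproof.tex
%LANCE: Fix the constant in Lemma 2 before we submit!
\end{document}

Each \include pulls in its section from a separate file (and starts it on a new page), so two people could edit different sections without stepping on each other, as long as they remembered to mail around the updated files.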

Oh, and before GPS I used a physical map that I kept in the car. The biggest challenge was folding the map back up afterwards. My kids didn't believe me.

Saturday, July 22, 2023

"Never give AI the Nuclear Codes" Really? Have you seen Dr. Strangelove?

(I covered a similar topic here.)

 In the June 2023 issue of The Atlantic is an article titled:

                                    Never Give Artificial Intelligence the Nuclear Codes

                                    by Ross Andersen

(This might be a link to it: here. Might be behind a paywall.)

As you can tell from the title, the article argues against giving AI the ability to launch nuclear weapons.

I've read similar things elsewhere. 

There is a notion that PEOPLE will be BETTER able to discern when a nuclear strike is needed (and, more importantly, NOT needed) than an AI. Consider the following two scenarios:

1) An AI has the launch codes and thinks a nuclear strike is appropriate (but is wrong). It launches, and there is no override for a human to intervene.

2) A human knows in his gut (and from some of what he sees) that a nuclear attack is needed. The AI says NO IT'S NOT (the AI knows A LOT more than the human and is correct). The human overrides the AI (pulls out the plug?) and launches the attack.

Frankly, I think (2) is more likely than (1). Perhaps there should be a mechanism so that BOTH the AI and the human have to agree. Of course, both AIs and humans are clever and may find a way to override it.

Why do people think that (1) is more likely than (2)? Because they haven't seen the movie Dr. Strangelove. They should!

Wednesday, July 19, 2023

More on Pseudodeterminism

Back in May I posted about a paper that finds primes pseudodeterministically and Quanta magazine recently published a story on the result. Oddly enough the paper really isn't about primes at all.

Consider any set A such that 

  1. A is easy to compute, i.e., A is in P.
  2. A is very large, that is, there is a c such that for all n, the number of strings of length n in A is at least \(2^n/n^c\). If you throw a polynomial number of darts you are very likely to hit A.
Chen et al. show that for any such A, there is a probabilistic polynomial-time Turing machine M such that for infinitely many x in A, M on input \(1^{|x|}\) will output x with high probability.

The set of primes has property 1 by Agrawal, Kayal and Saxena and property 2 by the prime number theorem. Why focus the paper on primes though? Because deterministic prime number finding is a long-standing open problem and there aren't many other natural problems that have properties 1 and 2 where we don't know easy deterministic algorithms to find examples.
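To see property 2 concretely, here is a quick back-of-the-envelope check (my arithmetic, with convenient rather than optimal constants). By the prime number theorem, \(\pi(x) \sim x/\ln x\), so the number of primes of length exactly n, i.e., in \([2^{n-1}, 2^n)\), is
\[ \pi(2^n) - \pi(2^{n-1}) \approx \frac{2^{n-1}}{n \ln 2} \ge \frac{2^n}{n^2} \]
for all sufficiently large n, giving property 2 with c = 2. A uniformly random string of length n is then prime with probability at least \(1/n^2\), so \(n^3\) independent darts all miss with probability at most \((1 - 1/n^2)^{n^3} \le e^{-n}\).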

With the general result, we can think about oracle results. It's pretty easy to create an A such that A has property 2 and no deterministic polynomial-time algorithm with oracle access to A can find an element of length n on input \(1^n\) for infinitely many n. Since A is always in \(P^A\), we get property 1 for free. That would suggest that pushing the result to finding primes deterministically would require using properties of the primes beyond being large and easy.
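Here is how I'd guess such a construction goes (a sketch under my own assumptions, not taken from the paper): fix a deterministic oracle machine M running in time \(n^k\), simulate it on input \(1^n\) answering every oracle query NO, let Q be the set of queried strings and x the output, and define
\[ A \cap \{0,1\}^n = \{0,1\}^n \setminus (Q \cup \{x\}). \]
This is consistent with the simulated run, forces M to output a string not in A, and still leaves at least \(2^n - n^k - 1 \ge 2^n/n\) strings of length n in A once n is large. Interleaving all the machines across infinitely many lengths defeats each one infinitely often.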

Maybe not. Rahul Santhanam, one of the authors, tells me their proof doesn't relativize, though whether the result relativizes is an open question, i.e. whether there is an A fulfilling property 2 such that any pseudodeterministic algorithm with oracle access to A will fail to find more than a finite number of elements of A. It does seem hard to construct this oracle, even if we just want the pseudodeterministic algorithms to fail for infinitely many n. 

Nevertheless it seems unlikely you'd be able to use these techniques to prove a deterministic algorithm works for all large easy A. If you want to find primes deterministically, you'll probably need to prove some new number theory.

Sunday, July 16, 2023

Finding an answer to a math question: 1983 vs 2023.

In 1983, as a grad student, I knew that HALT \( \le_T \) KOLG but didn't know how to prove it. I asked around, and A MONTH later, through a series of connections, I got the proof from Peter Gacs.

See here for the full story and see here for a write-up of the proof.

In 2023, as a professor, I heard about a pennies-on-a-chessboard problem and wanted to know what was known about it. I blogged about it, and AN HOUR later I found what I wanted.

See here for my blog post asking about it and see here for my write-up of all that's known.

Why the difference from a MONTH to an HOUR?

1) In 1983 there was no Google. I don't recall if there were other search engines, but even if there were, there wasn't much to search. I do note that in 2023 Googling did not help me, though it may have helped my readers who pointed me to the information I wanted.

2) In 1983 email wasn't even that common, so the notion of just emailing such-and-such for the answer did not occur to me. (Also note that I am a last-blocker, see here.)

3) Here is a factor that should have been in my favor: the community of theorists was smaller, so we kind of knew each other and knew who was doing what. AH, but who is this "we"? I started at Harvard in 1980 but really only began doing theory in 1981 and didn't really know the players yet. However, I think the smallness of the community DID help Mihai Gereb (a grad student at Harvard) know to contact Peter Gacs to help me find the proof. Contrast: today the community is so large that I might not know the players (I didn't for the penny-chess problem), though with email and the blog (next item) I can reach out to people who DO know the players.

4) The blog: I have the ability to ASK several people at one time, and one of them might know. That worked in this case and in other cases, but it does not always work.

5) People without a blog can do something close to what I did: ask several people. And that can be more targeted. (ADDED LATER: a commenter pointed out that Math Overflow, Stack Exchange, and sites like that are also options. Often when I Google a question I have been taken to those sites. I have had less luck asking them directly, but that could just be me.)

QUESTION: Is it easier to find things out NOW or 40 years ago? I think NOW by far, but I do see some possible counterarguments. There is a balance: there were fewer places to look back then. Indeed, the answer to my question was NOT a journal article or a conference paper; it was another blog.

Lastly THANKS to my readers for pointing me to the sources I needed! Kudos! 

ADDED LATER: a link to the Erdős-Guy paper that I need to put here to aid one of my comments is here.

 

Thursday, July 13, 2023

Whither Twitter

Twitter tells me it's my twitterversary, 15 years since I first started tweeting. Not sure I'll make it to sweet sixteen.

No longer do tweets show up on this blog page; Twitter can't tell the difference between an AI engine slurping up tweets and its own display widget showing tweets on a weblog. Bill complains that he can no longer follow my Twitter feed, as he refuses to set up an account on the site and you can't read tweets without one. Also, I will soon lose access to TweetDeck unless I pay the $8/month, which wouldn't solve the other problems anyway. And all this ignores the fact that since Elon took over Twitter, the site has just become far less fun.

I have accounts on Mastodon and now Threads. The two combined have fewer than 100 followers, compared to the 5400 I have on Twitter. Not sure how many of those 5400 are bots or people who no longer read Twitter regularly.

For now my tweets get imported into Mastodon, and those now appear on this blog page. Eventually, once Threads gets proper web support and APIs, I expect it will become the new Twitter and I'll move there. We'll have to see.

Sunday, July 09, 2023

A further comment on Chernoff--- and the future of ....

Ravi Boppana recently did a guest post on Chernoff turning 100 for us here.

Consider this unpublished comment on that post:
 
 --------------------------------------------------
As we delve into the depths of Professor Chernoff's impressive centennial celebration, it strikes me that the most astounding aspect isn't the breadth of his influence, but the fact that his eponymous 'Chernoff Bound' remains as relevant today as it was when first conceived in 1952. It's not just a mathematical theorem - it's a testament to the timeless power of innovative thinking and a reminder that truly exceptional ideas can cross boundaries, transcending disciplines and even generations.

As statisticians, computer scientists, and mathematicians, we are not just the beneficiaries of Professor Chernoff's scientific legacy; we are the continuation of it. Every time we use Chernoff bounds in our work, we're not merely applying a theorem - we're participating in a story that began over 70 years ago, and will hopefully continue for many more.

So, as we say 'Happy 100th Birthday' to Professor Chernoff, let's also raise a toast to his contributions that have shaped our field and will continue to guide future generations. It's a living testament that the bounds of his impact are far from being reached. Here's to a legacy that defies the bounds, much like his own theorem!
--------------------------------------------------------

This SEEMS like an intelligent comment.

This IS an intelligent comment.

(One of the comments on this blog post points out that the ChatGPT comment is INCORRECT. Can a comment be INCORRECT and still INTELLIGENT? I think so if it contributes to the conversation.) 

Who wrote it? You can probably guess: ChatGPT. Lance asked ChatGPT for a comment and this is what he got. 

We have, for many years, often gotten bot-generated comments that pick out a few key words and then include a link to buy something. My favorite was

                                  Nice post on P vs NP. For a good deal on tuxedo's click here. 
 
I would like to think it was written by a human who thinks Lance will crack P vs NP and wants him to look good when picking up the Millennium  Prize. But of course I doubt that.

Of more importance is that the attempts to look like a real comment were pathetic and content-free. In the future (actually, the future is now) ChatGPT may be used to generate an intelligent comment that has a link at the end, or worse, in a place we don't notice. So we will need to be more vigilant. This has not been a problem for us yet, but I suspect it will be.


Wednesday, July 05, 2023

Automating Mathematicians

The New York Times published an article with the ominous title A.I. Is Coming for Mathematics, Too. We know by the work of Gödel and Turing that we can't automate mathematics. But can we automate mathematicians? Is AI coming for us?

Reading the article, I'm not immediately scared. The focus is on logical systems like the Lean theorem prover and SAT solvers.

Developed at Microsoft by Leonardo de Moura, a computer scientist now with Amazon, Lean uses automated reasoning, which is powered by what is known as good old-fashioned artificial intelligence, or GOFAI — symbolic A.I., inspired by logic. So far the Lean community has verified an intriguing theorem about turning a sphere inside out as well as a pivotal theorem in a scheme for unifying mathematical realms, among other gambits.

So Lean is being used mainly to logically verify known theorems. This could actually make life harder for mathematicians, if journals start requiring formal verification for submitted papers.

I'd love a higher-level AI: one that takes a journal-style proof and verifies it, or even better, takes a proof sketch, fills in the details, and writes it up in LaTeX. In other words, let me think of the high-level ideas and leave the messiness to AI.

I won't be holding my breath. Right now, generative AI has limits in its ability to reason, and reasoning is at the heart of mathematics.

On the other hand, AI now plays a mean game of chess and Go, which we also think of as reasoning. So maybe automating mathematicians is closer than we think. It might go further and start suggesting new theorems and proving them on its own.

Ultimately, as in most other fields, AI won't eliminate the need for mathematicians. But as with nearly every career moving forward, those who succeed will not be the ones who push back on AI but those who work smarter and harder to use AI as a tool to do even greater things. Best to think about how to do that now, before it's too late.

Saturday, July 01, 2023

Chernoff Turns 100

Guest post by Ravi Boppana

Herman Chernoff celebrates a milestone today: he turns 100 years old. 

We in theoretical computer science know Professor Chernoff primarily for his ubiquitous Chernoff bounds.  The Chernoff bound is an exponentially small upper bound on the tail of a random variable, in terms of its moment generating function.  This bound is particularly useful for the sum of independent random variables.   
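In symbols, one standard formulation: for any random variable \(X\), any real \(a\), and any \(t > 0\), Markov's inequality applied to \(e^{tX}\) gives
\[ \Pr[X \ge a] \le e^{-ta}\, \mathbb{E}\!\left[e^{tX}\right], \]
and choosing the t that minimizes the right-hand side yields the exponentially small tail bound.  For a sum \(X = X_1 + \cdots + X_k\) of independent random variables, the moment generating function factors as \(\mathbb{E}[e^{tX}] = \prod_i \mathbb{E}[e^{tX_i}]\), which is what makes the bound so effective in that case.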

Many, many results in theoretical computer science use Chernoff bounds.  For one set of examples, Chernoff bounds are employed in the analysis of algorithms such as Quicksort, linear probing in hashing, MinHash, and a randomized algorithm for set balancing.  For another example, Chernoff bounds are used to reduce the error probability in complexity classes such as BPP.  These examples merely scratch the surface of the wide-ranging impact that Chernoff bounds have had.
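To spell out the BPP example with one standard calculation (the constant 1/36 below is a convenient choice, not the best possible): a BPP algorithm errs with probability at most 1/3 on each run, so run it k times independently and take the majority answer.  If \(X\) counts the erroneous runs, then \(\mathbb{E}[X] \le k/3\), and the majority is wrong only if \(X \ge k/2\).  Applying the multiplicative Chernoff bound \(\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu/3}\) with \(\mu = k/3\) and \(\delta = 1/2\) gives
\[ \Pr\!\left[X \ge \tfrac{k}{2}\right] \le e^{-k/36}, \]
so \(k = O(\log(1/\varepsilon))\) repetitions drive the error below any desired \(\varepsilon\).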

Professor Chernoff introduced the Chernoff bound in his seminal paper from 1952.  Chernoff credits another Herman (Herman Rubin) for the elegant proof of the bound in this paper.  Similar bounds had been established earlier by Bernstein and by Cramér.

In his distinguished career, Chernoff was a professor for decades at Stanford, MIT, and ultimately Harvard.  In May, Harvard proudly hosted a symposium in honor of Professor Chernoff's centenary, which he attended.  The photo above shows him at the symposium, looking as cheerful as ever (photo credit: Professor Sally Thurston). 

Beyond his remarkable research accomplishments, Professor Chernoff has passionately guided an entire generation of exceptional statisticians.  According to the Mathematical Genealogy Project, he has advised 26 PhD students, leading to a lineage of 288 mathematical descendants.  Chernoff himself earned his PhD at Brown University in 1948 under the supervision of Abraham Wald.  

Professor Chernoff and his wife, Judy Chernoff, have been married for more than 75 years.  A Boston TV news story said that the Chernoffs are believed to be the oldest living couple in Massachusetts.  At the symposium in May, Professor Chernoff doubted the claim, though he had previously acknowledged that it might be true.  Maybe his cherished field of statistics can be used to estimate the likelihood of the claim.   

On this extraordinary milestone day, we extend our heartfelt congratulations and warmest wishes to Professor Chernoff.  Happy 100th birthday, Professor Chernoff!  Mazel tov.