Wednesday, April 26, 2023

Comic Book Alignment

Talk about AI "alignment" makes me think of a time in the 1950s when a group of companies decided to create an industry organization to self-govern their work, to make sure they followed the values of the time and avoided government oversight. Of course, I'm talking about the Comics Code Authority (CCA).

Fueled by psychiatrist Fredric Wertham's book Seduction of the Innocent and a series of U.S. Congressional hearings, the comic publishers realized they needed to police themselves and formed a trade group, the Comics Magazine Association of America (CMAA). The CMAA created the Comics Code Authority (CCA) in 1954 to enforce a code of content guidelines that comic book publishers would adhere to. The Comics Code prohibited explicit violence, sexual content, and other adult themes in comic books, and mandated a "positive portrayal" of authority figures and institutions. The CCA seal, a small stamp indicating that a comic book had been reviewed and approved by the organization, became a requirement for distribution by most newsstands and retailers, pushing many publishers to follow the code.

I started reading comic books in the 1970s with the code in full swing. It was not a golden time for comic books, with mostly bland, straightforward stories, and I gave them up as I went into high school. In college in the '80s, a friend brought me back into comics with some series, like Watchmen, that had given up the code and the seal. I started reading comics voraciously, so much that I had to go cold turkey in grad school so I could focus on research. The code itself was abandoned in 2011 after even Archie Comics gave up using the seal.

There's not a direct parallel between comic book writers and large language models, but the principle is the same. If you try to enforce a collection of values, you will get blander, less interesting output. I'm not saying that all alignment is a bad idea, but that you need to realize you will lose something when you do.

Sunday, April 23, 2023

Thoughts on Gordon Moore

Gordon Moore passed away on March 24, 2023. He was 94 years old.

He is best known for the article Cramming more components onto integrated circuits, which appeared in the now-defunct magazine Electronics, Volume 38, No. 8, April 19, 1965. Do you need to track it down in the basement of your library? No. It's here and here. I wonder if Moore would have predicted that his article would be easily available over 50 years later. Or is it? Link rot is a serious problem, so you might want to download it to your local files. Note that the first link is some sort of official version and the second is my local copy. It's not clear which link will rot first. The article also has an addendum: an interview with Moore that was done later.

In the article Moore said that the number of components-per-circuit (I think that means chip) would double every year. Moore credits Dave House with modifying it to "doubling every 18 months" and Carver Mead with calling it "Moore's Law". Later it came to be quoted as computer SPEED would double every 18 months. We will take this to be Moore's Law in this blog post.

Is Moore's law dead? Browsing Google, the answer seems to be that it is slowing down but not dead yet. (IDEA for a comedy sketch: redo the Monty Python Dead Parrot sketch about the death of Moore's law.)

If Moore had 1 penny in April 1965 and it doubled every 18 months, then how rich would he be now? How rich was he in April 2022? Compare the two numbers.
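A quick back-of-the-envelope in Python, taking "now" to be April 2023:

    # One penny in April 1965, doubling every 18 months.
    def dollars(years_elapsed):
        doublings = years_elapsed * 12 / 18
        return 0.01 * 2 ** doublings

    print(f"April 2022 (57 years): ${dollars(57):,.2f}")  # 38 doublings: about $2.7 billion
    print(f"April 2023 (58 years): ${dollars(58):,.2f}")  # about $4.4 billion

Roughly $1.6 billion of the total arrives in that final year alone, which is the point of comparing the two numbers.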

Thursday, April 20, 2023

Health Tech

On Tuesday, at the behest of an alumnus, I spent the afternoon at HIMSS, a large health tech conference being held at the big convention center in Chicago. When I think of Health Tech, I imagine fancy medical devices, but most of the exhibitors were focused on software solutions.

Cybersecurity had the biggest theme area, no surprise given the devastating role ransomware has played at some hospital chains. The second largest theme area focused on Interoperability. Just a few years ago, the vast majority of medical data was transferred via fax. A few companies, like Epic Systems, dominate the electronic health records space and don't share nicely. There's a relatively new standard, FHIR, for transferring medical data, making it easily accessible via APIs while keeping it secure. Hopefully, we can finally kill off the fax machines in doctors' offices. Patient Engagement was the other big theme area.
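To give a flavor of what "accessible via APIs" means, here is a minimal sketch of reading a record over FHIR's REST interface; the server URL and patient id below are placeholders, not a real endpoint.

    # Minimal FHIR read: resources are JSON fetched over HTTP.
    import requests

    FHIR_BASE = "https://fhir.example.org/R4"  # hypothetical server for illustration

    def get_patient(patient_id):
        """Fetch a FHIR Patient resource as JSON."""
        resp = requests.get(
            f"{FHIR_BASE}/Patient/{patient_id}",
            headers={"Accept": "application/fhir+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    print(get_patient("example").get("name"))  # structured name entries, no fax required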

Of course, the big discussion topics are about how AI will change health care. For example, the advances in electronic records have led to doctors spending far too much time entering data instead of seeing patients. AI could make data entry quicker, easier, or perhaps even unnecessary. AI could also help with triage and initial diagnoses, extending capabilities in a staff-limited environment and helping bring down health-care costs. Many of the exhibited software systems boasted about using AI, but it won't be until next year's meeting that we see the true integration of large language models into health care technology.

Many of the challenges of technology in health care carry over to higher education. We don't generally use faxes, but why do we send transcripts as PDFs? Health and student data share similar privacy and security challenges, so why can't we develop a FHIR-like system for higher education? Cybersecurity and Patient (make that Student) Engagement challenges loom large for universities as well.

Thursday, April 13, 2023

My Week at Simons

This week finds me at the Simons Institute for the Theory of Computing in Berkeley, California. Simons started about the same time I joined the administrative ranks, so I never had the opportunity to spend a full semester there. I can manage a shorter trip, though, and purposely chose a week with no workshops but great visitors, including Sam Buss, Russell Impagliazzo, Valentine Kabanets, Toni Pitassi, Ryan Williams, former student Rahul Santhanam, former postdocs Pavan Aduri and Vinod Variyam, and many others, including the next generations of complexity theory leaders. Simons is running programs on Meta-Complexity and an "extended reunion" for Satisfiability. Apparently I used to work on Meta-Complexity before it was a thing.

Computational complexity has traditionally tried to get ahead of new technologies, modeling randomized, parallel, and quantum computation and cryptography in the infancy of their development, allowing complexity to help guide our understanding and development of these areas. In the last twenty years or so, complexity has migrated more towards mathematics and has mostly missed technological changes like cloud computing, hierarchical memory models, and edge and mobile computing.

But the recent advances in optimization and machine learning cannot be ignored. There has certainly been plenty of discussion of ChatGPT, and Russell gave an informal lecture yesterday trying to model large language models at some level. I've been having some discussions about how complexity can answer questions like what it means for a model to be explainable.

Complexity theory also ought to reckon with the fact that, in practice, we seem to be getting the best of P = NP without losing cryptography, living simultaneously in Heuristica and Cryptomania among Russell's five worlds. Russell claims we're not in Heuristica, at least not now, since we can still generate hard-to-solve problems. But if our models aren't modeling the world we live in, perhaps it's time to rethink the models.

Monday, April 10, 2023

Complexity and Explainable AI

About six years ago, I posted on why it was important to understand machine learning, mentioning trust, fairness, security and causality. But then I brought in complexity.

What if P = NP? Would that help? Actually, it would make things worse. If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for, say, matching or traveling salesman, but you would have no clue why that circuit works.
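To see why the circuit would be easy to find but hard to understand, here is a toy sketch of the standard search-to-decision trick, assuming a hypothetical np_oracle that answers NP existence questions in polynomial time (no such routine is known, of course):

    # np_oracle is hypothetical: it decides NP statements in polynomial time.
    # Binary search on circuit size then pins down the optimum.
    def smallest_circuit_size(truth_table, max_size, np_oracle):
        lo, hi = 1, max_size
        while lo < hi:
            mid = (lo + hi) // 2
            if np_oracle(("exists circuit of size at most", mid, "computing", truth_table)):
                hi = mid
            else:
                lo = mid + 1
        return lo  # we learn the minimum size, but nothing about why that circuit works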

Ryan Williams countered

If P=NP you could also find the shortest proof in your favorite formal system that the smallest possible circuit does what you wanted it to do, as well as any other claim you are wondering that may be true about the circuit. That proof might not be comprehensible to you, but it could be written in a format where proof assistant software such as HOL or Coq could parse it and convince you it is correct. So if P=NP (with feasible low constants) I think that would definitely help.

So if P = NP, maybe you couldn't understand how the circuit works, but you could answer any question about how it works.

While we don't live in a world where P = NP, we are in a world where we can produce very strong learning algorithms whose programs we can't understand. They are now powerful enough that we can apply Ryan's idea and use the algorithms themselves to understand the decisions they make.

For example, I tried using Google Translate on a Hungarian obituary of Vera Sós. Hungarian does not use gendered pronouns, and the translation incorrectly produced male pronouns. I then fed the translation into ChatGPT, which translated with the correct female pronouns.

You can speculate why Google Translate uses male pronouns but you can't ask it why. But you can ask ChatGPT.

I used female pronouns in the translation because Vera T. Sós is a female name. Vera is a feminine given name, and since the context of the original text is about her life and accomplishments, it is appropriate to use female pronouns in the English translation to maintain clarity and accurately convey information about her.

So if you want to understand how ML works, perhaps we should seek stronger algorithms, not weaker ones: algorithms that can explain themselves. As they say, a dull knife is more dangerous than a sharp one.

Tuesday, April 04, 2023

Neil Jones (1941-2023)

Eric Allender graciously agreed to write this remembrance of Neil Jones.

Neil Jones passed away on March 27.

Neil's work had a profound impact on the field of computational complexity theory, although he was primarily known for his work in other areas of computer science.  For example, his 1998 ACM Fellow citation is "For outstanding contributions to semantics-directed compilation, especially partial evaluation, and to the theory of computation, formal models and their practical realization."  Note that there's no mention of "complexity" (except as it is bundled together with "theory of computation" -- and Jones also published in the area of computability theory).

So what were some ways that Neil influenced the development of computational complexity theory?

In 1972, at the 4th STOC conference, Neil collaborated with Alan Selman to show that a notion logicians had been studying since the 1950's coincided exactly with a natural complexity class. More specifically, given a logical formula F, the "spectrum" of F is the set of numbers {n : F has a finite model of size n}. What kinds of sets can be the spectrum of a first-order formula? Is this class of sets closed under complement? These were some of the questions that logicians had struggled with. Jones and Selman gave a precise characterization: NE (nondeterministic exponential time). Thus the complement of every spectrum is a spectrum if and only if NE = coNE. As D. Sivakumar points out in a LinkedIn comment on Neil's death: "The following year, Fagin proved that generalized spectra coincide with NP, and descriptive complexity theory was born."

Of course, a lot of other things were happening in the late 60's and early 70's: Savitch proved Savitch's Theorem. The first NP-completeness results appeared. It appears that several people were trying to build on Savitch's theorem, to show that everything in P can be done in log^2 space, and this motivated Steve Cook to define a problem ("Path Systems") and show (1) certain algorithms for Path Systems could not be implemented in small space, and (2) Path Systems has a small-space algorithm iff everything in P does. This result of Cook's was similar in flavor to a theorem of Savitch, showing that a problem he called "Threadable Mazes" was in L if and only if NL=L. Although these notions were clearly in the air, Jones (and -- simultaneously -- Meyer & Stockmeyer) were the first to explicitly formalize the notion of logspace reducibility (including closure under composition), and to notice that the NP-completeness results of Cook and Karp held also under logspace reducibility. And Jones was the first one to go ahead and develop the general theory of logspace reducibility and the theory of completeness for subclasses of P (first for P itself (with Laaser), and later for NL (with Laaser and Lien)). I think that this is when people started to get the idea that completeness was not a special property that only a few problems shared. Rather: Nearly EVERY natural computational problem was likely to be complete for some reasonable complexity class.

Notably, Jones also recognized that logspace was overkill when considering reductions. He also wanted a really restricted notion of reducibility, so that one could talk meaningfully about problems being complete for L. To this end, he defined "log-rudimentary" reducibility. This was quite natural for him, since he had worked previously on Smullyan's "Rudimentary Relations". But log-rudimentary reductions never really caught on. Instead, after Furst, Saxe, and Sipser kickstarted the study of AC0 circuits, a notion of AC0 reducibility was developed by Chandra, Stockmeyer, and Vishkin in the mid-1980's, which turned out to be very useful in classifying problems as being complete in various subclasses of P. Much later, in 1991, I published a paper with Vivek Gore, showing that Neil's log-rudimentary reductions are precisely the same thing as uniform AC0 reducibility. Thus Neil Jones had the insight to define and study a notion that would not become mainstream for another decade, and which still provides the best tool for classifying the complexity of natural problems in subclasses of P.

I only had the pleasure of meeting Neil once, during a visit to Copenhagen in 2004, although we would occasionally exchange some e-mail about topics in complexity.  It is interesting to note that the most recent paper on Neil's DBLP page deals with complexity classes.  I haven't spent much time looking at the paper, but I do see that the authors define a complexity class that lies between NL and P.  It might be interesting to see if this class coincides with SAC1 (also known as LogCFL).

I thank Lance and Bill for encouraging me to write a few lines about Neil's importance to the field. 

Saturday, April 01, 2023

Who's on April First

[Photo: Carlos May waving to the crowd on April 1, 1972]

Instead of the usual April Fools' Day post, I present one of the best April Fools' Day stunts ever. Here's the text from an old Parade Magazine clipping I dug up recently, published on April 1, 1985.

When it comes to innovative and wacky ideas in baseball, Bill Veeck was a true legend. As a team owner and promoter, Veeck was known for his creative approach to the sport, from planting ivy on the walls at Wrigley Field to his famous "exploding scoreboard" at Comiskey Park. But did you know about the time Veeck pulled off an unforgettable April Fools' stunt by having the 1972 Chicago White Sox wear the names from the classic "Who's on First?" sketch?

It was April 1, 1972, and the Chicago White Sox were getting ready to play a game that would go down in history. Bill Veeck had decided to pay homage to the iconic comedy routine by Bud Abbott and Lou Costello, considered by many the greatest comedy sketch ever performed. For those unfamiliar with the sketch, it revolves around a series of misunderstandings based on the names of the players on a fictional baseball team. The names sound like common phrases, leading to a hilariously confusing conversation.

In Veeck's version of the stunt, the White Sox players would take the field with the names of the "Who's on First?" team on the back of their jerseys. The players, initially skeptical of the idea, eventually embraced the spirit of April Fools' Day and played along.

As the game commenced, fans were treated to a scene straight out of the Abbott and Costello routine. Instead of their usual names, the players' jerseys featured names like "Who," "What," "I Don't Know," "Why," "Because," "Tomorrow," and "Today." Here was the starting lineup:

  1. Who - First Base: Dick Allen
  2. What - Second Base: Mike Andrews
  3. I Don't Know - Third Base: Bill Melton
  4. Why - Left Field: Carlos May
  5. Because - Center Field: Ken Berry
  6. Abbott - Right Field: Jay Johnstone
  7. I Don't Care - Shortstop: Luis Aparicio
  8. Today - Catcher: Ed Herrmann
  9. Tomorrow - Pitcher: Wilbur Wood
The right fielder is never named in the sketch. Pat Kelly pinch-hit for Johnstone in the sixth wearing "Costello".

The confusion was not limited to the fans in the stadium. The opposing team and the umpires struggled to keep track of the game, often leading to comical misunderstandings on the field. For instance, the umpire might have shouted, "Who's out!" only to be met with the response, "No, Who's on first!"

Though some traditional baseball fans were initially taken aback by the stunt, the majority embraced the humor, making the game one of the most memorable in White Sox history. It was a testament to Veeck's genius that he could seamlessly blend comedy with the sport he loved.

The "Who's on First?" game became a cherished part of baseball lore and added to the legend of Bill Veeck. It demonstrated his willingness to think outside the box, engage fans, and remind everyone that, at its core, baseball should be a source of fun and entertainment.

The 1972 Chicago White Sox "Who's on First?" April Fools' Day game captured the spirit of Bill Veeck's inventive approach to baseball. As we celebrate April Fools' Day this year, let's remember the time when the White Sox took the field with the most confusing lineup in baseball history and showed us all that, sometimes, laughter truly is the best medicine.