Wednesday, January 22, 2025

The Fighting Temeraire

What does an 1838 painting tell us about technological change?

A colleague and I decided to see how well LLMs could teach us a topic we knew nothing about. We picked the Romanticism art movement. I asked ChatGPT to tutor me on the topic for an hour. Chatty picked four paintings. 

Top Left: Liberty Leading the People (Delacroix, 1830)
Top Right: Wanderer above the Sea of Fog (Friedrich, 1818)
Bottom Left: The Fighting Temeraire (Turner, 1838)
Bottom Right: The Third of May 1808 (Goya, 1814)
For each of these paintings, I put the painting up on one screen and used the voice feature to have ChatGPT give me an overview, and then we would have a discussion where I would ask about various features. We ended up spending about an hour on each. Was it successful? I now know significantly more about the Romantic art period and these paintings, though I'm of course no expert. It was certainly a better and more enjoyable experience than the freshman seminar course on art history I took in college.

Let's discuss one of these paintings in more detail: the 1838 painting The Fighting Temeraire, tugged to her last berth to be broken up, by Joseph Mallord William Turner, on display at the National Gallery in London.

The Fighting Temeraire by J.M.W. Turner

The 98-gun ship Temeraire, featured on the left, fought in the Battle of Trafalgar. This painting captures the ship being towed by a steam tug up the Thames to be broken up for scrap.

Much to love in the painting: the reddish sunset, the reflections of the boats in the water, the detail of the Temeraire and the lack of detail of the tug.

But also note the nod to technological change: the tall sailing ship being taken to its end by a coal-powered tugboat, marking the new era of shipping vessels, the industrial revolution in full swing, and the beauty we lose to progress. Not a bad metaphor for the AI revolution of today.

If you want to learn more about the painting, you can watch this lecture from the National Gallery, or you could ask your favorite LLM.

7 comments:

  1. Without knowing anything about the topic and without any expert guidance, how do you know which parts of what you learned from the AI are correct versus which parts were totally made up BS that just sounds like it could have been correct? In reality, all its output is the latter, but its training set is so large that it often stumbles into outputting correct information.

    1. That wasn't me (I sometimes forget to put my name in), but it sure sounds like me. Maybe it's an LLM imitating me???

      But seriously, the LLM technology has no way of relating the text it deals with to the real world or any idea/concept of "reality" whatsoever, and has no place to plug in such language understanding technology, were there any such technology to plug in. (This is why the iPhone news app crashed and burned. This should not have been a surprise.) The whole point of the LLM algorithm is to generate nice-sounding text without doing the work of language understanding. (Since we in the 1970s/1980s generation of AI types failed to figure out how to do language understanding.)

      So my in-your-face doppelganger is spot-on correct: you can never trust the output from an LLM. By definition of what the algorithm does. It's not just that the output is smarmy and supercilious, it's that it's exactly and only random.

    2. I find the latest models hallucinate far less than before. I agree that you should separately check any critical information, but when I do so it tends to be accurate.

      Instead of knocking the system, try it yourself. Pick a topic to learn, something you don't know well but is well-studied, and ask a model like GPT-4o or Claude Sonnet to teach you. I think you'd be surprised.

    3. Lance: Wikipedia is just too good. Why bother with something that has to be checked (and has an obnoxious writing style) when there's Wikipedia? Written by real humans and checked by a surprisingly effective editorial system.

      I read, a lot, and have read, a lot, (both English and Japanese) and have written, a lot (English only, during 30 years as a translator). So I'm perhaps overly sensitive to LLM language being obnoxious. A sentence that doesn't have a living, breathing, thinking, emoting human behind it isn't worth reading.

      But I have been thinking about maybe thinking about learning something new, and that is an excellent suggestion. All the medical/psych lit on brain aging (I'm 72) has it that walking plus learning something new is the way to go. The problem is that I've sunk a lot of time into Go, Jazz Guitar, and Japanese, and want to get better at those, not learn crocheting, which is something a study actually tested having people do, and it actually helped!

  2. @david_in_tokyo: how has Tokyo been treating you lately? Are you living there permanently? It sure sounds like it. The Japanese used to have a lead in AI in the 1980s, but surely this was more in hardware than in software.
    Surprisingly, this text was not generated by AI or an LLM. An LLM would have generated a much more polished variant of this.

    1. I like Japan for a number of reasons, and am settled in here for the long haul. I was actually at NEC's AI Lab in '86/'87. For the fun of it, I reimplemented the pattern matcher from Carl Hewitt's PhD thesis, and they used it happily for several years after I left. But the Japanese AI stuff fizzled.

  3. Do AI experts ever wonder whether their marvelous invention is capable of posing, and more importantly motivating, novel questions?
    Doesn't an ant (individually or as an emergent intelligent team) constantly probe the environment to seek what to do next or to find a better way to achieve what its family needs?
    Isn't constantly asking questions (even during dreaming), selecting the handful good ones to delve further into and discarding all the nonsense that emerges, a bedrock prerequisite for any intelligent agent?
    Throwing around fancy terms like "unsupervised learning", "reinforcement learning", "out-of-sample generalization", and "reasoning" does not establish that any amount of fancy gradient-descent algorithms will lead to the holy grail.
    Intelligence has to constantly perform (loosely speaking) "gradient-ascent" to explore a very complex landscape and avoid getting stuck in local optima; look at all the very long-chain organic compounds that naturally just want to get clumped up or knotted up or break up (I know AlphaFold has made some inroads on this). A living cell has the herculean (in every sense magical; the only intelligence we know) task of maintaining its precious negative entropy relative to its external environment, with which it has to constantly exchange resources and information. In due course the cell dies and the emergent magic vanishes (not unlike a wisp of cloud); the environment reclaims the cell's entropy, and a new cell emerges somewhere else.
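
    To make the local-optima point concrete, here is a minimal Python sketch, just a toy one-dimensional landscape rather than anything like a real training run: plain gradient descent slides into the nearest valley and never finds the deeper one.

    # Toy non-convex landscape: f(x) = x^4 - 3x^2 + x has a shallow
    # local minimum near x = 1.13 and a deeper global minimum near
    # x = -1.30.
    def f(x):
        return x**4 - 3*x**2 + x

    def grad(x):
        return 4*x**3 - 6*x + 1

    x, lr = 2.0, 0.01          # start on the right-hand slope
    for _ in range(2000):
        x -= lr * grad(x)      # follow the local downhill direction

    print(f"stuck at x = {x:.2f}, f(x) = {f(x):.2f}")    # x = 1.13, the shallow valley
    print(f"global minimum: f(-1.30) = {f(-1.30):.2f}")  # the deeper valley, missed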
