Saturday, July 22, 2023

"Never give AI the Nuclear Codes" Really? Have you seen Dr. Strangelove?

(I covered a similar topic here.)

In the June 2023 issue of The Atlantic is an article titled:

                                    Never Give Artificial Intelligence the Nuclear Codes

                                    by Ross Andersen

(This might be a link to it: here. It might be behind a paywall.)

As you can tell from the title, the author is against giving an AI the ability to launch nuclear weapons.

I've read similar things elsewhere. 

There is a notion that PEOPLE will be BETTER able to discern when a nuclear strike is needed (and, more importantly, NOT needed) than an AI. Consider the following two scenarios:

1) An AI has the launch codes and thinks a nuclear strike is appropriate (but is wrong). It launches, and there is no override for a human to intervene.

2) A human knows in his gut (and from some of what he sees) that a nuclear attack is needed. The AI says NO IT'S NOT (the AI knows A LOT more than the human and is correct). The human overrides the AI (pulls out the plug?) and launches the attack.

Frankly I think (2) is more likely than (1). Perhaps there should be a mechanism so that BOTH the AI and the human have to agree. Of course, both AIs and humans are clever and may find a way to override.
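
Here is a minimal sketch of such a two-party rule in Python. Everything here is hypothetical -- the names and the idea of a boolean "assessment" are mine, purely for illustration; real launch authority is of course nothing this simple:

    # Toy two-key rule: a launch proceeds only if BOTH the AI and the
    # human independently say yes. All names here are illustrative.
    def dual_authorization(ai_says_launch: bool, human_says_launch: bool) -> bool:
        """Authorize only when both parties independently agree."""
        return ai_says_launch and human_says_launch

    # Scenario (1): the AI wrongly says yes, the human says no -> blocked.
    assert dual_authorization(True, False) is False
    # Scenario (2): the human wrongly says yes, the AI says no -> blocked.
    assert dual_authorization(False, True) is False

The AND is symmetric: it blocks scenario (1) and scenario (2) alike, at the cost that a correct launch now needs both parties to agree.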

Why do people think that (1) is more likely than (2)? Because they haven't seen the movie Dr. Strangelove. They should!

14 comments:

  1. Because people think they themselves are smart and empathize with other people. And, because the software that we have now is pretty far from intelligent in the human sense and often makes silly mistakes. Personally, I empathize better with computers. We could just get rid of the nuclear weapons.

    ReplyDelete
  2. Strangelove (SPOILER ALERT) has it both ways: A human-led first attack followed by an automated doomsday response.

    ReplyDelete
  3. There is also the movie "Colossus: The Forbin Project".

    ReplyDelete
    Replies
    1. Yes, yes, the late Marvin Minsky used to recommend/mention this movie ... during causal coffee chit chats ....

      Delete
    2. Fortunately, the causal chats did not cause any nuclear attack.

      Delete
  4. Current AI systems are much more susceptible to adversarial attacks than humans. If a country put an AI in charge of nukes, I might expect that at some point some other nation (or a smaller militant group, or even a few individuals) would put serious effort into triggering a false alarm just to start a conflict. For the foreseeable future, I would be much more concerned about (1) than (2), since there have been a few close calls where a human could have launched the nukes but didn't (Stanislav Petrov, Vasily Arkhipov). The safest solution, of course, would be to not have nukes to begin with.

    ReplyDelete
  5. How can one not think of War Games?

    ReplyDelete
  6. Yes, the 1983 sci-fi movie "War Games" is actually the one that came to mind before Anonymous 11:00 AM made the comment -- featuring the IMSAI 8080 microcomputer, along with "Theaterwide tactical warfare" terminology.

    Reverting to Bill's post. Bill, have you seen the movie? Do you think there's anything meaningful or slightly insightful in the way Joshua/W.O.P.R. "discovered" that there's no winner in tic-tac-toe? Presumably, part of the "hack" in the movie was to exhaust the W.O.P.R. (War Operation Plan Response) system, i.e., to divert its capacity away from cracking the codes and launching the missiles, until it "realizes" that there's no point in attacking.
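
    For what it's worth, that "discovery" is just exhaustive game-tree search. Here is a minimal Python sketch -- purely illustrative, and nothing to do with how the fictional W.O.P.R. works -- that verifies perfect play in tic-tac-toe is a draw:

      # Exhaustive minimax over tic-tac-toe boards (strings of 9 cells).
      from functools import lru_cache

      LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
               (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

      def winner(board):
          for a, b, c in LINES:
              if board[a] != '.' and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      @lru_cache(maxsize=None)
      def value(board, player):
          # +1 if X can force a win, -1 if O can, 0 if best play draws.
          w = winner(board)
          if w is not None:
              return 1 if w == 'X' else -1
          if '.' not in board:
              return 0
          nxt = 'O' if player == 'X' else 'X'
          results = [value(board[:i] + player + board[i+1:], nxt)
                     for i, cell in enumerate(board) if cell == '.']
          return max(results) if player == 'X' else min(results)

      print(value('.' * 9, 'X'))  # prints 0: neither side can force a win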

    ReplyDelete
    Replies
    1. (this is bill) (1) we can only hope, but (2) I am unable to get my own blog to know that I am me and hence put my name in and not have to go through moderation. If they can't even do that...

      Delete
    2. (2) ok this is actually quite a fair point -- lol.

      Delete
  7. ... when I hear humans still talking about "nuclear attack" and "nuclear codes" not in the context of a sci-fi movie, I have no doubt that AI will do a better job at handling the world ... :-)

    ReplyDelete
  8. I discuss issues of trusting computers in general (let alone AI), and a very similar case of a Russian airliner crash over Germany in 2002 (where the pilot ignored the onboard computer), in Ch. 17 of my book, Philosophy of Computer Science (Wiley-Blackwell, 2023).

    ReplyDelete
  9. I think the potential issue is that we are concerned we gave the AI an objective that it decides is best accomplished by the end of humanity. And, to be fair, if the objective is to minimize human suffering, then ending humanity as soon as possible is clearly the correct answer, once you realize that human suffering is an inevitable consequence of human existence.
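
    A toy illustration -- with hypothetical numbers and names, just to make that degenerate optimum concrete:

      # Model total suffering as (people) x (suffering per person), where
      # suffering per person is positive whenever anyone exists at all.
      def total_suffering(num_people: int, per_person: float = 1.0) -> float:
          return num_people * per_person

      # An optimizer told ONLY to minimize total suffering picks zero people.
      candidates = [0, 1_000, 8_000_000_000]
      print(min(candidates, key=total_suffering))  # 0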

    ReplyDelete
  10. What, exactly, is the point of a retaliatory strike capability if a first strike has been suffered? The only real point of that capability is deterrence. If deterrence has failed, there is no point.

    ReplyDelete