Monday, November 06, 2017

The two fears about technology: one correct, one incorrect

When the Luddites smashed loom machines, their supporters (including Lord Byron, Ada Lovelace's father) made two arguments in favor of the Luddites (I am sure I am simplifying what they said):

  1. These machines are tossing people out of work NOW and this is BAD for THOSE people. In this assertion they were clearly correct. (`Let's just retrain them' only goes so far.)
  2. This is bad for mankind! Machines displacing people will lead to the collapse of civilization! Mankind will be far worse off because of technology. In this assertion I think they were incorrect. That is, I think civilization is better off now because of technology. (If you disagree, leave an intelligent, polite comment. Realize that just by leaving a comment you are using technology. That is NOT a counterargument. I don't think it's even IRONY. Not sure what it is.)
  3. (This third one is mine and it's more of a question.) If you take the human element out of things then bad things will happen. There was a TV show where a drone strike was about to be carried out, but a HUMAN noticed there were red flowers on the car and deduced it was a wedding, so the strike was called off. Yeah! But I can equally well see the opposite: a computer program notices things a person would have missed that indicate it's not the target. But of course that wouldn't make as interesting a story. More to the point: if we allow computers to make decisions without the human element, is that good or bad? For grad admissions, does it get rid of bias or does it reinforce bias? (See the book Weapons of Math Destruction for an intelligent argument against using programs for, say, grad admissions and other far more important things.)
I suspect that this attitude has greeted every technological innovation. For AI there is a similar theme but with one more twist: the machines will eventually destroy us! Bill Gates and Stephen Hawking have expressed views along these lines.

When Deep Blue beat Kasparov at chess there were some articles about how this could be the end of mankind. That's just stupid. For a more modern article on some of the dangers of AI (some reasonable, some not) see this article on Watson.

It seems to me that AI can do some WELL DEFINED things (e.g., chess) very well, and even some not-quite-so-well-defined things (natural language translation) very well, but the notion that machines will evolve to be `really intelligent' (not sure that is well defined), decide they are better than us, and destroy us seems like bad science fiction (or good science fiction).

Watson can answer questions very, very well. Medical diagnosis machines may well be much better than doctors. While this may be bad news for Ken Jennings and for doctors, I don't see it being bad for humanity in the long term. Will we one day look back at the fears of AI and see that they were silly--- that the machines did not, Terminator-style, turn against us? I think so. And of course I hope so.


  1. I wonder whether there is a "trust element" in the most recent AI advances. There is a verifiable mathematical proof that the Bellman-Ford algorithm will find a shortest path in a weighted graph, but can there be a mathematical proof that a deep learning self-driving car will avoid running over all humans equally?

    It may be that trusting an advanced AI is psychologically similar to trusting an unfamiliar person (which is also mathematically unverifiable), and fearing that a machine will "turn on us" is the other side of that coin.

    1. I see no way AI could "solve" trolley problems where either choice is debatable, and this will be counted against AI; thus self-driving cars will make the fortune of lawyers.
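The verifiability contrast raised above can be made concrete. Bellman-Ford has a short correctness argument (after i relaxation passes, every vertex whose shortest path uses at most i edges has its exact distance), and the whole algorithm fits in a few lines; nothing comparable exists for a trained neural network. A minimal sketch (the function name and graph encoding are my own, not from the comment):

```python
def bellman_ford(n, edges, source):
    """Shortest-path distances from source in a directed graph.

    n: number of vertices (labeled 0..n-1).
    edges: list of (u, v, weight) triples.
    Provably correct: after i passes, dist[v] is exact for every v
    whose shortest path uses <= i edges, so n-1 passes suffice
    (assuming no negative-weight cycle is reachable from source).
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:  # relax edge (u, v)
                dist[v] = dist[u] + w
    # One more improving pass would mean a negative cycle exists.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# Tiny example: 0 -> 1 costs 4 directly, but 3 via vertex 2.
print(bellman_ford(3, [(0, 1, 4), (0, 2, 1), (2, 1, 2)], 0))  # [0, 3, 1]
```

The point of the comment stands: the invariant above can be checked line by line, whereas a deep-learning controller offers no analogous proof object to inspect.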

  2. The Luddites' concerns of the 19th century involved two parts: (1) the change in the kind of work itself, from individual craft to jobs that were themselves mechanical; (2) the multiplier effect of using machines on production, reducing the need for workers.

    I think that none of the three categories of harm that you cite quite covers (1) because the focus was on the effect on the well-being of individual workers who still had jobs, not so much on whether the work would have been better done by traditional methods. (I guess that the analogue now is everyone just staring at screens all day.)

    A big part of the reason that (2) was wrong in the long term big picture is that traditional methods were so far from being able to deliver manufactured goods to everybody that many people were still needed even with the multiplier effect of machines.

    When automated plants with few workers can supply the entire world with a good, it has a dramatically different effect. Yes, "Ford workers can still afford to buy Ford cars", but they may be so few that they no longer are representative of the general population. The difference is even greater for high-value high-demand items like cell phones.

    A question you didn't ask is what are the jobs of the future that are going to replace middle management, manufacturing, or driving?

    1. Middle management will protect themselves as they always do. Aren't most of them useless now already?

  3. The problem isn't technology as such, the problem is HOW the insane monkeys use technology.
    Technology, Statecraft, and Unrestricted Warfare
    The Existential Terror of Battle Royale

  4. Regarding the short-to-medium-term prospects of AI, see also the recent article "The Real Risks of Artificial Intelligence" by David L. Parnas in CACM, Oct 2017.

    About the long-term prospects of technological and scientific evolution we can only speculate (a sort of sci-fi). It is also possible that mankind destroys itself before the blessings of AI come into action. "Devil-may-care" sounds cynical, in particular for the next generations, but do we have an alternative?

  5. I presented a paper titled, "If Technology is a Dissipative Structure, Bring it on Deserves a Closer Look." It addresses, from a thermodynamic perspective, the issues in this post. The Abstract can be found at

    Jeff Robbins

  6. I am a graduate student in STEM. I just have an observation to make. The people you call "luddites" - the blue-collar workers of today - are mostly poor and grew up in households where education, let alone higher education, wasn't a priority.

    Not everyone has access to high quality education. Not everyone is going to grow up to get a job being a superstar programmer. And what's more, our very own field of TCS, which we should realize we are *highly privileged* to be a part of, exists simply because someone somewhere with plenty of money thinks it's ok to pay a bunch of math nerds to prove things they enjoy proving, despite there being very real problems out there in the world that should perhaps be prioritized first.

    I urge you to reconsider your smug attitude towards these "luddites". Please have some compassion for those who actually deserve it.

    PS. Yes, technology helped create this interface through which I'm commenting. But even the "high-class-intellectual" world of theoreticians and "brogrammers" who coded up this site runs on the backs of "normal luddites" such as home-makers, stay-at-home moms, janitors, cooks, baristas, and office assistants, without whom you and I wouldn't have the luxury of doing what we get to do.

  7. 1. The use of "Luddite" was an analogy, not an identification.

    2. "Not everyone has access to high quality education." This is absolutely correct. And it is a huge problem. This is not a law of nature, it is a sad state of affairs -- but not necessarily everywhere. See about education in Finland, for example.
    3. In many countries, one can have an excellent life, even if one is not a "superstar programmer." For example, in Switzerland effective minimum wage is about 30K/year, universal health care is affordable, etc.
    4. Support for "pure" science (like TCS) comes mostly from enlightened self interest of governments and high tech companies--not because some rich dude thinks it is cool. As for what research should be prioritized--if you have a good crystal ball, please share it with us. Much progress in science and technology is unpredictable. [and if you believe TCS is useless, why don't you change to being a grad student in an area where they solve "very real problems?"]
    5. Of course one must value the worth of work--whether it is proving theorems, cooking meals, or cleaning windows. My point is exactly that one should be able to have a dignified life in ANY profession. (And along those lines, I strongly object to the implicit assumptions that homemakers, baristas, or office workers should be categorized as luddites, or that, because of their profession, they cannot be intellectuals.)