Thursday, February 08, 2018

For the Love of Algorithms

Wired magazine labelled 2017 as The Year We Fell Out of Love with Algorithms. The article goes on to talk about how algorithms give us filter bubbles, affect elections, propagate biases and eliminate privacy. It argues at the end that we shouldn't blame the algorithms themselves but the people and companies behind them.

Every day algorithms decide what music we listen to, what posts we see on Facebook and Twitter, how we should drive from point A to point B and what products we should buy. Algorithms feed my nostalgia in Facebook and Google Photos, showing me what once was. Algorithms recognize my voice and my face. We've even created new currencies from algorithms. Algorithms form the backbone of the world's six largest public companies (Apple, Alphabet, Microsoft, Amazon, Facebook and Tencent). It's been a long time since I only trusted algorithms to format my papers.

Algorithms have taken over nearly every aspect of our lives and our economy. With new machine learning techniques, the creator of an algorithm can no longer fully understand how it works. But we aren't in a Skynet world: algorithms are not sentient, nor are they inherently bad, any more than we would label a group of people inherently bad because of a few examples.

This societal transition from human decision making to algorithmic decision making will continue, and it will have great benefits such as much safer cars and medical care. We must be vigilant, of course, about how algorithms change society, but (in a serious sense) I welcome our new machine overlords.

7 comments:

  1. My problem with the push to trade human decision-making for algorithmic decision-making also has to do with algorithms not being "smart" enough - for one thing, why delegate our decisions to a device which in some ways has less cognitive ability than a shrimp? Also, algorithms are still well-understood enough to be "influenced" by the people creating them (the companies you mention above), and there are already indications that these particular people are not trustworthy enough to be "advising" our new overlords.

  2. algorithms in my BRAIN decide what music I listen to when I pick which MP3s to load

    algorithms in my BRAIN decide who to follow on Facebook and Twitter

    algorithms in my BRAIN generally decide how to drive from point A to point B with some map referencing

    algorithms in my BRAIN decide what products to buy after looking up prices and options

    1. ... algorithms on XYZ's server and ABC's server decide the data that your BRAIN will "see" and use for its decisions ... :-D

      Joking aside, in my opinion, on some specific decisions you can dig into options/reviews/pros/cons... and so on, but for many "minor" (for you) aspects, especially if they are somewhat tied to the "virtual world", your decisions will be to some degree driven by algorithms.

      This is an already ongoing process (it started with newspapers, radio and TV, for which the "contents" are decided by other humans, not algorithms) ... for example, if you listen to the vast majority of radio stations your BRAIN will decide among a VERY restricted musical selection and it will think that it made a "free independent" decision.

  3. can an algorithm moderate this comment

  4. First and foremost one should note that the outputs of AI approaches used in practice are rarely "algorithms" in the traditional sense. An algorithm consists of clear-cut instructions for solving a well-defined problem. We can analyze the algorithm to understand its performance and other relevant properties. In this sense, it's just a tool that some human throws at a problem to solve it (at least with high probability).
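
    To make this concrete, here is a minimal sketch (my own toy example, not anything from the post) of what such a clear-cut algorithm looks like: binary search, whose correctness and O(log n) worst case follow from the code alone.

      # Binary search: clear-cut instructions for a well-defined problem
      # (find target in a sorted list). Every property we care about --
      # correctness, O(log n) comparisons in the worst case -- can be
      # proven from the code itself; there are no hidden parameters.
      def binary_search(sorted_list, target):
          lo, hi = 0, len(sorted_list) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if sorted_list[mid] == target:
                  return mid        # found: return the index
              elif sorted_list[mid] < target:
                  lo = mid + 1      # target can only be to the right
              else:
                  hi = mid - 1      # target can only be to the left
          return -1                 # provably absent from the list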

    But in AI, you don't get any of that. It's a bunch of heuristics depending crucially on a very large number of guessed/learned parameters, often thrown at some ill-defined "problem", leading to solutions that are apparently good enough in many cases. No guarantees, no proofs, just magic in - magic out. Will the neural network always detect that little child jumping in front of your car in the video stream? Who knows? Maybe it works better than humans, maybe not. Hard to quantify, measure, verify -- the tenets of sound engineering practice.

    With real algorithms, there is intent by the person who applies the algorithm. Facebook certainly has goals that are implemented in their algorithms. It's probably not a neural network that filters the messages that you are seeing, but an algorithm optimizing for Facebook's goals, whatever they are. With intent there is accountability. With magic, there is nothing.
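
    To illustrate the point about intent, here is a purely hypothetical sketch (not Facebook's actual code, whose goals and weights I don't know): in a hand-written ranking algorithm the goals sit in the source as explicit, inspectable weights, so there is a concrete line of code to hold someone accountable for.

      # Hypothetical feed-ranking step. The weights are NOT learned;
      # they are explicit design decisions, so the "intent" is auditable.
      def rank_posts(posts, w_engagement=0.7, w_recency=0.3):
          # Higher score = shown first; the weights encode the goals.
          def score(p):
              return w_engagement * p["clicks"] + w_recency * p["freshness"]
          return sorted(posts, key=score, reverse=True)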

    If an algorithm is just a tool with established properties that I can apply to some problem in order to solve it, then it's well justified to always automate this activity using the tool. In some sense, mankind has fully understood that activity by concentrating all relevant knowledge about it into that algorithm. Problem solved, once and for all. Doing it manually will never be rational again, except for pedagogical or educational reasons. Think calculator or Mathematica.

    But with e.g. neural networks, I don't get a tool with known properties that will arguably always solve the problem it is thrown at. It's a matter of trust, but not in a rational, scientific or engineering sense. The performance parameters are NOT known; it just seems to work. Nobody really knows why it works, or why it doesn't work. Nobody can even tell whether one is justified to apply this technique in a particular environment, since there are also no established *limitations* of that particular technique.

    In that sense, it's clearly justifiable to trust lives to traditional algorithms with known properties, complemented by sound reliability engineering of the system that implements the algorithm. Given a tolerable level of catastrophic failures per hour, I can justify trusting the flight control system and autopilot rather than the human pilot.
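
    As a back-of-the-envelope illustration: the 1e-9 figure below is the commonly cited certification target for catastrophic failure conditions per flight hour in commercial avionics; the human figure is a made-up stand-in, just to show the form of the argument.

      # Rough sketch: trusting automation becomes rational once its
      # demonstrated failure rate beats the tolerable level.
      system_failures_per_hour = 1e-9  # assumed certified bound
      human_failures_per_hour = 1e-7   # hypothetical estimate, illustration only

      fleet_hours = 1e7                # hours flown across a fleet
      print(system_failures_per_hour * fleet_hours)  # ~0.01 expected events
      print(human_failures_per_hour * fleet_hours)   # ~1 expected event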

    But going to AI, there is very little justification for trust. Too much arbitrariness, too much vague alchemy, too little science. If an AI is driving the car, sure, it may work most of the time. Or it may not. In any case, we can't say that mankind has now "understood" the problem of driving cars to the extent that we can write down explicitly how to do it, and then just let a machine do it. We haven't understood it at all on a fundamental level. We don't understand it even on a non-rigorous but algorithmic level, where e.g. we could write down a heuristic algorithm which kind of works, but of which we can't prove the properties. Rather, we rely on some very simple and generic shell of an algorithm which fully depends on huge tables of opaque data that fell from the sky. We will for sure all do it in the future, yet we have understood nothing really about the problem at hand.
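
    To show what I mean by a "generic shell" (again a toy sketch of my own, not any production system): the code below is a complete forward pass of a small neural network. Essentially nothing problem-specific appears in it; the entire behavior lives in the opaque weight matrices.

      # Toy neural-network forward pass. The code is a trivial, generic
      # shell; all actual "knowledge" sits in the weight tables W1, W2,
      # which in practice come from training, not from reasoning we can
      # inspect or prove things about.
      import numpy as np

      def forward(x, W1, W2):
          h = np.maximum(0, W1 @ x)   # hidden layer, ReLU activation
          return W2 @ h               # output layer

      # "Magic in": random stand-ins for learned parameters.
      W1 = np.random.randn(8, 4)
      W2 = np.random.randn(2, 8)
      # "Magic out": an answer, with no proof attached.
      print(forward(np.random.randn(4), W1, W2))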
