Machine learning and magic

When I first heard about a lie detector as a child, I was puzzled. How could a machine detect lies? If it could, why couldn’t you use it to predict the future? For example, you could say “IBM stock will go up tomorrow” and let the machine tell you whether you’re lying.

Of course lie detectors can’t tell whether someone is lying. They can only tell whether someone is exhibiting physiological behavior believed to be associated with lying. How well the latter predicts the former is a matter of debate.

I saw a presentation of a machine learning package the other day. Some of the questions implied that the audience had a magical understanding of machine learning, as if an algorithm could extract answers from data that do not contain the answer. The software simply searches for patterns in data by seeing how well various possible patterns fit, but there may be no pattern to be found. Machine learning algorithms cannot generate information that isn’t there any more than a polygraph machine can predict the future.
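As a concrete illustration, here is a minimal sketch in Python using scikit-learn; the data and model choices are hypothetical, not from the presentation I saw. It fits a flexible model to pure noise. The algorithm dutifully finds “patterns” in the training data, but its performance on held-out data shows there was no information to extract.

    # Minimal sketch: fit a flexible model to pure noise.
    # The in-sample fit looks impressive, but held-out performance
    # shows there was never any pattern to find.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Features and target are independent noise: by construction the
    # data contain no information about the target.
    X = rng.normal(size=(500, 10))
    y = rng.normal(size=500)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0
    )

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # High in-sample R^2: the algorithm "found" patterns in the noise.
    print("train R^2:", r2_score(y_train, model.predict(X_train)))

    # Out-of-sample R^2 near zero or negative: no real pattern existed.
    print("test  R^2:", r2_score(y_test, model.predict(X_test)))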

2 thoughts on “Machine learning and magic”

  1. The type of thinking you describe has a long and illustrious history:

    On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” …I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    Evidently it’s been around since the dawn of computing!

  2. Feel that ML is akin to glorified “graph-fitting” with a heavy dose of mathematical sophistry to gain more respectability?
    Just saying
