AI Spring

Artificial intelligence, or at least the perception of artificial intelligence, has gone from disappointing to frightening in the blink of an eye. As Marc Andreessen said on Twitter this morning:

AI: From “It’s so horrible how little progress has been made” to “It’s so horrible how much progress has been made” in one step.

When I read this I thought of Pandora (the mythical figure, not the music service).

“Are you still working on opening that box? Any progress?”

“No, the lid just … won’t … budge … Oh wait, I think I got it.”


Related post: Why the robots aren’t coming in the way you expect by Mark Burgess

4 thoughts on “AI Spring”

  1. If I had to point to an event that alerted the world to the creepy potential for AI, I’d say when it became public that Target could tell whether someone was pregnant from her shopping patterns.

    (I don’t attempt to make a distinction between AI, machine learning, data mining, etc.)

  2. I continue to be underwhelmed by what is today being called “AI”. It is still the result of intense human effort, perhaps magnified or made faster.

    My initial hope (35 years ago) was that Cyc (the attempt to systematize human knowledge) would enable algorithms to be developed that would lead to some form of ability to truly extrapolate or generalize from a large knowledge base. But that ability presently exists only to a very limited extent, in the extremely narrow domain of various solvers and proof engines.

    I was next tantalized by the capabilities of neural networks to learn in ways that resulted in logic and mappings very different from traditional approaches. (I was briefly involved in an effort to “untangle” stable neural nets into simpler recipes, such as fuzzy logic.) But neural nets hit complexity limits that restricted their usefulness.

    Then came swarms of simpler engines working together to hopefully achieve a “whole greater than the sum of the parts”, as has been done for several autonomous robotics projects. It is still “early days” for these efforts, but the results to date have been meager indeed.

    I next hoped that quantum computers would provide advantages to bypass many of the above limitations, only to see recent work prove that the domains within which quantum computers will provide major benefits over classical computers are relatively small in both number and size.

    Today, I’d classify most of what’s labeled “AI” as primarily large-scale statistics rather than any form of “intelligence”, though with some very notable results, the most useful of which, IMHO, is Google Translate.

    But there is one thing in the world of AI that may be as true today as it was 50 years ago: “We’ll see *real* AI in 25 years!”
