From Jaron Lanier:
I love software, but software always under-represents reality. Reality has this depth to it and potential for surprise and subtlety that you just can’t get from software.
I would agree entirely if the word “can’t” were replaced by “don’t.”
What are you saying, Nathan? That software doesn’t represent reality, but has the ability to do that?
I’m not sure of your meaning, but it seems you have a low view of reality.
“The map is not the territory.” —Korzybski
(some other Richard)
Or perhaps I have a high view of the potential of software as a reflection of reality?
I’d say we don’t even know enough about reality, never mind the software or hardware requirements, to even come close to reflecting it in software. Just take a gander at this page: To do list. Or at a more human scale, this one. Note that even the structure of water is not resolved. Even supposing you just want to reflect ordinary, everyday reality, what about turbulence?
And that’s just the building blocks. What about reflecting intelligent agents? Maybe you could reflect some basic, rudimentary actions of ordinary humans, but real artificial intelligence? And supposing you manage that, what about, for example, people suffering from schizophrenia or psychosis? Before you answer, check out the table of contents for this book.
And of course, these are all things that we know that we do not know, and just a few of such unknowns at that. How much more might there be in reality that we aren’t even aware that we know nothing about?
In my travel bag, I have a small handheld mirror. It’s so small that, when I hold it close enough to be useful for shaving, it only shows at most about a quarter of my face. But afterward, I’m reasonably convinced that I am well-groomed.
Software which accurately represents reality does not have to reflect the entirety of reality; it merely has to match the user’s mental model of reality (per Donald Norman) at a high enough fidelity that the user’s task can be completed satisfactorily.
I can shave in the dark. I often shave with my eyes closed. If I decided to use a piece of cardboard for a mirror and still got a good shave, would that demonstrate that the cardboard shows an accurate representation of my face? It would have a high enough fidelity that the user’s task could be completed satisfactorily.
Besides, if I have software which perfectly matches my mental model of my face, how could it possibly reveal that my kid drew on it with a Sharpie while I was asleep? That’s the kind of surprise that reality can spring, which is not going to come out of a perfect match to a mental model.
Your mind is made up; I shall intrude no longer on your predetermined conclusions.
I’m quite frequently surprised by software. However, software surprises are almost universally bad surprises; reality can give you pleasant ones :)
OK, it’s been a while since someone has posted to this thread.
I just ran across it and I have another angle to add, so keep the shaving cream capped. :-)
In my experience, software is based upon requirements.
Requirements are based upon what the user says they do, or need to do.
However, what they say they do and what they actually do are usually different. Sometimes far different. When they describe what they do, they present a picture of a world of make-believe. They tell you how the task should be done, or how they would do it if the world were a perfect place.
But the world’s not perfect and tasks are usually performed not as they “should be” but as they need to be.
And so if you take the user’s word for it, as we often do, the requirements don’t match what they do, and as a result the software doesn’t match what they do either.
Ergo, it doesn’t match reality. At least as it exists today.
But someday…