Where OO has succeeded most

Eric Raymond makes an interesting observation on where object oriented programming has been most successful.

The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas. It’s worth trying to understand why. …

One reason that OO has succeeded most where it has (GUIs, simulation, graphics) may be because it’s relatively difficult to get the ontology of types wrong in those domains. In GUIs and graphics, for example, there is generally a rather natural mapping between manipulable visual objects and classes.

I believe he overstates his case when he says it has been difficult to show benefits of OO outside graphics and simulation. But I agree that it’s harder to create good object oriented software when there isn’t a clear physical model to guide the organization of the code. A programmer may partition functionality into classes according to a mental model that other programmers do not share.

When you’re developing graphics or simulation software, it’s more likely that multiple people will share the same mental model. But even in simulation the boundaries are not so clear. I’ve seen a lot of software to simulate clinical trials. There the objects are fairly obvious: patients, treatments, etc. But there are many ways to decide, for example, what functionality should be part of a software object representing a patient. What seems natural to one developer may seem completely arbitrary to another.
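The ambiguity can be made concrete with a small sketch. The classes and method names below are hypothetical, not taken from any real clinical-trial simulator; they only show two equally defensible ways to partition the same functionality.

```java
// Hypothetical sketch: where does "respond to treatment" belong?
class Treatment {
    final double effect;
    Treatment(double effect) { this.effect = effect; }
}

// Partition A: the patient knows how to respond to a treatment.
class Patient {
    double health = 0.5;
    void receive(Treatment t) { health = Math.min(1.0, health + t.effect); }
}

// Partition B, equally defensible: the treatment acts on the patient.
//   class Treatment { void applyTo(Patient p) { ... } }
// Neither choice is wrong; they reflect different mental models.
```

Two developers starting from the same domain can land on either partition, and each will find the other's choice arbitrary.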

More posts on OOP

16 thoughts on “Where OO has succeeded most”

  1. Even when some decisions are arbitrary, OO still enables you to mitigate the problem of what to do with global variables. Where they live may not be ontologically perfect, but at least it is a smaller space than the whole program.

  2. Right, but practically, OO has the effect of forcing programmers to segment namespaces. We all learn that OO is polymorphism and all that, but this segmentation has been the biggest effect I’ve observed.

  3. @Jonathan

    Let us not confuse OO and namespaces. If you collect your functions and variables in distinct namespaces (let us call them a “class”), you do get neat benefits. But it is not the same as OOP.

  4. The boundary between namespaces and objects is fuzzy. I was surprised when I first saw that C# made no semantic distinction. Where C++ uses :: for namespaces and . for objects, C# just has . for both. But after a while that made some sense.

    “Namespace” is a more humble designation than “object” because it implies less cohesion, and it may be a more honest designation in many cases. But namespaces also provide less access protection, at least in the languages I’m familiar with.

  5. A programmer may partition functionality into classes according to a mental model that other programmers do not share.

    But what would change here if we replaced “classes” and “objects” with “sets” and “functions” or something else? I think all these difficulties apply to software design in general, not just the OOP paradigm. But I agree that OOP suffers from them much more.

    I think the problem here is that OOP isn’t backed by a solid theoretical foundation (at least I don’t know of any) — the most authoritative books on OOP are descriptions of patterns, which are essentially cookbooks and have no strong underlying theory.

  6. There is some theoretical work on object oriented programming. For example, see here. But I think such work is largely beside the point. OO doesn’t belong to computer science as much as software engineering.

    The (potentially huge) benefit of OO is psychological. It’s a way of organizing software that works well with the way people think. When you’ve got a class called “window” that contains the data and functions necessary to work with a window on a user’s screen, that’s fantastic. There’s an obvious place to look for things. There are a number of implicit assumptions a programmer has that are satisfied. (That is if things are well done. Nothing can prevent an idiot programmer from doing something unrelated to window management in a class called “window.” That is NOT a computer science problem. That’s a human problem.)

    The problem is when you have something like a class called “processor” that is a set of vaguely related functions that carry out most, but not all, the steps in some workflow. All this is well known in theory. Classes should correspond to things in the problem domain, be self-contained, encapsulate implementation details, etc. All easy in theory but surprisingly hard to do well in practice.
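    As a rough sketch of the contrast described above (the names Window and Processor are illustrative, not a real toolkit API):

```java
// Cohesive: the data and the operations on it live together,
// so there is one obvious place to look.
class Window {
    private int width, height;
    private final String title;
    Window(String title, int width, int height) {
        this.title = title; this.width = width; this.height = height;
    }
    void resize(int w, int h) { width = w; height = h; }
    String describe() { return title + " (" + width + "x" + height + ")"; }
}

// The anti-pattern: a grab-bag of loosely related workflow steps
// with no shared state worth the name. A reader cannot guess what
// belongs here and what does not.
class Processor {
    static void validateInput()     { /* step 1 of some workflow */ }
    static void transformRecords()  { /* step 3; steps 2 and 4 live elsewhere */ }
}
```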

  7. Even in the graphical world, there’s room for intractable dispute. For example, is a circle a kind of ellipse or is an ellipse a kind of circle? That’s a simplified example, of course, but the general idea — do you start generalized and specialize or the inverse? — is not a question you can decide without appeals to what seems the most obvious to you, which isn’t going to be obvious to others.
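    A sketch of why that dispute is intractable: if the classes are mutable, making Circle a subclass of Ellipse breaks the expectation that width and height vary independently (the classic Liskov substitution problem). The classes below are illustrative only.

```java
// If Ellipse is mutable, Circle-extends-Ellipse surprises callers.
class Ellipse {
    protected double width, height;
    Ellipse(double w, double h) { width = w; height = h; }
    void setWidth(double w)  { width = w; }
    void setHeight(double h) { height = h; }
    double area() { return Math.PI * width * height / 4; }
}

class Circle extends Ellipse {
    Circle(double diameter) { super(diameter, diameter); }
    // To stay circular, Circle must override both setters and
    // silently change the other dimension -- surprising to any
    // caller that thinks it holds a plain Ellipse.
    @Override void setWidth(double w)  { width = w; height = w; }
    @Override void setHeight(double h) { width = h; height = h; }
}
```

Flip the hierarchy and you trade this problem for a different one, which is the commenter’s point: neither direction is obviously right.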

  8. SteveBrooklineMA

    I’m not sure I agree that OO reflects the way people think. There may be times when that is true, but often it isn’t. I think it is common in real life to have tools that we apply to objects. But we do not typically think of the tool as part of the object, nor is it very common to think of the tool-object as a bound unit. This is because a single tool can often be applied to many different object types. Even a specialized tool, like a spark-plug wrench, we do not think of as part of the car, part of the engine, or part of the spark plug.

  9. SteveBrooklineMA

    Continuing from my previous comment, it’s true with data too. Every physical thing I interact with in my life has a spatial location. My daughter is still in bed. My coffee cup is on the table next to me. But I don’t think of the location of my cup as *part* of the cup. It is more like a statistic, existing externally.

  10. I agree that the metaphor works better in some circumstances than others. When the metaphor is strained, the psychological benefit of OO is reduced, maybe even to the point of being a liability.

    OO was a good idea, and it still is, as one tool in a toolbox. It’s not going to go away. I’m pretty sure it’s not going to be replaced by pure functional programming. It looks like the next stage is a mixture of OO and functional, with the proportions to be worked out. For example, I think C# and Clojure are closer together than their ancestors C++ and Lisp were.

  11. OO has, of course, also been extremely successful in the design of operating system executives. Even in Unix-like operating systems, UFS is-a filesystem, SCSI disk is-a block device and USB keyboard is-a human interface device. Linux driver writers often have to write their own manual vtables.
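    For readers who haven’t seen the kernel pattern: a C driver fills in a struct of function pointers by hand (e.g. file_operations), and an interface in a language like Java is the same dispatch table built by the compiler. The sketch below is illustrative, not actual kernel code.

```java
// A hand-written C vtable, expressed as a Java interface.
interface BlockDevice {
    int sectorSize();
    byte[] readSector(long sector);
}

// One "driver" filling in the operations.
class RamDisk implements BlockDevice {
    private final byte[][] sectors;
    RamDisk(int nSectors, int sectorSize) {
        sectors = new byte[nSectors][sectorSize];
    }
    public int sectorSize() { return sectors[0].length; }
    public byte[] readSector(long sector) { return sectors[(int) sector]; }
}
```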

  12. I think Raymond’s quote tells more about the limitations of his cognition than about the limitations of OO. Speaking for my domain, OO is used throughout the financial services industry. Every quant library I’ve seen has at least a Security hierarchy and MarketData hierarchy. They may not be too deep, but the OO approach is very helpful when organizing the massive amounts of data and algorithms needed by an investment bank.

  13. Domain Driven Design has helped standardize what objects exist and what they do, at least between team members working in the same domain/project.

  14. As others have pointed out, choosing appropriate reusable abstractions is just as much a problem with a simple struct in C or parallel arrays in Fortran as it is in an OO language like Java or C++.

    SteveBrooklineMA: Sure, a location is not “part of the person”, but then neither is the person’s name or job. The data has to reside somewhere. From a practical perspective, OO can cut down on the management of resources. For instance, with an external table relating persons and locations, I have two things to manage to get a person’s location: getLocation(personRef, locationTable). The more standard OO call personRef.getLocation() encapsulates the location table, most reasonably behind an interface with a name like “Located” specifying a single method getLocation().

    The standard way to abstract functionality across classes is through interfaces (in Java) or abstract classes with virtual methods (in C++). For instance, I can use an iterator to apply an arbitrary operation to all of the elements of a collection. What supports this is a common interface, “iterator of type T”.

    The functional programming gurus will tell you this is just a brain-damaged hack that approximates the one-true-way to do maps in functional programming, but practically speaking, it does roughly the same thing for factoring code from a programmer’s perspective.
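    A minimal sketch of the two styles Bob describes; Person, Located, and the table layout are assumptions for illustration, not any real library’s API:

```java
import java.util.HashMap;
import java.util.Map;

// The OO style: location is encapsulated behind an interface.
interface Located {
    String getLocation();
}

class Person implements Located {
    private final String name;
    private final String location; // no external table to thread through calls
    Person(String name, String location) { this.name = name; this.location = location; }
    public String getLocation() { return location; }
}

class LocationStyles {
    // The non-OO style: the caller must manage the table as well as the ref.
    static String getLocation(String personRef, Map<String, String> locationTable) {
        return locationTable.get(personRef);
    }
}
```

With the interface, client code that only needs a location can accept any Located, and the table (if one exists at all) stays an implementation detail.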

  15. Bob Carpenter: Sure, a location is not “part of the person”

    That depends on the location! :-)

  16. It really depends on what you call OO.

    If for you OO means Java beans with useless getters and setters, managed by purely procedural services that just happen to provide a contract in the form of a public interface, then OO is everywhere. But that could be done with C structs and header files.

    If you think OO means a mix of code and data, with inheritance and polymorphism used not merely to reuse code but to model the real world, then OO is far less successful than one might think.
    The problem is not functional gurus; the problem is not even OO. The problem is that OO is often viewed as the only true choice.
