In a hologram, information about each small area of the image is scattered throughout the whole hologram. You can’t say this little area of the hologram corresponds to that little area of the image. At least that’s what I’ve heard; I don’t really know how holograms work.
I thought about holograms the other day when someone was describing some source code with deeply nested templates. He told me “You can’t just read it. You can only step through the code with a debugger.” I’ve run into similar code. The execution sequence of the code at run time is almost unrelated to the sequence of lines in the source code. The run time behavior is scattered through the source code like image information in a hologram.
Holographic code is an advanced anti-pattern. It’s more likely to result from good practice taken to an extreme than from bad practice.
Somewhere along the way, programmers learn the “DRY” principle: Don’t Repeat Yourself. This is good advice, within reason. But if you wring every bit of redundancy out of your code, you end up with something like Huffman-encoded source. In fact, DRY is very much a compression algorithm. In moderation, it makes code easier to maintain. But carried too far, it makes reading your code like reading a zip file. Sometimes a little redundancy makes code much easier to read and maintain.
Code is like wine: a little dryness is good, but too much is bitter or sour.
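To make the trade-off concrete, here is a contrived sketch (all names invented) of the same two reports written twice: once “fully DRY,” where one function is steered by a mode flag and grows a special case per report, and once with a little deliberate redundancy, where each function reads top to bottom.

```python
# Over-DRY version: one function handles every report, steered by a flag.
# Each new report type adds another branch inside the shared loop.
def render_report_dry(title, rows, kind):
    lines = [title.upper() if kind == "summary" else title]
    for r in rows:
        if kind == "summary":
            lines.append(f"{r['name']}: {r['total']}")
        else:
            lines.append(f"{r['name']} | {r['total']} | {r.get('notes', '')}")
    return "\n".join(lines)

# Redundant version: the loop is repeated, but each function is
# self-contained and can be read without simulating the flag.
def render_summary(title, rows):
    lines = [title.upper()]
    for r in rows:
        lines.append(f"{r['name']}: {r['total']}")
    return "\n".join(lines)

def render_detail(title, rows):
    lines = [title]
    for r in rows:
        lines.append(f"{r['name']} | {r['total']} | {r.get('notes', '')}")
    return "\n".join(lines)
```

Both versions produce the same output; the difference is only in how much of the program you must hold in your head to read either report.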
Note that functional-style code can be holographic just like conventional code. A pure function is self-contained in the sense that everything the function needs to know comes in as arguments, i.e. there is no dependence on external state. But that doesn’t mean that everything the programmer needs to know is in one contiguous chunk of code. If you have to jump all over your code base to understand what’s going on anywhere, you have holographic code, regardless of what style it was written in. However, I imagine functional programs would usually be less holographic.
Related post: Baklava code
I’ve seen this with frameworks (esp. ASP.Net) and a lot of inheritance. Part of the problem is that there’s a lot of code without source (the ASP.Net stuff), and when the debugger hits that, all bets are off; the code just executes. Figuring out what code executes next can be an adventure.
Lovely attempt at dry humour. Calling sh*tty code ‘holographic.’ Like it.
What a nice metaphor! I wanted to add that besides making code hard to understand, too much DRYness can actually make maintenance more difficult. In order to get rid of repeating code you incorporate more structure into a program. And now you can no longer make quick fixes here and there; often you must also refactor your abstractions if they prove to be too restrictive.
Excellent metaphor
jorjun: “Dry” humor? Nice pun.
It is an interesting metaphor. A hologram works by using the same type of light to illuminate the hologram as was used to create it. If you think of the object that was captured in the hologram as the concept of the program, then you can say the light used to create it was the programmer’s intention.
Unlike a holographic projection however, when you read the program later most likely you do not have that light available. So is this a situation of an anti-pattern, or poor tools?
This is a great point. Sometimes a little redundancy can go a long way, and trying to condense things down can actually make code worse. This makes even more sense if you broaden your scope of what counts as “code”.
As an example, I write/maintain GUI test scripts. One of the scripts tried to combine similar tests into single functions which can be called for each individual dialog. The dialogs are similar and behave very similarly to one another, but the match isn’t exactly one-to-one: there are some functional differences. The problem is that these tests are very similar but also just different enough that these functions eventually became major hacks with tons of special cases. They’re awful to maintain, and making even small changes can cause headaches.
Meanwhile, similar scripts where the tests are just written out each time “from scratch” without any clever slicing and dicing are much easier to maintain, even if a single change in testing requirements means changes in several places. It’s more work but also easier, straightforward work.
In this situation I like to say “This code is so DRY it chafes.”
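The test-script situation described above might be sketched like this (a hypothetical illustration; the dialog structure and names are invented). One shared checker grows a keyword flag for every dialog’s quirks, while the “from scratch” version is longer but self-contained:

```python
# Shared checker: every dialog difference becomes another flag,
# and reading it means simulating all the flags at once.
def check_dialog(dialog, *, has_apply=True, validates_input=True,
                 closes_on_ok=True):
    assert dialog["title"]
    if has_apply:
        assert "Apply" in dialog["buttons"]
    if validates_input:
        assert dialog["validator"] is not None
    if closes_on_ok:
        assert dialog["ok_closes"]

# Written out per dialog: some duplication, but each test states
# exactly what this one dialog should do.
def check_print_dialog(dialog):
    assert dialog["title"] == "Print"
    assert "Apply" not in dialog["buttons"]   # this dialog has no Apply button
    assert dialog["validator"] is not None
    assert dialog["ok_closes"]
```

When requirements change, the per-dialog tests change in several places, but each change is straightforward, which matches the commenter’s experience.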
If the business concerns of two pieces of code are not well aligned — they should not be coupled, even if the code is similarly structured.
All that is cool.
Until you have to fix something in repeated code, and you will (of course) forget some place where it should be fixed…
I prefer losing some time on reading (and once you’re used to high abstraction, you don’t lose that much readability) to days of bug hunting…
An interesting property of holograms is that the whole image is stored in every part of the hologram. So when you cut the hologram in half, you can still see the whole image.
I doubt that holographic code still runs/compiles when cut in half. I guess the analogy ends here…
This phenomenon also happens with prose. I had an adviser who would wring every bit of redundancy out of his writing. He would peephole-optimize his sentences until they said exactly what he meant. They were maximally precise and efficient. Unfortunately, this meant that they made perfect sense to someone who already knew what he was trying to say, but to everyone else they were unreadable.
Christian: I’ve heard that holograms have the property you mention, but I don’t understand. There must be some catch, such as some loss of resolution. Otherwise every hologram could be made as small as you like by cutting it in half repeatedly, and this would give unlimited data compression.
Actually, I think functional code tends to have an execution path that does jump around a lot. To give an extremely simple case, if you write (map (2 *) [1, 2, 3, 4, 5]) the execution path will ping-pong between the recursion (iteration) defined by map and the application of (2 *) [*]. And in a lazy functional language the execution path is extremely difficult to see from the code itself, except for very simple examples.
Having said that, in the context of functional languages, knowing the execution path of the code is less important to understanding than it is for imperative languages.
[*] modulo optimisations, of course.
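The ping-pong execution path the commenter describes can be seen even in Python, whose `map` is lazy in Python 3. This trace (invented for illustration) shows control bouncing between the iteration machinery and the mapped function:

```python
# Record the order in which the producer and the mapped function run.
trace = []

def double(x):
    trace.append(f"apply {x}")
    return 2 * x

def source():
    for x in [1, 2, 3]:
        trace.append(f"yield {x}")
        yield x

# map is lazy: each element is pulled from source(), then passed to double(),
# so execution alternates between the two definitions.
result = list(map(double, source()))
```

After running, `trace` alternates `yield 1, apply 1, yield 2, apply 2, …`, even though `double` and `source` sit in separate, self-contained blocks of source code.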
When I read your title, I thought you were using holographic as in a “holographic will”. :-)
Redundancy is especially important in written language. I argue that the reason our written language is not more parsimonious with characters is precisely to build in redundancy. If you look at some constructed languages, every character in every word is significant, with much bigger problems in the case of misspellings or errors in copying. I wonder if something similar is at work in DNA.
My recollections of experiments with holograms are that the “entire” image is indeed present in a fragment of a hologram, but from a much more limited range of perspectives. Whereas in a large hologram you can view the scene from many angles (making holograms with magnifying lenses in the image fun) in the fragment it is more restricted. But it has been over thirty years and my recollection may be bad. I think you could get the same effect by simply covering all but a small part of a hologram.
I’ve seen code so holographic, when you run it, it shows Princess Leia saying, “Help me Obi-Wan.”
I think george is on to something here. This is a situation of poor tooling. Or I will just say that it is an anti-pattern only because we use tools with severe limitations in capturing the programmer’s intention and we do not account for these limitations. One solution to this would be Knuth’s (misunderstood and incorrectly scoffed at by some) literate programming.
IIRC, holograms do indeed lose resolution when cut down. TANSTAAFL
Which leads to my point in re redundancy. Another Heinlein story has a group of “super intelligent” folks saving humanity from ourselves. One aspect of this group is that they communicate in a spoken language with “all redundancy squeezed out”. So what mere mortals would take a paragraph or so to say, they convey with a short raucous squawk. Over the (analog) radio… At that point my 12-year-old self cried B.S.
Sadly my prescience at that age did not foretell a brilliant career in science.
Splitting the hologram will degrade it: http://en.wikipedia.org/wiki/Holography#Fidelity_of_the_reconstructed_beam
My software experience started in 1976, writing in machine code – putting ‘0’s and ‘1’s into core stores. My word, how fast things have moved, but there is still one underlying theme that this article brings out.
That is that functionally accurate, efficient and maintainable software can only be obtained by imposing structured Software Engineering principles, irrespective of the environment in which it is produced or the tools used.
Clever, convoluted and obscure software will fail or turn out to be unaffordable.
Mike Swaim, you know we can step into ASP.NET library code, don’t you?
I’m going to agree that one problem here is poor tooling. However, my feeling is that DRY code is still very desirable, and the main problem is when we sacrifice other principles for it. An example I’ve run into is when you have similar code paths that are static in relation to some condition inside a loop (or in my case deeply nested inside a much larger control structure), but somebody has felt it was necessary to add a conditional in order to remove the redundancy between the two operations. The problem is that we have sacrificed functional cohesion for redundancy and ended up with logical cohesion, which I find to be about the worst kind of cohesion. This is similar to the test case example somebody mentioned earlier.
The same results are obtained by over-decomposition of classes. I’ve seen some systems that were unbelievably difficult to maintain/understand because the original design took object decomposition as the de facto right thing to do and had zillions of small but related classes.
In support of the original post: the first goal of writing code should be maintainability, closely followed by intelligent design for performance (design for performance only those pieces that MUST be performant, and delay all other performance enhancement until empirical evidence proves it necessary). It’s a shame these things aren’t at the top of many programmers’ lists.
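The conditional-inside-a-loop situation described in the comment above can be sketched as follows (a hedged illustration with invented names). The merged version decides, on every iteration, a question whose answer never changes during the loop:

```python
# Logical cohesion: two unrelated operations merged behind a mode flag
# because they happen to share loop structure.
def process(items, mode):
    total = 0
    for item in items:
        if mode == "sum":          # re-tested every iteration,
            total += item          # though it never changes mid-loop
        else:                      # mode == "count_positive"
            if item > 0:
                total += 1
    return total

# Functional cohesion: each function does one thing. The duplicated
# iteration is cheap compared to reading the flag-driven version.
def sum_items(items):
    return sum(items)

def count_positive(items):
    return sum(1 for item in items if item > 0)
```

The split functions repeat the idea of “loop over items,” but each one can be named, tested, and changed without reasoning about the other.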
I’ve noticed that a lot of polymorphic object-oriented code has this property. The behavior of the full code base is scattered into little nuggets of functionality within each object. I can’t decide whether this is a good thing or a bad thing, because when I write polymorphic object-oriented code, it seems to just work.
Framework-based code like Windows MFC code is also holographic. The problem with this kind of code is that I don’t have any idea what the framework is doing. I decorate a big black-box framework with little nuggets of functionality, and either it just works or else it fails mysteriously (generally by failing ever to invoke some nugget) and I have no idea why.
I guess it comes down to how well you understand the behavior of the system.
I found this article interesting. On occasion I have over-enthusiastically embraced the DRY principle and ended up regretting it, usually because some multiply-referenced piece of code requires an extra argument or something and I can’t easily change all the referencing code. I guess you could argue that my original function design was short-sighted, but I rarely know all possible uses of a piece of code when I first write it.
I’ve also noticed frameworks (e.g. jQuery) that rely on string descriptors to couple disparate bits of code together; these only fail at runtime – I sort of thought that strong typing was supposed to cure that.
@John: That’s true. The holographic image is analog, so you’re subject to the (molecular?) resolutions both of the imaging device and of the storage medium — you really can’t do it infinitely: each smaller version of the hologram is, as you suggest, at a lower resolution, simply because the original is pixellated at the level of the number of molecules either containing it, or in the imaging medium which first captured it (whichever is lower).
As far as code is concerned: if your object model is reasonable, so will your code be: at that point, this kind of abstraction becomes moot: if the object model is obvious, inherited behaviour will also be obvious (unless you’re calling a hundred levels of inheritance, or something – in which case it’s not very well designed anyway).
I think the place to make any concessions you’re going to make is probably in the object model: yes, you could probably make every object in the system a subclass of some base object, to the nth degree, but you just don’t… if your object model is sensible, then you shouldn’t run into this problem and the code can be as DRY as you want it to be without loss of comprehensibility.
The term “holographic code” is clearly meant to be derogatory, but I’m not convinced it’s an apt metaphor. I’ve worked with difficult-to-figure-out frameworks, but I’ve also worked with object-oriented programs which, while they were full of little nuggets of functionality, were nevertheless reasonably easy to understand even if it was hard to manually follow the execution path. The virtual methods allowed you to do just what you’re supposed to: reason about the system without looking at the details of each class. I think the conclusion must be that there’s good code and bad code, and object-orientedness is not necessarily causal.
IMO it’s not a matter of tooling – I profoundly dislike languages, libraries, toolkits, or anything else that is only usable with a very smart tool.
IMO it’s a problem of architecture. Most often you can structure your application so that most of it is easily understandable and only a small part is complex, and hide the small part behind an interface which tells _what_ it does, but not how. If you base your application on a small set (maybe 10-20 or fewer) of interfaces, it becomes understandable.
I have seen many apps (and am currently working on one) where this thing about relying on a small set of essential abstractions is completely disregarded. In fact, it seems to me most programmers are oblivious to the advantages such an architecture provides, from a maintainability and readability point of view. Key thing to understand, though: interfaces, not base classes.
@AC: Correct, about the architecture stuff: I agree completely. Part of this is trying to get programmers to put aside their “..but this is so COOL!” impulse and sacrifice a bit of esoteric brilliance for reality. Ultimately, it’ll end up pretty cool anyway, if it’s done properly.
I like the idea of thinking twice about reducing redundancy to the max, but the holographic metaphor does not match at all. Instead it seems inverse to me. In an ideal hologram any part contains the whole. What you talk about is just fragmentation. Even to understand a part of it you have to visit many far-distant places, which is really just the inverse of the holographic idea.
So we should call the fragmented code anti-holographic.
In some way I think this fragmentation is normal, and it is also normal for DNA. But there are some patterns to overcome the problems. In the case of DNA there is a special molecular folding of the DNA strand that allows fast unfolding of needed parts out of the whole clew while keeping the rest folded. There was a great article about it but I forgot where.
Fragmentation in order to reduce redundancy is also a performance issue. For exactly that reason relational databases lack support for the inheritance pattern, because it adds quite some complexity and indirection which would break the “simple” design of relational databases.
So in the end it is a trade-off: minimized redundancy vs. fragmentation, like speed vs. space, or the golden rule of mechanics.