Top down, bottom up

Toward the end of his presentation “Don’t fear the Monad”, Brian Beckman makes an interesting observation. He says that early in the history of programming, languages split into two categories: those that start from the machine and add layers of abstraction, and those that start from mathematics and work their way down to the machine.

These two branches are roughly the descendants of Fortran and Lisp respectively. Or more theoretically, these are descendants of the Turing machine and the lambda calculus. By this classification, you could call C# a bottom-up language and Haskell a top-down language.

Programmers tend to write software in the opposite direction of their language’s history. That is, people using bottom-up languages tend to write their software top-down. And people using top-down languages tend to write their software bottom-up.

Programmers using bottom-up languages tend to approach software development analytically, breaking problems into smaller and smaller pieces. Programmers using top-down languages tend to build software synthetically. Lisp programmers in particular are fond of saying they grow the language up toward the problem being solved.

You can write software bottom-up in a bottom-up language or top-down in a top-down language. Some people do. Also, the ideas of bottom-up and top-down are not absolutes. Software development (and language design) is some mixture of both approaches. Still, as a sweeping generalization, I’d say that people tend to develop software top-down in bottom-up languages and bottom-up in top-down languages.

The main idea of Brian Beckman’s video is that algebraic structures like monoids and monads inspire programming models designed to make composition easier. This could explain why top-down languages enable or even encourage bottom-up development. It’s not as clear to me why bottom-up languages lead to top-down development.
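To make the composition point concrete, here is a minimal Haskell sketch (my illustration, not taken from Beckman’s talk): once each small step is wrapped in the `Endo` monoid, whose combining operation is function composition, any list of steps can be glued together with `mconcat`. That gluing-for-free is what makes bottom-up assembly of small pieces feel so natural.

```haskell
import Data.Monoid (Endo(..))

-- Two small, independently testable steps.
double, increment :: Int -> Int
double x = x * 2
increment x = x + 1

-- Endo wraps functions (a -> a) in a monoid whose <> is composition,
-- so mconcat glues any list of steps into a single pipeline.
pipeline :: Endo Int
pipeline = mconcat [Endo double, Endo increment]

main :: IO ()
main = print (appEndo pipeline 5)  -- double (increment 5) = 12
```

The same shape scales up: effectful steps compose the same way under a monad, which is the structure the talk emphasizes.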

Related post: Functional in the small, OO in the large

18 thoughts on “Top down, bottom up”

  1. Agreed, but top down programming is related to OOP I think. Bottom up languages had to invent classes to cope with the real world, whereas top down languages impose their “implicit classes.”

  2. Extending from your point that, in top-down languages, the relationships are easier to see/define at the small end…

    Bottom-up languages seem more focused on the implementation (a procedural list of instructions), so one has to “zoom out” to a high level to understand relationships. Once you understand the module-level design, you can look at each module and define the sub-parts (and their relationships)… and on down until you are writing individual instructions.

    Maybe?

    So, the OO paradigm allows you to define/manage relationships at an intermediate level, and helps span the gap between top-down and bottom-up. Or not :)

  3. or maybe
    top-down languages focus on relationships, so they naturally allow composition (as mentioned)

    conversely, bottom-up/Turing-machine-style languages focus on execution of individual steps, and starting at a higher level helps you better coordinate all the pieces

    OO still bridges these two, by bringing emphasis on relationships to the bottom-up world

  4. A simple hypothesis to account for this phenomenon is that you always have to focus your energy on the “hard part”.
    If your language excels at high-level composability, you need to focus on the low-level, trusting that you have the freedom to make the high-level work out later.
    If your language excels at low-level control, you need to focus on getting the big picture in place so things don’t go wrong, knowing you can make the details work however you want.

  5. Very interesting observation. But it should be noted that the entire top-down effort over so-called TM type PLs is precisely to create an (ad-hoc/custom/domain specific) compositional model. Throw in a few frameworks, and you are in fact working bottom-up (e.g. JEE component development is not top-down.)

  6. This line of thinking has me wondering about APL — it’s contemporaneous with Lisp and Fortran, but it was originally designed to document the behavior of hardware (http://www.computernostalgia.net/articles/apl.htm), and the implementation as a programming language came later.

    Anyways, I am wondering about the thesis of this post in that context.

  7. “He says that early in the history of programming, languages split into two categories: those that start from the machine and add layers of abstraction, and those that start from mathematics and work their way down to the machine. …”

    Early in the history of programming it might have been true that programming languages were tools for communication between machines and mathematics; but now humans connect to machines directly using programming languages, with the help of mathematics. That is the responsibility of programming languages. Mathematics has a different responsibility.

    Functional programming languages (Haskell) are gaining importance due to the advantages of referential transparency on multi-core machines; which is purely a machine issue at the present state of hardware. Otherwise, like the brain, statefulness is the intelligence, which handles parallelism in a much more efficient way, without even demanding referential transparency. Commercially available hardware is far away from our brain.

  8. In Systems Engineering (that’s the broad SysEng, not specifically CS), it would be called “Middle Out”.

    The majority of practice is middle out, but it just doesn’t sound sexy, nor take an extreme position, so it’s rarely heard about. Though there is a tendency to go only one level above one’s current design level, and so fail to see the true big picture. Such cases are a type of premature optimisation…

  9. Something I observed when working for a defense contractor (we built military flight trainers requiring millions of lines of software) was that software design is actually both top down and bottom up, and the result tends to meet in the middle. Requirements analysis and allocation between hardware and software for military flight trainers had a lot of trade-offs to be dealt with. The best designers looked at both the specialized hardware at the bottom and the system requirements and software at the top, and bridged the gap toward the middle.

  10. Is it fair to say that often “top down” languages have some form of REPL and that makes it easier to start at the bottom?

  11. My real question is why did one class of languages become so much more popular than the other class? Are most problems easier to solve in C-like languages, or did they just get up and running in a useful manner first?

  12. Forth is a bottom-up language that usually employs a REPL of some kind, but still encourages bottom-up coding.

  13. Canageek: Here’s a possible explanation.

    Bottom-up languages struggle with complexity. Top-down languages struggle with efficiency. Initially, programs were simple and hardware slow, so bottom-up languages won. Complexity grew and top-down languages became more efficient, but inertia was on the side of bottom-up languages.

    Another explanation is that bottom-up languages have a lower barrier to entry, especially for those who are not comfortable with math.

  14. I offer myself as a data-point, not an expert. There is some truth to the notion that bottom-up languages have a lower barrier to entry, but possibly not just because of math-phobia. My own history started in 1966 on a machine with 12K characters of memory, perhaps 100 times as much disk, and about a 100uSec instruction time. So efficiency was indeed paramount. But since I was in a Business Data Processing class, correctness was also pretty darn important. Pretty much my whole career has been in “Mission Critical” software (albeit sometimes with a mission like “Amuse players and make sure they can’t get a free game”).

    I dispute the notion that I programmed “top down” as much as you might expect. At the beginning of any project, any sane person needs to “start at the top” to envision the general approach. But fairly soon, I would always start planning from the bottom up, most usually starting with the key data structures. Then back toward the top to refine the module structure, and back to the bottom for proof-of-concept implementation. To not do so leads to what a friend called (in the early 1970s) “the magic subroutine problem”. That is, one ponders, designs, and codes from the top down, and as the deadline approaches finds out that the whole task has been pushed into one impossible-to-implement module.
    Today this manifests in the form of metaphor shear (when that magic module is in a framework you can’t fix, and wouldn’t be allowed to try anyway), or failure to scale (yeah, I know, a “high class problem”).

    Abstraction always discards detail. Sometimes the very detail one discards is the one that invalidates the whole approach.

    Now, about “barrier to entry”. I submit that metaphor shear is at least as off-putting to beginners as the limits of even something as arcane and tiny as a PIC (e.g. T.I. Logo’s blatantly false “Out of Ink” error, which might as well have said “This is too magic for _you_ to ever understand”). Can you tell that “priesthoods” give me the willies?

  15. A simple hypothesis to account for this phenomenon is that you always have to focus your energy on the “hard part”.
    If your language excels at high-level composability, you need to focus on the low-level, trusting that you have the freedom to make the high-level work out later.
    If your language excels at low-level control, you need to focus on getting the big picture in place so things don’t go wrong, knowing you can make the details work however you want.

    This applies up to a point. If the big design doesn’t fit the problem, whatever you do in the implementation details will not make the whole software fast or easy to modify.

    And I guess we have the same limitation for bottom-up programming: if the low-level abstractions are not really suited to your needs, they won’t allow for good high-level components.

  16. I guess it makes sense that in bottom-up languages (particularly imperative ones) you have to nail the top abstractions first. In top-down languages you tend to worry about low-level imperative effects (the stateful steps needed to get things done), and the language already helps you compose high-level constructs (monads, associative pieces, etc.).

    Both try to achieve the same thing in the end (high-level components that are easy to reuse, compose, and keep modular). Like Legos. They just differ in implementation details.

    Scala is trying to be the middle ground. For me, functional programming and mathematical abstractions look logical (which is always easier on human brains – we are evolutionarily conditioned to reason logically).

Comments are closed.