Partial functions and total functions

I was thinking about writing a post about entire functions and it occurred to me that I should say something about how entire functions are unrelated to total functions. But then I realized they’re not unrelated. I intend to say something about entire functions in a future post.

Update: See The ring of entire functions.

Partial functions

A total function is a function defined for every element of its domain. This terminology is more common in computer science than in math. A lot of what we call functions in software are actually partial functions because they’re not defined everywhere. For example, the square root function sqrt in C takes an argument of type double and returns a value of type double, but it’s not defined for all inputs of type double, only for those inputs that are non-negative.
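
For example, here is a minimal C sketch of what happens outside that region. On implementations where math_errhandling includes MATH_ERRNO (most common desktop platforms), sqrt returns a NaN and reports a domain error:

    #include <errno.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        errno = 0;
        double y = sqrt(-1.0);           /* argument outside the meaningful region */
        printf("sqrt(-1.0) = %f\n", y);  /* typically prints nan or -nan */
        if (errno == EDOM)               /* set only if math_errhandling includes MATH_ERRNO */
            printf("domain error\n");
        return 0;
    }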

Mathematically speaking it makes no sense to refer to a function that is not defined on every point of its domain. A function’s domain is by definition the set of points where it is defined, so a total function is just a function, and a partial function is not even a function.

And yet it is convenient to refer informally to functions that are undefined for some inputs. This is common as dirt in software development, but it occurs in math too.

Type checking

The distinction between total functions and partial functions is more subtle than it may seem.

For example, it would be more precise to say that the sqrt function is defined on non-negative numbers of type double. There’s no non-negative floating point type in C, but in principle we could introduce one. Sounds like a good idea. Why hasn’t C done this?
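
C has no such type, but here is a rough sketch of what a library-level approximation might look like. The names nonneg_double, nn_make, and nn_sqrt are hypothetical, and note that the invariant is enforced by a runtime check, not by the type checker:

    #include <assert.h>
    #include <math.h>

    /* Hypothetical non-negative floating point type. The invariant is
       enforced by the constructor at run time; C's type system cannot
       enforce it at compile time. */
    typedef struct { double value; } nonneg_double;

    nonneg_double nn_make(double x) {
        assert(x >= 0.0);                       /* runtime check */
        return (nonneg_double){ .value = x };
    }

    nonneg_double nn_sqrt(nonneg_double x) {
        /* safe by construction: x.value >= 0 */
        return (nonneg_double){ .value = sqrt(x.value) };
    }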

It’s easy to see that a function is defined on (some) arguments of type double and returns (some) values of type double. It’s not so obvious when a function is defined only on a more restricted type, or returns only values of a more restricted type. For example, suppose you want to apply sqrt to the output of a function that you wrote. If your function takes in a double and squares it, this is fine: it may ostensibly return a double, but in fact it returns a non-negative double. But if your function cubes its input, then its output may not be a valid input to sqrt.
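
To make the square-versus-cube point concrete, here is a small sketch (the helper names are mine):

    #include <math.h>

    double square(double x) { return x * x; }      /* range: non-negative (for non-NaN inputs) */
    double cube(double x)   { return x * x * x; }  /* range: can be negative */

    double f(double x) { return sqrt(square(x)); } /* fine for any real x */
    double g(double x) { return sqrt(cube(x)); }   /* NaN whenever x < 0 */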

The domain of the square root function is simple, but what about functions that are undefined on more complicated sets? For example, the gamma function is undefined at 0 and at the negative integers. The gamma function is undefined at -4, for instance, but it’s defined at -4 + ε for any tiny nonzero ε. Imagine how hard it might be to verify that a function can return -4 + ε but never exactly -4.
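
Even writing down a membership test for the gamma function’s domain takes a little care. Here is one way to do it in C (the function name is made up):

    #include <math.h>
    #include <stdbool.h>

    /* True if x is in the domain of the gamma function, i.e. x is not zero
       and not a negative integer. */
    bool in_gamma_domain(double x) {
        return !isnan(x) && !(x <= 0.0 && x == floor(x));
    }

Checking a single given number is easy. The hard part is knowing that every value a program might compute and pass along avoids those points exactly.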

Or suppose you have a function that computes the real part of complex numbers for which the Riemann zeta function is zero. If you could prove that the program always returns either 1/2 or a negative even integer, you would earn fame and fortune. This is an extreme example, but it is true that determining the precise domain or range of a function can require proving theorems that are difficult if not impossible.

Dependent type theory

We’re at a fork in the road. We either give up on more restrictive types and live with functions that may not be defined for every element in their domain, or we implement far more sophisticated types and type checking. In other words, we get into dependent type theory.

If we choose the fork leading to dependent types, there are a lot of new costs and new benefits. The benefits are more obvious than the costs. They include, for example, having a compiler reject code that could pass a negative value into sqrt. But this comes at the expense of making compilers far more difficult to write. It also restricts the range of programs that can be written, or raises the level of skill required to write them.
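
C cannot express this, but as a rough sketch of the idea in Lean, a dependently typed language, a square root that demands a proof of non-negativity might look like the following (the name safeSqrt is mine):

    import Mathlib

    -- A square root that can only be applied when the caller supplies a proof
    -- that the argument is non-negative. A call site that cannot produce the
    -- proof fails to compile; there is no runtime check to forget.
    noncomputable def safeSqrt (x : ℝ) (_h : 0 ≤ x) : ℝ := Real.sqrt x

    example : ℝ := safeSqrt 2 (by norm_num)       -- accepted: 0 ≤ 2 is provable
    -- example : ℝ := safeSqrt (-2) (by norm_num) -- rejected: 0 ≤ -2 is not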

Mathematical formalism

Mathematicians informally speak of functions not being defined everywhere in their domain even though it would be more accurate to say that the domain excludes certain points. This kind of talk is fine among friends where there’s an implicit understanding of what detail is being left out and the trust that the details will be made more explicit when necessary. In a more hostile environment, like Twitter, pedants will gleefully pounce on such lapses in rigor.

You can make partial functions rigorous by defining them to be relations rather than functions. A relation between sets A and B is a subset of their Cartesian product A × B. A function is a relation such that for every a in A there exists a unique pair (a, b) in the relation. A partial function is a relation in which for every a in A there is at most one pair (a, b) in the relation.
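
For example, real square root corresponds to the relation

    { (a, b) ∈ ℝ × ℝ : b² = a and b ≥ 0 },

which is a partial function on ℝ: each non-negative a is paired with exactly one b, and each negative a is paired with nothing.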

A “multi-valued function” is strictly speaking an oxymoron, but formally it too is a relation, not a function. As with partial functions, the terminology “multi-valued function” is informal. Typically there is a way to formalize multi-valued functions so that they are actual (single-valued) functions with a different codomain, but one may wish to avoid this extra formality when it is not necessary.
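
For example, the multi-valued complex square root can be recast as an ordinary function whose codomain is the power set of ℂ:

    sqrt : ℂ → P(ℂ),   sqrt(z) = { w ∈ ℂ : w² = z }.

Each z is sent to the set of its square roots (two of them, except for z = 0), so viewed this way it is a perfectly ordinary single-valued function.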

4 thoughts on “Partial functions and total functions”

  1. “…The benefits [of dependent type theory] are more obvious than the costs….”

    To every software consumer I have ever encountered, it is the other way round.

    My catchphrase is “Strong typing is like violence; if it isn’t solving all your problems, you must just not be using enough of it.”

    But it is as you say: bugs should not compile; and in some subject-matter domains, it is possible to get very close to that ideal.

    The rest is the Booch Principle: “Add more classes”. This is not just a cheap luxury; for business software, it is absolutely essential to the authorization model.

  2. Mathematicians have a much more interesting idea of types than computer science theory people who tend to adopt awkward ideas from formal logic and pretend they are being more precise. One problem is that in programming we don’t have abstract functions, we have algorithms. An algorithm may implement a mathematical function but it is not one. Another problem is that mathematical types are within systems, not sets, so 2*2 may be 4 or 0 depending on whether we are in Z or Z_4.

    Sqrt as a program function is absolutely defined on all doubles. For example, the C library returns -nan from sqrt(-1), which is totally reasonable.

  3. “There’s no non-negative floating point type in C, but in principle we could introduce one. Sounds like a good idea. Why hasn’t C done this?”

    I think there is a simple answer for that: number types in C correspond to hardware-supported types in CPUs. CPUs usually implement signed and unsigned integers, but only signed floating point numbers.

    The reason is probably that unsigned integers exist because they allow for larger numbers, but that isn’t really needed for floating point numbers.
