Andrzej Odrzywolek recently posted an article on arXiv showing that you can obtain all the elementary functions from just one two-argument function, eml(x, y), and the constant 1. Equations in the paper’s supplement show how to bootstrap addition, subtraction, multiplication, and division from the eml function.
See the paper and supplement for how to obtain constants like π and functions like square and square root, as well as the standard circular and hyperbolic functions.
It may seem like a cute exercise in the theory of computation, but consider the implications for deep neural networks. Stacking emls and 1s is exact (to the degree that log and exp are computed exactly), as opposed to the universal approximation theorem, which only guarantees approximation to within a given epsilon.
How can a function that requires subtraction be said to bootstrap subtraction?
@David: Imagine you’re given a bunch of black boxes, each taking two inputs and producing one output. If each box computes the function eml(x, y), you can wire the boxes together to create a network that takes in x and y and returns x – y.
Note that we’re assuming the eml box is a solid unit. We are not given three boxes: one for exp, one for log, and one for subtraction.
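To make the black-box picture concrete, here is a toy analogue in Python. It deliberately does not use eml itself (its defining formula and the composition identities are in the paper and its supplement); instead the single black box is ordinary subtraction, and wiring copies of that one box together with the constant 1 recovers negation and addition in the same spirit.

```python
# Toy analogue of the black-box argument. This is NOT the eml construction
# from the paper; the sole primitive here is plain subtraction, box(x, y) = x - y.
# The point is only that copies of one two-input box, plus the constant 1,
# can be wired into a network that computes other operations.

def box(x, y):
    """The single black box we allow ourselves to use."""
    return x - y

def zero():
    # 1 - 1 = 0, so the constant 0 is one box fed the constant 1 twice.
    return box(1, 1)

def negate(x):
    # 0 - x = -x
    return box(zero(), x)

def add(x, y):
    # x - (-y) = x + y
    return box(x, negate(y))

assert negate(5) == -5
assert add(3, 4) == 7
```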
Can the functions learnt by neural networks be represented using EML and 1, and what would the advantages of that be?
Thanks; that makes more sense.