I occasionally get comments from people who see “log” in one of my posts and think “log base 10.” They’ll say they get a different result than I do and ask whether I made a mistake. So to eliminate confusion, let me explain my notation.

When I say “log,” I always mean natural log, that is, log base *e*. This is the universal convention in advanced mathematics. It’s also the convention of every programming language that I know of. If I want to use logarithms to a different base, I specify the base as a subscript, such as log_{10} for log base 10.
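For instance, here’s a quick sanity check in Python, purely as an illustration; Python’s `math` module is typical of most languages:

```python
import math

# In Python, as in most languages, the bare "log" is the natural log (base e).
print(math.log(math.e))    # ≈ 1.0, since log means log base e
print(math.log(100))       # ≈ 4.605, not 2
print(math.log(100, 10))   # ≈ 2.0, base supplied explicitly
print(math.log10(100))     # 2.0, via the dedicated base-10 function
```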

The reason logs base *e* are called natural, and the reason they’re most convenient to use, is that base *e* really is natural in a sense. For example, the function *k^x* is its own derivative only when *k* = *e*. And the derivative of log_{k}(*x*) is 1/*x* only when *k* = *e*.

All logarithms are proportional to each other. That is, log_{b}(*x*) = log_{e}(*x*) / log_{e}(*b*). That’s why we can say something is logarithmic without specifying the base. So we might as well pick the base that is easiest to work with, and most people agree that’s base *e*. (There are some exceptions. In computer science it’s often convenient to work with logs base 2, sometimes written lg.)
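The change-of-base identity is easy to verify numerically. Here’s a small Python sketch (the helper name `log_base` is mine, not anything standard):

```python
import math

# Change of base: log_b(x) = log_e(x) / log_e(b), so any two logarithm
# functions differ only by a constant factor.
def log_base(b, x):
    return math.log(x) / math.log(b)

print(log_base(2, 8))     # ≈ 3.0, since 2^3 = 8
print(log_base(10, 1e6))  # ≈ 6.0
print(log_base(10, 50) / log_base(2, 50))  # constant ratio log(2)/log(10)
```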

Logarithms base 10 have the advantage that they’re easy to compute mentally for special values. For example, the log base 10 of 1,000,000 is 6: just count the zeros. So it’s good pedagogy to introduce logs base 10 first. But natural logs are simpler to use for theoretical work, and just as convenient to compute numerically.
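The count-the-zeros trick generalizes to counting digits. A small Python illustration (the `decimal_digits` helper is my own name for the idea):

```python
import math

def decimal_digits(n):
    """Number of decimal digits in a positive integer n."""
    return math.floor(math.log10(n)) + 1

print(math.log10(1_000_000))      # 6.0: count the zeros
print(decimal_digits(1_000_000))  # 7 digits in 1,000,000
print(decimal_digits(999))        # 3
```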

Along these lines, when I use trig functions, I always measure angles in radians, just as advanced mathematics and programming languages do.

As with natural logs, radians are natural too. For example, the derivative of sine is cosine only when you work in radians. If you work in degrees, you pick up a proportionality constant every time you differentiate a trig function.
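Here’s a quick numerical check of that proportionality constant, using a simple central-difference sketch in Python (the `deriv` helper is just for illustration):

```python
import math

# Central-difference approximation to the derivative of f at x.
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
# In radians, the derivative of sine is cosine:
print(deriv(math.sin, x), math.cos(x))  # both ≈ 0.5403

# In degrees, a proportionality constant pi/180 appears:
sin_deg = lambda d: math.sin(math.radians(d))
print(deriv(sin_deg, x) / math.cos(math.radians(x)))  # ≈ pi/180 ≈ 0.01745
```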

Natural logs and radian measure are related: Euler’s formula *e^(ix)* = cos(*x*) + *i* sin(*x*) assumes base *e* and assumes that *x* is measured in radians.
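Here’s a quick check of Euler’s formula in Python, and of how it fails if *x* is treated as degrees (illustrative only):

```python
import cmath
import math

x = 0.7  # an arbitrary angle, in radians
lhs = cmath.exp(1j * x)                  # e^(ix)
rhs = complex(math.cos(x), math.sin(x))  # cos(x) + i sin(x)
print(abs(lhs - rhs))  # essentially zero: the identity holds in radians

# Treating x as degrees breaks the identity:
rhs_deg = complex(math.cos(math.radians(x)), math.sin(math.radians(x)))
print(abs(lhs - rhs_deg))  # far from zero
```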

**Related post**: Relating lg, ln, and log10

Very cute photos! That said, as others have posted, you’d have a lot less to explain to people if you used “ln” instead of “log”. It appears to be unambiguous, and it’s also shorter.

Growing up in the South in the U.S., we used “ln” for base-e, and “log” for either base 10 or base 2. This appears to be pretty common. I have worked with people all over the world since then, and I haven’t run into any trouble or confusion with these conventions.

It’s true that this approach means that “log” is ambiguous. In this case that’s a good thing. It means that each author can pick the convention that works best for what they are working on, rather than people using the “wrong” convention having to noise up their formulas with a bunch of constants.

I’m a mathematician and a programmer, so I use the conventions of mathematics and programming. Both professional communities have standardized on “log” for natural log.

Aside from Excel macros, every programming language I know of uses “log” to mean natural log. And so does every advanced math book I can recall.

I have occasionally seen ln for natural log; I saw it a couple weeks ago in an article written for an undergraduate audience. But I have never seen a math article that writes “log” and assumes the reader knows the author intends this to mean log_{10}. On the other hand, I often see things like “log(1 + x) ≈ x for small x,” which implies natural logs.

Numerous people have commented that they learned to use “log” for log_{10}, that it’s common in their country or profession, etc. But nobody has cited an advanced math source that makes this implicit assumption.

Knuth uses ‘ln’ for natural logs and requires a subscript for ‘log’. See TAOCP V1, p. 23, where it’s defined, or any of the appendix B ‘Index to Notations’.

F Doss: Good example. Knuth’s notation is unambiguous, and a good choice since his books are read by a wide audience with mixed backgrounds. Also, he deals with a variety of number systems.

However, this is not a case of an advanced source writing “log” and implicitly meaning log_10. This is what I’m claiming essentially never happens.

> Euler’s formula exp(ix) = cos(x) + i sin(x) is only true when we use the natural base e and radian measure.

But isn’t this equation true for ALL x, independent of whatever x may represent?

Kevin: it’s true that exp(ix) = cos(x radians) + i sin(x radians) for all x. But if the exponential has any base other than e, it’s not true. And exp(ix) does NOT equal cos(x degrees) + i sin(x degrees).

The MathWorld entry is quite interesting: http://mathworld.wolfram.com/NaturalLogarithm.html. It claims log is used for log_e in the mathematics community, while ln is used in physics and engineering. That is consistent with John’s claim, as long as one agrees that neither physicists nor engineers use advanced mathematics.

To complicate things further, I think there were some changes in the French education system back in 1992. Anyway, French did not have, thank God, that “non-negative” terminology, which is an absolute nightmare (“not non-negative” vs. “negative”) and a logical inconsistency (the weakest concept should always be assumed unless specified).

Alright, obviously this can’t be solved with anecdotes. What we should do is have someone write a script that will scan papers in various languages for the use of log, ln, lg, lb, etc. Then we can get a proper statistical overview of how it is used. For example, I suspect that physical chemistry leans towards log as log_e, since it borrows heavily from physics.

I am curious as to why people don’t like the ln notation. It could be my Asperger’s talking, but explicit seems better than implicit to me.

Pseudonym, I do not fully believe your explanation of sorting complexity. You say: «You need at least lg(n!) + O(1) bits of information to discover this number, and therefore need at least lg(n!) + O(1) comparisons to do the sort».

But, for example, we can check that an array is sorted in n − 1 comparisons, which means that some permutations can be “discovered” in O(n) comparisons, not in O(n lg n) as stated. And who knows, maybe some other permutations need the maximum possible n(n − 1)/2 = O(n²) comparisons to discover.

The problem comes from the statement «A binary comparison gives you one bit of information». I think that is wrong, because the amount of information a comparison gives you depends on the comparisons you have done before.

@Anton, complexity usually means worst-case complexity. In the worst case you need O(n log n) comparisons to sort n numbers with an algorithm like quicksort, and O(n²) with bubble sort. If we want to be picky, the cost of a comparison between two numbers is proportional to the number of bits needed to express those numbers, i.e. log(N) where N is an upper bound on the numbers. Hence, if we are sorting n non-negative integers bounded by N, quicksort would run in O(n log(n) log(N)). Note that in this case radix sort’s complexity would be better, O(n log(N)). It is better than the bound derived by @Pseudonym because radix sort isn’t based on comparisons.
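The lg(n!) bound being discussed is easy to compute numerically. Here’s a rough Python sketch (`lg_factorial` is my own helper, built on the standard `lgamma` function):

```python
import math

# lg(n!) is the information-theoretic lower bound (in bits) on the number
# of comparisons needed to sort n items in the worst case.
def lg_factorial(n):
    # lgamma(n + 1) = ln(n!); divide by ln(2) to convert to base 2
    return math.lgamma(n + 1) / math.log(2)

for n in (10, 100, 1000):
    # The bound grows like n lg n, matching the usual O(n log n) statement.
    print(n, round(lg_factorial(n), 1), round(n * math.log2(n), 1))
```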