Leading digits and quadmath

My previous post looked at a problem that requires repeatedly finding the first digit of k^n where k is a single digit but n may be on the order of millions or billions.

The most direct approach would be to first compute k^n as a very large integer, then find its first digit. That approach is slow, and gets slower as n increases. A faster way is to look at the fractional part of log k^n = n log k and see which digit it corresponds to.

If n is not terribly big, this can be done in ordinary precision. But when n is large, multiplying log k by n and taking the fractional part pushes digits that were far from the decimal point into significance. So for very large n, you need extra precision. I first did this in Python using SymPy, then switched to C++ for more speed. There I used the quadmath library for gcc. (If n is big enough, even quadruple precision isn’t enough. An advantage of SymPy over quadmath is that the former offers arbitrary precision. You could, for example, set the precision to 10 more than the number of digits in n in order to retain 10 significant figures in the fractional part of n log k.)

The quadmath.h header file needs to be wrapped in an extern "C" declaration. Otherwise gcc will give you misleading error messages.

The 128-bit floating point type __float128 has twice as many bits as a double. The quadmath functions have the same name as their standard math.h counterparts, but with a q added on the end, such as log10q and fmodq below.

Here’s code for computing the leading digit of kn that illustrates using quadmath.

#include <cmath>
extern "C" {
#include <quadmath.h>
}

__float128 logs[11];

// Cache log10 of 2 through 10. (A bare loop cannot appear at file
// scope, so the initialization is wrapped in a function to call once
// at startup.)
void init_logs()
{
    for (int i = 2; i <= 10; i++)
        logs[i] = log10q(i + 0.0);
}

int first_digit(int base, long long exponent)
{
    __float128 t = fmodq(exponent*logs[base], 1.0);
    for (int i = 2; i <= 10; i++)
        if (t < logs[i])
            return i-1;
    return 9; // never reached: t < 1 = logs[10]
}

The search always finds a digit because t is less than 1 = log10q(10); the final return statement is never reached and exists only to satisfy the compiler.

Caching the values of log10q saves repeated calls to a relatively expensive function. So does using the search at the bottom rather than computing powq(10, t).

The linear search at the end is more efficient than it may seem. First, it only searches a list of length 9. Second, because of Benford’s law, the leading digits are searched in order of decreasing frequency, i.e. most inputs will cause first_digit to return early in the search.
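Here’s a minimal driver for the code above; the exponent is an arbitrary example.

#include <cstdio>

int main()
{
    init_logs();
    printf("%d\n", first_digit(2, 1000000000LL)); // leading digit of 2^1000000000
    return 0;
}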

When you compile code using quadmath, be sure to add -lquadmath to the compile command.
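For example, if the code were saved in a file called leading_digit.cpp (a name made up here):

g++ -o leading_digit leading_digit.cpp -lquadmath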

Related posts

Benford’s law and SciPy
Leading digits of factorials


C++ at Facebook

Andrei Alexandrescu said in a panel discussion last week that when he joined Facebook two years ago, maybe 90% of the programmers wrote PHP and 10% C++. Now there are roughly as many C++ programmers as PHP programmers.

One reason Facebook is shifting more work to C++ is to reduce operating costs such as power consumption per user.

Andrei’s remarks are about 24 minutes into this video.


C++ Renaissance

Dynamic language developers who are concerned about performance end up writing pieces of their applications in C++. So if you’re going to write C++ anyway, why not write your entire application in C++?

Library writers develop in C++ so that their users won’t have to. That makes a lot of sense. But if you’re developing your own application, not a library, maybe it would be better to write everything in C++ instead of wrapping C++ in something else.

A few years ago an immediate objection would have been that C++ is hard to use. But with the advent of C++11, that’s not as true as it once was. C++ has gained many of the conveniences traditionally associated with other languages. (Update: for more on this theme, see Not Your Father’s C++.)

Dynamic languages are designed for programmer productivity[1] at the expense of efficiency. If you can afford that trade-off, and quite often you can, then don’t worry about C++. But if you’re using a dynamic language and C++, maybe it would be easier to just stick to C++. Of course the decision depends on many factors — how much of the application needs to be efficient, the size and skill of your team, etc. — but I suspect more projects will decide to do everything in C++ because of its new features.

***

[1] “Productivity” implicitly assumes productivity at a certain set of tasks. If your tasks fall into the set of things a language was designed to support, great. But a “productivity” language may not improve your productivity if you don’t meet the language’s intended profile.

Related posts:

What most C++ programmers do
Maybe C++ hasn’t jumped the shark after all


a < b < c

In Python, and in some other languages, the expression a < b < c is equivalent to a < b and b < c. For example, the following code outputs “not increasing.”

a, b, c = 3, 1, 2
if a < b < c:
    print "increasing"
else:
    print "not increasing"

Now consider the analogous C code.

int a = 3, b = 1, c = 2;
if (a < b < c)
    printf("increasing\n");
else
    printf("not increasing\n");

This outputs “increasing”!

If you don’t know C, you’d expect the output to be “not increasing.” If you do know C, you might expect the code not to compile.

Here’s what’s going on. The code first asks whether a < b. This evaluates to 0 because a < b is false. Then it asks whether 0 < c, which is true.
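To get the intended behavior in C you have to write the conjunction out explicitly. A minimal fix of the example above:

if (a < b && b < c)
    printf("increasing\n");
else
    printf("not increasing\n");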

If the code above were part of a C++ program, it would still compile and still produce the same result. However, the compiler would generate a warning about implicitly converting a Boolean result to an integer.


Using C# like a scripting language

Clift Norris wrote a clever little batch file csrun.bat several years ago. I thought I’d posted it here, but apparently not. If you have a C# program in foo.cs, you can type csrun foo.cs to compile and run the program.

The batch file doesn’t do much at all, but it might change how you think about a C# program. The C# code is still compiled, but since the compilation step is hidden, it feels more like an interpreted language.

When someone says they like interpreted languages, maybe what they really mean is that they enjoy running code quickly without the overhead of starting up an IDE, compiling the code, navigating to the directory containing the compiled executable, etc. This has nothing to do with whether a language is compiled or interpreted.

@echo off
REM : Compile and run a C# source file.
REM : The C# compiler (csc.exe) must be in your PATH.
REM : The command line argument is expected to be something like foo.cs

if "%1"=="" goto USAGE

csc.exe /nologo /out:%1.exe  %1
if ERRORLEVEL 1 goto EXIT

%1.exe	%2  %3  %4  %5
goto EXIT

:USAGE
echo You must specify an argument representing the C# file you want to run.

:EXIT

This batch file does not set references; you’d need to modify it if you want to reference an assembly.

Update: CsharpRepl is a REPL (read-evaluate-print-loop) that lets you write C# at the command line as you would most scripting languages. CsharpRepl is part of Mono and works cross-platform. Thanks to MikeG in the comments.

Related post: Visual Studio 2010 is a pig


Maybe C++ hasn’t jumped the shark after all

A couple years ago I wrote a blog post Has C++ jumped the shark? I wondered how many people would care about the new C++ standard by the time it came out. I doubted that it would matter much to me personally.

… if something is hard to do in C++, I just don’t use C++. I don’t see a new version of C++ changing my decisions about what language to use for various tasks.

The new standard is out now, and I find it more interesting than I thought I would. I’d still rather not use C++ for tasks that are easier in another language I know, but sometimes I don’t have that option.

When I heard about plans to add lambdas and closures to C++ I commented here that “it seems odd to add these features to C++.”  Now I think lambdas (anonymous functions) and closures may be the most important new features for my use.

One of my favorite features in Python is the ability to define functions on the fly. If I’m writing a function and need to create a new function to pass as an argument, then I can define that function on the spot. By contrast, C++ has not let you define a function inside a function. Now you can.

(You still cannot define a named function inside a C++ function. However, you can create anonymous functions inside another function. If you save that anonymous function to a variable, then you’ve effectively created a named function.)

Here are a couple examples of how it can be very convenient to define little functions on the fly. First, optimization software typically provides methods for minimizing functions. If you want to maximize a function f(x), you define a new function g(x) = -f(x) and minimize g. Second, root-finding software typically solves equations of the form f(x) = 0. If you want to solve f(x) = c, you create a new function g(x) = f(x) – c and find where g is zero. In both cases, the function g is so trivial that there’s no need to give it a name. And more importantly, you’d rather define this auxiliary function right where you need it than interrupt your context by defining it outside.
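With lambdas, g never needs a name. Here’s a sketch; minimize and find_root are hypothetical stand-ins for whatever optimization and root-finding routines your library provides:

#include <functional>

// Hypothetical library routines, shown only as declarations.
double minimize(std::function<double(double)> g);
double find_root(std::function<double(double)> g);

double maximize(std::function<double(double)> f)
{
    // Maximize f by minimizing its negative.
    return minimize([f](double x) { return -f(x); });
}

double solve(std::function<double(double)> f, double c)
{
    // Solve f(x) = c by finding a zero of f(x) - c.
    return find_root([f, c](double x) { return f(x) - c; });
}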

Anonymous functions can also have associated data obtained from the context where they’re defined. That’s called a closure because it encloses some context with the function. This is very often necessary in mathematical software development. For example, say you have a function of two variables f(x, y) and you want to integrate f with respect to x, holding y constant. You might then think of f as a function of one variable with one constant, but the compiler sees it as a function of two variables and will not let you pass it to an integration routine expecting a function of one variable. One way to solve this problem is to use function objects. This isn’t difficult, but it requires a lot of ceremony compared to using closures.
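And here’s a sketch of the integration example; again integrate is a hypothetical routine expecting a function of one variable:

#include <functional>

double integrate(std::function<double(double)> g, double a, double b); // hypothetical

double f(double x, double y); // the function of two variables

double integrate_in_x(double y, double a, double b)
{
    // The lambda captures y from the surrounding scope, so the
    // integration routine sees a function of x alone.
    return integrate([y](double x) { return f(x, y); }, a, b);
}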

For specifics of how to use lambdas and closures in C++, see Ajay Vijayvargiya’s article Explicating the new C++ standard (C++0x), and its implementation in VC10.

Related posts:

Why do C++ folks make things so complicated?
Random number generation in C++ TR1
Regular expressions in C++ TR1


Why do C++ folks make things so complicated?

This morning Miroslav Bajtoš asked “Why do C++ folks make things so complicated?” in response to my article on regular expressions in C++. Other people asked similar questions yesterday.

My response has two parts:

  1. Why I believe C++ libraries are often complicated.
  2. Why I don’t think it has to be that way.

Why would someone be using C++ in the first place? Most likely because they need performance or fine-grained control that they cannot get somewhere else. A Ruby programmer, for example, can make a design decision that makes code 10% slower but much easier to use. “Hey, if you want the best performance possible, why are you using Ruby? Didn’t you come here because you wanted convenience?” But the C++ programmer can’t say that. It’s not turtles all the way down. Often C++ is the last human-generated language in a technology stack before you hit metal.

This weekend The Register quoted Herb Sutter saying “The world is built on C++.” The article goes on to mention some of the foundational software written in C++.

Apple’s Mac OS X, Adobe Illustrator, Facebook, Google’s Chrome browser, the Apache MapReduce clustered data-processing architecture, Microsoft Windows 7 and Internet Explorer, Firefox, and MySQL — to name just a handful — are written in part or in their entirety with C++.

Certainly there is a lot of software implemented in higher-level languages, but those high-level languages are almost always implemented in C or C++. When there’s no lower-level language to appeal to, you have to offer a lot of options, even if 90% of users won’t need those options.

On the other hand, that doesn’t mean all C++ libraries have to be complicated. The argument above says that the lowest layers have to be complicated and they’re written in C++.  But why couldn’t the next layer up also be written in C++?

Sometime in the 1990s I ran across an article called “Top C++.” I’ve tried unsuccessfully since then to find a copy of it. As I recall, the article proposed dividing C++ conceptually into two languages: Top C++ and Bottom C++. Explicit memory management, for example, would be in Bottom C++. Top C++ would be a higher-level language. You could, for example, tell a compiler that you intend to write Top C++ and it could warn you if you use features designated as Bottom C++.

Of course you could slip from Top C++ into Bottom C++ if you really needed to, and that’s the beauty of using one language for high-level and low-level code. Application code in C++ would use Top C++ features by default, but could deliberately drop down a level when necessary. A section of Bottom C++ could be marked with comments or compiler pragmas and justified in a code review. Instead of having to cross a language barrier, you simply cross a convention barrier.

I thought this was a great idea. I’ve followed this approach throughout my career, writing high-level C++ on top of low-level C++ libraries. To some extent I put on my Bottom C++ hat when I’m writing library code and my Top C++ hat when I’m writing applications.

But most of the world has decided that’s not the way to go. If you’re writing C++, it might as well be Bottom C++. Instead of Top C++, write VB or some other higher-level language. There’s very little interest in high-level C++.

I imagine this decision has more to do with management than technology. It’s easier to have people either write C++ or not. If you’re writing C++, then use any language feature any time. I suppose this is a stable equilibrium and that the Top/Bottom C++ partition is not.

Related posts:

Software architecture as a function of trust
Programming language subsets


C# math gotchas

C# has three mathematical constants that look like constants in the C header file float.h. Two of these are not what you might expect.

The constant double.MaxValue in C# looks like the constant DBL_MAX in C, and indeed it is. Both give the maximum finite value of a double, which is on the order of 10^308. This might lead you to believe that double.MinValue in C# is the same as DBL_MIN in C or that double.Epsilon in C# is the same as DBL_EPSILON. If so, you’re in for a surprise.

The constants DBL_MAX and double.MaxValue are the same because there is no ambiguity over what “max” means: the largest finite value of a double. But DBL_MIN and double.MinValue are different because they minimize over different ranges. The constant DBL_MIN is the smallest positive value of a normalized double. The constant double.MinValue in C# is the smallest (i.e. most negative) value of a double and is the negative of double.MaxValue. The difference between DBL_MIN and double.MinValue is approximately the difference between 10^-308 and -10^308, between a very small positive number and a very large negative number.

C has a constant DBL_EPSILON for the smallest positive double precision number x such that 1 + x does not equal 1 in machine precision. Typically a double has about 15 figures of precision, and so DBL_EPSILON is on the order of 10^-16. (For a more precise description, see Anatomy of a floating point number.)

You might expect double.Epsilon in C# to correspond to DBL_EPSILON in C. I did, until a unit test failed on some numerical code I was porting from C++ to C#. But in C# double.Epsilon is the smallest positive value a (denormalized) double can take. It is similar to DBL_MIN, except that double.Epsilon is the smallest positive value of a double without the normalization requirement. The constant DBL_MIN is on the order of 10^-308 while double.Epsilon is on the order of 10^-324 because it allows denormalized values. (See Anatomy of a floating point number for details of denormalized numbers.)

Incidentally, the C constants DBL_MAX, DBL_MIN, and DBL_EPSILON equal the return values of max, min, and epsilon for the C++ class numeric_limits<double>.
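Here’s a quick check of that correspondence, as a sketch:

#include <cfloat>
#include <iostream>
#include <limits>

int main()
{
    std::cout << (DBL_MAX == std::numeric_limits<double>::max()) << '\n';         // prints 1
    std::cout << (DBL_MIN == std::numeric_limits<double>::min()) << '\n';         // prints 1
    std::cout << (DBL_EPSILON == std::numeric_limits<double>::epsilon()) << '\n'; // prints 1
    return 0;
}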

To summarize,

  • double.MaxValue in C# equals DBL_MAX in C.
  • double.MinValue in C# equals -DBL_MAX in C.
  • double.Epsilon is similar to DBL_MIN in C, but orders of magnitude smaller.
  • C# has no analog of DBL_EPSILON from C.

One could argue that the C# names are better than the C names. It makes sense for double.MinValue to be the negative of double.MaxValue. But the use of Epsilon was a mistake. The term “epsilon” in numeric computing has long been established and is not limited to C. It would have been better if Microsoft had used the name MinPositiveValue to be more explicit and not conflict with established terminology.

Related posts:

C# random number generation
Math library functions that seem unnecessary
Floating point numbers are a leaky abstraction


Math library functions that seem unnecessary

This post will give several examples of functions included in the standard C math library that seem unnecessary at first glance.

Function log1p(x) = log(1 + x)

The function log1p computes log(1 + x).  How hard could this be to implement?

log(1 + x);

Done.

But wait. What if x is very small? If x is 10^-16, for example, then on a typical system 1 + x = 1 because machine precision does not contain enough significant bits to distinguish 1 + x from 1. (For details, see Anatomy of a floating point number.) That means that the code log(1 + x) would first compute 1 + x, obtain 1, then compute log(1), and return 0. But log(1 + 10^-16) ≈ 10^-16. This means the absolute error is about 10^-16 and the relative error is 100%. For values of x larger than 10^-16 but still fairly small, the code log(1 + x) may not be completely inaccurate, but the relative error may still be unacceptably large.

Fortunately, this is an easy problem to fix. For small x, log(1 + x) is approximately x. So for very small arguments, just return x. For larger arguments, compute log(1 + x) directly. See details here.
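A minimal sketch of that fix; a real library implementation chooses the threshold carefully and handles mid-range arguments with more finesse:

#include <cmath>

double log1p_sketch(double x)
{
    // For |x| below about 1e-8, two terms of the Taylor series
    // log(1 + x) = x - x^2/2 + x^3/3 - ... suffice in double precision.
    if (std::fabs(x) < 1e-8)
        return x - 0.5 * x * x;
    return std::log(1.0 + x);
}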

Why does this matter? The absolute error is small, even if the code returns a zero for a non-zero answer. Isn’t that OK? Well, it might be. It depends on what you do next. If you add the result to a large number, then the relative error in the answer doesn’t matter. But if you multiply the result by a large number, your large relative error becomes a large absolute error as well.

Function expm1(x) = exp(x) – 1

Another function that may seem unnecessary is expm1. This function computes e^x – 1. Why not just write this?

exp(x) - 1.0;

That’s fine for moderately large x. For very small values of x, exp(x) is close to 1, maybe so close to 1 that it actually equals 1 to machine precision. In that case, the code exp(x) - 1 will return 0 and result in 100% relative error. As before, for slightly larger values of x the naive code will not be entirely inaccurate, but it may be less accurate than needed. The solution is to compute exp(x) – 1 directly without first computing exp(x). The Taylor series for exp(x) is 1 + x + x^2/2 + … So for very small x, we could just return x for exp(x) – 1. Or for slightly larger x, we could return x + x^2/2. (See details here.)
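A sketch along the same lines, with the same caveats as before:

#include <cmath>

double expm1_sketch(double x)
{
    // For |x| below about 1e-8, the truncated series x + x^2/2
    // is accurate to double precision.
    if (std::fabs(x) < 1e-8)
        return x + 0.5 * x * x;
    return std::exp(x) - 1.0;
}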

Functions erf(x) and erfc(x)

The C math library contains a pair of functions erf and erfc. The “c” stands for “complement” because erfc(x) = 1 – erf(x). The function erf(x) is known as the error function and is not trivial to implement. But why have a separate routine for erfc? Isn’t it trivial to implement once you have code for erf? For some values of x, yes, this is true. But if x is large, erf(x) is near 1, and the code 1 - erf(x) may return 0 when the result should be small but positive. As in the examples above, the naive implementation results in a complete loss of precision for some values and a partial loss of precision for other values.
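Both functions are also available in C++11 as std::erf and std::erfc. Here’s a small demonstration of the precision loss; the printed values are what I’d expect on a typical system:

#include <cmath>
#include <cstdio>

int main()
{
    double x = 10.0;
    printf("%g\n", 1.0 - std::erf(x)); // prints 0: erf(10) rounds to exactly 1
    printf("%g\n", std::erfc(x));      // prints about 2.088e-45
    return 0;
}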

Functions Γ(x) and log Γ(x)

The standard C math library has two functions related to the gamma function: tgamma that returns Γ(x) and lgamma that returns log Γ(x). Why have both? Why can’t the latter just use the log of the former? The gamma function grows extremely quickly. For moderately large arguments, its value exceeds the capacity of a computer number. Sometimes you need these astronomically large numbers as intermediate results. Maybe you need a moderate-sized number that is the ratio of two very large numbers. (See an example here.) In such cases, you need to subtract lgamma values rather than take the ratio of tgamma values. (Why is the function called “tgamma” and not just “gamma”? See the last paragraph and first comment on this post.)
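To make that concrete, a binomial coefficient is a ratio of factorials whose numerator and denominator can overflow a double even when the ratio itself is representable. A sketch using lgamma:

#include <cmath>

// C(n, k) = Γ(n+1) / (Γ(k+1) Γ(n-k+1)), computed via logs to avoid overflow.
// For example, binomial(1000, 500) is about 2.7e299, even though 1000!
// itself far exceeds the largest double.
double binomial(double n, double k)
{
    return std::exp(std::lgamma(n + 1.0) - std::lgamma(k + 1.0)
                    - std::lgamma(n - k + 1.0));
}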

Conclusion

The standard C math library distills a lot of experience. Some of the functions may seem unnecessary, and so they are for some arguments. But for other arguments these functions are indispensable.

Related posts:

What’s so hard about finding a hypotenuse?
Floating point numbers are a leaky abstraction
Comparing math.h on POSIX and Windows systems


The dark matter of programmers

Kate Gregory said in an interview that she likes to call C++ programmers the dark matter of the software development world. They’re not visible — they don’t attend conferences and they don’t buy a lot of books — and yet there are a lot of them.

I’ve thought the same about those who write code for games and embedded systems. I’ve hardly met anyone who works in either area, but there is a huge amount of such software and someone has to be writing it.

Game developers and embedded systems folk often work in C++, so my observation would overlap with Kate Gregory’s.

Related posts:

Termites and programmers
Disagreeing with Torvalds on C++
Complementary perspectives on the design of C++


C++0x overview

Scott Meyers gives an overview of the new C++ standard in his interview on Software Engineering Radio.

On the one hand, some of the new features sound very nice. For example, C++ will gain

  • Type inference. This will make it easier to work with complex (i.e. template) type definitions.
  • Lambda expressions. This will make working with the standard library much easier.
  • Raw string literals. This will make regular expressions easier to read and write.
  • R-value references. This will make some code more efficient.
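Here’s a small sketch of the first three features in action; the names are made up for illustration:

#include <algorithm>
#include <regex>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> words = {"alpha", "beta", "gamma"};

    // Type inference: auto deduces the iterator type.
    auto it = std::find_if(words.begin(), words.end(),
        // Lambda expression: an anonymous predicate defined in place.
        [](const std::string& w) { return w.size() > 4; });

    // Raw string literal: no doubled backslashes in the regex.
    std::regex ends_in_a(R"(\w+a$)");

    return it == words.end() ? 1 : 0;
}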

On the other hand, the new standard will be about twice the size of the previous standard. A complex language is about to become a lot more complex.

The show notes have several links to much more information on the new standard.

Related posts:

Has C++ jumped the shark?
Two perspectives on the design of C++


Interview with Clojure author

Simple-talk has an interview with Rich Hickey, author of the programming language Clojure (pronounced “closure”). Clojure is a dialect of Lisp designed to run on top of the Java Virtual Machine. The language is also being ported to the .NET framework as Clojure CLR.

Two things stood out to me in the interview: a comparison of Lisp with C++, and a discussion of complexity.

You’ll often hear a programmer argue that language X is better than language Y.  To support their argument, they’ll say they wrote a program in Y, then wrote it in X in less time. For example, someone might argue that Ruby is better than Python because they were able to rewrite their web site using Ruby in half the time it took to write the original Python version. Such arguments are weak because you can write anything faster the second time. The first implementation required analysis and design that the second implementation can reuse entirely or at least learn from.

Rich Hickey argues that he can develop programs in Lisp faster than in C++. He offers as support that he first wrote something in Lisp and then took three times longer to rewrite it in C++. This is just a personal anecdote, not a scientific study, but it carries more weight than the usual anecdote because he’s claiming the first language was more efficient than the second.

In his discussion of incidental complexity, complexity coming from one’s tools rather than from the intrinsic complexity of the problem being solved, Hickey says

I think programmers have become inured to incidental complexity, in particular by confusing familiar or concise with simple. And when they encounter complexity, they consider it a challenge to overcome, rather than an obstacle to remove. Overcoming complexity isn’t work, it’s waste.

The phrase “confusing familiar or concise with simple” is insightful. I never appreciated the arguments about the complexity of C++ until I got a little distance from the language; C++ was so familiar I didn’t appreciate how complex it is until I had a break from writing it. Also, simple solutions are usually concise, but concise solutions may not be simple. I chuckle whenever I hear someone say a problem was simple to solve because they were able to solve it in one line — one long stream of entirely mysterious commands.

Thanks to Omar Gomez for pointing out the interview article.

Related posts:

A little simplicity goes a long way
I disagree with Torvalds about C++
Baklava code
