Golden ratio and special angles

The golden ratio comes up in unexpected places. This post will show how the sines and cosines of some special angles can be expressed in terms of the golden ratio and its complement.

Recall the golden ratio is

φ = (1 + √5)/2

and the complementary golden ratio is

φ’ = (1 – √5)/2.

The derivation begins by solving the trigonometric equation

sin 2θ = cos 3θ

in two different ways. To make the solution unique, we look for the smallest positive solution.

First, note that the sine of an angle is the cosine of its complement, i.e.

sin(x) = cos(π/2 – x).

So our equation can be written as

cos(π/2 – 2θ) = cos 3θ.

The smallest positive solution satisfies π/2 – 2θ = 3θ, and so θ = π/10 or 18°.

Now let’s solve the same equation another way. First, we use the double and triple angle identities.

sin 2θ = 2 sin θ cos θ
cos 3θ = 4 cos³ θ – 3 cos θ

Since sin 2θ = cos 3θ, set the two right-hand sides above equal to each other and divide by cos θ. Then we have

4 cos² θ – 3 = 2 sin θ.

Substitute 1 – sin² θ for cos² θ and the result is a quadratic equation in sin θ:

4 sin² θ + 2 sin θ – 1 = 0.

From the quadratic formula, the solutions are sin θ = (-1 ± √5)/4. The positive solution is

sin θ = (-1 + √5)/4 = -φ’/2.

Setting the solutions obtained from both methods equal to each other,

sin π/10 = sin 18° = -φ’/2.
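As a quick numerical sanity check (my addition, not part of the original derivation), we can confirm this value in Python:

```python
import math

phi = (1 + math.sqrt(5)) / 2    # golden ratio
phi_c = (1 - math.sqrt(5)) / 2  # complementary golden ratio

# sin(pi/10) should equal -phi'/2 = (sqrt(5) - 1)/4
assert abs(math.sin(math.pi / 10) - (-phi_c / 2)) < 1e-12
print(math.sin(math.pi / 10))  # roughly 0.309017
```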

We can now use common trig identities and the above result to express the sines and cosines of other angles in terms of φ. Switching to degrees will make the following a little easier to read.

We know sin 18° = -φ’/2, and so cos 72° = -φ’/2. We can use the sum angle identities to express the sine and cosine of every multiple of 18° in terms of φ. We could also apply the half angle identities to express the sine and cosine of 9° in terms of φ, and then by the addition formulas extend this to all multiples of 9°.
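Here's a short sketch (my own check, verifying numerically a few of the identities this reasoning produces, such as cos 36° = φ/2):

```python
import math

phi = (1 + math.sqrt(5)) / 2    # golden ratio
phi_c = (1 - math.sqrt(5)) / 2  # complementary golden ratio

# Each triple: angle name, numerical value, expression in phi or phi'.
checks = [
    ("sin 18°", math.sin(math.radians(18)), -phi_c / 2),
    ("cos 72°", math.cos(math.radians(72)), -phi_c / 2),
    ("cos 36°", math.cos(math.radians(36)), phi / 2),
    ("sin 54°", math.sin(math.radians(54)), phi / 2),
]
for name, lhs, rhs in checks:
    assert abs(lhs - rhs) < 1e-12
    print(f"{name} = {lhs:.6f}")
```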

This post is an expanded form of a derivation given in The Divine Proportion.

Would you rather have a chauffeur or a Ferrari?

Dan Bricklin commented in a recent interview on how the expectations of computers from science fiction have not panned out. The point is not that computers are more or less powerful than expected, but that we have wanted to put computers to different uses than expected.


Fictional computers such as the HAL 9000 from 2001: A Space Odyssey were envisioned as chauffeurs. You tell the computer what to do and then go along passively for the ride. Bricklin says it looks like people would rather have a Ferrari than a chauffeur. We want our computers to be powerful tools, but we want to be actively involved in using them.

I’d refine that to say we either want to actively use our computers, or we want them to be invisible. Maybe there’s an uncanny valley between these extremes. Most people are blissfully ignorant of the computers embedded in their cars, thermostats, etc. But they don’t want some weird HAL 9000-Clippy hybrid saying “Dave, it looks like you’re updating your résumé. I’ll take care of that for you.”

Update: See Chauffeurs and Ferraris revisited.

Programs, not just projects

My frustration with personal productivity systems like GTD is that they’re all about projects and tasks. They leave out a third category: programs. GTD thinks of a project as something that can be broken into a manageable number of tasks and scratched off a list. But programs go on indefinitely and cannot be divided into a small number of one-time tasks.

I’m using the word “program” as in an “exercise program” or a “research program.” (I could think of my exercise program as a project, but it’s one I hope not to complete for a few more decades.) Sometimes there is a neat hierarchy where programs spawn off projects that can be divided into tasks. But sometimes you just have programs and tasks.

One of my frustrations with managing software development in an academic environment was the large number of programs disguised as projects. (Sorry, I know it’s confusing to talk about “programs” in the context of software development and not mean computer instructions.) You can’t manage programs as if they were projects. For example, you can’t talk about what happens “after the project is done” if it’s not really a project but a never-ending program. You have to either acknowledge that a program is really a program, or find some way to turn it into a finite project.

Connecting Fibonacci and geometric sequences

Here’s a quick demonstration of a connection between the Fibonacci sequence and geometric sequences.

The famous Fibonacci sequence starts out 1, 1, 2, 3, 5, 8, 13, … The first two terms are both 1, then each subsequent term is the sum of the two preceding terms.

A generalized Fibonacci sequence can start with any two numbers and then apply the rule that subsequent terms are defined as the sum of their two predecessors. For example, if we start with 3 and 4, we get the sequence 3, 4, 7, 11, 18, 29, …
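The rule is easy to express in code. Here's a small sketch (the function name gen_fib is my own) that generates the first n terms from any two starting values:

```python
def gen_fib(a, b, n):
    """Return the first n terms of the generalized Fibonacci
    sequence starting with a and b."""
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-2] + terms[-1])
    return terms[:n]

print(gen_fib(1, 1, 7))  # [1, 1, 2, 3, 5, 8, 13]
print(gen_fib(3, 4, 6))  # [3, 4, 7, 11, 18, 29]
```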

Let φ be the golden ratio, the positive solution to the equation 1 + x = x². Let φ’ be the conjugate golden ratio, the negative solution to the same quadratic equation. Then

φ = (1 + √5)/2

φ’ = (1 – √5)/2.

Now let’s look at a generalized Fibonacci sequence starting with 1 and φ. Then our terms are 1, φ, 1 + φ, 1 + 2φ, 2 + 3φ, 3 + 5φ, … Let’s see whether we can simplify this sequence.

Now 1 + φ = φ² because of the quadratic equation φ satisfies. That tells us the third term equals φ². So our sequence starts out 1, φ, φ². This is looking like a geometric sequence. Could the fourth term be φ³? In fact, it is. Since the fourth term is the sum of the second and third terms, it equals φ + φ² = φ(1 + φ) = φ(φ²) = φ³. You can continue this line of reasoning to prove that the generalized Fibonacci sequence starting with 1 and φ is in fact the geometric sequence 1, φ, φ², φ³, …

Now start a generalized Fibonacci sequence with 1 and φ’. Because φ’ is also a solution to 1 + x = x², it follows that the sequence 1, φ’, 1 + φ’, 1 + 2φ’, 2 + 3φ’, … equals the geometric sequence 1, φ’, (φ’)², (φ’)³, …
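Numerically (a quick floating-point check of my own, rather than the exact argument above), both sequences do follow the geometric pattern:

```python
import math

# For each root r of 1 + x = x^2, the generalized Fibonacci
# sequence starting 1, r should satisfy term k = r^k.
for r in [(1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2]:
    a, b = 1.0, r
    for k in range(2, 12):
        a, b = b, a + b          # generalized Fibonacci step
        assert abs(b - r**k) < 1e-6
print("both sequences are geometric")
```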


All languages equally complex

This post compares complexity in spoken languages and programming languages.

There is a theory in linguistics that all human languages are equally complex. Languages may distribute their complexity in different ways, but the total complexity is roughly the same across all spoken languages. One language may be simpler in some aspect than another but more complicated in some other respect. For example, Chinese has simple grammar but a complex tonal system.

Even if all languages are equally complex, that doesn’t mean all languages are equally difficult to learn. An English speaker might find French easier to learn than Russian, not because French is simpler than Russian in some objective sense, but because French is more similar to English.

All spoken languages are supposed to be equally complex because languages reach an equilibrium between at least two forces. Skilled adult speakers tend to complicate languages by looking for ways to be more expressive. But children must be able to learn their language relatively quickly, and less skilled speakers need to be able to use the language as well.

I wonder what this says about programming languages. There are analogous dynamics. Programming languages can be relatively simpler in some way while being relatively complex in another way. And programming languages become more complex over time due to the demands of skilled users.

But there are several important differences. Programming languages are part of a complex system of language, standard libraries, idioms, tools, etc. It may make more sense to speak of a programming “system” to make better comparisons, taking into account the language and its environment.

I do not think that all programming systems are equally complex. Some are better designed than others. Some are more appropriate for a given task than others. Some programming systems achieve simplicity by sacrificing efficiency. Some abstractions leak less than others.

On the other hand, I imagine the levels of complexity are more similar when comparing programming systems rather than just comparing programming languages.  Larry Wall said something to the effect that Perl is ugly so you can write beautiful programs in it. I think there’s some truth to that. A language can always be small and elegant by simply not providing much functionality, forcing the user to implement that functionality in application code.

See Larry Wall’s article Natural Language Principles in Perl for more comparisons of spoken languages and programming languages.


Plain Python

Perl is cool, much more so than Python. But I prefer writing Python.

Perl is fun to read about. It has an endless stream of features to discover. Python by comparison is kinda dull. But the aspects of a language that make it fun to read about do not necessarily make it pleasant to use.

I wrote Perl off and on for several years before trying Python. People would tell me I should try Python and every six months or so I’d skim through a Python book. My impression was that Python was prosaic. It didn’t seem to offer any advantage over Perl, so I stuck with Perl. (Not that I was ever very good at Perl.)

Then I read an article by Bruce Eckel saying that he liked Python because he could remember the syntax. He said that despite teaching Java and writing books on Java, he could never remember the syntax for opening a file in Java, for example. But he could remember the corresponding Python syntax. I would never have picked up on that by skimming books. You’ve got to actually use a language a while to know how memorable the syntax is. But  I had used Perl enough to know that I could not remember its syntax without frequent use. Memorable syntax increases productivity. You don’t have to break your train of thought as often to reach for a reference book.
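To illustrate the kind of thing Eckel meant (my example, not his), the Python syntax for working with a file fits in your head:

```python
# Write a small file, then read it back. The with-statement
# closes the file automatically -- nothing else to remember.
with open("notes.txt", "w") as f:
    f.write("memorable syntax")

with open("notes.txt") as f:
    text = f.read()

print(text)  # memorable syntax
```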

I stand by my initial impression that Python is plain, but I now think that’s a strength. It just gets out of my way and lets me get my work done. I’m sure  Perl gurus can be extremely productive in Perl. I tried being a Perl guru, and I never made it. I wouldn’t say I’m a Python guru, but I also don’t feel the need to be a guru to use Python.

Python code is not cool in a line-by-line sense, not in the way that an awesomely powerful Perl one-liner is cool. Python is cool in more subtle ways.


High productivity, low productivity

Greg Wilson pointed out an article on productivity by Jason Cohen that makes a lot of sense. Here’s a story that Jason tells to set up his point.

You get in your car at home and head out towards your mother’s house 60 miles away. … You hit traffic during the first half of the trip, so after 30 miles you’ve averaged only 30 miles per hour. Now the traffic opens up and you can go as fast as you want. The question is: How fast do you have to go during the second half of the trip such that you’ve averaged 60 mph over the entire trip?

The key is that you cannot go fast enough to make up for lost time. Your average will be less than 60 mph no matter how fast you go for the second half of the trip. His conclusion: “It’s amazing how periods of low velocity wash away gains of high velocity.” The title of his post is about how to double your productivity, but about one third of the article is devoted to explaining why even larger gains are not possible, i.e. his observation that unproductive periods limit potential productivity gains. As he explains, “the thing to do is eliminate the low-velocity stuff.”
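The arithmetic behind the story is worth making explicit: the first 30 miles at 30 mph already consume the full hour that a 60 mph average over 60 miles allows. A quick sketch (my own, not from the article):

```python
def average_speed(second_half_mph):
    """Average speed over a 60-mile trip whose first
    30 miles were driven at 30 mph."""
    total_hours = 30 / 30 + 30 / second_half_mph
    return 60 / total_hours

for v in [60, 120, 600, 6000]:
    print(f"{v:>5} mph second half -> {average_speed(v):.2f} mph average")

# The average approaches, but never reaches, 60 mph.
```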

The best way to be more productive may be to concentrate on “what” more than “how.” Concentrate on what to do, and more importantly, what not to do. There may be more to gain by adding to the “not to do” list than by being better at what’s on the “to do” list.

Highlights from Reproducible Ideas

Here are some of my favorite posts from the Reproducible Ideas blog.

Three reasons to distrust microarray results
Provenance in art and science
Forensic bioinformatics (continued)
Preserving (the memory of) documents
Programming is understanding
Musical chairs and reproducibility drills
Taking your code out for a walk

The most popular and most controversial was the first in the list, reasons to distrust microarray results.

The emphasis shifts from science to software development as you go down the list, though science and software are intertwined throughout the posts.

[Update: Reproducible Ideas has gone away.]

Blogging about reproducible research

I’m in the process of folding into the new site. I will be giving the .org domain name to the folks now running the .net site. (See the announcement for a little more information.)

As part of this process, I’m winding down the blog that I started last July as part of the site. I plan to keep the links to my old posts valid, but I do not know whether the new site will have a new blog. I wrote about reproducible research on this blog before starting the site, and I will go back to writing about reproducible research here. (See reproducibility in the tag cloud.)

I wanted to point out an article by Steve Eddins posted this morning: Reproducible research in signal processing. His article comments on the article by Patrick Vandewalle, Jelena Kovačević, and Martin Vetterli announced recently on

Readers interested in reproducible research may also want to take a look at the Science in the open blog.


Cinco de Mayo and the world’s largest cake

Today is Cinco de Mayo, the holiday that celebrates the Mexican army’s defeat of French forces at the Battle of Puebla on May 5, 1862.

Cinco de Mayo is unusual in that it is a Mexican holiday more popular in the United States than in Mexico. According to Wikipedia,

While Cinco de Mayo has limited or no significance nationwide in Mexico, the date is observed in the United States and other locations around the world as a celebration of Mexican heritage and pride.

Cinco de Mayo is a bigger holiday in Texas than Texas Independence Day. (Readers unfamiliar with Texas history may be surprised to learn that Texas was once a sovereign nation. The Republic of Texas existed for nearly a decade between gaining independence from Mexico in 1836 and joining the United States in 1845.)

Texas Independence Day, March 2, usually goes virtually unnoticed. However in 1986, the sesquicentennial, there was a big celebration in Austin. Activities included baking the world’s largest cake. The left-overs were distributed to the dorms at the University of Texas and so I had some of the cake. Quite a bit, actually. You might think that a cake baked for the purpose of setting a world record would be barely edible, but it was actually pretty good lemon cake.

A surprising theorem in complex variables

Here’s a strange theorem I ran across last week. I’ll state the theorem then give some footnotes.

Suppose f(z) and g(z) are two functions meromorphic in the plane. Suppose also that there are five distinct numbers a₁, …, a₅ such that the solution sets {z : f(z) = aᵢ} and {z : g(z) = aᵢ} are equal for each i. Then either f(z) and g(z) are equal everywhere or they are both constant.


A complex function of a complex variable is meromorphic if it is differentiable except at isolated singularities. The theorem above applies to functions that are (complex) differentiable in the entire plane except at isolated poles.

The theorem is due to Rolf Nevanlinna. There’s a whole branch of complex analysis based on Nevanlinna’s work, but I’d not heard of it until last week. I have no idea why the theorem is true. It doesn’t seem that it should be true; the hypothesis seems far too weak for such a strong conclusion. But that’s par for the course in complex variables.

Update: I edited this post in response to the first comment below to make the theorem statement clearer.

More: Applied complex analysis

Rules for computing happiness

Some time ago I ran across a blog post Al3x’s rules for computing happiness by Alex Payne. I agree with the spirit of the list, though I disagree at least to some extent with most of the points. It seems to me that the underlying idea of the list is to set some boundaries on how you use your computer. Instead of just asking the easiest way to accomplish the immediate task, think of longer term (unintended) consequences.

Here’s Alex’s first rule:

Use as little software as possible.

You could interpret the first rule at least a couple ways. First, don’t use software when a low-tech solution works as well or better. Second, don’t buy or download hundreds of different applications. Learn how to use a few applications well. I agree with both interpretations.

Here are the second and third rules.

Use software that does one thing well. Do not use software that does many things poorly.

If that means having hundreds of little applications, then there’s a tension with the first rule. I suppose it matters how you define a “thing.” If your “thing” is broad enough, such as for example editing images, then there’s no conflict. I don’t think Alex would suggest using thousands of little utilities for image editing rather than using a package like Photoshop or GIMP. I imagine he’s referring to overly ambitious applications, such as software that tries to be a word processor, email client, Lisp interpreter, floor wax, and dessert topping.

Here are a few more of the rules I appreciate.

  • Use a plain text editor that you know well.  Not a word processor, a plain text editor.
  • Keep as much as possible in plain text. Not Word or Pages documents, plain text.
  • Pay for software that’s worth paying for, but only after evaluating it for no less than two weeks.
  • Buy as large an external display as you can afford if you’ll be working on the computer for more than three hours at a time.

The emphasis on plain text files may seem reactionary, but there are still numerous advantages to plain text. Word has its advantages as well. Choose wisely.

I particularly like his advice to pay for software that’s worth paying for. I understand the attraction of software that is “free as in beer,” especially at work. Even though the cost of commercial software doesn’t come out of my pocket, the bureaucratic hassle and delay of corporate purchasing make free software more attractive.  But some free software gives a false economy because the software is difficult to use. The software may be free up front, but there’s an opportunity cost for using it, a tax you pay as long as you use it.