Toffoli gates are all you need

Landauer’s principle gives a lower bound on the amount of energy it takes to erase one bit of information:

E ≥ log(2) kB T

where kB is the Boltzmann constant and T is the ambient temperature in Kelvin. The lower bound applies no matter how the bit is physically stored. There is no theoretical lower limit on the energy required to carry out a reversible calculation.
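As a numerical sanity check, here is the Landauer bound evaluated in Python; the 300 K room-temperature figure is my assumption, not from the text:

```python
from math import log

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact, 2019 SI definition)
T = 300              # assumed room temperature in Kelvin

# Landauer's bound: minimum energy to erase one bit
landauer_bound = log(2) * k_B * T   # about 2.87e-21 joules
```

That is roughly twenty orders of magnitude below the energy scale of everyday electronics.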

In practice the energy required to erase a bit is around a billion times greater than Landauer’s lower bound. You might reasonably conclude that reversible computing isn’t practical since we’re nowhere near the Landauer limit. And yet in practice reversible circuits have been demonstrated to use less energy than conventional circuits. We’re far from the ultimate physical limit, but reversibility still provides practical efficiency gains today.

A Toffoli gate is a building block of reversible circuits. A Toffoli gate takes three bits as input and returns three bits as output:

T(a, b, c) = (a, b, c XOR (a AND b)).

In words, a Toffoli gate flips its third bit if and only if the first two bits are ones.

A Toffoli gate is its own inverse, and so it is reversible. This is easy to prove. If ab = 1, then the third bit is flipped. Applying the Toffoli gate again flips the bit back to what it was. If ab = 0, i.e. at least one of the first two bits is zero, then the Toffoli gate doesn’t change anything.

There is a theorem that any Boolean function can be computed by a circuit made of only NAND gates. We’ll show that you can construct a NAND gate out of Toffoli gates, which shows any Boolean function can be computed by a circuit made of Toffoli gates, which shows any Boolean function can be computed reversibly.

To compute NAND, i.e. ¬(a AND b), send (a, b, 1) to the Toffoli gate. The third bit of the output will contain the NAND of a and b.

T(a, b, 1) = (a, b, ¬(a AND b))
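The gate, its self-inverse property, and the NAND construction are all easy to check exhaustively. A small sketch in Python:

```python
def toffoli(a, b, c):
    """Toffoli gate: returns (a, b, c XOR (a AND b))."""
    return (a, b, c ^ (a & b))

def nand(a, b):
    """NAND built from a Toffoli gate: the third output bit is NOT (a AND b)."""
    return toffoli(a, b, 1)[2]

# The Toffoli gate is its own inverse: applying it twice is the identity
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)
```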

A drawback of reversible computing is that you may have to send in more input than you’d like and get back more output than you’d like, as we can already see from the example above. NAND takes two input bits and returns one output bit. But the Toffoli gate simulating NAND takes three input bits and returns three output bits.

Quantum Y2K

I’m skeptical that quantum computing will become practical. However, if it does become practical before we’re prepared, the world’s financial system could collapse. Everyone agrees we should prepare for quantum computing, even those of us who doubt it will be practical any time soon.

Quantum computers exist now, but the question is if and when a cryptographically relevant quantum computer (CRQC) is coming. At the moment, a quantum computer cannot factor 21 without cheating (i.e. omitting circuits that one knows a priori won’t be needed). But that could change suddenly. And some believe quantum computers could quickly go from being able to factor numbers with two digits to being able to factor numbers with thousands of digits (i.e. breaking RSA encryption) without much incremental transition in between.

The move to post-quantum encryption may be a lot like Y2K, fixing vast amounts of 20th century software that represented years with two digits. Y2K turned out to be a big nothingburger, but only because the world spent half a trillion dollars on preparation to make sure it would be a big nothingburger.

Programmers in the 1970s obviously knew that the year 2000 was coming, but they also knew that they needed to conserve bytes at the time. And they assumed, reasonably but incorrectly, that their software would all be replaced before two-digit dates became a problem.

Programmers still need to conserve bytes, though this is less obvious today. Quantum-resistant signatures and encryption keys are one or two orders of magnitude bigger. This takes up bandwidth and storage space, which may or may not be a significant problem, depending on context. Programmers may conclude that it’s not (yet) worth the extra overhead to use post-quantum encryption. Like their counterparts 50 years ago, they may assume, rightly or wrongly, that their software will be replaced by the time it needs to be.

Moving to post-quantum cryptography ASAP is not a great idea if you can afford to be more strategic. It takes many years to gain confidence that new encryption algorithms are secure. The SIKE algorithm, for example, was a semi-finalist in the NIST post-quantum encryption competition, but someone found a way to break it using an hour of computing on a laptop.

Another reason to not be in a hurry is that it may be possible to be more clever than simply replacing pre-quantum algorithms with post-quantum analogs. For example, some blockchains are exploring zero-knowledge proofs as a way to aggregate signatures. Simply moving to post-quantum signatures could make every transaction block 100 times bigger. But by replacing a set of signatures with a (post-quantum) zero-knowledge proof of the existence of the signatures, transaction blocks could be made smaller than they are now.

As with Y2K, the move to post-quantum cryptography will be gradual. Some things have already moved, and some are in transition now. You may have seen the following warning when connecting to a remote server.

** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html

Key sizes don’t matter as much to sftp connections as they do to blockchains. And the immaturity of post-quantum algorithms is mitigated by OpenSSH using hybrid encryption: well-established encryption (like ECDH) wrapped by newer quantum-resistant encryption (like ML-KEM). If the newer algorithm isn’t as secure as expected, you’re no worse off than if you had only used the older algorithm.

When clocks rolled over from 1999 to 2000 without incident, many people felt the concern about Y2K had been overblown. Maybe something similar will happen with quantum computing. Let’s hope so.

Set intersection and difference at the command line

A few years ago I wrote about comm, a utility that lets you do set theory at the command line. It’s a really useful little program, but it has two drawbacks: the syntax is hard to remember, and the input files must be sorted.

If A and B are two sorted lists,

    comm A B

prints A − B, B − A, and A ∩ B. You usually don’t want all three, and so comm lets you filter the output. It’s a little quirky in that you specify what you don’t want instead of what you do. And you have to remember that 1, 2, and 3 correspond to A − B, B − A, and A ∩ B respectively.

Venn diagram of comm parameters

A couple little scripts can hide the quirks. I have a script intersect

    comm -12 <(sort "$1") <(sort "$2")

and another script setminus

    comm -23 <(sort "$1") <(sort "$2")

that sort the input files on the fly and eliminate the need to remember comm's filtering syntax.

The setminus script computes A − B. To find B − A call the script with the arguments reversed.

Embedded regex flags

The hardest part of using regular expressions is not crafting regular expressions per se. In my opinion, the two hardest parts are minor syntax variations between implementations, and all the environmental stuff outside of regular expressions per se.

Embedded regular expression modifiers address one of the environmental complications by putting the modifier in the regular expression itself.

For example, if you want to make a grep search case-insensitive, you pass it the -i flag. But if you want to make a regex case-insensitive inside a Python program, you pass a function the argument re.IGNORECASE. But if you put (?i) at the beginning of your regular expression, then the intention to make the match case-insensitive is embedded directly into the regex. You could use the regex in any environment that supports (?i) without having to know how to specify modifiers in that environment.

I was debugging a Python script this morning that worked under one version of Python and not under another. The root of the problem was that it was using re.findall() with several huge regular expressions that had embedded modifiers. That was OK up to Python 3.5, then it was a warning in versions 3.6 through 3.10, and it’s an error in versions 3.11 and later.

The problem isn’t with all embedded modifiers, only global modifiers that don’t appear at the beginning of the regex. Older versions of Python, following Perl’s lead, would let you put a modifier like (?i) in the middle of a regex, and apply the modifier from that point to the end of the expression. In the latest versions of Python, you can either place the modifier at the beginning of the regex, or use a scoped modifier like (?i:…) in the middle of the expression.
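A minimal Python illustration of the two forms; the patterns here are made-up examples:

```python
import re

# Global flag at the start of the pattern applies to the whole regex.
assert re.findall(r"(?i)cat", "Cat cAt CAT") == ["Cat", "cAt", "CAT"]

# A scoped flag group is allowed anywhere and applies only inside the group:
# here "cat" matches case-insensitively but "fish" must be lowercase.
assert re.findall(r"(?i:cat)fish", "CATfish catFISH") == ["CATfish"]
```

A global `(?i)` appearing mid-pattern is what became a warning in Python 3.6 and an error in 3.11.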

I didn’t want to edit the regular expressions in my code—some had over a thousand characters—so I changed re.findall() to regex.findall(). The third-party regex module is generally more Perl-compatible than Python’s standard re module.

An AI Odyssey, Part 1: Correctness Conundrum

I recently talked with a contact who repeated what he’d heard regarding agentic AI systems—namely, that they can greatly increase productivity in professional financial management tasks. However, I pointed out that though this is true, these tools do not guarantee correctness, so one has to be very careful letting them manage critical assets such as financial data.

It is widely recognized that AI models, even reasoning models and agentic systems, can make mistakes. In one example, one of the most recent and capable AI models made multiple factual mistakes while drawing together information for a single slide of a presentation. Sure, people can give examples where agentic systems perform amazing tasks. But it’s another question whether they can always do them reliably. Unfortunately, we do not yet have procedural frameworks that provide reliability guarantees comparable to those required in other high-stakes engineering domains.

Many leading researchers have acknowledged that current AI systems have a degree of technical unpredictability that has not been resolved. For example, one has recently said, “Anyone who has worked with AI models understands that there is a basic unpredictability to them, that in a purely technical way we have not solved.”

What industrial-strength reliability looks like

Manufacturing has the notion of Six Sigma quality, which aims to reduce manufacturing defects to an extremely low rate. In computing, the correctness requirements are even higher, sometimes necessitating provable correctness. The Pentium FDIV bug in the 1990s caused actual errors in calculations to occur in the wild, even though the chance of error was supposedly “rare.” These were silent errors that might have occurred undetected in mission-critical applications, leading to failure. That said, the Pentium FDIV error modes were precisely definable, whereas AI models are probabilistic, making it much harder to bound the errors.

Exact correctness is at times considered so important that there is an entire discipline, known as formal verification, to prove specified correctness properties for critical hardware and software systems (for example, the manufacture of computing devices). These methods play a key role in multi-billion dollar industries.

When provable correctness is not available, having at least a rigorous certification process (see here for one effort) is a step in the right direction.

It has long been recognized that reliability is a fundamental challenge in modern AI systems. Despite dramatic advances in capability, strong correctness guarantees remain an open technical problem. The central question is how to build AI systems whose behavior can be bounded, verified, or certified at domain-appropriate levels. Until this is satisfactorily resolved, we should use these incredibly useful tools in smart ways that do not create unnecessary risks.

Shell variable ~-

After writing the previous post, I poked around in the bash shell documentation and found a handy feature I’d never seen before, the shortcut ~-.

I frequently use the command cd - to return to the previous working directory, but didn’t know about ~- as a shortcut for the shell variable $OLDPWD which contains the name of the previous working directory.

Here’s how I will be using this feature now that I know about it. Fairly often I work in two directories, and moving back and forth between them using cd -, and need to compare files in the two locations. If I have files in both directories with the same name, say notes.org, I can diff them by running

    diff notes.org ~-/notes.org

I was curious why I’d never run into ~- before. Maybe it’s a relatively recent bash feature? No, it’s been there since bash was released in 1989. The feature was part of C shell before that, though not part of Bourne shell.

Working with file extensions in bash scripts

I’ve never been good at shell scripting. I’d much rather write scripts in a general purpose language like Python. But occasionally a shell script can do something so simply that it’s worth writing a shell script.

Sometimes a shell scripting feature is terse and cryptic precisely because it solves a common problem succinctly. One example of this is working with file extensions.

For example, maybe you have a script that takes a source file name like foo.java and needs to do something with the class file foo.class. In my case, I had a script that takes a LaTeX file name and needs to create the corresponding DVI and SVG file names.

Here’s a little script to create an SVG file from a LaTeX file.

    #!/bin/bash

    latex "$1"
    dvisvgm --no-fonts "${1%.tex}.dvi" -o "${1%.tex}.svg"

The pattern ${parameter%word} is a bash shell parameter expansion that removes the shortest match to the pattern word from the end of the expansion of parameter. So if $1 equals foo.tex, then

    ${1%.tex}

evaluates to

    foo

and so

    ${1%.tex}.dvi

and

    ${1%.tex}.svg

expand to foo.dvi and foo.svg.

You can get much fancier with shell parameter expansions if you’d like. See the documentation here.

Bitcoin mining difficulty

The previous post looked at the Bitcoin network hash rate, currently around one zettahash per second, i.e. 10^21 hashes per second. The difficulty of mining a Bitcoin block adjusts over time to keep the rate of block production relatively constant, around one block every 10 minutes. The plot below shows this in action.

Bitcoin hash rate, difficulty, and ratio of the two

Notice the difficulty graph is more quantized than the hash rate graph. This is because the difficulty changes every 2,016 blocks, or about every two weeks. The number 2016 was chosen to be the number of blocks that would be produced in two weeks if every block took exactly 10 minutes to create.

The ratio of the hash rate to difficulty is basically constant with noise. The noticeable dip in mid 2021 was due to China cracking down on Bitcoin mining. This caused the hash rate to drop suddenly, and it took a while for the difficulty level to be adjusted accordingly.

Mining difficulty

At the current difficulty level, how many hashes would it take to mine a Bitcoin block if there were no competition? How does this compare to the number of hashes the network computes during this time?

To answer these questions, we have to back up a bit. The current mining difficulty is around 10^14, but what does that mean?

The original Bitcoin mining task was to produce a hash [1] with 32 leading zeros. On average, this would take 2^32 attempts. Mining difficulty is defined so that the original mining difficulty was 1 and current mining difficulty is proportional to the expected number of hashes needed. So a difficulty of around 10^14 means that the expected number of hashes is around

10^14 × 2^32 ≈ 4.3 × 10^23.

At one zettahash per second, the number of hashes computed by the entire network over a 10-minute interval would be

10^21 × 60 × 10 = 6 × 10^23.

So the number of hashes computed by the entire network is only about 40% greater than what would be necessary to mine a block without competition.
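The arithmetic above can be double-checked in a few lines. The difficulty and hash rate figures here are the approximate values quoted in the post, not live data:

```python
difficulty = 1e14   # approximate current difficulty (figure from the post)
hash_rate = 1e21    # approximate network hash rate, hashes per second

expected_hashes = difficulty * 2**32   # expected hashes to mine a block alone
network_hashes = hash_rate * 60 * 10   # hashes the network computes in 10 minutes

ratio = network_hashes / expected_hashes   # about 1.4, i.e. roughly 40% more
```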

[1] The hash function used in Bitcoin’s proof of work is double SHA256, i.e. the Bitcoin hash of x is SHA256( SHA256( x ) ). So a single Bitcoin hash consists of two applications of the SHA256 hash function.

10,000,000th Fibonacci number

I’ve written a couple times about Fibonacci numbers and certificates. Here the certificate is auxiliary data that makes it faster to confirm that the original calculation was correct.

This post puts some timing numbers to this.

I calculated the 10 millionth Fibonacci number using code from this post.

    n = 10_000_000
    F = fib_mpmath(n)

This took 150.3 seconds.

As I’ve discussed before, a number f is a Fibonacci number if and only if 5f² ± 4 is a square r². And for the nth Fibonacci number, we can take ± to be positive if n is even and negative if n is odd.

I computed the certificate r with

    r = isqrt(5*F**2 + 4 - 8*(n%2))

and this took 65.2 seconds.

Verifying that F is correct with

    d = abs(5*F**2 - r**2)
    assert(d == 4)

took 3.3 seconds.
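The same computation can be rerun at a small scale to see the certificate at work. In this sketch the loop-based fib is a stand-in for the fib_mpmath function referenced above:

```python
from math import isqrt

def fib(n):
    """nth Fibonacci number by iterating the recurrence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 25
F = fib(n)                           # 75025
r = isqrt(5*F**2 + 4 - 8*(n % 2))    # certificate: n is odd, so use 5F^2 - 4
assert abs(5*F**2 - r**2) == 4       # verifies that F is a Fibonacci number
```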

About certificates

So in total it took 68.5 seconds to verify that F was correct. But if someone had pre-computed r and handed me F and r I could use r to verify the correctness of F in 3.3 seconds, about 2% of the time it took to compute F.

Sometimes you can get a certificate of correctness for free because it is a byproduct of the problem you’re solving. Other times computing the certificate takes a substantial amount of work. Here computing F and r took about 40% more work than just computing F.

What’s not typical about this example is that the solution provider carries out exactly the same process to create the certificate that someone receiving the solution without a certificate would do. Typically, even if the certificate isn’t free, it does come at a “discount,” i.e. the problem solver could compute the certificate more efficiently than someone given only the solution.

Proving you have the right Fibonacci number

Now suppose I gave you the number F above and claim that it is the 10,000,000th Fibonacci number. You carry out the calculations above and say “OK, you’ve convinced me that F is a Fibonacci number, but how do I know it’s the 10,000,000th Fibonacci number?”

The 10,000,000th Fibonacci number has 2,089,877 digits. That’s almost enough information to verify that a Fibonacci number is indeed the 10,000,000th Fibonacci number. The log base 10 of the nth Fibonacci number is very nearly

n log10 φ − 0.5 log10 5

based on Binet’s formula. From this we can determine that the nth Fibonacci number has 2,089,877 digits if n = 10,000,000 + k where k equals 0, 1, 2, or 3.

Because

log10 F10,000,000 = 2089876.053014785

and

10^0.053014785 = 1.129834377783962

we know that the first few digits of F10,000,000 are 11298…. If I give you a 2,089,877-digit number that you can prove to be a Fibonacci number, and its first digit is 1, then you know it’s the 10,000,000th Fibonacci number.
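The digit count and leading digits can be reproduced from the log form of Binet’s formula in ordinary floating point:

```python
from math import log10, sqrt

# log10 of the nth Fibonacci number is very nearly n log10(phi) - 0.5 log10(5)
n = 10_000_000
phi = (1 + sqrt(5)) / 2

log_F = n * log10(phi) - 0.5 * log10(5)   # 2089876.053014...
digits = int(log_F) + 1                   # 2,089,877 digits
leading = 10 ** (log_F % 1)               # about 1.12983, so F starts 11298...
```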

You could also verify the number by looking at the last digit. As I wrote about here, the last digits of Fibonacci numbers have a period of 60. That means the last digit of the 10,000,000th Fibonacci number is the same as the last digit of the 40th Fibonacci number, which is 5. That is enough to rule out the other three possible Fibonacci numbers with 2,089,877 digits.
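A quick check of the last-digit argument; the fib_mod helper is written for this sketch:

```python
def fib_mod(n, m):
    """nth Fibonacci number modulo m, iterating the recurrence mod m."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

# Last digits repeat with period 60 (the Pisano period mod 10),
# so F(10,000,000) ends in the same digit as F(40).
assert 10_000_000 % 60 == 40
assert fib_mod(40, 10) == 5
```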

Computing big, certified Fibonacci numbers

I’ve written before about computing big Fibonacci numbers, and about creating a certificate to verify a Fibonacci number has been calculated correctly. This post will revisit both, giving a different approach to computing big Fibonacci numbers that produces a certificate along the way.

As I’ve said before, I’m not aware of any practical reason to compute large Fibonacci numbers. However, the process illustrates techniques that are practical for other problems.

The fastest way to compute the nth Fibonacci number for sufficiently large n is Binet’s formula:

Fn = round( φ^n / √5 )

where φ is the golden ratio. The point where n is “sufficiently large” depends on your hardware and software, but in my experience I found the crossover to be somewhere between 1,000 and 10,000.

The problem with Binet’s formula is that it requires extended precision floating point math. You need extra guard digits to make sure the integer part of your result is entirely correct. How many guard digits you’ll need isn’t obvious a priori. This post gave a way of detecting errors, and it could be turned into a method for correcting errors.

But how do we know an error didn’t slip by undetected? This question brings us back to the idea of certifying a Fibonacci number. A number f is a Fibonacci number if and only if one of 5f² ± 4 is a perfect square.

Binet’s formula, implemented in finite precision, takes a positive integer n and gives us a number f that is approximately the nth Fibonacci number. Even in low-precision arithmetic, the relative error in the computation is small. And because the ratio of consecutive Fibonacci numbers is approximately φ, an approximation to Fn is far from the neighboring Fibonacci numbers Fn−1 and Fn+1. So the closest Fibonacci number to an approximation of Fn is exactly Fn.

Now if f is approximately Fn, then 5f² is approximately a square. Find the integer r minimizing |5f² − r²|. In Python you could do this with the isqrt function. Then either r² + 4 or r² − 4 is divisible by 5. You can tell which one by looking at r² mod 5. Whichever one it is, divide it by 5 and you have the square of Fn. You can take the square root exactly, in integer arithmetic, and you have Fn. Furthermore, the number r that you computed along the way is the certificate of the calculation of Fn.
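Here is a sketch of the whole procedure in Python. It uses ordinary floating point, which limits n to roughly 70 before Binet’s formula loses the integer part; an extended-precision implementation would follow the same steps:

```python
from math import isqrt, sqrt

def fib_certified(n):
    """Recover the exact nth Fibonacci number, plus its certificate r,
    from a Binet approximation. Plain floats limit n to about 70."""
    phi = (1 + sqrt(5)) / 2
    f = round(phi**n / sqrt(5))        # approximately F_n

    # Find the integer r minimizing |5 f^2 - r^2|
    r = isqrt(5 * f * f)               # floor of sqrt(5 f^2)
    if (r + 1)**2 - 5*f*f < 5*f*f - r*r:
        r += 1

    # One of r^2 +/- 4 is divisible by 5; the quotient is F_n squared
    sq = (r*r + 4) // 5 if (r*r + 4) % 5 == 0 else (r*r - 4) // 5
    F = isqrt(sq)                      # exact integer square root
    assert F*F == sq and abs(5*F*F - r*r) == 4
    return F, r
```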