The linear algebra of bit twiddling

The previous post looked at the tempering step of the Mersenne Twister, formulating a sequence of bit operations as multiplication by a matrix mod 2. This post will look at the components more closely.

The theorems of linear algebra generally hold independent of the field of scalars. Typically the field is ℝ or ℂ, but most of basic linear algebra works the same over every field [1]. In particular, we can do linear algebra over a finite field, and we’re interested in the most finite of finite fields GF(2), the field with just two elements, 0 and 1.

In GF(2), addition corresponds to XOR. We will denote this by ⊕ to remind us that although it’s addition, it’s not the usual addition, i.e. 1 ⊕ 1 = 0. Similarly, multiplication corresponds to AND. We’ll work with 8-bit numbers to make the visuals easier to see.

Shifting a number left one bit corresponds to multiplication by a matrix with 1s just below the main diagonal. Shifting left by k bits is the same as shifting left by 1 bit k times, so the matrix representation of x << k is the kth power of the matrix representation of shifting left once. This matrix has 1s on the kth diagonal below the main diagonal. Below is the matrix for shifting left two bits, x << 2.
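
Here is a short Python sketch, not from the original post, that builds this matrix and checks it against x << 2 on 8-bit words. It lists bit vectors least significant bit first, which is the convention that puts the 1s below the main diagonal; the helper names shift_left_matrix, to_bits, and from_bits are made up for this illustration.

    import numpy as np

    def shift_left_matrix(k, n=8):
        """Matrix over GF(2) for x << k, bit vectors listed least significant bit first."""
        S = np.zeros((n, n), dtype=int)
        for i in range(k, n):
            S[i, i - k] = 1          # 1s on the kth diagonal below the main diagonal
        return S

    def to_bits(x, n=8):
        """Bit vector of x, least significant bit first."""
        return np.array([(x >> i) & 1 for i in range(n)])

    def from_bits(v):
        return sum(int(b) << i for i, b in enumerate(v))

    S2 = shift_left_matrix(2)
    print(S2)                        # the matrix for x << 2
    for x in range(256):
        assert from_bits((S2 @ to_bits(x)) % 2) == (x << 2) & 0xFF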

Right shifts are the mirror image of left shifts. Here’s the matrix for shifting right two bits, x >> 2.

Shifts are not fully invertible because bits either fall off the left or the right end. The steps in the Mersenne Twister are invertible because shifts are always XOR’d with the original argument. For example, although the function that takes x to x >> 2 is not invertible, the function that takes x to x ⊕ (x >> 2) is invertible. This operation corresponds to the matrix below.
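
Continuing the sketch above (and reusing its helpers), the matrix for x ⊕ (x >> 2) is the identity plus the matrix for x >> 2. A brute-force check over all 8-bit values confirms the map is a bijection, and the determinant mod 2 is 1.

    def shift_right_matrix(k, n=8):
        """Matrix over GF(2) for x >> k, same least-significant-bit-first convention."""
        S = np.zeros((n, n), dtype=int)
        for i in range(n - k):
            S[i, i + k] = 1          # 1s on the kth diagonal above the main diagonal
        return S

    M = (np.eye(8, dtype=int) + shift_right_matrix(2)) % 2   # matrix for x ^ (x >> 2)

    outputs = {from_bits((M @ to_bits(x)) % 2) for x in range(256)}
    assert len(outputs) == 256                    # a bijection, hence invertible
    assert round(np.linalg.det(M)) % 2 == 1       # upper triangular with 1s on the diagonal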

This is an upper triangular matrix, so its determinant is the product of the diagonal elements. These are all 1s, so the determinant is 1, and the matrix is invertible.

Bitwise AND multiplies each bit of the input by the corresponding bit in another number known as the mask. The bits aligned with a 1 are kept and the bits aligned with a 0 are cleared. This corresponds to multiplying by a diagonal matrix whose diagonal elements correspond to the bits in the mask. For example, here is the matrix that corresponds to taking the bitwise AND with 10100100.
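
In the same sketch, the mask matrix is just a diagonal matrix built from the bits of the mask (again using the least-significant-bit-first convention and the made-up helpers from above).

    def mask_matrix(mask, n=8):
        """Diagonal matrix over GF(2) whose diagonal holds the bits of the mask."""
        return np.diag(to_bits(mask, n))

    A = mask_matrix(0b10100100)
    for x in range(256):
        assert from_bits((A @ to_bits(x)) % 2) == x & 0b10100100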

Each of the steps in the Mersenne Twister tempering process is invertible because they all correspond to triangular matrices with all 1s on the diagonal. For example, the line

y ^= (y <<  7) & 0x9d2c5680 

says to shift the bits of y left 7 places, then zero out the elements corresponding to 0s in the mask, then XOR the result with y. In matrix terms, we multiply by a lower triangular matrix with zeros on the main diagonal, then multiply by a diagonal matrix that zeros out some of the terms, then add the identity matrix. So the matrix corresponding to the line of code above is lower triangular, with all 1s on the diagonal, so it is invertible.
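
To tie the pieces together, here is a sketch of that line as a matrix, built from the helpers above but with 32-bit words. It checks that the matrix reproduces the bitwise code and that the determinant mod 2 is 1, so the step is invertible. (The temper helper below illustrates just this one line, not the full Mersenne Twister.)

    n = 32
    M = (np.eye(n, dtype=int) + mask_matrix(0x9d2c5680, n) @ shift_left_matrix(7, n)) % 2

    def temper(y):
        """The single tempering line y ^= (y << 7) & 0x9d2c5680, on 32-bit words."""
        return (y ^ ((y << 7) & 0x9d2c5680)) & 0xFFFFFFFF

    for y in [0, 1, 0x9D2C5680, 0xDEADBEEF, 0xFFFFFFFF]:
        assert from_bits((M @ to_bits(y, n)) % 2) == temper(y)

    assert round(np.linalg.det(M)) % 2 == 1   # lower triangular with 1s on the diagonal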

[1] Until you get to eigenvalues. Then it matters whether the field is algebraically closed, which no finite field is.

Unified config files

I try to maintain a consistent work environment across computers that I use. The computers differ for important reasons, but I’d rather they not differ for unimportant reasons.

Unified keys

One thing I do is remap keys so that the same key does the same thing everywhere, to the extent that’s practical. In particular, I want the key functionality, not the key name, to be the same across operating systems. For example, the Command key on a Mac does what the Control key does on Windows and Linux. I have my machines set up so the key in the lower left corner, whatever you call it, does things like copy, paste, and cut.

Unified config files

I use bash everywhere, even though zsh is the default shell on macOS. But Linux and macOS are sufficiently different that I have two .bashrc files, one for each OS. However, both .bashrc files source a common file .bash_aliases for aliases. You can set that up by putting the following code in your .bashrc file.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

Sometimes you need some kind of branching logic even for two machines running the same OS. For example, if you have two computers, named Castor and Pollux, you might have code like the following in your .bashrc.

HOSTNAME_SHORT=$(hostname -s)
if [ "$HOSTNAME_SHORT" = "Castor" ]; then
...
elif [ "$HOSTNAME_SHORT" = "Pollux" ]; then
...
fi

One problem with using hostname is that the OS can change the name on you. At least macOS will do this; I don’t know whether other operating systems will. If you give your machine an uncommon name then this is unlikely to happen.

I have one init.el file for Emacs. It contains some branching logic for OS:

(cond
    ((string-equal system-type "windows-nt") ; Microsoft Windows
     ...)
    ((string-equal system-type "gnu/linux") ; linux
     ...)
    ((string-equal system-type "darwin") ; Mac
     ...)
)

You can also branch on hostname.

(if (string-equal (system-name) "Castor")
...)
(if (string-equal (system-name) "Pollux")
...)

This isn’t difficult, but in the short run it’s easier to just make one ad hoc edit to a config file at a time, letting the files drift apart. Along those lines, see bicycle skills.

Toffoli gates are all you need

Landauer’s principle gives a lower bound on the amount of energy it takes to erase one bit of information:

E ≥ log(2) kB T

where kB is the Boltzmann constant and T is the ambient temperature in Kelvin. The lower bound applies no matter how the bit is physically stored. There is no theoretical lower limit on the energy required to carry out a reversible calculation.

In practice the energy required to erase a bit is around a billion times greater than Landauer’s lower bound. You might reasonably conclude that reversible computing isn’t practical since we’re nowhere near the Landauer limit. And yet in practice reversible circuits have been demonstrated to use less energy than conventional circuits. We’re far from the ultimate physical limit, but reversibility still provides practical efficiency gains today.

A Toffoli gate is a building block of reversible circuits. A Toffoli gate takes three bits as input and returns three bits as output:

T(a, b, c) = (a, b, c XOR (a AND b)).

In words, a Toffoli gate flips its third bit if and only if the first two bits are ones.

A Toffoli gate is its own inverse, and so it is reversible. This is easy to prove. If ab = 1, then the third bit is flipped. Applying the Toffoli gate again flips the bit back to what it was. If ab = 0, i.e. at least one of the first two bits is zero, then the Toffoli gate doesn’t change anything.

There is a theorem that any Boolean function can be computed by a circuit made of only NAND gates. We’ll show that you can construct a NAND gate out of Toffoli gates, which shows any Boolean function can be computed by a circuit made of Toffoli gates, which shows any Boolean function can be computed reversibly.

To compute NAND, i.e. ¬(a AND b), send (a, b, 1) to the Toffoli gate. The third bit of the output will contain the NAND of a and b.

T(a, b, 1) = (a, b, ¬(a AND b))

A drawback of reversible computing is that you may have to send in more input than you’d like and get back more output than you’d like, as we can already see from the example above. NAND takes two input bits and returns one output bit. But the Toffoli gate simulating NAND takes three input bits and returns three output bits.
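
Here is a small Python sketch of a Toffoli gate, checking exhaustively that it is its own inverse and that fixing the third input to 1 computes NAND in the third output bit. The function name toffoli is just for this illustration.

    from itertools import product

    def toffoli(a, b, c):
        """Toffoli (CCNOT) gate: flip c if and only if a and b are both 1."""
        return a, b, c ^ (a & b)

    # The gate is its own inverse.
    for a, b, c in product((0, 1), repeat=3):
        assert toffoli(*toffoli(a, b, c)) == (a, b, c)

    # With the third input fixed to 1, the third output bit is NAND(a, b).
    for a, b in product((0, 1), repeat=2):
        assert toffoli(a, b, 1)[2] == 1 - (a & b)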


Quantum Y2K

I’m skeptical that quantum computing will become practical. However, if it does become practical before we’re prepared, the world’s financial system could collapse. Everyone agrees we should prepare for quantum computing, even those of us who doubt it will be practical any time soon.

Quantum computers exist now, but the question is if and when a cryptographically relevant quantum computer (CRQC) is coming. At the moment, a quantum computer cannot factor 21 without cheating (i.e. not implementing circuits that you know a priori won’t be needed). But that could change suddenly. And some believe quantum computers could quickly go from being able to factor numbers with two digits to being able to factor numbers with thousands of digits (i.e. breaking RSA encryption) without much incremental transition in between.

The move to post-quantum encryption may be a lot like Y2K, fixing vast amounts of 20th century software that represented years with two digits. Y2K turned out to be a big nothingburger, but only because the world spent half a trillion dollars on preparation to make sure it would be a big nothingburger.

Programmers in the 1970s obviously knew that the year 2000 was coming, but they also knew that they needed to conserve bytes at the time. And they assumed, reasonably but incorrectly, that their software would all be replaced before two-digit dates became a problem.

Programmers still need to conserve bytes, though this is less obvious today. Quantum-resistant signatures and encryption keys are one or two orders of magnitude bigger. This takes up bandwidth and storage space, which may or may not be a significant problem, depending on context. Programmers may conclude that it’s not (yet) worth the extra overhead to use post-quantum encryption. Like their counterparts 50 years ago, they may assume, rightly or wrongly, that their software will be replaced by the time it needs to be.

Moving to post-quantum cryptography ASAP is not a great idea if you can afford to be more strategic. It takes many years to gain confidence that new encryption algorithms are secure. The SIKE algorithm, for example, was a semi-finalist in the NIST post-quantum encryption competition, but someone found a way to break it using an hour of computing on a laptop.

Another reason not to be in a hurry is that it may be possible to be more clever than simply replacing pre-quantum algorithms with post-quantum analogs. For example, some blockchains are exploring zero-knowledge proofs as a way to aggregate signatures. Simply moving to post-quantum signatures could make every transaction block 100 times bigger. But by replacing a set of signatures with a (post-quantum) zero-knowledge proof that the signatures exist, transaction blocks could be made smaller than they are now.

As with Y2K, the move to post-quantum cryptography will be gradual. Some things have already moved, and some are in transition now. You may have seen the following warning when connecting to a remote server.

** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html

Key sizes don’t matter as much to sftp connections as they do to blockchains. And the relative immaturity of post-quantum algorithms is mitigated by OpenSSH’s use of hybrid key exchange: a well-established algorithm (like ECDH) wrapped by a newer quantum-resistant one (like ML-KEM). If the newer algorithm isn’t as secure as expected, you’re no worse off than if you had only used the older algorithm.

When clocks rolled over from 1999 to 2000 without incident, many people felt the concern about Y2K had been overblown. Maybe something similar will happen with quantum computing. Let’s hope so.


Set intersection and difference at the command line

A few years ago I wrote about comm, a utility that lets you do set theory at the command line. It’s a really useful little program, but it has two drawbacks: the syntax is hard to remember, and the input files must be sorted.

If A and B are two sorted lists,

    comm A B

prints A − B, B − A, and A ∩ B. You usually don’t want all three, and so comm lets you filter the output. It’s a little quirky in that you specify what you don’t want instead of what you do. And you have to remember that 1, 2, and 3 correspond to A − B, B − A, and A ∩ B respectively.

Venn diagram of comm parameters

A couple little scripts can hide the quirks. I have a script intersect

    comm -12 <(sort "$1") <(sort "$2")

and another script setminus

    comm -23 <(sort "$1") <(sort "$2")

that sort the input files on the fly and eliminate the need to remember comm’s filtering syntax.

The setminus script computes A − B. To find B − A call the script with the arguments reversed.

Embedded regex flags

The hardest part of using regular expressions is not crafting the expressions themselves. In my opinion, the two hardest parts are the minor syntax variations between implementations, and all the environmental machinery outside of the regular expressions per se.

Embedded regular expression modifiers address one of the environmental complications by putting the modifier in the regular expression itself.

For example, if you want to make a grep search case-insensitive, you pass it the -i flag. But if you want to make a regex case-insensitive inside a Python program, you pass a function the argument re.IGNORECASE. But if you put (?i) at the beginning of your regular expression, then the intention to make the match case-insensitive is embedded directly into the regex. You could use the regex in any environment that supports (?i) without having to know how to specify modifiers in that environment.
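
For example, here is a small Python sketch showing that a leading (?i) has the same effect as passing re.IGNORECASE; the sample text is made up.

    import re

    text = "Apple apple APPLE"

    with_flag = re.findall(r"apple", text, re.IGNORECASE)
    embedded  = re.findall(r"(?i)apple", text)

    assert with_flag == embedded == ["Apple", "apple", "APPLE"]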

I was debugging a Python script this morning that worked under one version of Python and not under another. The root of the problem was that it was using re.findall() with several huge regular expressions that had embedded modifiers. That was OK up to Python 3.5, then it was a warning between versions 3.6 and 3.10, and it’s an error in versions 3.11 and later.

The problem isn’t with all embedded modifiers, only global modifiers that don’t appear at the beginning of the regex. Older versions of Python, following Perl’s lead, would let you put a modifier like (?i) in the middle of a regex, and apply the modifier from that point to the end of the expression. In the latest versions of Python, you can either place the modifier at the beginning of the regex, or use a scoped modifier like (?i:…) in the middle of the expression.
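
A minimal sketch of the difference, assuming Python 3.11 or later:

    import re

    # A global flag in the middle of the regex: an error in Python 3.11+,
    # a DeprecationWarning in 3.6 through 3.10, silently accepted before that.
    try:
        re.compile(r"foo(?i)bar")
    except re.error as e:
        print("rejected:", e)

    # A scoped flag: case-insensitivity applies only inside the group.
    pattern = re.compile(r"foo(?i:bar)")
    assert pattern.match("fooBAR")
    assert not pattern.match("FOObar")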

I didn’t want to edit the regular expressions in my code—some had over a thousand characters—so I changed re.findall() to regex.findall(). The third-party regex module is generally more Perl-compatible than Python’s standard re module.

An AI Odyssey, Part 1: Correctness Conundrum

I recently talked with a contact who repeated what he’d heard regarding agentic AI systems—namely, that they can greatly increase productivity in professional financial management tasks. However, I pointed out that though this is true, these tools do not guarantee correctness, so one has to be very careful letting them manage critical assets such as financial data.

It is widely recognized that AI models, even reasoning models and agentic systems, can make mistakes. One example is a case showing that one of the most recent and capable AI models made multiple factual mistakes in drawing together information for a single slide of a presentation. Sure, people can give examples where agentic systems can perform amazing tasks. But it’s another question as to whether they can always do them reliably. Unfortunately, we do not yet have procedural frameworks that provide reliability guarantees comparable to those required in other high-stakes engineering domains.

Many leading researchers have acknowledged that current AI systems have a degree of technical unpredictability that has not been resolved. For example, one has recently said, “Anyone who has worked with AI models understands that there is a basic unpredictability to them, that in a purely technical way we have not solved.”

What industrial-strength reliability looks like

Manufacturing has the notion of Six Sigma quality, which aims to reduce defects to an extremely low level. In computing, the correctness requirements are even higher, sometimes necessitating provable correctness. The Pentium FDIV bug in the 1990s caused actual errors in calculations to occur in the wild, even though the chance of error was supposedly “rare.” These were silent errors that might have occurred undetected in mission-critical applications, leading to failure. That said, the Pentium FDIV error modes were precisely definable, whereas AI models are probabilistic, making it much harder to bound the errors.

Exact correctness is at times considered so important that there is an entire discipline, known as formal verification, to prove specified correctness properties for critical hardware and software systems (for example, in the manufacture of computing devices). These methods play a key role in multi-billion dollar industries.

When provable correctness is not available, having at least a rigorous certification process (see here for one effort) is a step in the right direction.

It has long been recognized that reliability is a fundamental challenge in modern AI systems. Despite dramatic advances in capability, strong correctness guarantees remain an open technical problem. The central question is how to build AI systems whose behavior can be bounded, verified, or certified at domain-appropriate levels. Until this is satisfactorily resolved, we should use these incredibly useful tools in smart ways that do not create unnecessary risks.

Shell variable ~-

After writing the previous post, I poked around in the bash shell documentation and found a handy feature I’d never seen before, the shortcut ~-.

I frequently use the command cd - to return to the previous working directory, but didn’t know about ~- as a shortcut for the shell variable $OLDPWD which contains the name of the previous working directory.

Here’s how I will be using this feature now that I know about it. Fairly often I work in two directories, moving back and forth between them using cd -, and need to compare files in the two locations. If I have files in both directories with the same name, say notes.org, I can diff them by running

    diff notes.org ~-/notes.org

I was curious why I’d never run into ~- before. Maybe it’s a relatively recent bash feature? No, it’s been there since bash was released in 1989. The feature was part of C shell before that, though not part of Bourne shell.

Working with file extensions in bash scripts

I’ve never been good at shell scripting. I’d much rather write scripts in a general purpose language like Python. But occasionally a shell script can do something so simply that it’s worth writing one.

Sometimes a shell scripting feature is terse and cryptic precisely because it solves a common problem succinctly. One example of this is working with file extensions.

For example, maybe you have a script that takes a source file name like foo.java and needs to do something with the class file foo.class. In my case, I had a script that takes a LaTeX file name and needs to create the corresponding DVI and SVG file names.

Here’s a little script to create an SVG file from a LaTeX file.

    #!/bin/bash

    latex "$1"
    dvisvgm --no-fonts "${1%.tex}.dvi" -o "${1%.tex}.svg"

The pattern ${parameter%word} is a bash shell parameter expansion that removes the shortest match to the pattern word from the end of the expansion of parameter. So if $1 equals foo.tex, then

    ${1%.tex}

evaluates to

    foo

and so

    ${1%.tex}.dvi

and

    ${1%.tex}.svg

expand to foo.dvi and foo.svg.

You can get much fancier with shell parameter expansions if you’d like. See the documentation here.

Bitcoin mining difficulty

The previous post looked at the Bitcoin network hash rate, currently around one zettahash per second, i.e. 10²¹ hashes per second. The difficulty of mining a Bitcoin block adjusts over time to keep the rate of block production relatively constant, around one block every 10 minutes. The plot below shows this in action.

Bitcoin hash rate, difficulty, and ratio of the two

Notice the difficulty graph is more quantized than the hash rate graph. This is because the difficulty changes every 2,016 blocks, or about every two weeks. The number 2016 was chosen to be the number of blocks that would be produced in two weeks if every block took exactly 10 minutes to create.

The ratio of the hash rate to difficulty is basically constant with noise. The noticeable dip in mid 2021 was due to China cracking down on Bitcoin mining. This caused the hash rate to drop suddenly, and it took a while for the difficulty level to be adjusted accordingly.

Mining difficulty

At the current difficulty level, how many hashes would it take to mine a Bitcoin block if there were no competition? How does this compare to the number of hashes the network computes during this time?

To answer these questions, we have to back up a bit. The current mining difficulty is around 10¹⁴, but what does that mean?

The original Bitcoin mining task was to produce a hash [1] with 32 leading zeros. On average, this would take 2³² attempts. Mining difficulty is defined so that the original mining difficulty was 1 and current mining difficulty is proportional to the expected number of hashes needed. So a difficulty of around 10¹⁴ means that the expected number of hashes is around

10¹⁴ × 2³² ≈ 4.3 × 10²³.

At one zettahash per second, the number of hashes computed by the entire network over a 10 minute interval would be

10²¹ × 60 × 10 = 6 × 10²³.

So the number of hashes computed by the entire network is only about 40% greater than what would be necessary to mine a block without competition.
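
Here is the arithmetic as a quick Python sketch, using the approximate difficulty and hash rate figures quoted above.

    difficulty = 1e14        # approximate current difficulty, from above
    hash_rate  = 1e21        # approximate network hash rate, hashes per second
    block_time = 600         # target block time in seconds

    expected_hashes = difficulty * 2**32       # hashes to mine a block with no competition
    network_hashes  = hash_rate * block_time   # hashes computed by the network per block

    print(network_hashes / expected_hashes)    # roughly 1.4, i.e. about 40% more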


[1] The hash function used in Bitcoin’s proof of work is double SHA256, i.e. the Bitcoin hash of x is SHA256( SHA256( x ) ). So a single Bitcoin hash consists of two applications of the SHA256 hash function.