What is a privacy budget?

The idea behind differential privacy is that it doesn’t make much difference whether your data is in a data set or not. How much difference your participation makes is made precise in terms of probability statements. The exact definition doesn’t matter for this post, but it does matter that there is an exact definition.

Someone designing a differentially private system sets an upper limit on the amount of difference anyone’s participation can make. That’s the privacy budget. The system will allow someone to ask one question that uses the whole privacy budget, or a series of questions whose total impact is no more than that one question.

If you think of a privacy budget in terms of money, maybe your privacy budget is $1.00. You could ask a single $1 question if you’d like, but you couldn’t ask any more questions after that. Or you could ask one $0.30 question and seven $0.10 questions.

Some metaphors are dangerous, but the idea of comparing cumulative privacy impact to a financial budget is a good one. You have a total amount you can spend, and you can choose how you spend it.

The only problem with privacy budgets is that they tend to be overly cautious because they’re based on worst-case estimates. There are several ways to mitigate this. A simple way to stretch privacy budgets is to cache query results. If you ask a question twice, you get the same answer both times, and you’re only charged once.

(Recall that differential privacy adds a little random noise to query results to protect privacy. If you could ask the same question over and over, you could average your answers, reducing the level of added noise, and so a differentially private system will rightly charge you repeatedly for repeated queries. But if the system adds noise once and remembers the result, there’s no harm in giving you back that same answer as often as you ask the question.)
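To make the budget-and-cache idea concrete, here is a minimal sketch in Python. It is an illustration, not production differential privacy code: the class and method names are made up, the noise comes from the Laplace mechanism, and the accounting uses the simple rule that the epsilons of separate queries add up.

    import numpy as np

    class ToyPrivacyBudget:
        """Toy Laplace mechanism with a privacy budget and a query cache."""

        def __init__(self, total_epsilon):
            self.remaining = total_epsilon  # the whole privacy budget
            self.cache = {}                 # remembered noisy answers

        def query(self, name, true_value, sensitivity, epsilon):
            if name in self.cache:          # repeated question: same answer, no charge
                return self.cache[name]
            if epsilon > self.remaining:
                raise RuntimeError("privacy budget exhausted")
            self.remaining -= epsilon       # charge the budget
            noisy = true_value + np.random.laplace(scale=sensitivity/epsilon)
            self.cache[name] = noisy
            return noisy

    budget = ToyPrivacyBudget(total_epsilon=1.00)
    a = budget.query("mean age", 38.2, sensitivity=1.0, epsilon=0.30)
    b = budget.query("mean age", 38.2, sensitivity=1.0, epsilon=0.30)  # cached: a == b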

A more technical way to get more from a privacy budget is to use Rényi differential privacy (RDP) rather than the original ε-differential privacy. RDP simplifies privacy budget accounting because it has simple composition rules, and it makes privacy budgets stretch further by leaning a bit away from worst-case analysis and toward average-case analysis. RDP depends on a tuning parameter, and ε-differential privacy is a limiting case, so one can control how much RDP acts like ε-differential privacy by adjusting that parameter.

There are other ways to stretch privacy budgets as well. The net effect is that when querying a large database, you can often ask all the questions you like, and get sufficiently accurate answers, without worrying about the privacy budget.


Hyperexponential and hypoexponential distributions

There are a couple different ways to combine random variables into a new random variable: means and mixtures. To take the mean of X and Y you average their values. To take the mixture of X and Y you average their densities. The former makes the tails thinner. The latter makes the tails thicker. When X and Y are exponential random variables, the mean has a hypoexponential distribution and the mixture has a hyperexponential distribution.

Hypoexponential distributions

Suppose X and Y are exponentially distributed with mean μ. Then their sum X + Y has a gamma distribution with shape 2 and scale μ. The sum has mean 2μ and variance 2μ². The coefficient of variation, the ratio of the standard deviation to the mean, is 1/√2. The hypoexponential distribution is so-called because its coefficient of variation is less than 1, whereas an exponential distribution has coefficient of variation 1 because the mean and standard deviation are always the same.

The means of X and Y don’t have to be the same. When they’re different, the sum does not have a gamma distribution, and so hypoexponential distributions are more general than gamma distributions.

A hypoexponential random variable can also be the sum of more than two exponentials. If it is the sum of n exponentials, each with the same mean, then the coefficient of variation is 1/√n. In general, the coefficient of variation of a hypoexponential distribution depends only on the means of the component exponentials; the formula is given in footnote [1].

In the introduction we talked about means rather than sums, but it makes no difference to the coefficient of variation because going from sum to mean divides the mean and the standard deviation by the same amount.

Hyperexponential distributions

Hyperexponential random variables are constructed as a mixture of exponentials rather than an average. Instead of selecting a value from X and a value from Y, we select a value from X or a value from Y. Given a mixture probability p, we choose a sample from X with probability p and a sample from Y with probability 1 – p.

The density function for a mixture is a weighted average of the densities of the two components. So if X and Y have density functions fX and fY, then the mixture has density

p fX + (1 – p) fY

If you have more than two random variables, the distribution of their mixture is a convex combination of their individual densities. The coefficients in the convex combination are the probabilities of selecting each random variable.

If X and Y are exponential with means μX and μY, and we have a mixture that selects X with probability p, then the mean of the mixture is the mixture of the means

μ = p μX + (1 – p) μY

which you might expect, but the variance

σ² = p μX² + (1 – p) μY² + p(1 – p)(μX – μY)²

is not quite analogous because of the extra p(1 – p)(μX – μY)² term at the end. If μX = μY this last term drops out and the coefficient of variation is 1: mixing two identically distributed random variables doesn’t do anything to the distribution. But when the means are different, the coefficient of variation is greater than 1 because of the extra term in the variance of the mixture.

Example

Suppose μX = 2 and μY = 8. Then the average of X and Y has mean 5, and so does an equal mixture of X and Y.

The average of X and Y has standard deviation √17, and so the coefficient of variation is √17/5 = 0.8246.

An exponential distribution with mean 5 would have standard deviation 5, and so its coefficient of variation is 1.

An equal mixture of X and Y has standard deviation √43, and so the coefficient of variation is √43/5 = 1.3114.
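Here is a quick simulation sketch in Python to check the numbers in this example, drawing exponential samples with means 2 and 8 using NumPy.

    import numpy as np

    rng = np.random.default_rng(20250101)  # arbitrary seed
    N = 1_000_000

    x = rng.exponential(scale=2, size=N)   # X with mean 2
    y = rng.exponential(scale=8, size=N)   # Y with mean 8

    avg = (x + y) / 2                          # average of X and Y (hypoexponential)
    mix = np.where(rng.random(N) < 0.5, x, y)  # equal mixture of X and Y (hyperexponential)

    for name, z in [("average", avg), ("mixture", mix)]:
        print(name, z.mean(), z.std() / z.mean())
    # Both means should be near 5; the coefficients of variation should be
    # near 0.8246 for the average and 1.3114 for the mixture.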


[1] If exponential random variables Xi have means μi, then the coefficient of variation of their sum (or average) is

√(μ1² + μ2² + … + μn²) / (μ1 + μ2 + … + μn)

Fat tails and the t test

Suppose you want to test whether something you’re doing is having any effect. You take a few measurements and you compute the average. The average is different than what it would be if what you’re doing had no effect, but is the difference significant? That is, how likely is it that you might see the same change in the average, or even a greater change, if what you’re doing actually had no effect and the difference is due to random effects?

The most common way to address this question is the one-sample t test. “One sample” doesn’t mean that you’re only taking one measurement. It means that you’re taking a set of measurements, a sample, from one thing. You’re not comparing measurements from two different things.

The t test assumes that the data are coming from some source with a normal (Gaussian) distribution. The Gaussian distribution has thin tails, i.e. the probability of seeing a value far from the mean drops precipitously as you move further out. What if the data are actually coming from a distribution with heavier tails, i.e. a distribution where the probability of being far from the mean drops slowly?

With fat tailed data, the t test loses power. That is, it is less likely to reject the null hypothesis, the hypothesis that the mean hasn’t changed, when it should. First we will demonstrate by simulation that this is the case, then we’ll explain why this is to be expected from theory.

Simulation

We will repeatedly draw a sample of 20 values from a distribution with mean 0.8 and test whether the mean of that distribution is not zero by seeing whether the t test produces a p-value less than the conventional cutoff of 0.05. We will increase the thickness of the distribution tails and see what that does to our power, i.e. the probability of correctly rejecting the hypothesis that the mean is zero.

We will fatten the tails of our distribution by generating samples from a Student t distribution and decreasing the number of degrees of freedom: as degrees of freedom go down, the weight of the tail goes up.

With a large number of degrees of freedom, the t distribution is approximately normal. As the number of degrees of freedom decreases, the tails get fatter. With one degree of freedom, the t distribution is a Cauchy distribution.

Here’s our Python code:

from scipy.stats import t, ttest_1samp

n = 20
N = 1000

for df in [100, 30, 10, 5, 4, 3, 2, 1]:
    rejections = 0
    for _ in range(N):
        y = 0.8 + t.rvs(df, size=n)
        stat, p = ttest_1samp(y, 0)
        if p < 0.05:
            rejections += 1
    print(df, rejections/N)

And here’s the output:

100 0.917
 30 0.921 
 10 0.873 
  5 0.757  
  4 0.700    
  3 0.628  
  2 0.449  
  1 0.137  

When the degrees of freedom are high, we reject the null about 90% of the time, even for degrees of freedom as small as 10. But with one degree of freedom, i.e. when we’re sampling from a Cauchy distribution, we only reject the null around 14% of the time.

Theory

Why do fatter tails lower the power of the t test? The t statistic is

\frac{\bar{y} - \mu_0}{s / \sqrt{n}}

where y bar is the sample average, μ0 is the mean under the null hypothesis (μ0 = 0 in our example), s is the sample standard deviation, and n is the sample size.

As distributions become fatter in the tails, the sample standard deviation increases. This means the denominator in the t statistic gets larger and so the t statistic gets smaller. The smaller the t statistic, the greater the probability that the absolute value of a t random variable is greater than the statistic, and so the larger the p-value.
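Here is a small sketch that illustrates the point, drawing the same kind of t-distributed samples as in the simulation above and reporting a typical (median) sample standard deviation for each number of degrees of freedom.

from scipy.stats import t
import numpy as np

n, N = 20, 1000
for df in [100, 10, 5, 3, 2, 1]:
    sds = [t.rvs(df, size=n).std(ddof=1) for _ in range(N)]
    print(df, round(float(np.median(sds)), 2))
# The typical sample standard deviation grows as the tails get fatter,
# inflating the denominator of the t statistic.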

t statistic, t distribution, t test

There are a lot of t’s floating around in this post. I’ll finish by clarifying what the various t things are.

The t statistic is the thing we compute from our data, given by the expression above. It is called a t statistic because if the hypotheses of the test are satisfied, this statistic has a t distribution with n-1 degrees of freedom. The t test is a hypothesis test based on the t statistic and its distribution. So the t statistic, the t distribution, and the t test are all closely related.

The t family of probability distributions is a convenient example of a family of distributions whose tails get heavier or lighter depending on a parameter. That’s why in the simulation we drew samples from a t distribution. We didn’t need to, but it was convenient. We would get similar results if we sampled from some other distribution whose tails get thicker, and so variance increases, as we vary some parameter.


Testing Rupert Miller’s suspicion

I was reading Rupert Miller’s book Beyond ANOVA when I ran across this line:

I never use the Kolmogorov-Smirnov test (or one of its cousins) or the χ² test as a preliminary test of normality. … I have a feeling they are more likely to detect irregularities in the middle of the distribution than in the tails.

Rupert wrote these words in 1986, when it would have been difficult to test his hunch. Now it’s easy, and so I wrote up a little simulation to test whether his feeling was justified. I’m sure this has been done before, but it’s easy now (it would not have been in 1986) and so I wanted to do it myself.

I’ll compare the Kolmogorov-Smirnov test, a popular test for goodness of fit, with the Shapiro-Wilk test that Miller preferred. I’ll run each test 10,000 times on non-normal data and count how often each test produces a p-value less than 0.05.

To produce departures from normality in the tails, I’ll look at samples from a Student t distribution. This distribution has one parameter, the number of degrees of freedom. The fewer degrees of freedom, the thicker the tails and so the further from normality in the tails.

Then I’ll look at a mixture of a normal and uniform distribution. This will have thin tails like a normal distribution, but will be flatter in the middle.

If Miller was right, we should expect the Shapiro-Wilk test to be more sensitive for fat-tailed t distributions, and the K-S test to be more sensitive for mixtures.

First we import some library functions we’ll need and define our two random sample generators.

from numpy import where
from scipy.stats import *

def mixture(p, size=100):
    u = uniform.rvs(size=size)
    v = uniform.rvs(size=size)
    n = norm.rvs(size=size)
    x = where(u < p, v, n)
    return x

def fat_tail(df, size=100):
    return t.rvs(df, size=size)

Next is the heart of the code. It takes a sample generator and compares the two tests, Kolmogorov-Smirnov and Shapiro-Wilk, on 10,000 samples of 100 points each. It returns the proportion of the time each test detected the anomaly at the 0.05 level.

def test(generator, parameter):

    ks_count = 0
    sw_count = 0

    N = 10_000
    for _ in range(N):
        x = generator(parameter, 100)

        stat, p = kstest(x, "norm")
        if p < 0.05:
            ks_count += 1
    
        stat, p = shapiro(x)
        if p < 0.05:
            sw_count += 1
    
    return (ks_count/N, sw_count/N)

Finally, we call the test runner with a variety of distributions.

for df in [100, 10, 5, 2]:
    print(test(fat_tail, df))

for p in [0.05, 0.10, 0.15, 0.2]:
    print(test(mixture,p))

Note that the t distribution with 100 degrees of freedom is essentially normal, at least as far as a sample of 100 points can tell, and so we should expect both tests to report a lack of fit around 5% of the time since we’re using 0.05 as our cutoff.

Here’s what we get for the fat-tailed samples.

(0.0483, 0.0554)
(0.0565, 0.2277)
(0.1207, 0.8799)
(0.8718, 1.0000)   

So with 100 degrees of freedom, we do indeed reject the null hypothesis of normality about 5% of the time. As the degrees of freedom decrease, and the fatness of the tails increases, both tests reject the null hypothesis of normality more often. However, in each case the Shapiro-Wilk test picks up on the non-normality more often than the K-S test, about four times as often with 10 degrees of freedom and about seven times as often with 5 degrees of freedom. So Miller was right about the tails.

Now for the middle. Here’s what we get for mixture distributions.

(0.0731, 0.0677)
(0.1258, 0.1051)
(0.2471, 0.1876)
(0.4067, 0.3041)

We would expect both goodness-of-fit tests to increase their rejection rates as the mixture probability goes up, i.e. as we sample from the uniform distribution more often. And that is what we see. But the K-S test outperforms the S-W test each time. Both tests have rejection rates that increase with the mixture probability, but the rejection rates increase faster for the K-S test. Miller wins again.


Estimating vocabulary size with Heaps’ law

Heaps’ law says that the number of unique words in a text of n words is approximated by

V(n) = K n^β

where K is a positive constant and β is between 0 and 1. According to the Wikipedia article on Heaps’ law, K is often between 10 and 100 and β is often between 0.4 and 0.6.

(Note that it’s Heaps’ law, not Heap’s law. The law is named after Harold Stanley Heaps. However, true to Stigler’s law of eponymy, the law was first observed by someone else, Gustav Herdan.)

I’ll demonstrate Heaps’ law by looking at books of the Bible and then by looking at novels of Jane Austen. I’ll also look at words that appear only once, what linguists call “hapax legomena.”

Demonstrating Heaps’ law

For a collection of related texts, you can estimate the parameters K and β from data. I decided to see how well Heaps’ law worked in predicting the number of unique words in each book of the Bible. I used the King James Version because it is easy to download from Project Gutenberg.

I converted each line to lower case, replaced all non-alphabetic characters with spaces, and split the text on spaces to obtain a list of words. This gave the following statistics:

    |------------+-------+------|
    | Book       |     n |    V |
    |------------+-------+------|
    | Genesis    | 38520 | 2448 |
    | Exodus     | 32767 | 2024 |
    | Leviticus  | 24621 | 1412 |
                    ...
    | III John   |   295 |  155 |
    | Jude       |   609 |  295 |
    | Revelation | 12003 | 1283 |
    |------------+-------+------|

The parameter values that best fit the data were K = 10.64 and β = 0.518, in keeping with the typical ranges of these parameters.
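Here’s a sketch of how such a fit can be done: taking logs of both sides of Heaps’ law turns it into a line, which can be fit by least squares. The tokenization below is my guess at the preprocessing described above, and the fitted values depend on which books you include and on the exact preprocessing.

    import re
    import numpy as np

    def word_stats(text):
        # lower case, non-alphabetic characters to spaces, split on spaces
        words = re.sub("[^a-z]", " ", text.lower()).split()
        return len(words), len(set(words))          # (n, V)

    def fit_heaps(ns, Vs):
        # log V = log K + beta log n, so fit a straight line in log-log space
        beta, logK = np.polyfit(np.log(ns), np.log(Vs), 1)
        return np.exp(logK), beta

    # Fitting with only the three books shown above, so the values will
    # differ from the fit over all the books quoted in the post.
    K, beta = fit_heaps([38520, 32767, 24621], [2448, 2024, 1412])
    print(K, beta)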

Here’s a sample of how the actual vocabulary size and predicted vocabulary size compare.

    |------------+------+-------|
    | Book       | True | Model |
    |------------+------+-------|
    | Genesis    | 2448 |  2538 |
    | Exodus     | 2024 |  2335 |
    | Leviticus  | 1412 |  2013 |
                    ...
    | III John   |  155 |   203 |
    | Jude       |  295 |   296 |
    | Revelation | 1283 |  1387 |
    |------------+------+-------|

Here’s a visual representation of the results.

[Plot: KJV Bible total words vs. distinct words]

It looks like the predictions are more accurate for small books, and that’s true on an absolute scale. But the relative error is actually smaller for large books as we can see by plotting again on a log-log scale.

[Plot: KJV Bible total words vs. distinct words, log-log scale]

Jane Austen novels

It’s a little surprising that Heaps’ law applies well to books of the Bible since the books were composed over centuries and in two different languages. On the other hand, the same committee translated all the books at the same time. Maybe Heaps’ law applies to translations better than it applies to the original texts.

I expect Heaps’ law would fit more closely if you looked at, say, all the novels by a particular author, especially if the author wrote all the books in his or her prime. (I believe I read that someone did a vocabulary analysis of Agatha Christie’s novels and detected a decrease in her vocabulary in her latter years.)

To test this out I looked at Jane Austen’s novels on Project Gutenberg. Here’s the data:

    |-----------------------+--------+------|
    | Novel                 |      n |    V |
    |-----------------------+--------+------|
    | Northanger Abbey      |  78147 | 5995 |
    | Persuasion            |  84117 | 5738 |
    | Sense and Sensibility | 120716 | 6271 |
    | Pride and Prejudice   | 122811 | 6258 |
    | Mansfield Park        | 161454 | 7758 |
    | Emma                  | 161967 | 7092 |
    |-----------------------+--------+------|

The parameters in Heaps’ law work out to K = 121.3 and β = 0.341, a much larger K than before, and a smaller β.

Here’s a comparison of the actual and predicted vocabulary sizes in the novels.

    |-----------------------+------+-------|
    | Novel                 | True | Model |
    |-----------------------+------+-------|
    | Northanger Abbey      | 5995 |  5656 |
    | Persuasion            | 5738 |  5799 |
    | Sense and Sensibility | 6271 |  6560 |
    | Pride and Prejudice   | 6258 |  6598 |
    | Mansfield Park        | 7758 |  7243 |
    | Emma                  | 7092 |  7251 |
    |-----------------------+------+-------|

If a suspected posthumous manuscript of Jane Austen were to appear, a possible test of authenticity would be to look at its vocabulary size to see if it is consistent with her other works. One could also look at the number of words used only once, as we discuss next.

Hapax legomenon

In linguistics, a hapax legomenon is a word that only appears once in a given context. The term comes from a Greek phrase meaning something said only once. The term is often shortened to just hapax.

I thought it would be interesting to look at the number of hapax legomena in each book since I could do it with a minor tweak of the code I wrote for the first part of this post.

Normally if someone were speaking of hapax legomena in the context of the Bible, they’d be looking at unique words in the original languages, i.e. Hebrew and Greek, not in English translation. But I’m going with what I have at hand.
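The tweak amounts to counting word frequencies and keeping only the words that occur exactly once. A sketch, assuming the same tokenization as in the fitting code above:

    import re
    from collections import Counter

    def hapax_count(text):
        # same preprocessing as before: lower case, strip non-letters, split
        words = re.sub("[^a-z]", " ", text.lower()).split()
        counts = Counter(words)
        return sum(1 for c in counts.values() if c == 1)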

Here’s a plot of the number of hapax legomena in each book of the KJV compared to the number of words in the book.

[Plot: Hapax legomena in each book of the KJV Bible, linear scale]

This looks a lot like the plot of vocabulary size and total words, suggesting the number of hapax legomena also follows a power law like Heaps’ law. This is evident when we plot again on a logarithmic scale and see a linear relation.

[Plot: Number of hapax legomena on a log-log scale]

Just to be clear on the difference between the two analyses in this post: in the first we looked at vocabulary size, the number of distinct words in each book. In the second we looked at words that only appear once. In both cases we’re counting unique words, but unique in different senses. In the first analysis, unique means that each word only counts once, no matter how many times it’s used. In the second, unique means that a word only appears once.


Testing Cliff RNG with DIEHARDER

My previous post introduced the Cliff random number generator. The post showed how to find starting seeds where the generator will start out by producing approximately equal numbers. Despite this flaw, the generator works well by some criteria.

I produced a file of a billion 32-bit integers by multiplying the output values, which were floating point numbers between 0 and 1, by 2³² and truncating to integers. Then I ran the DIEHARDER random number generator test suite.
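The conversion step looks something like the sketch below, where cliff_batch() is a hypothetical helper standing in for the generator code in the previous post, returning an array of floats strictly between 0 and 1. The dieharder command line is from memory, so check its documentation for the exact flag for raw binary file input.

    import numpy as np

    def append_uint32(xs, filename):
        # Scale floats in (0, 1) up to 32-bit integers and append as raw binary.
        ints = np.floor(np.asarray(xs) * 2**32).astype(np.uint32)
        with open(filename, "ab") as f:
            ints.tofile(f)

    # for _ in range(1000):
    #     append_uint32(cliff_batch(10**6), "cliff.bin")   # a billion values total
    #
    # Then run something like: dieharder -a -g 201 -f cliff.bin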

The results were interesting. Before running the tests, I thought the tests would nearly all pass or nearly all fail, more likely the latter. But what happened was that many tests passed and some failed hard [1].

Here’s a list of the tests that passed:

  • diehard_birthdays
  • diehard_rank_32x32
  • diehard_rank_6x8
  • diehard_bitstream
  • diehard_oqso
  • diehard_dna
  • diehard_count_1s_str
  • diehard_count_1s_byt
  • diehard_runs
  • sts_monobit
  • sts_serial
  • rgb_bitdist
  • rgb_kstest_test
  • dab_dct
  • dab_filltree2
  • dab_monobit2

The tests that failed were:

  • diehard_parking_lot
  • diehard_2sphere
  • diehard_3sphere
  • diehard_squeeze
  • diehard_craps
  • marsaglia_tsang_gcd
  • rgb_lagged_sum
  • dab_bytedistrib

I’ve left out a few test results that were ambiguous, as well as tests that were described as “Suspect” and “Do not use” on the DIEHARDER web site.

The site I mentioned in the previous post where I ran across this generator said that it passed a spherical generation test. I assume the implementation of that test was less demanding than the version included in DIEHARD. But the generator does well by other tests.

The lagged sum test tests for autocorrelation. Maybe the failure of this test has something to do with the fixed points discussed earlier.

Update: After writing this post I discovered that the generator has a short period, as I discuss here. That explains why the lagged sum test fails: the output has perfect autocorrelation at a lag equal to the period.


[1] By “failed hard” I mean the test returned a p-value of zero. The p-value couldn’t actually be zero, but it was close enough that the displayed value was exactly zero.

Serious applications of a party trick


In a group of 30 people, it’s likely that two people have the same birthday. For a group of 23 the probability is about 1/2, and it goes up as the group gets larger.
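The probabilities are easy to compute directly: the chance that n people all have distinct birthdays is a product of fractions, and the chance of a shared birthday is its complement. A quick sketch, ignoring leap years:

    from math import prod

    def p_shared_birthday(n, days=365):
        # probability that at least two of n people share a birthday
        return 1 - prod((days - k) / days for k in range(n))

    print(p_shared_birthday(23))  # about 0.507
    print(p_shared_birthday(30))  # about 0.706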

In a group of a few dozen people, it’s unlikely that anyone will have a particular birthday, but it’s likely that two people will have the same birthday. This makes a good party trick, but it’s much more than a party trick. It illustrates a general principle that comes up frequently in applications.

For example, it explains a common mistake in seeding random number generators and it lets you predict the chances of hash function collisions. Also, simulation of the birthday problem is used as a test for random number generators because the simulation is sensitive to flaws such as correlated output.

What are other examples of mathematical party tricks that illustrate an important principle in applications? I’m sure there are others, but the birthday problem is the first one that comes to mind.

Per stirpes and random walks

If an inheritance is to be divided per stirpes, each child gets an equal share. If a child has died but has living descendants, his or her share is distributed among those descendants by applying the rule recursively.

Example

For example, suppose a man had two children, Alice and Bob, and stipulates in his will that his estate is to be divided per stirpes. If Alice and Bob are still alive when he dies, his estate is split evenly. Suppose, however, that Alice is still alive but Bob has died, and that Bob has three living children, Carol, David, and Erin. In that case Alice would inherit 1/2 of the estate, and each of Bob’s children would inherit 1/6, i.e. 1/3 of Bob’s 1/2.
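The recursive rule is natural to express in code. Here’s a sketch in Python, where a living heir is represented by a name and a deceased heir by the list of his or her own heirs (with lines that have died out already pruned away, a point discussed below).

    def per_stirpes(heirs, share=1.0):
        """Divide `share` equally among `heirs`, recursing into deceased heirs.

        Each heir is either a name (a living person) or a list: the heirs
        of a person who has died. Returns a dict of shares by name.
        """
        shares = {}
        for heir in heirs:
            part = share / len(heirs)
            if isinstance(heir, str):
                shares[heir] = shares.get(heir, 0) + part
            else:
                for name, s in per_stirpes(heir, part).items():
                    shares[name] = shares.get(name, 0) + s
        return shares

    print(per_stirpes(["Alice", ["Carol", "David", "Erin"]]))
    # {'Alice': 0.5, 'Carol': 0.1666..., 'David': 0.1666..., 'Erin': 0.1666...}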

State law

In some states, such as Texas, per stirpes is the default when someone dies without a will. Who knows what they do in Nevada? Maybe the descendants play poker for the inheritance. I don’t know. I’m not a lawyer, certainly not a Nevada lawyer.

Random walk

Here’s a random process whose expected value gives the same result as per stirpes.

Convert the inheritance to a jar of pennies, possibly a very large jar. Repeat the following steps until all the pennies are gone.

  1. Take a penny out of the jar and perform a random walk on the family tree representing the descendants of the departed.
  2. When you come to a branch in the tree, choose a branch at random with each branch having equal probability.
  3. When you encounter a living person, give them the penny.

This assumes that you first prune the descendant tree of any lines that have died out. That is, we assume every terminal node of the tree represents a living person.

Why is it necessary to trim the tree? If you ended up at a dead end, couldn’t you just put the penny back and start over? No. Suppose in the example above that Alice and Carol are the only surviving descendants. Then per stirpes says they should receive equal amounts, since Carol inherits all of her father’s share. But if we did the random walk without removing David and Erin, then 1/2 the time we’d give a penny to Alice, 1/6 of the time we’d give it to Carol, and 1/3 of the time we’d start over. Alice would get 75% of the estate.
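Here’s a sketch of the penny simulation for that scenario, with and without pruning, to confirm the 75/25 split. A living person is a name, a deceased person is the list of branches beneath them, and an empty list is a line that has died out.

    import random

    def walk(node):
        while isinstance(node, list):
            if not node:
                return None               # dead end: this line has died out
            node = random.choice(node)    # each branch equally likely
        return node

    def simulate(tree, pennies=100_000):
        counts = {}
        for _ in range(pennies):
            person = None
            while person is None:         # put the penny back and start over
                person = walk(tree)
            counts[person] = counts.get(person, 0) + 1
        return {name: count / pennies for name, count in counts.items()}

    # Alice is alive; Bob has died leaving Carol (alive) and David and Erin,
    # both of whom have died without descendants.
    unpruned = ["Alice", ["Carol", [], []]]
    pruned   = ["Alice", ["Carol"]]

    print(simulate(unpruned))  # Alice near 0.75, Carol near 0.25
    print(simulate(pruned))    # both near 0.50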


Using one RNG to sample another

Suppose you have two pseudorandom bit generators. They’re both fast, but not suitable for cryptographic use. How might you combine them into one generator that is suitable for cryptography?

Coppersmith et al [1] had a simple but effective approach which they call the shrinking generator, also called irregular decimation. The idea is to use one bit stream to sample the other. Suppose the two bit streams are ai and bi. If ai = 1, then output bi. Otherwise, throw it away. In NumPy or R notation, this is simply b[a > 0].

Examples in Python and R

For example, in Python

    >>> import numpy as np
    >>> a = np.array([1,0,1,1,0,0,0,1])
    >>> b = np.array([1,0,1,0,0,1,1,0])
    >>> b[a > 0]
    array([1, 1, 0, 0])

Here’s the same example in R.

    > a = c(1,0,1,1,0,0,0,1)
    > b = c(1,0,1,0,0,1,1,0)
    > b[a>0]
    [1] 1 1 0 0

Linear Feedback Shift Registers

Coppersmith and colleagues were concerned specifically with linear feedback shift register (LFSR) streams. These are efficient sources of pseudorandom bits because they lend themselves to hardware implementation or low-level software implementation. But the problem is that linear feedback shift registers are linear. They have an algebraic structure that enables simple cryptographic attacks. But the procedure above is nonlinear and so less vulnerable to attack.

A potential problem is that the shrinking generator outputs bits at an irregular rate, and a timing attack might reveal something about the sampling sequence a unless some sort of buffering conceals this.

Other stream sources

While the shrinking generator was developed in the context of LFSRs, it seems like it could be used to combine any two PRNGs in hopes that the combination is better than the components. At a minimum, it doesn’t seem it should make things worse [2].

There is a problem of efficiency, however, because on average the shrinking generator has to generate 4n bits to output n bits. For very efficient generators like LFSRs this isn’t a problem, but it could be a problem for other generators.

Self-shrinking generators

A variation on the shrinking generator is the self-shrinking generator. The idea is to use half the bits of a stream to sample the other half. Specifically, look at pairs of bits, and if the first bit is a 1, return the second bit. If the first bit is a 0, return nothing.
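In the same NumPy style as the example above, a self-shrinking step might look like this sketch:

    import numpy as np

    def self_shrink(bits):
        # Look at the bits in pairs; when the first bit of a pair is 1,
        # output the second bit of the pair.
        bits = np.asarray(bits)
        bits = bits[: 2 * (len(bits) // 2)]   # drop a trailing unpaired bit
        pairs = bits.reshape(-1, 2)
        return pairs[pairs[:, 0] == 1, 1]

    print(self_shrink([1, 0, 1, 1, 0, 0, 0, 1]))  # [0 1]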

Use in stream ciphers

The eSTREAM competition for cryptographically secure random bit generators included one entry, Decim v2, that uses shrinking generators. The competition had two categories, methods intended for software implementation and methods intended for hardware implementation. Decim was entered in the hardware category. According to the portfolio PDF [3] on the competition site,

Decim contains a unique component in eSTREAM, that of irregular decimation, and is an interesting addition to the field of stream ciphers.

That sounds like it was the only method to use irregular decimation (i.e. shrinking generators).

The first version of Decim had some flaws, but the document says “no adverse cryptanalytic results are known” for the second version. Decim v2 made it to the second round of the eSTREAM competition but was not chosen as a finalist because

… the cipher doesn’t seem to deliver such a satisfying performance profile, so while there might be some very interesting elements to the Decim construction, we feel that the current proposal doesn’t compare too well to the other submissions for the hardware profile.

That seems to imply Decim might be competitive with a different implementation or in some context with different trade-offs.


[1] Coppersmith, D., Krawczyk, H., Mansour, Y. The Shrinking Generator. Advances in Cryptology — CRYPTO ’93. Lecture Notes in Computer Science, vol. 773, pp. 22–39. Springer, Berlin.

[2] If a and b are good sources of random bits, it seems b sampled by a should be at least as good. In fact, if a is poor quality but b is good quality, b sampled by a should still be good. However, the reverse could be a problem. If b is biased, say it outputs more 1s than 0s, then even if a is good quality, the sample of b will be biased in favor of 1s as well.

[3] The link to this report sometimes works but often doesn’t. There’s something unstable about the site. In case it works, here’s the URL: http://www.ecrypt.eu.org/stream/portfolio.pdf

Random sampling from a file

I recently learned about the Linux command line utility shuf from browsing The Art of Command Line. This could be useful for random sampling.

Given just a file name, shuf randomly permutes the lines of the file.

With the option -n you can specify how many lines to return. So it’s doing sampling without replacement. For example,

    shuf -n 10 foo.txt

would select 10 lines from foo.txt.

Actually, it would select at most 10 lines. You can’t select 10 lines without replacement from a file with fewer than 10 lines. If you ask for an impossible number of lines, the -n option is ignored.
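If shuf isn’t available, the same sampling without replacement is easy to sketch in Python, at the cost of reading the whole file into memory:

    import random

    def sample_lines(filename, k):
        with open(filename) as f:
            lines = f.readlines()
        # Like shuf -n, never ask for more lines than the file has.
        return random.sample(lines, min(k, len(lines)))

    print("".join(sample_lines("foo.txt", 10)), end="")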

You can also sample with replacement using the -r option. In that case you can select more lines than are in the file since lines may be reused. For example, you could run

    shuf -r -n 10 foo.txt

to select 10 lines drawn with replacement from foo.txt, regardless of how many lines foo.txt has. For example, when I ran the command above on a file containing

    alpha
    beta
    gamma

I got the output

    beta
    gamma
    gamma
    beta
    alpha
    alpha
    gamma
    gamma
    beta

I don’t know how shuf seeds its random generator. Maybe from the system time. But if you run it twice you will get different results. Probably.
