One of the most widely used types of pseudo-random number generator is the linear congruential sequence. These are defined by recurrence relations of the form \begin{equation}\label{eq:genrandnum_hull_dobell_01} x_{i+1} \ \equiv \ ax_{i} + c \quad (\operatorname{mod} m) \end{equation} where all the terms are non-negative integers. The usual terminology is that \(a\) is called the ‘multiplier’, \(c\) the ‘increment’ and \(m\) the ‘modulus’.

Any such sequence must be periodic since it contains at most \(m\) different numbers each of which is determined only by its predecessor. The period can therefore never exceed \(m\) but even that cannot be guaranteed. A long period is necessary but not sufficient for good pseudo-randomness, so it is important to understand the conditions that determine its length.
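To make this concrete, here is a minimal Python sketch (my own illustration; the parameter values are arbitrary) of such a generator together with a brute-force period count:

```python
def lcg(a, c, m, seed):
    """Yield the sequence x_{i+1} = (a*x_i + c) mod m."""
    x = seed % m
    while True:
        yield x
        x = (a * x + c) % m

def period(a, c, m, seed=0):
    """Number of steps until the seed value reappears. This terminates
    whenever gcd(a, m) = 1, since then x -> (a*x + c) mod m is a
    bijection and every state lies on a cycle."""
    x0 = seed % m
    x = (a * x0 + c) % m
    n = 1
    while x != x0:
        x = (a * x + c) % m
        n += 1
    return n

print(period(5, 3, 16))   # 16: the full period m
print(period(3, 3, 16))   # 8: only half of m
```

With \(m = 16\), the multiplier \(5\) achieves the maximum possible period while \(3\) does not; the theorem discussed below explains why.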

That is the objective of a paper published in 1962 by T. E. Hull and A. R. Dobell (Hull and Dobell, 1962). If you have read anything about the subject you have probably heard of something commonly referred to as ‘the Hull-Dobell theorem’. The paper actually presents *two* theorems, but it seems that only the first gets called ‘the’.

Considering them in reverse order, the second theorem is about sequences where \(c = 0\), i.e. ‘multiplicative’ ones. It makes use of some quite advanced number-theoretical concepts and even Hull and Dobell admit that:

‘The basic theorem for the multiplicative congruential methods is more complicated and more difficult than for the mixed congruential methods.’

For this reason they present only a condensed proof, referring the reader to textbooks to flesh out the details.

The first theorem, on the other hand, is not so difficult, and Hull and Dobell present their proof in full. This theorem is about sequences where \(c \neq 0\), which H&D refer to as ‘mixed’. The theorem says that, in that case, a period of length \(m\) is achievable and will occur if:

1. \(c\) and \(m\) are coprime,

2. all the prime factors of \(m\) divide \(a-1\), and

3. if 4 divides \(m\), then 4 divides \(a-1\).
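The conditions are mechanical to check. Here is a Python sketch (my own; the helper names are invented for illustration) that tests all three for a given triple \((a, c, m)\):

```python
from math import gcd

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def satisfies_hull_dobell(a, c, m):
    """True iff (a, c, m) meets all three conditions of the theorem."""
    return (gcd(c, m) == 1                                       # condition 1
            and all((a - 1) % p == 0 for p in prime_factors(m))  # condition 2
            and (m % 4 != 0 or (a - 1) % 4 == 0))                # condition 3

print(satisfies_hull_dobell(5, 3, 16))                    # True
print(satisfies_hull_dobell(3, 3, 16))                    # False: 4 | 16 but 4 does not divide a - 1 = 2
print(satisfies_hull_dobell(1664525, 1013904223, 2**32))  # True: a well-known parameter set
```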

The proof, despite what H&D say, is not easy to follow, and their paper is a classic of ‘proof by intimidation’. Unpacking all the ‘*it is easy to show that*’s and all the ‘*obviously*’s took me weeks. I therefore thought I’d post my workings in case there is anyone out there who might like to see them. Maybe I might attempt the second theorem at some point in the future.

The following proof relies on a number of preliminary results. Rather than putting them in a massive preamble that would make any reader's eyes glaze over, I have put them on a separate page, labelled as ‘Theorems’, to which the textual references are hyperlinked.

## Notation

Copious use is made of the following notations:

| Symbol | Meaning | Examples |
|---|---|---|
| \(\operatorname{\%}\) | Remainder after the number on the left of the operator is divided by the number on its right | \(127 \operatorname{\%} 11 = 6\) |
| \(\mid\) | Divides (exactly): the number on the right of the operator is an integer multiple of the number on its left | \(7 \mid 56\) |
| \(\nmid\) | Doesn't divide: the number on the right of the operator is NOT an integer multiple of the number on its left | \(7 \nmid 156\) |
| \(\equiv\) | Congruent ‘modulo’ some number | \(29 \equiv 5 \; \operatorname{mod} 12\) |

Knuth (1998) uses the symbol ‘\(\perp\)’ to mean coprime, but I have never seen it used outside that text and its companion volumes, so I have decided not to use it.

Many authors, at least in the field of random number generation, use the symbol ‘mod’ to mean the remainder after division, but this invites confusion with situations where the same symbol is used to denote the modulus of a congruence relation under modular arithmetic. Knuth (1997) uses the symbol ‘modulo’ for the latter and ‘mod’ for the former. However, I have decided to stick with ‘mod’ for the modulus of a congruence relation and use ‘%’ for remainder after division, as they are shorter. The percent sign is well established for this purpose as exemplified by its use in many programming languages.

## Proof

Consider the sequence defined by the recurrence relation \begin{equation}\tag{\ref{eq:genrandnum_hull_dobell_01}r} x_{i+1} \ \equiv \ ax_{i} + c \quad (\operatorname{mod} m) \end{equation} where all quantities are non-negative integers and \(c \neq 0\). The period of the sequence is the smallest value of \(n\) such that \begin{equation}\label{eq:genrandnum_hull_dobell_period_n_k} x_{i+n} \equiv x_i \ (\operatorname{mod} \ m) \end{equation}

The fact that each member of the sequence is determined only by its predecessor means that if \eqref{eq:genrandnum_hull_dobell_period_n_k} holds for any particular \(i\), then it holds for all \(i\) (See Theorem 12).

To find \(n\) we need a formula for \(x_{i+n}\) in terms of only \(x_i\), i.e., not a recurrence relation. To find such a formula, first define a new sequence \(\{x_i^\prime\}\) that obeys the same recurrence relation, but with an equals sign instead of a congruence sign, and with the two sequences having the same value at an arbitrarily chosen index \(k\), which provides a ‘connection’ between the two sequences. In symbols:

\begin{equation}\label{eq:genrandnum_hull_dobell_01_equality} x^\prime_{i+1} = ax^\prime_{i} + c, \qquad x^\prime_k = x_k \end{equation}So whereas the sequence \(\{x_i\}\) obeys the rules of modular arithmetic and always stays in the range \(0 \leqslant x_i \leqslant (m-1)\), the sequence \(\{x^\prime_i\}\) obeys the rules of ordinary algebra and is not subject to any constraints. We can therefore derive formulae using \(x_i^\prime\) without having to worry about the peculiarities of modular arithmetic, such as keeping track of remainders, and can translate the result to \(x_i\) at the end using the relation:

\begin{equation}\label{eq:genrandnum_hull_dobell_01_congruence_equality} x_i^\prime \equiv x_i \quad (\operatorname{mod} m) \end{equation}To prove \eqref{eq:genrandnum_hull_dobell_01_congruence_equality}, tentatively assume it is true for a particular value of the index \(i\). It follows from the addition and multiplication rules for congruences that

\begin{equation*} \left( ax^\prime_{i} + c \right) \ \equiv \ \left( ax_{i} + c \right) \quad (\operatorname{mod} m) \end{equation*}so

\begin{equation*} x_{i+1}^\prime \equiv x_{i+1} \quad (\operatorname{mod} m) \end{equation*}which is the same result as is obtained by merely incrementing the index \(i\). So if \eqref{eq:genrandnum_hull_dobell_01_congruence_equality} is true for any particular value of \(i\), then it is true for \(i+1\). We know \eqref{eq:genrandnum_hull_dobell_01_congruence_equality} is true when \(i = k\), so it is true for all \(i\).
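The induction is easy to check numerically. In this Python sketch (arbitrary parameters of my own choosing) the two sequences are stepped side by side and stay congruent at every index:

```python
a, c, m = 5, 3, 16
x = xp = 7                  # x_k = x'_k: the two sequences agree at index k

for _ in range(12):
    x = (a * x + c) % m     # modular sequence, confined to 0..m-1
    xp = a * xp + c         # unconstrained sequence, grows without bound
    assert xp % m == x      # x'_i ≡ x_i (mod m) at every step
```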

We can also prove \eqref{eq:genrandnum_hull_dobell_01_congruence_equality} using visual imagery. The sequence \(\{x_i\}\) obeys ‘clock arithmetic’ and so its values can be visualised as lying on the circumference of a circle, whereas the sequence \(\{x_i^\prime\}\) obeys ordinary arithmetic and so can be visualised as lying on a straight line extending to infinity in both directions. We then pick an index, \(k\), on the circle, find the same value on the line and place the circle on the line so the two \(k\)s are at the point of contact. If we then roll the circle along the line an arbitrary distance in either direction, the indices at the point of contact, though not necessarily equal, will by definition always be congruent. Therefore, any function that is equivalent to a forward (or backward) roll will return congruent values when fed congruent arguments. This is true of addition, since that literally *is* just a forward roll, and of multiplication, since that is the same as repeated addition.

Next we derive a formula for \(x_{i+n}^\prime\) in terms of \(x_i^\prime\). We start by evaluating the first few recurrences of \eqref{eq:genrandnum_hull_dobell_01_equality} to see if a pattern emerges. \begin{equation*} x^\prime_{i+1} = ax^\prime_{i} + c \end{equation*} \begin{align*} x^\prime_{i+2} &= ax^\prime_{i+1} + c \\ &= a\left(ax^\prime_{i} + c\right) + c \\ &= a^2x^\prime_{i} + (a+1) c \end{align*} \begin{align*} x^\prime_{i+3} &= ax^\prime_{i+2} + c \\ &= a \left( a^2x^\prime_{i} + (a+1) c \right) + c \\ &= a^3x^\prime_{i} + \left(a^2+a+1\right)c \end{align*} \begin{align*} x^\prime_{i+4} &= ax^\prime_{i+3} + c \\ &= a\left( a^3x^\prime_{i} + \left(a^2+a+1\right)c \right) + c \\ &= a^4x^\prime_{i} + \left(a^3+a^2+a+1\right)c \end{align*} from which we may conjecture that: \begin{equation}\label{eq:genrandnum_hull_dobell_02_equality} x^\prime_{i+n}= a^nx^\prime_{i} + \left(a^{n-1}+a^{n-2}+\cdots+a+1\right)c \end{equation}

We can now prove this by induction. First replace \(n\) by \(n+1\) in \eqref{eq:genrandnum_hull_dobell_02_equality}

\begin{equation}\label{eq:genrandnum_hull_dobell_03} x^\prime_{i+(n+1)}= a^{n+1}x^\prime_{i} + \left(a^{n}+a^{n-1}+\cdots+a+1\right)c \end{equation}Next derive \(x^\prime_{i + (n+1)}\) using the recurrence formula Equation \eqref{eq:genrandnum_hull_dobell_01_equality}: \begin{align} x^\prime_{i+(n+1)} &= ax^\prime_{i+n} + c \notag \\ &= a\left( a^nx^\prime_{i} + \left(a^{n-1}+a^{n-2}+\cdots+a+1\right)c \right) + c \notag \\ &= a^{n+1}x^\prime_{i} + \left(a^n+a^{n-1}+\cdots+a^2+a+1\right)c \label{eq:genrandnum_hull_dobell_2a} \end{align} which is the same as \eqref{eq:genrandnum_hull_dobell_03}, so the result is proved.

It is part of the definition of the sequence \(\{x_i^\prime\}\) that \(x_k^\prime = x_k\) so replacing \(i\) by \(k\) in Equation \eqref{eq:genrandnum_hull_dobell_02_equality} and then substituting \(x_k^\prime = x_k\) gives: \begin{equation*} x^\prime_{k+n} = a^nx_{k} + \left(a^{n-1}+a^{n-2}+\cdots+a+1\right)c \end{equation*} and we showed above that \(x_{k+n} \equiv x_{k+n}^\prime \; (\operatorname{mod} m)\) so \begin{equation}\label{eq:genrandnum_hull_dobell_02} \boxed{ x_{k+n} \ \equiv \ a^nx_{k} + \left(a^{n-1}+a^{n-2}+\cdots+a+1\right)c \quad (\operatorname{mod} m) } \end{equation}

The fact that \(k\) is arbitrary means that \eqref{eq:genrandnum_hull_dobell_02} holds for all \(k\). In other words \eqref{eq:genrandnum_hull_dobell_02} gives the value of the member of the sequence that is \(n\) places further on from any particular other member.

Finally, substitute the formula for the sum of a geometric series \begin{equation*} a^{n-1} + a^{n-2} + \cdots + a + 1 = \frac{a^n-1}{a-1} \end{equation*} into \eqref{eq:genrandnum_hull_dobell_02} giving \begin{equation}\label{eq:genrandnum_hull_dobell_01a} \boxed{ x_{k+n} \ \equiv \ a^nx_{k} + \frac{a^n - 1}{a-1}c \quad (\operatorname{mod} m) } \end{equation}

Equation \eqref{eq:genrandnum_hull_dobell_01a} is sometimes called the ‘jump ahead’ formula since, in theory at least, a random number generator that is based on a linear congruential sequence could use it to enable jumping ahead an arbitrary number of steps, which could be useful if you want to generate distinct sequences that are uncorrelated with each other. In practice it isn't that easy since evaluating Equation \eqref{eq:genrandnum_hull_dobell_01a} exactly would require storing numbers that are too big for the computer to store. Fortunately in 1994 F. B. Brown proposed an algorithm that solves this problem (Brown, 1994). It would be too much of a diversion to go into that here, however.
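In a language with arbitrary-precision integers the storage problem disappears for modest jumps, and the standard trick of composing affine maps by repeated squaring brings the cost down to \(O(\log n)\) multiplications even for huge ones. The sketch below is written in that spirit; it is my own illustration, not a reproduction of Brown's algorithm:

```python
def jump_ahead(x, n, a, c, m):
    """Return x_{k+n} given x_k = x, by building the n-fold composition
    of f(x) = (a*x + c) mod m from the binary digits of n."""
    A, C = 1, 0                               # identity map: x -> 1*x + 0
    while n:
        if n & 1:                             # fold in the current power of f
            A, C = (A * a) % m, (C * a + c) % m
        a, c = (a * a) % m, (a * c + c) % m   # square the map: f^(2i) from f^i
        n >>= 1
    return (A * x + C) % m

# cross-check against stepping the recurrence directly
a, c, m = 1664525, 1013904223, 2**32
x = y = 12345
for _ in range(1000):
    y = (a * y + c) % m
assert jump_ahead(x, 1000, a, c, m) == y
```

Every intermediate quantity stays below \(m^2\), so the same idea works in fixed-width arithmetic with a double-width multiply.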

We can now use these formulae to find out the conditions under which \(x_{k+n} = x_k\).

### Case where \(a \equiv 1 \; (\operatorname{mod} \ m) \)

When \(a \equiv 1 \; (\operatorname{mod} \ m) \) it follows from \eqref{eq:genrandnum_hull_dobell_02} and the multiplication rule for congruences that

\begin{equation}\label{eq:genrandnum_hull_dobell_04} x_{k+n} \equiv x_k + nc \quad (\operatorname{mod} m) \end{equation}It follows that \(x_{k+n} \equiv x_k\) if

\begin{equation}\label{eq:genrandnum_hull_dobell_04a} nc \equiv 0 \quad (\operatorname{mod} m) \end{equation}which can only occur if \(nc\) is exactly divisible by \(m\). If \(c\) by itself is divisible by \(m\) then \eqref{eq:genrandnum_hull_dobell_04a} is true for all \(n,\) which would mean that \(x_{k+n} \equiv x_k\) for all \(n\), which would mean that all members of the sequence are identical, i.e. the sequence is ‘stuck’ on a single value. Otherwise, the smallest \(n\) for which \(m \mid nc\) is \(n = m/\gcd(c, m)\), which equals \(m\) precisely when \(c\) and \(m\) are coprime; so under condition 1 the period of the sequence is \(m\).

This result applies to all linear congruential generators with \(c \neq 0\) and \(a = 1\), irrespective of whether any of the conditions of this theorem are met. However, if \(c\) is small compared with \(m\) the sequence will be very non-random.
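A quick numerical check of the \(a = 1\) case (my own sketch, with an arbitrary modulus): the measured period is \(m/\gcd(c, m)\), collapsing to the full \(m\) exactly when \(c\) and \(m\) are coprime.

```python
from math import gcd

def period_a1(c, m, seed=0):
    """Period of the a = 1 generator x -> (x + c) mod m."""
    x0 = seed % m
    x, n = (x0 + c) % m, 1
    while x != x0:
        x, n = (x + c) % m, n + 1
    return n

m = 24
for c in (1, 5, 9, 16):
    assert period_a1(c, m) == m // gcd(c, m)

print([period_a1(c, m) for c in (1, 5, 9, 16)])   # [24, 24, 8, 3]
```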

### Case where \(m\) is the product of distinct primes

Condition \(2\) says that \(a-1\) is divisible by each prime factor of \(m\), which implies that it is divisible by their least common multiple, which, since they are all prime, is equal to their product. If \(m\) is the product of distinct primes, that is if each prime factor appears only once, then \(m\) is *equal* to the product of its prime factors, rather than merely being divisible by that product. This means that \(a-1\) is divisible by \(m\) as a whole, not just by a common divisor less than \(m\). In other words, the greatest common divisor of \(m\) and \(a-1\) is \(m\). This means that:
\begin{equation*}
a \equiv 1 \quad (\operatorname{mod} m)
\end{equation*}
Substituting \(a \equiv 1\) into \eqref{eq:genrandnum_hull_dobell_02} and applying the sum, product and power rules for congruences results in \eqref{eq:genrandnum_hull_dobell_04} again.

Hull and Dobell (1962) say:

‘When \(\alpha = 1\) condition (ii) leads us again to the trivial case with \(a=1\). We therefore need only consider \(a > 2\).’

and Knuth (1998, p17) says:

It turns out that … when \(m\) is the product of distinct primes, only \(a = 1\) will produce the full period, but when \(m\) is divisible by a high power of some prime there is considerable latitude in the choice of \(a\).

Why did they both use the equality operator and not the congruence operator? Maybe it is because the only value of \(a\) that is less than \(m\) *and* congruent to \(1 \; ( \operatorname{mod} m ) \) is \(1\), so if \(a\) is required to be less than \(m\), which would be the case if \(m = 2^w\), where \(w\) is the width in bits of the unsigned integer data type that contains \(a\), \(c\) and \(x\), then the equality would suffice.
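Knuth's claim is easy to confirm by brute force for a small squarefree modulus. In this sketch (my own, with \(m = 30 = 2 \cdot 3 \cdot 5\) and an arbitrary \(c\) coprime to \(m\)) the only multiplier below \(m\) giving the full period is \(a = 1\):

```python
def has_full_period(a, c, m):
    """True iff the sequence started at 0 first returns to 0 after
    exactly m steps (equivalently, visits every residue)."""
    x = 0
    for n in range(1, m + 1):
        x = (a * x + c) % m
        if x == 0:
            return n == m
    return False    # never returned within m steps

m, c = 30, 7
full = [a for a in range(1, m) if has_full_period(a, c, m)]
print(full)         # [1]
```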

### Case where \(a > 1\) and \(m\) is a power of a prime

Since the period of the sequence is the smallest value of \(n\) such that \(x_{k+n} \equiv x_k \ (\operatorname{mod} \ m)\), we first substitute \(x_{k+n} \equiv x_k\) into \eqref{eq:genrandnum_hull_dobell_01a}:

\begin{equation*} x_k \ \equiv \ a^nx_{k} + \frac{a^n - 1}{a-1}c \quad (\operatorname{mod} m) \end{equation*}which rearranges to

\begin{equation}\label{eq:genrandnum_hull_dobell_01b} \frac{a^n - 1}{a-1} \left( x_k(a-1) + c\right) \ \equiv \ 0 \quad (\operatorname{mod} m) \end{equation}If \eqref{eq:genrandnum_hull_dobell_01b} is true, either

\(m \mid \left(x_k(a-1) + c\right)\)

or

\(m \mid \frac{a^n-1}{a-1}\)

First consider the possibility that \(m \mid \left( x_k(a-1) + c\right)\). If that were the case \eqref{eq:genrandnum_hull_dobell_01b} would be true for all values of \(n\), which would mean that all members of the sequence are equal, so you could say that all values of \(n\) (or none, depending on how you look at it) are the period. To see that this is indeed the case, observe that \begin{equation*} x_k(a-1) + c \equiv x_{k+1} - x_k \quad (\operatorname{mod} m) \end{equation*}So \(m \mid \left(x_k(a-1) + c\right) \; \implies \; m \mid \left(x_{k+1} - x_k\right)\) which by definition means that \begin{equation*} x_{k+1} \equiv x_k \quad (\operatorname{mod} m) \end{equation*}
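Such a ‘stuck’ combination is easy to exhibit; in this hand-picked example (mine, not Hull and Dobell's) \(m \mid x_k(a-1) + c\), and the sequence never moves:

```python
a, c, m = 5, 12, 16    # note gcd(c, m) = 4, so condition 1 fails
x = 1                  # x*(a-1) + c = 4 + 12 = 16, which m divides

assert (x * (a - 1) + c) % m == 0
assert (a * x + c) % m == x        # x_{k+1} ≡ x_k: stuck at 1 forever
```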

To avoid such an eventuality, we need a way of ruling out the possibility that \(m \mid \left(x_k(a-1) + c\right)\). One way of doing this is to impose conditions on the possible values of \(a\) and \(c\) that exclude combinations for which \(m \mid \left(x_k(a-1) + c\right)\).

Theorem 5 suggests that we could ensure that \(m \nmid \left(x_k(a-1) + c\right)\) by requiring that one part of the sum is coprime to \(m\) and the other part is divisible by all the prime factors of \(m\). If we chose \(x_k(a-1)\) to be coprime to \(m\) then both \(x_k\) and \((a-1)\) would have to be coprime to \(m\), but we want our theorem to apply for all values of \(x_k\), so that won't do. However, if we choose a value of \(a\) that makes \((a-1)\) divisible by all the prime factors of \(m\) then \(x_k(a-1)\) is also divisible by all the prime factors of \(m\), irrespective of the value of \(x_k\). Therefore, this leads us to the first two of Hull and Dobell’s conditions, namely

1. \(c\) and \(m\) are coprime, and

2. all the prime factors of \(m\) divide \(a-1\).

This doesn't necessarily mean that there aren't any other combinations of \(a\) and \(c\) for which \(m \nmid \left(x_k(a-1) + c\right)\), but we don't need to be able to find *all* such combinations, we just need to be able to find one when we want one.

Greenberger (1961) observes that in the special case where \(m\) is a power of \(2\), we can show that \(m \nmid \left(x_k(a-1) + c\right)\) without resorting to Theorem 5, since in that case condition 1 implies that \(c\) is odd and condition 2 implies that \((a-1)\) is even; the sum of an even number and an odd number is odd, and an even number cannot divide an odd one.

It follows that when \(a\) and \(c\) are subject to the above two conditions, then \(x_{k+n} \equiv x_k\) if:

\begin{equation}\label{eq:genrandnum_hulldobell_intermediate_01} \frac{a^n - 1}{a-1} \equiv 0 \quad (\operatorname{mod} m) \end{equation}or, equivalently, \(\left(a^n-1 \right) / (a-1)\) is divisible by \(m\). So the period of the sequence \eqref{eq:genrandnum_hull_dobell_01} is the smallest value of \(n\) that makes \eqref{eq:genrandnum_hulldobell_intermediate_01} true.
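Because the geometric sum \(a^{n-1} + \cdots + a + 1\) is an ordinary integer, this criterion can be checked without ever dividing by \(a-1\) in modular arithmetic. A Python sketch (illustrative parameters of my own) that finds the smallest such \(n\):

```python
def geom_sum_mod(a, n, m):
    """(a^(n-1) + ... + a + 1) mod m, accumulated term by term."""
    total, term = 0, 1
    for _ in range(n):
        total = (total + term) % m
        term = (term * a) % m
    return total

a, m = 5, 16    # a - 1 = 4 is divisible by 2 and by 4, as the theorem requires
n = next(n for n in range(1, m + 1) if geom_sum_mod(a, n, m) == 0)
print(n)        # 16: the smallest such n is the full period m
```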

Equation \eqref{eq:genrandnum_hulldobell_intermediate_01} doesn't involve \(x_k\) at all, and so implies that the period, \(n\), must be independent of the value of \(x_k\). This means that the number of terms between one appearance of a given number and the next is the same throughout the entire sequence, which is consistent with the corollary to Theorem 12.

At first sight it might seem like a useful simplification to make use of the fact that \eqref{eq:genrandnum_hulldobell_intermediate_01} implies that \(a^n-1 \equiv 0 \; (\operatorname{mod} m)\), but it turns out not to make any difference, in terms of simplicity, which of these options we choose, so we follow Hull and Dobell (1962) and proceed directly from \eqref{eq:genrandnum_hulldobell_intermediate_01}.

We tentatively try \(n = m = p^\alpha\), where \(p\) is prime and \(\alpha\) is a positive integer, to see if this value will satisfy \eqref{eq:genrandnum_hulldobell_intermediate_01}. To do this we need to assume a functional form for \(a\) that guarantees Condition 2 is satisfied, i.e. that \(p \mid (a-1)\). Hull and Dobell (1962) chose the following form for \(a\): \begin{equation}\label{eq:genrandnum_hull_dobell_m_p_alpha} a = 1 + kp^{\beta} \end{equation} where \(k\) is chosen to be coprime to \(p\) and \(\beta \geqslant 1\) is an as yet undetermined integer.

While this is certainly consistent with the requirement that \(p \mid (a-1)\), it is not the most general functional form that \(a\) could have and still be consistent with that requirement. It therefore leaves a nagging doubt as to whether some other functional form for \(a\), that is also consistent with \(p \mid (a-1)\), would have produced a different result. To avoid this possibility we choose the following functional form for \(a\): \begin{equation}\label{eq:genrandnum_hull_dobell_m_p_alpha_general} a = 1 + pf(p) \end{equation} where \(f(p)\) is an unspecified function of \(p\) (and possibly other arguments too, as long as they don't include \(a\) or \(c\)) that we tentatively assume is not divisible by \(p\), while keeping open the option of relaxing that assumption later if necessary.

The binomial expansion of \(\left(z+1\right)^n\) is

\begin{align} (z+1)^n &= \sum_{r=0}^{n} \binom{n}{r} \, z^r \notag \\ &= 1 + \sum_{r=1}^{n} \binom{n}{r} \, z^r \label{eq:genrandnum_binomial_expansion} \end{align}where \(\binom{n}{r}\) is a binomial coefficient. Substituting \(z = pf\) into \eqref{eq:genrandnum_binomial_expansion} gives

\begin{equation*} \left(1 + pf \right)^n = a^n = 1 + \sum_{r=1}^{n} \binom{n}{r} \, \left( pf \right)^r \end{equation*}so

\begin{equation}\label{eq:genrandnum_binomial_expansion_k_p_alpha_1} a^n - 1 = \sum_{r=1}^{n} \binom{n}{r} \, \left( pf \right)^{r} \end{equation}so

\begin{equation}\label{eq:genrandnum_binomial_expansion_k_p_alpha_2} \frac{a^n - 1}{a-1} = \frac{a^n - 1}{pf} = \sum_{r=1}^{n} \binom{n}{r} \, \left( pf \right)^{r-1} \end{equation}Substitute \(n = p^\alpha\) into \eqref{eq:genrandnum_binomial_expansion_k_p_alpha_2}

\begin{equation}\label{eq:genrandnum_binomial_expansion_k_p_alpha_3} \frac{a^{p^\alpha} - 1}{a-1} = \sum_{r=1}^{p^\alpha} \binom{p^\alpha}{r} \, \left( pf \right)^{r-1} \end{equation}Theorem 2 implies that if we could show that every term in the sum on the right hand side of \eqref{eq:genrandnum_binomial_expansion_k_p_alpha_3} is divisible by \(p^\alpha\) then that would be sufficient to show that the whole sum is divisible by \(p^\alpha\), which in turn would be sufficient to show that \eqref{eq:genrandnum_hulldobell_intermediate_01} is true when \(n = m = p^\alpha\).

The \(r\)th term of \eqref{eq:genrandnum_binomial_expansion_k_p_alpha_3} is

\begin{equation}\label{eq:genrandnum_binomial_expansion_k_p_alpha_3_rth_term} \binom{p^\alpha}{r} \, \left( pf \right)^{r-1} \end{equation}Kummer’s theorem says that the highest power of a prime \(p\) that divides the binomial coefficient \(\binom nr\) is

\begin{equation*} \nu = \dfrac{S_p(r) + S_p(n-r) - S_p(n)}{p-1} \end{equation*}where \(S_p(\cdot)\) is the sum of the digits in the base-\(p\) representation of its integer argument.

The base-\(p\) representation of \(p^\alpha\) has a \(1\) at position \(\alpha\) and zero at all other positions, so \(S_p(n) = 1\).

We don't need to assume any particular value of \(n\) to show that when \(r = n\), \(\nu = 0\) since \(S_p(r) = S_p(n)\) and \(S_p(n-r) = S_p(0) = 0\). This can be checked by observing that \(\binom{n}{n} = 1\), which is not divisible by \(p\).

Having found \(\nu\) when \(r = n\), we now consider the case where \(r < n\), that is \(r < p^\alpha\).

Since the base-\(p\) representation of \(p^\alpha\) has a \(1\) at position \(\alpha\) and zero at all other positions, the most significant digit in the base-\(p\) representation of \(r < p^\alpha\) must be in a position lower than \(\alpha\). Therefore, let the base-\(p\) representation of \(r\) be \(a_{\alpha-1} a_{\alpha-2} \ldots a_1 a_0\) and let \(\gamma\) be the position of the first non-zero digit, that is, \(a_i = 0\) when \(i < \gamma\). In other words the base-\(p\) representation of \(r\) has zeros in its \(\gamma\) least significant places. Since we are using base \(p\), this is the same as saying that \(r\) is divisible by \(p^\gamma\) and, since \(\gamma\) is the index of the first non-zero digit, it must be the largest power of \(p\) that divides \(r\). It is, of course, possible that \(\gamma\) could sometimes be zero.

We can therefore write the subtraction of \(r\) from \(p^\alpha\) in base \(p\) as follows:

\begin{equation*}
\begin{array}{cccccccc}
1 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
- & a_{\alpha-1} & a_{\alpha-2} & \cdots & a_{\gamma} & 0 & \cdots & 0
\end{array}
\end{equation*}

To perform the subtraction we strike out the \(1\) in the most significant position of \(p^\alpha\) and add \(p-1\) to each digit in a lower position (which are all zero) until we get to position \(\gamma\) where we add \(p\), so

\begin{equation*}
\begin{array}{ccccccccc}
\cancelto{0}{1} & \cancelto{p-1}{0} & \cancelto{p-1}{0} & \cdots & \cancelto{p-1}{0} & \cancelto{p}{0} & 0 & \cdots & 0 \\
- & a_{\alpha-1} & a_{\alpha-2} & \cdots & a_{\gamma+1} & a_\gamma & 0 & \cdots & 0 \\
\hline
 & p-1-a_{\alpha-1} & p-1-a_{\alpha-2} & \cdots & p-1-a_{\gamma+1} & p-a_\gamma & 0 & \cdots & 0
\end{array}
\end{equation*}

The result contains \(\alpha - \gamma\) instances of \(p\), \(\alpha - 1 - \gamma\) instances of \(-1\) and the negative of each of the digits of \(r\), so

\begin{equation*} S_p\left(p^\alpha - r \right) = (\alpha - \gamma)p - (\alpha - 1 - \gamma) - S_p(r) \end{equation*}so

\begin{align*} S_p(r) + S_p(n-r) - S_p(n) &= S_p(r) + (\alpha - \gamma)p - \alpha + 1 + \gamma - S_p(r) - \cancelto{1}{S_p(n)}\\ &= (\alpha - \gamma)(p-1) \end{align*}so

\begin{equation}\label{eq:genrandnum_hull_dobell_nu_alpha_gamma} \nu = \alpha - \gamma \end{equation}This means that the largest power of \(p\) that divides \(\binom{p^\alpha}{r}\) is \(p^{\alpha-\gamma}\) where \(p^\gamma\) is the largest power of \(p\) that divides \(r\), which makes intuitive sense. Formula \eqref{eq:genrandnum_hull_dobell_nu_alpha_gamma} also gives the right answer in the case where \(r = p^\alpha\), even though we excluded that case to make the subtraction easier, since, by definition, if \(r = p^\alpha\) then \(\gamma = \alpha\) and so \(\nu = 0\), which we have already obtained by a different method. The relation \eqref{eq:genrandnum_hull_dobell_nu_alpha_gamma} therefore applies to all values of \(r \in \{1, 2, \ldots, p^\alpha\}\).
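Equation \eqref{eq:genrandnum_hull_dobell_nu_alpha_gamma} can be confirmed by brute force, counting prime factors of the binomial coefficients directly (a Python check of my own, not part of the original proof):

```python
from math import comb

def nu(p, N):
    """Exponent of the prime p in N: the largest e such that p**e divides N."""
    e = 0
    while N % p == 0:
        N //= p
        e += 1
    return e

p, alpha = 3, 4
n = p ** alpha                # n = 81
for r in range(1, n + 1):
    gamma = nu(p, r)          # p^gamma is the largest power of p dividing r
    assert nu(p, comb(n, r)) == alpha - gamma
```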

Next we remind ourselves of the form of the \(r\)th term of the expansion of \((a^{p^\alpha}-1)/(a-1)\), i.e. Equation \eqref{eq:genrandnum_binomial_expansion_k_p_alpha_3_rth_term}:

\begin{equation}\tag{\ref{eq:genrandnum_binomial_expansion_k_p_alpha_3_rth_term}r} \binom{p^\alpha}{r} \, \left( pf \right)^{r-1} \end{equation}We have just shown that the largest power of \(p\) that divides \(\binom{p^\alpha}{r}\) is \(p^{\alpha - \gamma}\). Tentatively assuming that \(p \nmid f(p)\), so as to avoid having to look inside \(f\) for extra factors of \(p\), the highest power of \(p\) that divides the whole term is

\begin{equation}\label{eq:genrandnum_hull_dobell_sum_term_max_divisor} p^{\alpha – \gamma + r – 1} \end{equation}and so the term is divisible by \(p^\alpha\) whenever

\begin{equation}\label{eq:hull_dobell_17} r \geqslant \gamma + 1 \end{equation}If we had chosen to start our derivation from \(a^n-1 \equiv 0 \; (\operatorname{mod} m)\) instead of from \(\left(a^n - 1\right)/(a-1) \equiv 0 \; (\operatorname{mod} m)\) then the \(r\)th term of the expansion of \((a^{p^\alpha}-1)\) would be \(\binom{p^\alpha}{r} \, \left( pf \right)^{r}\) but condition \(2\) of the theorem implies that \(p \mid \left(a-1\right)\) which would imply that the numerator of \eqref{eq:genrandnum_hulldobell_intermediate_01} must actually be divisible by \(p^{\alpha + 1}\) so the requirement would be that \(p^{\alpha + 1} \mid \left(a^n-1\right)\). So the highest power of \(p\) that divides the \(r\)th term would be \(p^{\alpha - \gamma + r}\) which would be divisible by \(p^{\alpha + 1}\) when \(\alpha - \gamma + r \geqslant \alpha + 1\), which rearranges to \begin{equation*} r \geqslant \gamma + 1 \end{equation*} so the rest of the proof would be identical from this point on.

The definition of \(\gamma\) implies that each \(r\) can be written \(r = kp^\gamma\), where \(k\) is an integer that is not divisible by \(p\), so

\begin{equation}\label{eq:genrandnum_hull_dobell_20} kp^\gamma \geqslant \gamma + 1 \end{equation}But \eqref{eq:genrandnum_hull_dobell_20} is true for all values of \(k\) iff it is true when \(k=1\), so we just need to show that

\begin{equation}\label{eq:genrandnum_hull_dobell_20a} p^\gamma \geqslant \gamma + 1 \end{equation}which rearranges to

\begin{equation}\label{eq:genrandnum_hull_dobell_21} p \geqslant (\gamma + 1)^{1/\gamma} \end{equation}Although \((\gamma + 1)^{1/\gamma}\) is not defined when \(\gamma=0\), \eqref{eq:genrandnum_hull_dobell_20a} reduces to \(1 \geqslant 1\) in that case, and so is true when \(\gamma=0\).

Having shown that \eqref{eq:genrandnum_hull_dobell_20a} is true when \(\gamma=0\), we can apply \eqref{eq:genrandnum_hull_dobell_21} over the set \(\gamma \in \{1, 2, \ldots, \alpha\}\). Since \eqref{eq:genrandnum_hull_dobell_21} is of the form \((ax+b)^{1/x}\), with \(a > 0\) and \(b \geqslant 1\), Theorem 9 says that it is monotonically decreasing over the range \(0 < x < \infty\), so the largest value of \((\gamma + 1)^{1/\gamma}\) over the set \(\gamma \in \{1, 2, \ldots, \alpha\}\) occurs when \(\gamma = 1\), which implies that

\begin{equation}\label{eq:genrandnum_hull_dobell_23} p \geqslant 2 \end{equation}In other words, if \(p \geqslant 2\) (and all primes satisfy \(p \geqslant 2\) since \(1\) is not prime) then every term in the sum on the right hand side of Equation \eqref{eq:genrandnum_binomial_expansion_k_p_alpha_3} is divisible by \(p^\alpha\) and so

\begin{equation}\label{eq:genrandnum_hulldobell_intermediate_02} \frac{a^{p^\alpha} - 1}{a-1} \equiv 0 \quad (\operatorname{mod} p^\alpha) \end{equation}We are not finished yet, however! The period of the sequence is defined as the *smallest* value of \(n\) for which Equation \eqref{eq:genrandnum_hulldobell_intermediate_01} is true, so we also need to show that, under the same conditions, there is no value of \(n < p^\alpha\) such that \eqref{eq:genrandnum_hulldobell_intermediate_01} is true, in other words

\begin{equation}\label{eq:genrandnum_hull_dobell_ore_01} \frac{a^{n < p^\alpha} - 1}{a-1} \not\equiv 0 \quad (\operatorname{mod} p^\alpha) \end{equation}

In the case where \(p=2\) and \(\alpha = 1\), the only possible value of \(n < p^\alpha\) is \(n=1\) in which case \begin{equation}\label{eq:genrandnum_hull_dobell_ore_01_case_2} \frac{a^{n < p^\alpha} - 1}{a-1} = \frac{a - 1}{a-1} = 1 \not\equiv 0 \ (\operatorname{mod} \ \text{anything}) \end{equation} and so \eqref{eq:genrandnum_hull_dobell_ore_01} is true when \(p=2\) and \(\alpha = 1\).

To show that it is also true for cases where \(p > 2\) and cases where \(p=2\) and \(\alpha>1\) we first assume that \eqref{eq:genrandnum_hull_dobell_ore_01} is false. In that case its left hand side is divisible by \(p^\alpha\), which implies that \(a^{n < p^\alpha} - 1\) is divisible by \(p^\alpha\) so \begin{equation}\label{eq:genrandnum_hull_dobell_ore_04} a^{n < p^\alpha} \equiv 1 \quad (\operatorname{mod} p^\alpha) \end{equation}

Theorem 7 implies that if \(n\) exists, then \(n \mid p^\alpha\). This makes intuitive sense since if \(n\) is the period of the sequence, subsequent occurrences of the same output must occur at multiples of the period.

Since \(p\) is prime, the fact that \(n \mid p^\alpha\) means that \(n\) is also a power of \(p\), so to prove that there is no such \(n\), we need only prove that if \(\kappa < \alpha\) then \begin{equation}\tag{\ref{eq:genrandnum_hulldobell_intermediate_01}r} \frac{a^{p^\kappa} - 1}{a-1} \not\equiv 0 \quad (\operatorname{mod} p^\alpha) \end{equation}

To reduce clutter, define \begin{equation*} Z_\kappa = \frac{a^{p^\kappa} - 1}{a-1} \end{equation*}

Replacing \(\alpha\) with \(\kappa\) in \eqref{eq:genrandnum_hulldobell_intermediate_02} implies that \begin{equation*} p^\kappa \mid Z_\kappa \end{equation*}

If we could show that \begin{equation}\label{eq:genrandnum_hull_dobell_kappa_plus_one} p^{\kappa+1} \nmid Z_\kappa \end{equation} then it would follow that \(Z_\kappa\) is not divisible by any power of \(p\) greater than \(p^\kappa\).

Expand \(Z_\kappa\) as a binomial as before \begin{align} Z_\kappa &= \sum_{r=1}^{p^\kappa} \binom{p^\kappa}{r} \, \left( pf \right)^{r-1}\notag\\ &= p^\kappa + \sum_{r=2}^{p^\kappa} \binom{p^\kappa}{r} \, \left( pf \right)^{r-1} \label{eq:genrandnum_hull_dobell_kappa_binomial} \end{align}

The first term, \(p^\kappa\), is not divisible by \(p^{\kappa+1}\), since \(\frac{p^\kappa}{p^{\kappa+1}} = \frac{1}{p}\), which is not a whole number. Theorem 4 says that if *all* of the subsequent terms in the sum *are* divisible by \(p^{\kappa+1}\) then \(Z_\kappa\) is *not* divisible by \(p^{\kappa+1}\).

By analogy with \eqref{eq:genrandnum_hull_dobell_sum_term_max_divisor} the largest power of \(p\) that divides the \(r\)th term of the sum in \eqref{eq:genrandnum_hull_dobell_kappa_binomial} is \(p^{\kappa - \gamma + r-1} = p^{(\kappa + 1) - 1 - \gamma + r - 1}\), which is divisible by \(p^{\kappa+1}\) whenever \begin{equation}\label{eq:hull_dobell_18} r \geqslant \gamma + 2 \end{equation} which is identical to \eqref{eq:hull_dobell_17} except with \(\gamma+2\) on the right hand side instead of \(\gamma + 1\).

Inequality \eqref{eq:hull_dobell_18} is false when \(r=1\), but the sum on the right hand side of \eqref{eq:genrandnum_hull_dobell_kappa_binomial} only ranges over values of \(r \in \{2, \ldots, p^\kappa\}\) so it doesn't matter.

Limiting ourselves to \(r \in \{2, \ldots, p^\kappa\}\), the definition of \(\gamma\) implies that \(r = kp^\gamma\), where \(k\) is an integer that is not divisible by \(p\), so Inequality \eqref{eq:hull_dobell_18} is true when \begin{equation}\label{eq:hull_dobell_19} kp^\gamma \geqslant \gamma + 2 \end{equation}

Inequality \eqref{eq:hull_dobell_19} is true when \(\gamma=0\) because in that case it reduces to \(k \geqslant 2\), and the only way that \(\gamma=0\) and \(k < 2\) can occur together is when \(r=1\), which is excluded since we are only considering \(r \in \{2, \ldots, p^\kappa\}.\)

When \(\gamma \geqslant 1\), Inequality \eqref{eq:hull_dobell_19} is true for all values of \(k\) iff it is true when \(k=1\), so we just need to show that

\begin{equation}\label{eq:hull_dobell_19a} p^\gamma \geqslant \gamma + 2 \end{equation}which rearranges to \begin{equation}\label{eq:hull_dobell_20} p \geqslant (\gamma + 2)^{1/\gamma} \end{equation}

As before, the right hand side of Inequality \eqref{eq:hull_dobell_20} is of the form \((ax+b)^{1/x}\) with \(a > 0\) and \(b \geqslant 1\), so Theorem 9 says that it is monotonically decreasing over the range \(0 < x < \infty\), so the largest value of \((\gamma + 2)^{1/\gamma}\) over the set \(\gamma \in \{1, 2, \ldots, \kappa\}\) occurs when \(\gamma=1\), in which case \eqref{eq:hull_dobell_20} becomes

\begin{equation}\label{eq:hull_dobell_21} p \geqslant 3 \end{equation}which means that \(Z_\kappa\) can be guaranteed not divisible by \(p^{\kappa + 1}\) when \(p \geqslant 3\).
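The claim can be spot-checked numerically. The sketch below (illustrative, not part of the proof) picks a few odd primes \(p\) and multipliers \(a = 1 + pf\) with \(p \nmid f\), and confirms that \(Z_\kappa\) is divisible by \(p^\kappa\) but not by \(p^{\kappa+1}\); the particular values of \(p\), \(f\) and \(\kappa\) are arbitrary choices.

```python
# Spot-check: for odd primes p and a = 1 + p*f with p not dividing f,
# Z_kappa = (a**(p**kappa) - 1) / (a - 1) is divisible by p**kappa
# but not by p**(kappa + 1).

def Z(a: int, p: int, kappa: int) -> int:
    """Z_kappa = (a**(p**kappa) - 1) // (a - 1), computed exactly."""
    return (a ** (p ** kappa) - 1) // (a - 1)

for p in (3, 5, 7):
    for f in (1, 2, 4):
        if f % p == 0:               # keep the assumption p does not divide f
            continue
        a = 1 + p * f
        for kappa in range(1, 4):
            z = Z(a, p, kappa)
            assert z % p ** kappa == 0         # p**kappa divides Z_kappa
            assert z % p ** (kappa + 1) != 0   # p**(kappa+1) does not
print("verified for p in {3, 5, 7}")
```

Python's arbitrary-precision integers make the divisibility tests exact, with no risk of overflow even for \(a^{p^\kappa}\) running to hundreds of digits.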

This is probably what Hull and Dobell (1962) meant when they said:

‘… provided we now make use of our assumption that \(p\) is odd …’

(They introduce this assumption at the same time as \(m = p^\alpha\) rather than waiting till it emerges naturally during the course of the derivation.)

Inequality \eqref{eq:hull_dobell_21} *doesn't* mean that if \(p < 3\) then \(Z_{\kappa}\) necessarily *is* divisible by \(p^{\kappa+1}\), only that the argument so far no longer guarantees that it isn't.

If \(p\) were limited to values of \(3\) or higher then cases where \(m\) is a multiple of \(2\) would be excluded. Unfortunately these are practically the only ones that are ever used, since practical LCGs almost invariably have moduli of \(2^{32}\) or \(2^{64}\), in which case the results we have obtained so far would have nothing to say. That would be extremely annoying after such an enormous amount of effort.

Fortunately we can snatch victory from the jaws of defeat by dropping our tentative assumption that \(p \nmid f(p)\) and writing \(f = pg(p)\) where \(p \nmid g(p)\). That would make the \(r\)th term in the sum equal to: \begin{equation*} \binom{p^\kappa}{r} \, \left( p^2g \right)^{r-1} \end{equation*} which means that the largest power of \(p\) that divides the \(r\)th term of \(Z_\kappa\) is now \(p^{\kappa - \gamma + 2(r-1)} = p^{(\kappa + 1) - 1 - \gamma + 2r - 2}\), which is divisible by \(p^{\kappa+1}\) whenever

\begin{equation} -1 - \gamma + 2r - 2 \geqslant 0 \end{equation} which rearranges to \begin{equation}\label{eq:genrandnum_hull_dobell_p_lt_3_01} r \geqslant \frac{\gamma + 3}{2} \end{equation}As before, Inequality \eqref{eq:genrandnum_hull_dobell_p_lt_3_01} is false when \(r=1\) but the sum on the right hand side of \eqref{eq:genrandnum_hull_dobell_kappa_binomial} only ranges over \(r \in \{2, \ldots, p^\kappa\}\) so it doesn't matter.

The same rigmarole as above therefore leads to the requirement that

\begin{equation} p \geqslant \left( \frac{\gamma + 3}{2} \right)^{1/\gamma} \end{equation}which, as before, is of the form \((ax+b)^{1/x}\), so Theorem 9 says that it is monotonically decreasing and so its maximum over the set \(\gamma \in \{1, 2, \ldots, \kappa\}\) occurs when \(\gamma=1\), in which case \begin{equation} p \geqslant \left( \frac{1 + 3}{2} \right)^{1/1} = 2 \end{equation}
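Both monotonicity claims invoked via Theorem 9 are easy to confirm numerically. The sketch below evaluates \((\gamma + 2)^{1/\gamma}\) and \(\left(\frac{\gamma+3}{2}\right)^{1/\gamma}\) for the first few values of \(\gamma\) and checks that each sequence decreases, so the maxima over \(\gamma \geqslant 1\) are \(3\) and \(2\) respectively.

```python
# Numeric sanity check of the two monotonicity claims: both
# (gamma + 2) ** (1 / gamma) and ((gamma + 3) / 2) ** (1 / gamma)
# decrease as gamma grows, so each attains its maximum over
# gamma >= 1 at gamma = 1 (values 3 and 2 respectively).
first = [(g + 2) ** (1 / g) for g in range(1, 11)]
second = [((g + 3) / 2) ** (1 / g) for g in range(1, 11)]
assert all(u > v for u, v in zip(first, first[1:]))    # strictly decreasing
assert all(u > v for u, v in zip(second, second[1:]))  # strictly decreasing
print(first[0], second[0])
```

This checks only finitely many \(\gamma\), of course; the general statement is what Theorem 9 provides.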

Now \(f = pg(p) \iff p^2 \mid (a-1)\) so we now have that \(Z_{\kappa}\) can be guaranteed not divisible by \(p^{\kappa + 1}\) when: \begin{equation} \begin{cases} p \geqslant 3 &\text{if} \ p \mid (a-1) \\ p < 3 &\text{if} \ p^2 \mid (a-1) \\ \end{cases} \end{equation}

so if we add the condition \(p^2 \mid (a-1)\) to the theorem then we can guarantee that the period of the generator is \(p^\alpha\).
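The \(p = 2\) branch can be spot-checked the same way as the odd-prime case. The sketch below takes a few multipliers with \(2^2 \mid (a-1)\) (specifically with \(a - 1 = 4 \times \text{odd}\), matching the assumption \(p \nmid g\)) and confirms that \(Z_\kappa\) is divisible by \(2^\kappa\) but not \(2^{\kappa+1}\); the particular values are arbitrary choices.

```python
# Spot-check of the p = 2 case: when 2**2 | (a - 1), the quantity
# Z_kappa = (a**(2**kappa) - 1) / (a - 1) is divisible by 2**kappa
# but not by 2**(kappa + 1).
for a in (5, 13, 21, 29):            # a - 1 = 4, 12, 20, 28: all 4 * odd
    for kappa in range(1, 6):
        z = (a ** (2 ** kappa) - 1) // (a - 1)
        assert z % 2 ** kappa == 0         # 2**kappa divides Z_kappa
        assert z % 2 ** (kappa + 1) != 0   # 2**(kappa+1) does not
print("p = 2 case verified")
```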

This extra condition isn't needed when \(p \geqslant 3\), or when \(p=2\) and \(\alpha = 1\) as shown by Equation \eqref{eq:genrandnum_hull_dobell_ore_01_case_2}, so we only need to invoke it when \(p=2\) and \(\alpha \geqslant 2\). The discussion so far only considers the case where \(m = p^\alpha\), so the condition becomes \begin{equation} 2^2 \mid (a-1) \quad \text{if} \quad m = 2^\alpha \ \text{where} \ \alpha \geqslant 2 \end{equation} This is equivalent to \begin{equation} 2^2 \mid (a-1) \quad \text{if} \quad 2^2 \mid m \end{equation} which is Hull & Dobell’s third condition.
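A tiny example shows why the third condition earns its keep. With the hypothetical parameters \(m = 8\) and \(c = 1\), both \(a = 3\) and \(a = 5\) satisfy the first two conditions, but only \(a = 5\) also satisfies \(4 \mid (a-1)\), and only it achieves the full period:

```python
# Demonstration of the third condition with m = 8, c = 1.

def lcg_period(a: int, c: int, m: int, seed: int = 0) -> int:
    """Number of steps of x -> (a*x + c) % m before seed recurs."""
    x = (a * seed + c) % m
    n = 1
    while x != seed:
        x = (a * x + c) % m
        n += 1
    return n

print(lcg_period(3, 1, 8))  # 4: a - 1 = 2, period falls short of m
print(lcg_period(5, 1, 8))  # 8: a - 1 = 4, full period achieved
```

Writing out the \(a = 3\) orbit makes the failure visible: \(0 \to 1 \to 4 \to 5 \to 0\), a cycle of length \(4\) that never visits the other four residues.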

So we have now proved that, under the stated conditions of the theorem, if \(m = p^\alpha\), where \(p\) is prime and \(\alpha\) is a positive integer, then the linear congruential sequence defined by \eqref{eq:genrandnum_hull_dobell_01} has period \(m\). We now wish to generalize to the case where \(m\) is composite.

### Case where \(a > 1\) and \(m\) is composite

Now that the theorem is established when \(m\) is restricted to being a power of a prime, it is quite easy to generalize to the case where \(m\) is composite. In fact we simply put

\begin{equation*} m = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_s^{\alpha_s} \qquad a = 1 + k p_1^{\beta_1} p_2^{\beta_2} \cdots p_s^{\beta_s} \end{equation*}with \(p_i\) prime, \(\alpha_i\) a positive integer, \(k \neq 0\) and relatively prime to \(m\), \(\beta_i \geqslant 1\) or, if \(p_i = 2\) and \(\alpha_i \geqslant 2\), \(\beta_i \geqslant 2\). Then the argument proceeds almost exactly as before, and the theorem is established in the general case. The End.

But, I hear you ask, if it really is ‘quite easy’, why didn't they just do it that way in the first place? Well, if you feel like giving it a try, good luck!

Fortunately, Knuth (1998, §3.2.1.2, Lemma Q, p18) has provided a way forward that doesn't require repeating the entire proof (if, indeed, that even works).

Theorem 10 says that if \(\{x_i\}\) is a linear congruential sequence with divisor \(m\), and if \(d \mid m\), then the remainders after each member of \(\{x_i\}\) is divided by \(d\) also form a linear congruential sequence, with the same multiplier and increment but with divisor \(d\) instead of \(m\). In symbols this can be written as: \begin{gather*} x_{n+1} = \left(ax_n + c\right) \operatorname{\%} m \quad \text{and} \quad y_{n} = x_n \operatorname{\%} d \quad \text{and} \quad d \mid m \\ \Downarrow \\ y_{n+1} = \left(ay_n + c\right) \operatorname{\%} d \end{gather*}
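Theorem 10 is easy to test empirically. The sketch below runs an LCG with arbitrarily chosen illustrative constants and checks, at every step, that the residues mod a divisor \(d\) of \(m\) obey the same recurrence with \(m\) replaced by \(d\):

```python
# Numerical check of Theorem 10: reducing an LCG sequence modulo a
# divisor d of m yields a sequence obeying the same recurrence mod d.
# The constants are arbitrary illustrative choices with d | m.
a, c, m, d = 21, 13, 100, 20
x = 42
for _ in range(1000):
    x_next = (a * x + c) % m
    # y_{n+1} must equal (a*y_n + c) % d, where y_n = x_n % d
    assert x_next % d == (a * (x % d) + c) % d
    x = x_next
print("reduction mod", d, "obeys the same recurrence")
```

The check works because \(d \mid m\) means reducing mod \(m\) and then mod \(d\) is the same as reducing mod \(d\) directly.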

This is the reason why the least significant bits have a shorter period than the number as a whole. Let the base-\(2\) representation of \(x\) be \(b_{63} b_{62} \ldots b_n b_{n-1} \ldots b_1 b_0\), where every \(b_i\) is either \(1\) or \(0\) (using \(b\) for the bits to avoid a clash with the multiplier \(a\)) and \(n < 63\). We can write \(x\) as \begin{equation*} x = b_{63}2^{63} + \cdots + b_n2^n + b_{n-1}2^{n-1} + \cdots + b_0 \end{equation*} from which it follows that \begin{align*} y &= x \ \% \ 2^n\\ &= b_{n-1}2^{n-1} + \cdots + b_0 \end{align*}

So if \(m = 2^{64}\) and \(a\) and \(c\) satisfy Hull & Dobell’s three conditions, and since we have proved the theorem for the case where \(m\) is a power of a prime, the sequence defined by \begin{equation*} x_{n+1} = \left(ax_n+c\right) \ \% \ 2^{64} \end{equation*} has period \(2^{64}\) but the sequence defined by \begin{equation*} y_{n+1} = \left(ay_n+c\right) \ \% \ 2^{n} \end{equation*} has period \(2^n\). Strikingly, when \(n = 1\), \(y\) has period \(2\) which means that the last bit simply alternates between \(1\) and \(0\).
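This can be observed directly. The sketch below uses \(2^{16}\) rather than \(2^{64}\) so the brute-force period search finishes quickly; the multiplier \(a = 101\) and increment \(c = 12345\) are arbitrary choices satisfying the three conditions (\(c\) odd, and \(a - 1 = 100\) divisible by \(4\)).

```python
# Low bits of a full-period power-of-two LCG cycle faster than the
# whole word: the lowest n bits form an LCG mod 2**n (Theorem 10)
# and so have period 2**n.

def lcg_period(a: int, c: int, m: int, seed: int = 0) -> int:
    """Number of steps of x -> (a*x + c) % m before seed recurs."""
    x = (a * seed + c) % m
    n = 1
    while x != seed:
        x = (a * x + c) % m
        n += 1
    return n

m, a, c = 2 ** 16, 101, 12345
print(lcg_period(a, c, m))                    # full period 65536
for n_bits in (1, 2, 3, 8):
    print(n_bits, lcg_period(a, c, 2 ** n_bits))
```

With \(n_\text{bits} = 1\) the reported period is \(2\): the last bit simply alternates, exactly as claimed above.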

Let \(\lambda\) be the period of the sequence \(\{x_i\}\), i.e. \(\lambda\) is the *smallest* \(n\) such that \(x_{i+n} = x_i\), and let \(\mu\) be the period of the sequence \(\{y_i\}\), similarly defined, then, by the definition of ‘period’:
\begin{gather*}
x_{n+\lambda} = x_n \\
y_{n+\mu} = y_n
\end{gather*}
so
\begin{align*}
y_{n+\mu} &= y_n \\
&= x_n \% d \\
&= x_{n+\lambda} \% d \\
&= y_{n+\lambda}
\end{align*}
which means that:

The sequence \(\{y_i\}\) repeats itself after every \(\mu\) terms and after every \(\lambda\) terms.

Because each member of the sequence is determined only by its predecessor, the corollary to Theorem 12 means that whichever is the smaller of \(\lambda\) and \(\mu\) must divide the other.

\(\mu\) cannot be bigger than \(\lambda\): we have just seen that \(\{y_i\}\) repeats after every \(\lambda\) terms, and \(\mu\) is by definition the *smallest* interval after which \(\{y_i\}\) repeats, so \(\mu \leqslant \lambda\).

It follows that \(\mu \mid \lambda\).

Given that \(d\) is only specified to be ‘a’ divisor of \(m\), the preceding results must be true of *all* divisors of \(m\). If several such periods each divide \(\lambda\), then so does their least common multiple.

To summarise the situation so far, we designate by \(\lambda\) the period of the sequence \(\{x_i\}\) defined by the recurrence relation

\begin{equation}\tag{\ref{eq:genrandnum_hull_dobell_01}r} x_{n+1} \equiv ax_n + c \quad (\operatorname{mod} m) \end{equation}Let \(a\) and \(c\) satisfy the conditions of the theorem and let the prime factorisation of \(m\) be \begin{equation}\label{eq:prime_factorisation_of_m} m = p_1^{\alpha_1}p_2^{\alpha_2} \cdots p_s^{\alpha_s} \end{equation} from which it follows that \begin{align*} p_1^{\alpha_1} &\mid m \\ p_2^{\alpha_2} &\mid m \\ &\vdots \\ p_s^{\alpha_s} &\mid m \end{align*}

Next consider the sequence \begin{equation}\label{eq:hull_dobell_subsequence_single_prime_power} y_n \ \equiv \ ay_{n-1} + c \quad (\operatorname{mod} p^\alpha). \end{equation} with the same multiplier, \(a\), and increment, \(c\), as \eqref{eq:genrandnum_hull_dobell_01}, which, as already mentioned, satisfy Hull & Dobell’s three conditions. We have laboriously shown in the preceding marathon that the sequence defined by \eqref{eq:hull_dobell_subsequence_single_prime_power} has period \(p^\alpha\), so

\begin{align*} p_1^{\alpha_1} &\mid \lambda \\ p_2^{\alpha_2} &\mid \lambda \\ &\vdots \\ p_s^{\alpha_s} &\mid \lambda \end{align*}from which it follows that

\begin{equation*} \operatorname{lcm}(p_1^{\alpha_1}, p_2^{\alpha_2}, \ldots, p_s^{\alpha_s}) \mid \lambda \end{equation*}but since the \(p_i\)s are all prime, the lcm is equal to the product, i.e.:

\begin{equation*} \operatorname{lcm}(p_1^{\alpha_1}, p_2^{\alpha_2}, \ldots, p_s^{\alpha_s}) = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_s^{\alpha_s} \end{equation*}so

\begin{equation*} \left( p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_s^{\alpha_s}\right) \mid \lambda \end{equation*}but, by definition, \(p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_s^{\alpha_s} = m\) so

\begin{equation*} m \mid \lambda \end{equation*}but it is a property of linear congruential sequences that the period cannot exceed the divisor, which means that

\begin{equation*} \lambda = m \end{equation*}which is what we were trying to prove!
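The whole argument can be confirmed by brute force for a small composite modulus. The sketch below uses the illustrative values \(m = 100 = 2^2 \cdot 5^2\), \(a = 21\) and \(c = 17\): both prime factors of \(m\) divide \(a - 1 = 20\), \(4 \mid 20\), and \(\gcd(c, m) = 1\), so the theorem promises period \(m\).

```python
# Brute-force confirmation for a composite modulus m = 100 = 2**2 * 5**2.
# a - 1 = 20 is divisible by 2, 5 and 4, and gcd(c, m) = 1, so the
# theorem guarantees the full period m.
m, a, c = 100, 21, 17
x, seen = 0, set()
while x not in seen:            # follow the orbit until it closes
    seen.add(x)
    x = (a * x + c) % m
print(len(seen))                # 100: every residue occurs exactly once
```

Because the period equals \(m\), the orbit visits every residue exactly once before returning to its starting point.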

Phew!

# References

Forrest B. Brown. Random number generation with arbitrary strides. Transactions of the American Nuclear Society, 71(CONF-941102-):202–203, 1994. ISSN 0003-018X. URL https://mcnpx.lanl.gov/pdf_files/Proceedings_1994_TotANS_Brown_202–203.pdf.

T. E. Hull and A. R. Dobell. Random Number Generators. SIAM Review, 4(3):230–254, 1962. doi: 10.1137/1004061. URL https://doi.org/10.1137/1004061. Also available at https://dspace.library.uvic.ca:8443/bitstream/handle/1828/3142/Random_Number_Generators.pdf Accessed 2023-06-18.

Martin Greenberger. Notes on a New Pseudo-Random Number Generator. Journal of the ACM, 8(2):163–167, apr 1961. ISSN 0004-5411. doi: 10.1145/321062.321065. URL https://doi.org/10.1145/321062.321065

Donald E. Knuth. The Art of Computer Programming – Volume 1 Fundamental algorithms. Addison-Wesley, third edition, 1997. ISBN 978-0-201-89683-1.

Donald E. Knuth. The Art of Computer Programming – Volume 2 Seminumerical Algorithms. Addison Wesley, third edition, 1998. ISBN 978-0-201-89684-8.

© Copyright 2023 Howard J. Rudd all rights reserved.