Introduction
Welcome to the third of five (sort of) posts about Monte Carlo risk analysis using Excel VBA. Actually, what was going to have been today’s post has turned out to be much longer than my target of around 3,000 words, so I have decided to split it in two, with the second half appearing tomorrow.
Yesterday we looked at how to generate arrays of random numbers using a combination of inverse transform and ‘Latin hypercube’ sampling. Today we get to the heart of the problem and consider how to induce correlation onto our random sample using the Iman-Conover method. This is based on the idea of rank-order correlation. It involves the following steps:

generate a set of univariate samples, i.e. the array ‘OrderedSample(i, j)’ with the desired distributions;

set up a ‘dummy’ sample with the desired correlation structure;

calculate the rank orders of these ‘dummy’ variables;

reorder our marginal distributions so they have the same rank orders as the ‘dummy’ set.
And that’s all there is to it! To avoid long convoluted sentences, it would be a good idea to first agree a few terms. Apologies if you are familiar with these already.
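The four steps can be sketched in miniature as follows. This is illustrative Python rather than the project’s VBA; the helper names `ranks` and `reorder_to_match` are my own, and a hand-made ‘dummy’ column stands in for a properly correlated sample:

```python
# Miniature illustration of the four Iman-Conover steps (illustrative
# Python, not the VBA module from this series; names are my own).

def ranks(xs):
    """Rank order of each element: ranks[i] = k means xs[i] is the k-th
    smallest element (1-based), assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for k, i in enumerate(order, start=1):
        r[i] = k
    return r

def reorder_to_match(marginal, dummy):
    """Reorder 'marginal' so its rank order matches that of 'dummy'."""
    s = sorted(marginal)                   # k-th smallest value is s[k - 1]
    return [s[k - 1] for k in ranks(dummy)]

# One marginal sample, and one 'dummy' column carrying the rank structure
# we want to impose
marginal = [10.0, 50.0, 30.0, 20.0, 40.0]
dummy = [0.3, -1.2, 0.7, 2.1, -0.4]

shuffled = reorder_to_match(marginal, dummy)
print(shuffled)                            # [30.0, 10.0, 40.0, 50.0, 20.0]
print(ranks(shuffled) == ranks(dummy))     # True
```

The reordered sample contains exactly the values of the marginal sample, but inherits the rank structure of the dummy set, which is the whole trick.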
Samples
Statisticians call the process by which an abstract random variable is converted into a concrete numerical value an ‘experiment’. The result of a single experiment is a ‘realisation’. A set containing many realisations is a ‘sample’.
Multivariate random variable
A multivariate random variable, X, is an ordered set of univariate random variables regarded as a single entity. A realisation of X is an ordered set of realisations of its components. Its distribution is specified by one function of all of its components, NOT by a set of separate distributions, one for each component.
Random vector
A multivariate random variable is best written as a vector:
\[ \begin{equation} X = \left[ X_{1}, X_{2}, \ldots , X_{m} \right] \label{eqn:7} \end{equation} \]
Generally, a random vector is defined as a vector whose components are random variables. Random vectors can be row or column vectors. In this project I use row vectors because, in a sample matrix (see next section) each row corresponds to a single ‘experiment’ and each column corresponds to a different variable determined in the experiment. This corresponds to the way that data are collected and recorded in the real world, e.g. in laboratory notebooks, and so is more intuitive.
Sample matrix
Let \( \mathbf{X}\) be a sample from \(X\), given by Equation (\ref{eqn:7}) above. This is best written as a matrix with each row corresponding to one realisation of X and each column corresponding to one of the components of X.
\[ \begin{equation} \mathbf{X} = \begin{bmatrix}x_{11} & x_{12} & \cdots & x_{1m}\\ x_{21} & x_{22} & \cdots & x_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix} \label{eqn:8} \end{equation} \]
Marginal distributions
The marginal distributions of a multivariate random variable are the projections of its distribution function onto its coordinate axes.
The concept is best illustrated by reference to a discrete bivariate random variable with a small sample space. Table 2 below shows the probabilities of different scores in a football match. The column headings are possible numbers of goals scored by Team A and the row headings are the same for Team B. Goal scoring is assumed to be a Poisson process, so the probability that Team A has scored \( k\) goals by the end of the match is given by \( P_{A}\left(k\right)=e^{-\lambda_{A}}{\lambda_{A}^{k}}/k!\), and similarly for Team B, \(P_{B}\left(k\right)=e^{-\lambda_{B}}{\lambda_{B}^{k}}/k!\), where \( \lambda\) is the number of goals a team is expected to score, on average, in one unit of time, which in this case is chosen to be the duration of the match. In this example Team B is assumed to be slightly better than Team A, so I have chosen \( \lambda_{B} = 1.2\) and \( \lambda_{A} = 0.8\). It can be seen from Table 2 that the most likely score is one-nil to Team B and the next most likely score is nil-nil. Higher scores rapidly become much less probable.
            A = 0     A = 1     A = 2     A = 3     A = 4     P(B = k)
B = 0     0.135335  0.108268  0.043307  0.011549  0.002310  0.300769
B = 1     0.162402  0.129922  0.051969  0.013858  0.002772  0.360923
B = 2     0.097441  0.077953  0.031181  0.008315  0.001663  0.216553
B = 3     0.038977  0.031181  0.012472  0.003326  0.000665  0.086621
B = 4     0.011693  0.009354  0.003742  0.000998  0.000200  0.025987
P(A = k)  0.445848  0.356678  0.142671  0.038046  0.007610
Table 2: Illustration of marginal distributions
The numbers in the last row and last column are the sums of the probabilities in a single column or a single row. The ones in the last column give the probabilities that Team B scores \( k\) goals irrespective of how many Team A scores, and the ones in the last row give the probabilities that Team A scores \( k\) goals irrespective of how many Team B scores. These numbers are traditionally written in the margins of the table, which is why they are called marginal distributions.
The concept can be straightforwardly extended to continuous distributions by using integrals instead of sums. These are still called marginals even though they obviously can’t be written in the margins of a table, not even using an infinitely small font. In what follows I will refer to our initial set of univariate samples, i.e. the array ‘OrderedSample(i, j)’ generated using the functions developed in yesterday’s post, as the marginal distributions.
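As a check on Table 2 (in Python rather than the project’s VBA), the joint probabilities and marginals can be recomputed directly from the Poisson formula, assuming the two teams score independently:

```python
import math

# Recompute a few entries of Table 2 and its marginals from the Poisson
# formula P(k) = exp(-lam) * lam**k / k!, assuming the teams score
# independently (a Python check, not part of the VBA project).

def poisson(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam_a, lam_b = 0.8, 1.2
p = [[poisson(lam_a, a) * poisson(lam_b, b) for a in range(5)]
     for b in range(5)]                      # p[b][a] = P(score is a goals to b)

print(round(p[1][0], 6))                     # 0.162402, one-nil to Team B
print(round(sum(p[0]), 6))                   # 0.300769, P(Team B scores 0)
print(sum(row[0] for row in p))              # about 0.44585, the table's 0.445848
```

The small discrepancy in the last marginal arises because the table’s margins are sums of the rounded entries.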
Correlation
Correlation is the phenomenon measured by a correlation coefficient. There are several types of these and each defines a subtly different concept. For the purposes of this exercise we are interested in two of them: the Pearson and the Spearman.
The Pearson coefficient is so much more common than any other type that it is usually just called ‘the’ correlation coefficient. If you see the term ‘correlation coefficient’ used without further qualification then it is the Pearson. It is defined by:
\[ \begin{equation} r_{X,Y}=\frac{cov\left(X,Y\right)}{\sigma_{X}\sigma_{Y}} \label{eqn:9} \end{equation} \]
where
\[ \begin{equation} cov\left(X,Y\right)=E\left[\left(X-E\left[X\right]\right)\left(Y-E\left[Y\right]\right)\right] \label{eqn:10} \end{equation} \]
and
\[ \begin{equation} \sigma_{X}^{2}=cov\left(X,X\right) \label{eqn:11} \end{equation} \]
The Pearson coefficient of a sample from a bivariate random variable \( \left(X_{1},X_{2}\right)\) is therefore:
\[ \begin{equation} r_{X_{1},X_{2}}=\frac{\sum_{i=1}^{n}\left(x_{i1}-\mu_{X_{1}}\right)\left(x_{i2}-\mu_{X_{2}}\right)}{\left(\sqrt{\sum_{i=1}^{n}\left(x_{i1}-\mu_{X_{1}}\right)^{2}}\right)\left(\sqrt{\sum_{i=1}^{n}\left(x_{i2}-\mu_{X_{2}}\right)^{2}}\right)} \label{eqn:12} \end{equation} \]
The Spearman correlation coefficient is the Pearson coefficient of the rank orders of a sample:
\[ \begin{equation} \rho_{X_{1},X_{2}} = r_{R_{1},R_{2}} \label{eqn:13} \end{equation} \]
where \( R_{1}\) and \( R_{2}\) are arrays containing the rank orders of \( X_{1}\) and \( X_{2}\). That is, the ith position in \( R_{1}\) contains an integer, k, such that the ith position in \( X_{1}\) contains the kth largest member of \( X_{1}\).
Substituting \( R_{1}\) and \( R_{2}\) into the above equations, followed by a lot of tedious algebraic manipulation, results in
\[ \begin{equation} \rho=1-\frac{6\sum_{i=1}^{n}d_{i}^{2}}{n\left(n^{2}-1\right)} \label{eqn:14} \end{equation} \]
where \( d_{i} = R_{i2} - R_{i1}\).
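Equation (\ref{eqn:14}) can be verified against the definition in Equation (\ref{eqn:13}) on a small sample with no ties (a pure-Python sketch, not part of the VBA module):

```python
# Check that the closed-form Spearman formula agrees with the definition
# "Pearson coefficient of the rank orders" (small sample, no ties).

def ranks(xs):
    """1-based rank of each element, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for k, i in enumerate(order, start=1):
        r[i] = k
    return r

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

x1 = [3.1, 1.2, 5.4, 2.2, 4.8]
x2 = [2.0, 1.1, 4.4, 3.9, 5.0]
r1, r2 = ranks(x1), ranks(x2)

n = len(x1)
d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
rho_formula = 1 - 6 * d2 / (n * (n * n - 1))   # the closed-form expression
rho_direct = pearson(r1, r2)                   # the definition

print(rho_formula)   # 0.8
print(round(rho_direct, 12))
```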
Correlation matrix
If \(X\) is a random rowvector:
\[ \begin{equation} X = \left[ X_{1}, X_{2}, \ldots , X_{m} \right] \label{eqn:15} \end{equation} \]
then there will be \( {\frac{1}{2}} m \left(m + 1 \right)\) pairs of components, each of which can have its own covariance and its own correlation coefficient. The set of these can be written as \( m \times m\) matrices:
\[ \begin{equation} \textrm{cov}\left(X\right)=\begin{bmatrix}\sigma_{X_{1}}^{2} & \textrm{cov}\left(X_{1},\, X_{2}\right) & \cdots & \textrm{cov}\left(X_{1},\, X_{m}\right)\\ \textrm{cov}\left(X_{2},\, X_{1}\right) & \sigma_{X_{2}}^{2} & \cdots & \textrm{cov}\left(X_{2},\, X_{m}\right)\\ \vdots & \vdots & \ddots & \vdots\\ \textrm{cov}\left(X_{m},\, X_{1}\right) & \textrm{cov}\left(X_{m},\, X_{2}\right) & \cdots & \sigma_{X_{m}}^{2} \end{bmatrix} \label{eqn:16} \end{equation} \]
and
\[ \begin{equation} \textrm{corr}\left(X\right)=\begin{bmatrix} 1 & \textrm{corr}\left(X_{1},\, X_{2}\right) & \cdots & \textrm{corr}\left(X_{1},\, X_{m}\right)\\ \textrm{corr}\left(X_{2},\, X_{1}\right) & 1 & \cdots & \textrm{corr}\left(X_{2},\, X_{m}\right)\\ \vdots & \vdots & \ddots & \vdots\\ \textrm{corr}\left(X_{m},\, X_{1}\right) & \textrm{corr}\left(X_{m},\, X_{2}\right) & \cdots & 1 \end{bmatrix} \label{eqn:17} \end{equation} \]
these are both symmetric because \( \textrm{cov}\left(X_{i},\, X_{j}\right) = \textrm{cov}\left(X_{j},\, X_{i}\right)\) and \( \textrm{corr}\left(X_{i},\, X_{j}\right) = \textrm{corr}\left(X_{j},\, X_{i}\right)\).
As I mentioned in Post 1 of this series, a set of marginal distributions and a correlation matrix are not sufficient to completely specify the probability distribution of a random vector. To do this, a multivariate distribution function (also called a joint distribution function) is required. It is possible to construct an infinite number of different joint distribution functions that have the same marginals and the same correlation matrix, so if these are all we know, our random vector will be underdetermined. Unfortunately we never have enough information to completely fix the joint distribution, so we have to make do with a set of marginals and a correlation matrix.
In this project we wish to specify a correlation matrix like (\ref{eqn:17}) as part of the set of input parameters, and the object of the exercise is to make the correlation matrix of our samples the same as this target matrix. However, because of the way the Iman-Conover method works, our sample will end up with the same Spearman rank-order correlation matrix as the dummy sample, which in turn has the same Pearson correlation matrix as our target. This means that neither the Pearson nor the Spearman correlation matrix of our sample will be exactly the same as the target matrix, but hopefully it won’t be too different.
How to generate a dummy array and make it correlated
We can generate a multivariate random variable with a specified correlation matrix by taking linear combinations of a set of independent random variables.
Linear combinations of independent random variables
If we take a random vector
\[ \begin{equation} X = \left[ X_{1}, X_{2}, \ldots , X_{n} \right] \label{eqn:18} \end{equation} \]
we can produce from it another random vector
\[ \begin{equation} Y = \left[ Y_{1}, Y_{2}, \ldots , Y_{m} \right] \label{eqn:19} \end{equation} \]
whose elements are given by
\[ \begin{equation} Y_{ i } = a_{ 1i }X_{ 1 } + a_{ 2i }X_{ 2 } + \ldots + a_{ n i }X_{ n } \label{eqn:20} \end{equation} \]
where the \( a_{ij}\)s are constants. This can be written in matrix form as
\[ \begin{equation} Y = XA \label{eqn:21} \end{equation} \]
It can then easily be shown that
\[ \begin{equation} \textrm{cov} \left( Y \right) = {A^{\mathsf{T}}} {\textrm{cov} \left( X \right)} A \label{eqn:22} \end{equation} \]
If the components of \( X\) are independent and have unit variance, then \( {\textrm{cov} \left( X \right)} = I \), the identity matrix, and so
\[ \begin{equation} \textrm{cov} \left( Y \right) = {A^{\mathsf{T}}} A \label{eqn:23} \end{equation} \]
If the \( X_{i}\)s are all normally distributed then \( Y\) will be distributed according to the multivariate normal distribution. The marginal distributions of the multivariate normal (i.e. the \( Y_{i}\)s) are all univariate normal. If the \( X_{i}\)s are not normally distributed then it is still true that \( \textrm{cov} \left( Y \right) = {A^{\mathsf{T}}} A\), but the marginals are not simply related to the distributions of the \( X_{i}\)s and there is no practical way of finding a combination of \( X\) and \( A\) so that \( Y\) has specified marginals.
If the components of \( X\) all have unit variance, then \(X\)’s correlation matrix is the same as its covariance matrix. This means that, starting with a set of independent random variables that have unit variance, we can produce a set of correlated random variables that also has unit variance and which has a correlation matrix \( C\) given by:
\[ \begin{equation} C = A^{\textsf{T}}A \label{eqn:24} \end{equation} \]
\( C\) is a ‘target’ correlation matrix that we wish to choose arbitrarily and then impose on \(Y\). To do this, we need a method of finding \( A\) from \(C\). Fortunately, such a method exists. It’s called the Cholesky factorisation or Cholesky decomposition and is described in the next section.
If we wished the components of \(Y\) to have variances other than unity, we could choose the diagonal elements of \( C\) to be values other than 1, but then it wouldn’t be a correlation matrix. However, as we are only using \( Y\) as a source of rank orders, it doesn’t matter what its variances are, so it is easiest to choose the elements of \(C\) to be our desired correlation coefficients.
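The relation \( C = A^{\mathsf{T}}A\) is easy to check numerically. The sketch below uses Python with NumPy rather than the project’s VBA, obtaining the upper triangular root via NumPy’s built-in Cholesky routine (which is derived by hand in the next section):

```python
import numpy as np

# Numerical check of C = A^T A: multiply independent unit-variance samples
# by the upper triangular Cholesky root of a target correlation matrix and
# inspect the sample correlation of the result.

rng = np.random.default_rng(1)

C = np.array([[1.0, 0.6],
              [0.6, 1.0]])             # target correlation matrix
A = np.linalg.cholesky(C).T            # upper triangular root, C = A^T A

X = rng.standard_normal((100_000, 2))  # independent, unit variance
Y = X @ A                              # correlated, still unit variance

print(np.allclose(A.T @ A, C))         # True
print(np.corrcoef(Y, rowvar=False)[0, 1])   # close to 0.6
```

Note that `np.linalg.cholesky` returns the lower triangular root, so the transpose gives the upper triangular convention used in this post.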
Cholesky factorisation
The Cholesky factorisation is a generalisation to matrices of the idea of the square root of a number, though it isn’t the only such generalisation. Not all matrices have a Cholesky ‘root’, but those that do can be written as the product of a lower triangular matrix and its transpose, i.e. \( A = BC\) where \( B\) is lower triangular and \( C\) is its transpose, so we can write \( A = B B^{ \textsf{T} } = C^{ \textsf{T} } C\).
The question of whether we call \( B\) or \( C\) the Cholesky root of \( A\) is arbitrary. The important thing is to choose one and stick with it. Whatever alternative is chosen, the lower triangular root must be on the left and the upper triangular one on the right. For the purposes of this project, it seems logical to choose the upper triangular root, as then it matches the order of the factors in Equation (\ref{eqn:24}).
Only square, symmetric, positive-definite matrices have a Cholesky root. Correlation matrices are square and symmetric by definition. It can be shown that they are always positive semi-definite and that if they are of full rank they are positive definite.
When specifying a correlation matrix it is easy to accidentally make it non-positive-definite, in which case it isn’t a valid correlation matrix. It is therefore necessary to test the input matrix for positive definiteness and to throw an exception if it doesn’t pass the test. If you encounter such an error when using the module, it means that your correlation matrix is not valid and you need to choose a different one.
As the Cholesky root is triangular it is also square, which means that we can’t get away with using fewer \(X\)s than \(Y\)s in Equation (\ref{eqn:20}).
If \( G\) is the upper triangular Cholesky root of \(A\), it is easy to show that the elements of \( G\) satisfy the following equations:
\[ \begin{equation} g_{ 11 } = \sqrt{ a_{ 11 }} \label{eqn:25} \end{equation} \]
\[ \begin{equation} \left[ g_{ 11 } \, g_{ 12 } \, g_{ 13 } \, \ldots \, g_{ 1n } \right] = \frac{1}{\sqrt{ a_{ 11 }}} \left[ a_{ 11 } \, a_{ 12 } \, a_{ 13 } \, \ldots \, a_{ 1n } \right] \label{eqn:26} \end{equation} \]
\[ \begin{equation} g_{ ii } = { \sqrt{ a_{ii } - {\sum _{ k = 1 }^{i - 1}}{ g_{ki}^2}}} \label{eqn:27} \end{equation} \]
and
\[ \begin{equation} g_{ ij } = \frac{ a_{ ij } - \sum _{ k = 1 } ^ { i - 1 } { g_{ki} g_{kj}} }{ g_{ ii }} \label{eqn:28} \end{equation} \]
The left hand sides of equations (\ref{eqn:27}) and (\ref{eqn:28}) refer to the ith row of \(G\), whereas the right hand sides refer only to preceding rows. We know the first row, from Equation (\ref{eqn:26}), so we have all the information we need to calculate all the others. Equations (\ref{eqn:25}) to (\ref{eqn:28}) can be converted directly into the following algorithm:
Step 1: Calculate \(g_{11}\) using Equation \( (\ref{eqn:25}) \).
Step 2: Calculate the remaining elements of the first row of \(G\) using Equation \((\ref{eqn:26})\).
For i = 2 to n:
Step 3: Calculate \(g_{ii}\) using Equation \((\ref{eqn:27})\).
Step 4: Calculate the remaining elements of row \(i\) using Equation \((\ref{eqn:28})\).
Next i
\( G\) could fail to exist if its input matrix is not square or not symmetric, or if any of \(G\)’s diagonal elements turn out to be zero or complex. Our Cholesky function therefore needs to return an error if any of these things is the case. How to test for the first two is obvious. We can test for the third as part of Step 3 in the above algorithm. An intermediate step in the calculation of \( g_{ii}\) using Equation (\ref{eqn:27}) is to calculate the quantity \( a_{ii} - \sum_{k = 1}^{i - 1}{g_{ki}^2}\). If this is zero or negative then the Cholesky factorisation doesn’t exist. A test for this can be included in the appropriate place in the code. The code to implement the Cholesky looks like this (ignoring declarations, tests and other boring stuff):
Public Function chol(A() As Double) As Double()
' Usage: Y = chol(A)
'
' Returns the upper triangular Cholesky root of A. A must be square, symmetric
' and positive definite.

    ' DECLARATIONS GO HERE
    ' Blah, blah, blah

    ' Determine the number of rows, n, and number of columns, m, in A
    n = UBound(A, 1) - LBound(A, 1) + 1
    m = UBound(A, 2) - LBound(A, 2) + 1

    ' TESTS GO HERE
    ' Rhubarb, rhubarb, rhubarb
    ' END OF TESTS (almost). Actual maths starts here!

    ReDim G(1 To n, 1 To m)

    ' Calculate 1st element
    G(1, 1) = Sqr(A(1, 1))

    ' Calculate remainder of 1st row
    For j = 2 To m
        G(1, j) = A(1, j) / G(1, 1)
    Next j

    ' Calculate remaining rows
    For i = 2 To n

        ' Calculate diagonal element of row i
        G(i, i) = A(i, i)
        For k = 1 To i - 1
            G(i, i) = G(i, i) - G(k, i) * G(k, i)
        Next k

        ' Check that G(i, i) is > 0
        If G(i, i) <= 0 Then
            Debug.Print "Attempted to perform Cholesky factorisation on a matrix " & _
                "that is not positive definite"
            Err.Raise Number:=vbObjectError + 5, _
                Source:="chol", _
                Description:="Matrix not positive definite"
            Exit Function
        End If
        G(i, i) = Sqr(G(i, i))

        ' Calculate remaining elements of row i
        For j = i + 1 To m
            G(i, j) = A(i, j)
            For k = 1 To i - 1
                G(i, j) = G(i, j) - G(k, i) * G(k, j)
            Next k
            G(i, j) = G(i, j) / G(i, i)
        Next j
    Next i

    ' Fill the lower triangular half of G with zeros
    For i = 2 To n
        For j = 1 To i - 1
            G(i, j) = 0
        Next j
    Next i

    chol = G
End Function
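For readers who want to sanity-check the algorithm outside Excel, here is a transcription of Equations (\ref{eqn:25}) to (\ref{eqn:28}) into Python with NumPy (my own sketch, not part of the VBA module), compared against NumPy’s built-in factorisation:

```python
import numpy as np

def chol_upper(a):
    """Upper triangular Cholesky root G with G^T G = a (Equations 25-28)."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    g = np.zeros((n, n))
    g[0, 0] = np.sqrt(a[0, 0])                 # Equation (25)
    g[0, 1:] = a[0, 1:] / g[0, 0]              # Equation (26)
    for i in range(1, n):
        d = a[i, i] - np.sum(g[:i, i] ** 2)    # intermediate quantity
        if d <= 0:
            raise ValueError("Matrix not positive definite")
        g[i, i] = np.sqrt(d)                   # Equation (27)
        for j in range(i + 1, n):              # Equation (28)
            g[i, j] = (a[i, j] - np.sum(g[:i, i] * g[:i, j])) / g[i, i]
    return g

C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
G = chol_upper(C)
print(np.allclose(G.T @ G, C))                 # True
print(np.allclose(G, np.linalg.cholesky(C).T))  # True
```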
Finite sample correction
There is one slight complication, however, that arises from the use of a finite sample. Because the components of \( X\) are independent with unit variance, its correlation matrix is \(I\), the identity matrix. This is necessary for Equation (\ref{eqn:23}) to be true. If, however, we are dealing with a finite sample from \(X\), then its sample correlation matrix will differ slightly from \(I\). This is because the statistics of a sample always differ slightly from those of the population from which it is drawn, though the difference decreases with increasing sample size. To get round this, we use another trick invented by Iman and Conover. Consider the sample matrix \( \mathbf{X}\):
\[ \begin{equation} \mathbf{X}=\begin{bmatrix}x_{11} & x_{12} & \cdots & x_{1m}\\ x_{21} & x_{22} & \cdots & x_{2m}\\ x_{31} & x_{32} & \cdots & x_{3m}\\ x_{41} & x_{42} & \cdots & x_{4m}\\ x_{51} & x_{52} & \cdots & x_{5m}\\ x_{61} & x_{62} & \cdots & x_{6m}\\ x_{71} & x_{72} & \cdots & x_{7m}\\ \vdots & \vdots & \ddots & \vdots\\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix} \label{eqn:29} \end{equation} \]
where each row of \( \mathbf{X}\) is one set of realisations of the random vector \(X\). If the sample has zero mean then it is easy to verify by direct multiplication that \( \mathrm{cov}\left(\mathbf{X}\right) = \frac{1}{n}\mathbf{X}^{\mathrm{T}}\mathbf{X}\). Because \(X\) is standard normal, its mean is zero and its variance is unity. If we define \(E = \frac{1}{n} \mathbf{X} ^{\textsf{T} } \mathbf{X} \) then \( E \neq I\) but \( E \rightarrow I\) as \( n \rightarrow \infty\). The trick is to apply the Cholesky factorisation to \( E\) to give a new upper triangular matrix
\[ \begin{equation} F = \textrm{chol} \left( E \right) \label{eqn:30} \end{equation} \]
If we do this then the matrix \( \mathbf{X}F^{-1}\) will have a covariance matrix that is exactly equal to \(I\). Consequently, if we calculate
\[ \begin{equation} \mathbf{Y} = \left( \mathbf{X} F^{-1} \right) A \end{equation} \]
then \( \mathbf{Y}\) will have a covariance matrix equal to \( C = A^{\mathsf{T}} A\). It is easier to calculate \( F^{-1}A\) than it is to calculate \( \mathbf{X}F^{-1}\), because \( \mathbf{X}\) could have thousands of rows, whereas \( F^{-1}\) and \( A\) are both \( m \times m\), where \( m\) is at most a few tens. So if we define a new \( m \times m\) matrix \( Z\) as
\[ \begin{equation} Z = F^{-1}A \end{equation} \]
and then write
\[ \begin{equation} \mathbf{Y} = \mathbf{X} Z \label{eqn:31} \end{equation} \]
instead of \( \mathbf{Y} = \mathbf{X}A\) as in Equation (\ref{eqn:21}), \( \mathbf{Y}\) will have a covariance matrix equal to \( C = A^{\mathsf{T}} A\). The best way to calculate \( Z\) is by solving the equation
\[ \begin{equation} FZ = A \label{eqn:32} \end{equation} \]
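The whole correction can be illustrated numerically. This is a Python/NumPy sketch, not part of the VBA module, and it uses a library solver in place of the back substitution developed below:

```python
import numpy as np

# Finite-sample correction: even with zero mean, E = X^T X / n is only
# close to I, but working through Z = F^-1 A makes the sample covariance
# of Y land exactly on the target C = A^T A.

rng = np.random.default_rng(7)
n, m = 500, 3

C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])        # target correlation matrix
A = np.linalg.cholesky(C).T            # upper triangular, C = A^T A

X = rng.standard_normal((n, m))
X -= X.mean(axis=0)                    # enforce zero mean exactly
E = X.T @ X / n                        # close to, but not exactly, I
F = np.linalg.cholesky(E).T            # F = chol(E), Equation (30)
Z = np.linalg.solve(F, A)              # Z = F^-1 A, i.e. solve FZ = A
Y = X @ Z                              # Equation (31)

print(np.allclose(E, np.eye(m)))       # False: sampling error is visible
print(np.allclose(Y.T @ Y / n, C))     # True: the correction removes it
```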
This is quite easy because \( F\) and \( A\) are both upper triangular, so solving the equation involves only back substitution with no need for Gaussian elimination. Writing it out termbyterm looks like:
\[ \begin{equation} \begin{bmatrix}f_{11} & f_{12} & \cdots & f_{1n}\\ 0 & f_{22} & \cdots & f_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & f_{nn} \end{bmatrix}\begin{bmatrix}z_{11} & z_{12} & \cdots & z_{1n}\\ z_{21} & z_{22} & \cdots & z_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ z_{n1} & z_{n2} & \cdots & z_{nn} \end{bmatrix}=\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ 0 & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \label{eqn:34} \end{equation} \]
The \(n\)th row of \(Z\) is therefore
\[ \begin{equation} \begin{bmatrix} z_{n1} & \cdots & z_{nn} \end{bmatrix} = \begin{bmatrix} 0 & \cdots & 0 & \frac{a_{nn}}{f_{nn}} \end{bmatrix} \label{eqn:35} \end{equation} \]
and the \(i\)th row is
\[ \begin{equation} \begin{bmatrix}z_{i1} & \cdots & z_{in}\end{bmatrix}=\frac{1}{f_{ii}}\left\{ \begin{bmatrix}0 & \cdots & 0 & a_{ii} & \cdots & a_{in}\end{bmatrix}-\sum_{k=i+1}^{n}f_{ik}\begin{bmatrix}z_{k1} & \cdots & z_{kn}\end{bmatrix}\right\} \label{eqn:36} \end{equation} \]
so the \(i\)th row of \(Z\) depends on rows \( i + 1\) to \(n\). We know the \(n\)th row, from Equation (\ref{eqn:35}), so we can get row \( n – 1\), then row \( n – 2\) and so on all the way back to row \( 1\). The core of the code to implement this looks like:
Public Function finvs(F, S) As Double()
' Usage: Y = finvs(F, S)
'
' Returns the product of F^-1 and S for F and S both upper triangular. Actually
' solves FZ = S, i.e. finds Z such that FZ = S.

    ' DECLARATIONS GO HERE
    ' Blah, blah, blah

    nRowsF = UBound(F, 1) - LBound(F, 1) + 1
    nColsF = UBound(F, 2) - LBound(F, 2) + 1
    nRowsS = UBound(S, 1) - LBound(S, 1) + 1
    nColsS = UBound(S, 2) - LBound(S, 2) + 1

    ' TESTS GO HERE
    ' Rhubarb, rhubarb
    ' END OF TESTS. Actual maths starts here!

    n = nRowsF
    ReDim z(1 To n, 1 To n)

    ' Construct the nth row of Z
    For j = 1 To n - 1
        z(n, j) = 0
    Next j
    z(n, n) = S(n, n) / F(n, n)

    ' Construct the rows of Z above the nth
    For i = n - 1 To 1 Step -1
        For j = 1 To n
            w = 0
            For k = i + 1 To n
                w = w + F(i, k) * z(k, j)
            Next k
            z(i, j) = (S(i, j) - w) / F(i, i)
        Next j
    Next i

    finvs = z
End Function
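As with `chol`, the back substitution can be cross-checked outside Excel. The following Python/NumPy transcription (my own sketch, not part of the VBA module) solves \(FZ = S\) and compares the result with a general-purpose solver:

```python
import numpy as np

def finvs(F, S):
    """Solve FZ = S for Z, with F upper triangular (Equations 35-36)."""
    F = np.asarray(F, dtype=float)
    S = np.asarray(S, dtype=float)
    n = F.shape[0]
    Z = np.zeros_like(S)
    Z[n - 1] = S[n - 1] / F[n - 1, n - 1]      # last row, Equation (35)
    for i in range(n - 2, -1, -1):             # back substitution, Equation (36)
        w = F[i, i + 1:] @ Z[i + 1:]
        Z[i] = (S[i] - w) / F[i, i]
    return Z

F = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, 0.3],
              [0.0, 0.0, 1.2]])
S = np.array([[1.0, 0.4, 0.2],
              [0.0, 1.0, 0.6],
              [0.0, 0.0, 1.0]])
Z = finvs(F, S)
print(np.allclose(F @ Z, S))                   # True
print(np.allclose(Z, np.linalg.solve(F, S)))   # True
```

Because both matrices are upper triangular, no Gaussian elimination is needed, just the backward sweep.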
That’s the end of today’s post. Join me tomorrow for Part 3b, where I conclude the discussion of generating correlated samples.
© 2015 Howard J. Rudd all rights reserved