$\newcommand{\0}{\mathbf 0}$$\newcommand{\one}{\mathbf 1}$In this post I’m going to work out the distribution of the area of the random parallelogram spanned by a pair of bivariate Gaussian vectors. The gif below gives an example of this process, with the ticks at the bottom indicating the realized areas (this gif was made with R + imagemagick).
Computing the area
Suppose I have two vectors $u, v \in \mathbb R^2$ and I want to find the area of the parallelogram formed by the origin, $u$, $v$, and $u+v$. Let $A = [u \mid v]$ be the $2\times 2$ matrix with $u$ and $v$ as its columns.
Result 1: The area of the parallelogram is given by $|\det A|$.
Pf: If $u = \0$ or $v = \0$ then the area will be $0$ and $\det A = 0$ too so the result holds. Therefore WLOG I’ll assume $u,v\neq \0$.
I know that the area of a parallelogram is $\text{base}\times\text{height}$ (which can be derived from the area of a rectangle by chopping one triangle off of the parallelogram and gluing it to the other side to form a rectangle). I’ll pick $v$ to be the base (so $b = \|v\|$), so I just need the height.
The height is the length of the altitude dropped from $u$ onto $\text{span}(v)$. This altitude is one leg of a right triangle whose other leg has length $\|\hat u\|$, where $\hat u$ is the projection of $u$ onto $\text{span}(v)$ (I have a picture of this towards the beginning of my post here), and whose hypotenuse has length $\|u\|$, so $h = \sqrt{\|u\|^2 - \|\hat u\|^2}$ via the Pythagorean theorem.
Computing this, I have
$$
\begin{aligned}\hat u &= v(v^Tv)^{-1}v^Tu = \frac{u^Tv}{v^Tv} v \\&
\implies h^2 = u^Tu - \frac{(u^Tv)^2}{v^Tv} \\&
\implies b^2h^2 = u^Tu\,v^Tv - (u^Tv)^2 \\&
= (u_1^2+u_2^2)(v_1^2+v_2^2) - (u_1v_1 + u_2v_2)^2 \\&
= (u_1v_2 - u_2v_1)^2 \\&
= (\det A)^2\\&
\implies \text{Area} = bh = |\det A|.\end{aligned}
$$
$\square$
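As a quick sanity check of Result 1, here's a small R snippet comparing the two area formulas for an arbitrary pair of vectors (the particular $u$ and $v$ are just illustrative choices):

```r
# Compare |det A| with base * height for the parallelogram spanned by u and v.
u <- c(3, 1)
v <- c(1, 2)
A <- cbind(u, v)

area_det <- abs(det(A))  # |det A|

# base * height via the projection of u onto span(v)
u_hat <- as.numeric(crossprod(u, v) / crossprod(v, v)) * v
h <- sqrt(sum(u^2) - sum(u_hat^2))
area_bh <- sqrt(sum(v^2)) * h

all.equal(area_det, area_bh)  # TRUE
```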
Now the question I’m interested in: Suppose I have $X,Y \stackrel{\text{iid}}\sim \mathcal N_2(\mathbf 0, \Sigma)$. What can I say about the distribution of the area of the parallelogram formed by $X$ and $Y$? I want to be able to do this analytically so I’ll make the simplifying assumption that
$$
\Sigma = \begin{bmatrix}1 & \rho \\ \rho & 1\end{bmatrix}.
$$
I will form the random matrix
$$
A = \begin{bmatrix}X_1 & Y_1 \\ X_2 & Y_2\end{bmatrix}
$$
and now I want to explore the distribution of
$$
R := |\det A| = |X_1Y_2 - X_2Y_1|.
$$
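Before doing anything analytical, here's a minimal R sketch of how one might simulate draws of $R$ (the helper `rbvn` and the values of `rho` and `n_sims` are my own illustrative choices; the sampler uses the usual Cholesky-style construction for this $\Sigma$):

```r
# Sketch: simulate draws of R = |X1*Y2 - X2*Y1| for X, Y iid N_2(0, Sigma).
rho <- 0.5
n_sims <- 1e5

rbvn <- function(n, rho) {
  z1 <- rnorm(n)
  z2 <- rnorm(n)
  cbind(z1, rho * z1 + sqrt(1 - rho^2) * z2)  # rows have covariance Sigma
}

X <- rbvn(n_sims, rho)
Y <- rbvn(n_sims, rho)
R <- abs(X[, 1] * Y[, 2] - X[, 2] * Y[, 1])
```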
Moments of $R$
I’ll do this by computing the moments of $R$. I want
$$
\text E(R^n) = \text E_{(X,Y)} |X_1Y_2 - X_2Y_1|^n \\
= \text E_Y\left[\text E_{X|Y}\left(|X_1y_2 - X_2y_1|^n\right)\right].
$$
Let $v_Y = {Y_2\choose -Y_1}$. Conditioned on $Y=y$, I have
$$
X_1y_2 - X_2y_1 = v_y^TX\sim \mathcal N(0, v_y^T\Sigma v_y)
$$
so for the inner expectation I just need to work out the moments of a centered Gaussian.
Result 2: Let $W \sim \mathcal N(0, \sigma^2)$. Then
$$
\text E|W|^n = \frac{(2\sigma^2)^{n/2}}{\sqrt \pi}\Gamma\left(\frac{n+1}2\right).
$$
Pf:
$$
\begin{aligned}\text E|W|^n &= \frac{1}{\sqrt{2\pi\sigma^2}}\int_{\mathbb R} |w|^n \exp(-w^2/(2\sigma^2))\,\text dw \\&
= \sqrt{\frac 2{\pi \sigma^2}}\int_0^\infty w^n \exp(-w^2/(2\sigma^2))\,\text dw\end{aligned}
$$
since the integrand is even. Now letting $u = w^2/(2\sigma^2)$, so that $\text dw = \frac{\sigma^2}{w}\,\text du$, I have
$$
\begin{aligned}\text E|W|^n &= \sqrt{\frac {2\sigma^2}{\pi}}\int_0^\infty (2\sigma^2 u)^{(n-1)/2} \exp(-u)\,\text du \\&
= \frac{(2\sigma^2)^{n/2}}{\sqrt \pi} \Gamma\left(\frac{n+1}{2}\right)\end{aligned}
$$
as desired.
$\square$
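Result 2 is easy to spot-check by simulation; a quick R sketch (the value of $\sigma$, the number of draws, and the moments checked are all arbitrary choices):

```r
# Monte Carlo check of Result 2: E|W|^n for W ~ N(0, sigma^2).
set.seed(1)
sigma <- 1.7
w <- rnorm(1e6, sd = sigma)
for (n in 1:4) {
  exact <- (2 * sigma^2)^(n / 2) / sqrt(pi) * gamma((n + 1) / 2)
  cat("n =", n, ": simulated", mean(abs(w)^n), "vs exact", exact, "\n")
}
```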
Plugging Result 2 into the inner expectation (with $\sigma^2 = v_Y^T\Sigma v_Y$) gives
$$
\text E_{X|Y} |v_Y^TX|^n = \Gamma\left(\frac{n+1}2\right)2^{n/2}\pi^{-1/2} \left(v_Y^T\Sigma v_Y\right)^{n/2}
$$
so now this comes down to being able to compute
$$
\text E_Y \left(v_Y^T\Sigma v_Y\right)^{n/2}.
$$
I can note that
$$
v_Y = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}Y \sim \mathcal N\left(\mathbf 0, \Omega\right)
$$
for
$$
\Omega = \begin{bmatrix}1 & -\rho \\ -\rho & 1\end{bmatrix}
$$
so, since $v_Y \stackrel{\text d}= \Omega^{1/2}Z$ for $Z \sim \mathcal N(\0, I)$,
$$
v_Y^T\Sigma v_Y \stackrel{\text d}= Z^T\Omega^{1/2}\Sigma\Omega^{1/2}Z.
$$
I’ll work out what $\Omega^{1/2}$ is. Just by inspection I can see that
$$
\Omega\one = (1-\rho)\one
$$
and
$$
\Omega{1\choose -1} = (1+\rho) {1\choose -1}
$$
so I can diagonalize $\Omega$ as
$$
\Omega = Q\Lambda Q^T
$$
with
$$
Q = \frac 1{\sqrt 2} \begin{bmatrix}1 & 1 \\ -1 & 1\end{bmatrix}, \\
\Lambda = \text{diag}(1+\rho, 1-\rho).
$$
(The first column of $Q$ is the unit eigenvector for $1+\rho$ and the second is for $1-\rho$.) The same $Q$ also diagonalizes $\Sigma$, whose eigenvalues are the same but paired with the opposite eigenvectors, so $\Sigma = Q\tilde \Lambda Q^T$ with $\tilde \Lambda = \text{diag}(1-\rho, 1+\rho)$.
This means
$$
\begin{aligned}\Omega^{1/2}\Sigma\Omega^{1/2} &= Q\Lambda^{1/2}Q^TQ\tilde \Lambda Q^T Q\Lambda^{1/2}Q^T \\&
= Q\Lambda^{1/2}\tilde \Lambda \Lambda^{1/2}Q^T \\&
= Q\cdot \text{diag}(1-\rho^2, 1-\rho^2)Q^T \\&
= (1-\rho^2) I.\end{aligned}
$$
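This identity is easy to verify numerically; a small R check (with an arbitrary $\rho$), using the symmetric square root of $\Omega$ from its eigendecomposition:

```r
# Verify Omega^{1/2} Sigma Omega^{1/2} = (1 - rho^2) I for an arbitrary rho.
rho <- 0.3
Sigma <- matrix(c(1, rho, rho, 1), nrow = 2)
Omega <- matrix(c(1, -rho, -rho, 1), nrow = 2)

# symmetric square root of Omega
e <- eigen(Omega, symmetric = TRUE)
Omega_half <- e$vectors %*% diag(sqrt(e$values)) %*% t(e$vectors)

Omega_half %*% Sigma %*% Omega_half  # equals (1 - rho^2) * diag(2)
```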
This identity is very convenient: it means
$$
Z^T\Omega^{1/2}\Sigma\Omega^{1/2}Z = (1-\rho^2) Z^TZ \sim (1-\rho^2) \chi^2_2.
$$
Finishing this, I have
$$
\text E_Y \left(v_Y^T\Sigma v_Y\right)^{n/2} = (1-\rho^2)^{n/2}\text E S^{n/2}
$$
with $S \sim \chi^2_2$.
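Before computing $\text E S^{n/2}$ analytically, here's a simulation check of the distributional claim above (again with an arbitrary $\rho$; `ks.test` compares the simulated, rescaled quadratic forms against the $\chi^2_2$ CDF):

```r
# Check that v_Y' Sigma v_Y / (1 - rho^2) is chi-squared with 2 df.
set.seed(1)
rho <- 0.3
Sigma <- matrix(c(1, rho, rho, 1), nrow = 2)

z1 <- rnorm(1e5)
z2 <- rnorm(1e5)
Y <- cbind(z1, rho * z1 + sqrt(1 - rho^2) * z2)  # Y ~ N_2(0, Sigma)
vY <- cbind(Y[, 2], -Y[, 1])                     # rows are v_Y

q <- rowSums((vY %*% Sigma) * vY) / (1 - rho^2)  # quadratic form per row
ks.test(q, pchisq, df = 2)                       # p-value should be large
```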
Result 3: Moments of a $\chi^2_k$ RV.
Let $S \sim \chi^2_k$, so $f_S(s) = \frac{1}{2^{k/2}\Gamma(k/2)}s^{k/2-1}e^{-s/2}$ for $s > 0$. Then for $t>0$
$$
\text E(S^t) = \frac{1}{2^{k/2}\Gamma(k/2)}\int_0^\infty s^{t+k/2-1}e^{-s/2}\,\text ds.
$$
Letting $r = s/2$ I have
$$
\text E(S^t) =\frac{2}{2^{k/2}\Gamma(k/2)}\int_0^\infty (2r)^{t+k/2-1}e^{-r}\,\text dr \\
= \frac{2^t\Gamma(t + k/2)}{\Gamma(k/2)}.
$$
$\square$
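A quick spot check of Result 3 in R (the values of $k$ and $t$ are arbitrary):

```r
# Compare the simulated and exact t-th moments of a chi-squared(k) RV.
set.seed(1)
k <- 2
t <- 1.5
s <- rchisq(1e6, df = k)
c(simulated = mean(s^t), exact = 2^t * gamma(t + k / 2) / gamma(k / 2))
```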
Applying Result 3 with $t = n/2$ and $k = 2$ (so that $\Gamma(k/2) = \Gamma(1) = 1$), I have
$$
\text E_Y \left(v_Y^T\Sigma v_Y\right)^{n/2} = (1-\rho^2)^{n/2}2^{n/2}\Gamma(n/2 + 1)
$$
so
$$
\text E(R^n) = \frac{2^{n}(1-\rho^2)^{n/2}}{\sqrt \pi}\Gamma\left(\frac{n+1}2\right)\Gamma\left(\frac{n+2}2\right).
$$
I can simplify this by noting that
$$
\Gamma\left(\frac{n+1}2\right)\Gamma\left(\frac{n+2}2\right) = \sqrt \pi 2^{-n}n!.
$$
Pf: by induction, stepping from $n-1$ to $n+1$, which needs two base cases. For $n = 0$ I have $\Gamma(1/2)\Gamma(1) = \sqrt\pi = \sqrt\pi\, 2^{0}\, 0!$, and for $n = 1$ I have
$$
\Gamma(1)\Gamma(3/2) = \frac 12 \Gamma(1/2) = \frac{\sqrt \pi}2
$$
as desired. Now if the formula holds for $n-1$ then
$$
\begin{aligned}&\Gamma\left(\frac{n+2}2\right)\Gamma\left(\frac{n+3}2\right) \\&
= \Gamma\left(\frac n2 + 1\right)\Gamma\left(\frac {n+1}2 + 1\right) \\&
= \frac n2\cdot \frac{n+1}2\cdot \Gamma\left(\frac n2\right) \Gamma\left(\frac{n+1}2\right) \\&
= \frac n2\cdot \frac{n+1}2\cdot \Gamma\left(\frac {(n-1)+1}2\right) \Gamma\left(\frac{(n-1)+2}2\right) \\&
= \frac n2\cdot \frac{n+1}2\sqrt \pi 2^{-(n-1)}(n-1)! \\&
= \frac{(n+1)!\sqrt \pi}{2^{n+1}}\end{aligned}
$$
which is exactly the formula for $n+1$, as desired.
$\square$
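(This identity is Legendre's duplication formula $\Gamma(z)\Gamma\left(z + \frac 12\right) = 2^{1-2z}\sqrt\pi\,\Gamma(2z)$ evaluated at $z = \frac{n+1}2$.) It's also easy to spot-check numerically in R:

```r
# Numerical check of the Gamma identity for small n.
n <- 0:6
cbind(n = n,
      lhs = gamma((n + 1) / 2) * gamma((n + 2) / 2),
      rhs = sqrt(pi) * 2^(-n) * factorial(n))
```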
Plugging this in, I have
$$
\text E(R^n) = \frac{2^{n}(1-\rho^2)^{n/2}}{\sqrt \pi}\frac{n!\sqrt \pi}{2^{n}} \\
= (1-\rho^{2})^{n/2}n!.
$$
This is very interesting: for $|\rho| < 1$ these are exactly the moments of an exponential distribution with rate $\lambda = (1-\rho^2)^{-1/2}$. (When $|\rho| = 1$, $X$ and $Y$ are almost surely collinear, so the parallelogram is degenerate and $R = 0$.)
Let $\alpha_n = \text E(R^n) = \left(\sqrt{1-\rho^2}\right)^nn!$. Consider the power series
$$
\sum_{n=0}^\infty \frac{\alpha_n r^n}{n!} = \sum_{n=0}^\infty \left(r\sqrt{1-\rho^2}\right)^n
$$
which is a geometric series in $r\sqrt{1-\rho^2}$ and converges for all $|r| < (1-\rho^2)^{-1/2}$, in particular in a neighborhood of zero. Theorem 30.1 in Billingsley’s Probability and Measure shows that when this condition holds, the moments uniquely determine the distribution (this MathOverflow post also discusses this further), so this means that
$$
R \sim \text{Exp}\left(\frac 1{\sqrt{1-\rho^2}}\right).
$$
As a consequence, I have
$$
\text E(R) = \sqrt{1 - \rho^2} \\
\text{Var}(R) = \text E(R^2) - (\text E R)^2 = 2(1-\rho^2) - (1-\rho^2) = 1-\rho^2.
$$
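To close the loop, here's an end-to-end simulation check in R that the distribution of $R$ matches the exponential above (the seed, $\rho$, and sample size are arbitrary choices):

```r
# End-to-end check: simulated R should look Exp(1 / sqrt(1 - rho^2)).
set.seed(1)
rho <- 0.5
n_sims <- 1e5

rbvn <- function(n, rho) {  # hand-rolled N_2(0, Sigma) sampler, as above
  z1 <- rnorm(n)
  z2 <- rnorm(n)
  cbind(z1, rho * z1 + sqrt(1 - rho^2) * z2)
}

X <- rbvn(n_sims, rho)
Y <- rbvn(n_sims, rho)
R <- abs(X[, 1] * Y[, 2] - X[, 2] * Y[, 1])

c(mean = mean(R), exact = sqrt(1 - rho^2))    # E(R)   = sqrt(1 - rho^2)
c(var = var(R), exact = 1 - rho^2)            # Var(R) = 1 - rho^2
ks.test(R, pexp, rate = 1 / sqrt(1 - rho^2))  # p-value should be large
```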