# §11.2: Hermite polynomials

Having defined the basic operators of importance for functions on Gaussian space, it’s useful to also develop the analogue of the Fourier expansion.

To do this we’ll proceed as in Chapter 8.1, looking for a complete orthonormal “Fourier basis” for $L^2({\mathbb R},\gamma)$, which we can extend to $L^2({\mathbb R}^n,\gamma)$ by taking products. It’s natural to start with polynomials; by Theorem 22 we know that the collection $(\phi_j)_{j \in {\mathbb N}}$, $\phi_j(z) = z^j$ is a complete basis for $L^2({\mathbb R},\gamma)$. To get an orthonormal (“Fourier”) basis we can simply perform the Gram–Schmidt process. Calling the resulting basis $(h_j)_{j \in {\mathbb N}}$ (with “$h$” standing for “Hermite”), we get \begin{equation} \label{eqn:hermite-examples} h_0(z) = 1, \quad h_1(z) = z, \quad h_2(z) = \frac{z^2-1}{\sqrt{2}}, \quad h_3(z) = \frac{z^3 - 3z}{\sqrt{6}}, \quad \dots \end{equation} Here, e.g., we obtained $h_3(z)$ in two steps. First, we made $\phi_3(z) = z^3$ orthogonal to $h_0, \dots, h_2$ as $z^3 - \langle\boldsymbol{z}^3, 1 \rangle \cdot 1 - \langle\boldsymbol{z}^3, \boldsymbol{z} \rangle \cdot z - \langle\boldsymbol{z}^3, \tfrac{\boldsymbol{z}^2 - 1}{\sqrt{2}} \rangle \cdot \tfrac{z^2-1}{\sqrt{2}} = z^3 - 3z,$ where $\boldsymbol{z} \sim \mathrm{N}(0,1)$ and we used the fact that $\boldsymbol{z}^3$ and $\boldsymbol{z}^3 \cdot \tfrac{\boldsymbol{z}^2 - 1}{\sqrt{2}}$ are odd functions and hence have Gaussian expectation $0$. Then we defined $h_3(z) = \frac{z^3 - 3z}{\sqrt{6}}$ after determining that $\mathop{\bf E}[(\boldsymbol{z}^3-3\boldsymbol{z})^2] = 6$.
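
As a sanity check on this Gram–Schmidt computation, here is a short Python sketch (my own illustration, not from the text) that orthonormalizes the monomials $1, z, z^2, z^3$ under the Gaussian inner product, using the standard moment formula $\mathop{\bf E}[\boldsymbol{z}^k] = (k-1)!!$ for even $k$ and $0$ for odd $k$:

```python
from math import prod, sqrt

def moment(k):
    # E[z^k] for z ~ N(0,1): 0 for odd k, the double factorial (k-1)!! for even k
    return 0 if k % 2 else prod(range(k - 1, 0, -2))

def inner(p, q):
    # <p, q> = E[p(z) q(z)], with polynomials given as coefficient lists (low order first)
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt(degree):
    basis = []
    for j in range(degree + 1):
        p = [0.0] * j + [1.0]                 # the monomial z^j
        for h in basis:                       # subtract projections onto earlier h's
            c = inner(p, h)
            p = [a - c * b for a, b in zip(p, h + [0.0] * (len(p) - len(h)))]
        norm = sqrt(inner(p, p))
        basis.append([a / norm for a in p])
    return basis

hs = gram_schmidt(3)
# hs[2] recovers (z^2 - 1)/sqrt(2) and hs[3] recovers (z^3 - 3z)/sqrt(6)
```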

Let’s develop a more explicit definition of these Hermite polynomials. The computations involved in the Gram–Schmidt process require knowledge of the moments of a Gaussian random variable $\boldsymbol{z} \sim \mathrm{N}(0,1)$. It’s most convenient to understand these moments through the moment generating function of $\boldsymbol{z}$, namely \begin{equation} \label{eqn:g-mgf} \mathop{\bf E}[\exp(t\boldsymbol{z})] = \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb R} e^{tz} e^{-\frac12 z^2}\,dz = e^{\frac12 t^2} \tfrac{1}{\sqrt{2\pi}} \int_{\mathbb R} e^{-\frac12 (z-t)^2}\,dz = \exp(\tfrac{1}{2} t^2). \end{equation} In light of our interest in the $\mathrm{U}_\rho$ operators, and the fact that orthonormality involves pairs of basis functions, we’ll in fact study the moment generating function of a pair $(\boldsymbol{z}, \boldsymbol{z}')$ of $\rho$-correlated standard Gaussians. To compute it, assume $(\boldsymbol{z},\boldsymbol{z}')$ are generated as in Fact 7 with $\vec{u}, \vec{v}$ unit vectors in ${\mathbb R}^2$. Then \begin{align*} \mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[\exp(s\boldsymbol{z} + t\boldsymbol{z}')] &= \mathop{\bf E}_{\substack{\boldsymbol{g}_1, \boldsymbol{g}_2 \sim \mathrm{N}(0,1) \\ \text{independent}}}[\exp(s(u_1\boldsymbol{g}_1 + u_2 \boldsymbol{g}_2) + t(v_1\boldsymbol{g}_1 + v_2 \boldsymbol{g}_2))]\\ &= \mathop{\bf E}_{\boldsymbol{g}_1 \sim \mathrm{N}(0,1)}[\exp((su_1+tv_1)\boldsymbol{g}_1)]\mathop{\bf E}_{\boldsymbol{g}_2 \sim \mathrm{N}(0,1)}[\exp((su_2+tv_2)\boldsymbol{g}_2)]\\ &= \exp(\tfrac{1}{2} (su_1+tv_1)^2)\exp(\tfrac{1}{2} (su_2+tv_2)^2)\\ &= \exp(\tfrac{1}{2} \|\vec{u}\|_2^2 s^2 + \langle \vec{u},\vec{v} \rangle st + \tfrac{1}{2} \|\vec{v}\|_2^2 t^2) \\ &= \exp(\tfrac{1}{2} (s^2 + 2\rho st + t^2)), \end{align*} where the third equality used \eqref{eqn:g-mgf}.
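
This closed form can be double-checked numerically. The sketch below (an illustration, not from the text) realizes the $\rho$-correlated pair as $\boldsymbol{z}' = \rho \boldsymbol{z} + \sqrt{1-\rho^2}\,\boldsymbol{g}$ with $\boldsymbol{z}, \boldsymbol{g}$ independent standard Gaussians (equivalent to the $\vec{u},\vec{v}$ construction), and integrates with Gauss–Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Nodes/weights for the weight e^{-z^2/2}; after normalizing by sqrt(2*pi),
# sum(w * f(x)) approximates E[f(g)] for g ~ N(0,1).
x, w = hermegauss(60)
w = w / np.sqrt(2 * np.pi)

def mgf(s, t, rho):
    # E[exp(s z + t z')] for a rho-correlated pair, realized as
    # z' = rho*z + sqrt(1 - rho^2)*g with z, g independent N(0,1)
    z  = x[:, None]
    zp = rho * x[:, None] + np.sqrt(1 - rho**2) * x[None, :]
    return float(w @ np.exp(s * z + t * zp) @ w)

s, t, rho = 0.3, -0.7, 0.5
exact = float(np.exp(0.5 * (s**2 + 2 * rho * s * t + t**2)))
# mgf(s, t, rho) matches exact to high precision
```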
Dividing by $\exp(\tfrac{1}{2}(s^2+t^2))$ it follows that \begin{equation} \label{eqn:hermite-dev} \mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[\exp(s\boldsymbol{z} - \tfrac{1}{2} s^2)\exp(t\boldsymbol{z}' - \tfrac{1}{2} t^2)] = \exp(\rho st) = \sum_{j=0}^\infty \frac{\rho^j}{j!} s^jt^j. \end{equation} Inside the expectation above we essentially have the expression $\exp(tz-\tfrac{1}{2} t^2)$ appearing twice. It’s easy to see that if we take the power series in $t$ for this expression, the coefficient on $t^j$ will be a polynomial in $z$ with leading term $\frac{1}{j!}z^j$. Let’s therefore write \begin{equation} \label{eqn:prob-herm} \exp(tz-\tfrac{1}{2} t^2) = \sum_{j=0}^\infty \frac{1}{j!} H_j(z)t^j, \end{equation} where $H_j(z)$ is a monic polynomial of degree $j$. Now substituting this into \eqref{eqn:hermite-dev} yields $\sum_{j,k=0}^\infty \frac{1}{j!k!}\mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[H_j(\boldsymbol{z})H_k(\boldsymbol{z}')]s^jt^k = \sum_{j=0}^\infty \frac{\rho^j}{j!} s^jt^j.$ Equating coefficients, it follows that we must have $\mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[H_j(\boldsymbol{z})H_k(\boldsymbol{z}')] = \begin{cases} j!\rho^j & \text{if $j = k$,}\\ 0 & \text{if $j \neq k$.} \end{cases}$ In particular (taking $\rho = 1$), \begin{equation} \label{eqn:hermite-done2} \langle H_j, H_k \rangle = \begin{cases} j! & \text{if $j = k$,}\\ 0 & \text{if $j \neq k$}; \end{cases} \end{equation} i.e., the polynomials $(H_j)_{j \in {\mathbb N}}$ are orthogonal. Furthermore, since $H_j$ is monic and of degree $j$, it follows that the $H_j$’s are precisely the polynomials that arise in the Gram–Schmidt orthogonalization of $\{1, z, z^2, \dots \}$. We also see from \eqref{eqn:hermite-done2} that the orthonormalized polynomials $(h_j)_{j \in {\mathbb N}}$ are obtained by setting $h_j = \frac{1}{\sqrt{j!}} H_j$.
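
Conveniently, NumPy’s `numpy.polynomial.hermite_e` module implements exactly these probabilists’ Hermite polynomials, so the orthogonality relation \eqref{eqn:hermite-done2} can be verified numerically (a sketch of mine, not from the text):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)   # normalize so sum(w * f(x)) ≈ E[f(z)], z ~ N(0,1)

def H(j, z):
    # numpy's "HermiteE" basis is the probabilists' H_j
    return hermeval(z, [0.0] * j + [1.0])

gram = np.array([[np.sum(w * H(j, x) * H(k, x)) for k in range(6)]
                 for j in range(6)])
# gram[j, k] recovers j! on the diagonal and 0 off the diagonal
```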

Let’s summarize and introduce the terminology for what we’ve deduced.

Definition 29 The probabilists’ Hermite polynomials $(H_j)_{j \in {\mathbb N}}$ are the univariate polynomials defined by the identity \eqref{eqn:prob-herm}. An equivalent definition (exercise) is \begin{equation} \label{eqn:hermite-alternate} H_j(z) = \frac{(-1)^j}{\varphi(z)} \cdot \frac{d^j}{dz^j} \varphi(z), \end{equation} where $\varphi(z) = \tfrac{1}{\sqrt{2\pi}} e^{-z^2/2}$ denotes the standard Gaussian density. The normalized Hermite polynomials $(h_j)_{j \in {\mathbb N}}$ are defined by $h_j = \frac{1}{\sqrt{j!}} H_j$; the first four are given explicitly in \eqref{eqn:hermite-examples}. For brevity we’ll simply refer to the $h_j$’s as the “Hermite polynomials”, though this is not standard terminology.

Proposition 30 The Hermite polynomials $(h_j)_{j \in {\mathbb N}}$ form a complete orthonormal basis for $L^2({\mathbb R},\gamma)$. They are also a “Fourier basis”, since $h_0 = 1$.

Proposition 31 For any $\rho \in [-1,1]$ we have $\mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[h_j(\boldsymbol{z})h_k(\boldsymbol{z}')] = \langle h_j, \mathrm{U}_\rho h_k \rangle = \langle \mathrm{U}_\rho h_j, h_k \rangle = \begin{cases} \rho^j & \text{if $j = k$,}\\ 0 & \text{if $j \neq k$}. \end{cases}$
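
Proposition 31 can likewise be checked with two-dimensional Gauss–Hermite quadrature, again writing $\boldsymbol{z}' = \rho\boldsymbol{z} + \sqrt{1-\rho^2}\,\boldsymbol{g}$ for independent standard Gaussians $\boldsymbol{z}, \boldsymbol{g}$ (an illustration of mine, not from the text):

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermeval, hermegauss

x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)   # normalized Gaussian-expectation weights

def h(j, z):
    # normalized Hermite polynomial h_j = H_j / sqrt(j!)
    return hermeval(z, [0.0] * j + [1.0]) / sqrt(factorial(j))

def corr(j, k, rho):
    # E[h_j(z) h_k(z')] over a rho-correlated pair
    z  = x[:, None]
    zp = rho * x[:, None] + np.sqrt(1 - rho**2) * x[None, :]
    return float(w @ (h(j, z) * h(k, zp)) @ w)
# corr(j, k, rho) recovers rho**j when j == k and 0 otherwise
```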

From this “Fourier basis” for $L^2({\mathbb R},\gamma)$ we can construct a “Fourier basis” for $L^2({\mathbb R}^n,\gamma)$ just by taking products, as in Proposition 8.13.

Definition 32 For a multi-index $\alpha \in {\mathbb N}^n$ we define the (normalized multivariate) Hermite polynomial $h_\alpha : {\mathbb R}^n \to {\mathbb R}$ by $h_\alpha(z) = \prod_{j = 1}^n h_{\alpha_j}(z_j).$ Note that the total degree of $h_\alpha$ is $|\alpha| = \sum_j \alpha_j$. We also identify a subset $S \subseteq [n]$ with its indicator $\alpha$ defined by $\alpha_j = 1_{j \in S}$; thus $h_S(z)$ denotes $z^S = \prod_{j \in S} z_j$.

Proposition 33 The Hermite polynomials $(h_\alpha)_{\alpha \in {\mathbb N}^n}$ form a complete orthonormal (Fourier) basis for $L^2({\mathbb R}^n,\gamma)$. Further, for any $\rho \in [-1,1]$ we have $\mathop{\bf E}_{\substack{(\boldsymbol{z},\boldsymbol{z}') \\ \text{$\rho$-correlated}}}[h_\alpha(\boldsymbol{z})h_\beta(\boldsymbol{z}')] = \langle h_\alpha, \mathrm{U}_\rho h_\beta \rangle = \langle \mathrm{U}_\rho h_\alpha, h_\beta \rangle = \begin{cases} \rho^{|\alpha|} & \text{if $\alpha = \beta$,}\\ 0 & \text{if $\alpha \neq \beta$}. \end{cases}$

We can now define the “Hermite expansion” of Gaussian functions.

Definition 34 Every $f \in L^2({\mathbb R}^n,\gamma)$ is uniquely expressible as $f = \sum_{\alpha \in {\mathbb N}^n} \widehat{f}(\alpha) h_\alpha,$ where the real numbers $\widehat{f}(\alpha)$ are called the Hermite coefficients of $f$ and the convergence is in $L^2({\mathbb R}^n,\gamma)$; i.e., $\left\|f - \sum_{|\alpha| \leq k} \widehat{f}(\alpha) h_\alpha\right\|_2 \to 0 \quad \text{as $k \to \infty$}.$ This is called the Hermite expansion of $f$.

Remark 35 If $f : {\mathbb R}^n \to {\mathbb R}$ is a multilinear polynomial, then it “is its own Hermite expansion”: $f(z) = \sum_{S \subseteq [n]} \widehat{f}(S) z^S = \sum_{S \subseteq [n]} \widehat{f}(S) h_S(z) = \sum_{\alpha_1, \dots, \alpha_n \leq 1} \widehat{f}(\alpha) h_\alpha(z).$

Proposition 36 The Hermite coefficients of $f \in L^2({\mathbb R}^n,\gamma)$ satisfy the formula $\widehat{f}(\alpha) = \langle f, h_\alpha \rangle,$ and for $f, g \in L^2({\mathbb R}^n,\gamma)$ we have the Plancherel formula $\langle f, g \rangle = \sum_{\alpha \in {\mathbb N}^n} \widehat{f}(\alpha)\widehat{g}(\alpha).$
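
For a concrete one-dimensional instance (my own illustration, not from the text), $f(z) = z^3 = 3h_1(z) + \sqrt{6}\,h_3(z)$, so the coefficient formula should give $\widehat{f}(1) = 3$, $\widehat{f}(3) = \sqrt{6}$, and Plancherel gives $\sum_\alpha \widehat{f}(\alpha)^2 = \mathop{\bf E}[\boldsymbol{z}^6] = 15$:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermeval, hermegauss

x, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)   # normalized Gaussian-expectation weights

def h(j, z):
    # normalized Hermite polynomial h_j = H_j / sqrt(j!)
    return hermeval(z, [0.0] * j + [1.0]) / sqrt(factorial(j))

f = x**3                     # f(z) = z^3 sampled at the quadrature nodes
coeffs = [float(np.sum(w * f * h(j, x))) for j in range(6)]
# coeffs recovers [0, 3, 0, sqrt(6), 0, 0]; sum of squares recovers 15
```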

From this we may deduce:

Proposition 37 For $f \in L^2({\mathbb R}^n,\gamma)$, the function $\mathrm{U}_\rho f$ has Hermite expansion $\mathrm{U}_\rho f = \sum_{\alpha \in {\mathbb N}^n} \rho^{|\alpha|} \widehat{f}(\alpha) h_\alpha$ and hence $\mathbf{Stab}_\rho[f] = \sum_{\alpha \in {\mathbb N}^n} \rho^{|\alpha|} \widehat{f}(\alpha)^2.$

Proof: Both statements follow from Proposition 36, with the first using $\widehat{\mathrm{U}_\rho f}(\alpha) = \langle \mathrm{U}_\rho f, h_\alpha \rangle = \bigl\langle \mathop{{\textstyle \sum}}_{\beta} \widehat{f}(\beta)\, \mathrm{U}_\rho h_\beta, h_\alpha \bigr\rangle = \mathop{{\textstyle \sum}}_\beta \widehat{f}(\beta) \langle \mathrm{U}_\rho h_\beta, h_\alpha \rangle = \rho^{|\alpha|} \widehat{f}(\alpha);$ we also used Proposition 33 and the fact that $\mathrm{U}_\rho$ is a contraction in $L^2({\mathbb R}^n,\gamma)$. $\Box$
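
The eigenrelation $\mathrm{U}_\rho h_j = \rho^j h_j$ underlying this proof can be observed pointwise, using the representation $(\mathrm{U}_\rho f)(z) = \mathop{\bf E}_{\boldsymbol{g}}[f(\rho z + \sqrt{1-\rho^2}\,\boldsymbol{g})]$ of $\mathrm{U}_\rho$ from the previous section (a numerical sketch of mine, not from the text):

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermeval, hermegauss

g, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)   # normalized Gaussian-expectation weights

def h(j, z):
    # normalized Hermite polynomial h_j = H_j / sqrt(j!)
    return hermeval(z, [0.0] * j + [1.0]) / sqrt(factorial(j))

def U(rho, f, z):
    # (U_rho f)(z) = E_g[f(rho*z + sqrt(1 - rho^2)*g)], g ~ N(0,1)
    return float(np.sum(w * f(rho * z + np.sqrt(1 - rho**2) * g)))

rho, z0 = 0.6, 1.3
lhs = U(rho, lambda t: h(3, t), z0)
rhs = rho**3 * h(3, z0)
# lhs agrees with rhs: U_rho h_3 = rho^3 h_3
```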

Remark 38 When $f : {\mathbb R}^n \to {\mathbb R}$ is a multilinear polynomial, this formula for $\mathrm{U}_\rho f$ agrees with the formula $f(\rho z)$ given in Fact 13.

Remark 39 In a sense it’s not very important to know the explicit formulas for the Hermite polynomials, \eqref{eqn:hermite-examples}, \eqref{eqn:prob-herm}; it’s usually enough just to know that the formula for $\mathrm{U}_\rho f$ from Proposition 37 holds.

Finally, by differentiating the formula in Proposition 37 at $\rho = 1$ we deduce the following formula for the Ornstein–Uhlenbeck operator (explaining why it’s sometimes called the number operator):

Proposition 40 For $f \in L^2({\mathbb R}^n, \gamma)$ in the domain of $\mathrm{L}$ we have $\mathrm{L} f = \sum_{\alpha \in {\mathbb N}^n} |\alpha| \widehat{f}(\alpha) h_\alpha.$

(Actually, in the exercises you are asked to formally justify this and the fact that $f$ is in the domain of $\mathrm{L}$ if and only if $\sum_{\alpha} |\alpha|^2 \widehat{f}(\alpha)^2 < \infty$.) For additional facts about Hermite polynomials, see the exercises.
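
Assuming the one-dimensional Ornstein–Uhlenbeck operator acts on smooth $f$ as $\mathrm{L} f(z) = z f'(z) - f''(z)$ (the sign convention consistent with $\mathrm{L} h_j = j h_j$), Proposition 40 can be illustrated on $H_3$ directly by coefficient manipulation (a sketch of mine, not from the text):

```python
import numpy as np
from numpy.polynomial import polynomial as P

H3 = np.array([0.0, -3.0, 0.0, 1.0])   # H_3(z) = z^3 - 3z, low-order coefficients first
d1 = P.polyder(H3)                     # H_3'(z)  = 3z^2 - 3
d2 = P.polyder(H3, 2)                  # H_3''(z) = 6z
LH3 = P.polysub(P.polymulx(d1), d2)    # L H_3 = z*H_3'(z) - H_3''(z)
# LH3 equals 3 * H3, matching the eigenvalue |alpha| = 3 of Proposition 40
```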
