Chapter 11 exercises

  1. Let $\mathcal{A}$ be the set of all functions $f : {\mathbb R}^n \to {\mathbb R}$ which are finite linear combinations of indicator functions of boxes. Prove that $\mathcal{A}$ is dense in $L^1({\mathbb R}^n, \gamma)$.
  2. Fill in proof details for the Gaussian Hypercontractivity Theorem.

  3. Prove Fact 13. (Cf. Exercise 2.21.)
  4. Show that $\mathrm{U}_{\rho_1} \mathrm{U}_{\rho_2} = \mathrm{U}_{\rho_1 \rho_2}$ for all $\rho_1, \rho_2 \in [-1,1]$. (Cf. Exercise 2.27.)
  5. Prove Proposition 16. (Hint: For $\rho \neq 0$, write $g(z) = \mathrm{U}_\rho f(z)$ and show that $g(z/\rho)$ is a smooth function using the relationship between convolution and derivatives.)
  6. (a) Prove Proposition 17. (Hint: First prove it for bounded continuous $f$; then make an approximation and use Proposition 15.)
     (b) Deduce more generally that for $f \in L^1({\mathbb R}^n,\gamma)$ the map $\rho \mapsto \mathrm{U}_\rho f$ is “strongly continuous” on $[0,1]$, meaning that for any $\rho \in [0,1]$ we have $\|\mathrm{U}_{\rho'} f - \mathrm{U}_\rho f\|_1 \to 0$ as $\rho' \to \rho$. (Hint: Use Exercise 4.)

  7. Complete the proof of Proposition 26 by establishing the case of general $n$.
  8. Complete the proof of Proposition 28 by establishing the case of general $n$.
  9. (a) Establish the alternative formula 11.2.(6) for the probabilists’ Hermite polynomials $H_j(z)$ given in Definition 29; equivalently, establish the formula \[ H_j(z) = (-1)^j \exp(\tfrac12 z^2) \cdot \left(\frac{d}{dz}\right)^j \exp(-\tfrac12 z^2). \] (Hint: Complete the square on the left-hand side of 11.2.(6); then differentiate $j$ times with respect to $t$ and evaluate at $0$.)
     (b) Establish the recursion \[ H_j(z) = (z - \tfrac{d}{dz}) H_{j-1}(z) \quad\iff\quad h_j(z) = \tfrac{1}{\sqrt{j}} \cdot (z - \tfrac{d}{dz}) h_{j-1}(z) \] for $j \in {\mathbb N}^+$, and hence the formula $H_j(z) = (z - \tfrac{d}{dz})^j 1$.
     (c) Show that $h_j(z)$ is an odd function of $z$ if $j$ is odd and an even function of $z$ if $j$ is even.
  10. (a) Establish the derivative formula for Hermite polynomials: \[ H_j'(z) = j \cdot H_{j-1}(z) \quad\iff\quad h_j'(z) = \sqrt{j} \cdot h_{j-1}(z). \]
     (b) By combining this with the other formula for $H_j'(z)$ implicit in Exercise 9(b), deduce the recursion \[ H_{j+1}(z) = zH_j(z) - jH_{j-1}(z). \]
     (c) Show that $H_j(z)$ satisfies the second-order differential equation \[ j H_j(z) = z H'_j(z) - H_j''(z). \] (It’s equivalent to say that $h_j(z)$ satisfies it.) Observe that this is consistent with Propositions 26 and 40 and says that $H_j$ (equivalently, $h_j$) is an eigenfunction of the Ornstein–Uhlenbeck operator $\mathrm{L}$, with eigenvalue $j$.
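     The Hermite-polynomial identities in the two preceding exercises are easy to spot-check numerically. A minimal sketch, assuming NumPy is available (its `hermite_e` basis polynomials coincide with the probabilists’ Hermite polynomials $H_j$ used here):

     ```python
     # Sanity check (not a proof) of the three-term recursion, the derivative formula,
     # and the second-order ODE for the probabilists' Hermite polynomials H_j.
     import numpy as np
     from numpy.polynomial import hermite_e as He

     def H(j, x):                     # H_j(x), via the HermiteE basis
         return He.hermeval(x, [0] * j + [1])

     def dH(j, x, m=1):               # m-th derivative of H_j at x
         return He.hermeval(x, He.hermeder([0] * j + [1], m))

     z = np.linspace(-3, 3, 13)
     for j in range(1, 6):
         assert np.allclose(H(j + 1, z), z * H(j, z) - j * H(j - 1, z))   # H_{j+1} = zH_j - jH_{j-1}
         assert np.allclose(dH(j, z), j * H(j - 1, z))                    # H_j' = j H_{j-1}
         assert np.allclose(j * H(j, z), z * dH(j, z) - dH(j, z, 2))      # j H_j = z H_j' - H_j''
     print("Hermite identities verified numerically for j = 1, ..., 5")
     ```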
  11. Prove that \[ H_j(x+y) = \sum_{k=0}^j \binom{j}{k} x^{j-k} H_k(y). \]
  12. (a) By equating both sides of 11.2.(4) with \[ \mathop{\bf E}_{\boldsymbol{g} \sim \mathrm{N}(0,1)}[\exp(t(z + i \boldsymbol{g}))] \] (where $i = \sqrt{-1}$), show that \[ H_j(z) = \mathop{\bf E}_{\boldsymbol{g} \sim \mathrm{N}(0,1)}[(z + i \boldsymbol{g})^j]. \]
     (b) Establish the explicit formulas \begin{align*} H_j(z) &= \sum_{k=0}^{\lfloor j/2 \rfloor} (-1)^{k} \binom{j}{2k} \mathop{\bf E}_{\boldsymbol{g} \sim \mathrm{N}(0,1)}[\boldsymbol{g}^{2k}] z^{j-2k} \\ &= j! \cdot \left(\frac{z^j}{0!! \cdot j!} - \frac{z^{j-2}}{2!! \cdot (j-2)!} + \frac{z^{j-4}}{4!!\cdot (j-4)!} - \frac{z^{j-6}}{6!! \cdot (j-6)!} + \cdots \right). \end{align*}
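     A quick numerical check of the explicit formula in part (b), again assuming NumPy and using $\mathop{\bf E}[\boldsymbol{g}^{2k}] = (2k-1)!!$:

     ```python
     # Compare the explicit sum  H_j(z) = sum_k (-1)^k C(j,2k) (2k-1)!! z^{j-2k}
     # against NumPy's probabilists' Hermite polynomials He_j (= H_j here).
     import numpy as np
     from math import comb
     from numpy.polynomial import hermite_e as He

     def double_factorial(m):         # m!!, with the convention (-1)!! = 0!! = 1
         return 1 if m <= 0 else m * double_factorial(m - 2)

     z = np.linspace(-2.5, 2.5, 11)
     for j in range(8):
         explicit = sum((-1) ** k * comb(j, 2 * k) * double_factorial(2 * k - 1) * z ** (j - 2 * k)
                        for k in range(j // 2 + 1))
         assert np.allclose(explicit, He.hermeval(z, [0] * j + [1]))
     print("explicit formula matches H_j for j = 0, ..., 7")
     ```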
  13. (a) Establish the formula \[ \mathop{\bf E}[\|\nabla f\|^2] = \sum_{\alpha \in {\mathbb N}^n} |\alpha| \widehat{f}(\alpha)^2 \] for all $f \in L^2({\mathbb R}^n, \gamma)$ (or at least for all $n$-variate polynomials $f$).
     (b) For $f \in L^2({\mathbb R}^n, \gamma)$, establish the formula \[ \sum_{i=1}^n \mathop{\bf E}[\mathop{\bf Var}_{\boldsymbol{z}_i}[f]] = \sum_{\alpha \in {\mathbb N}^n} (\# \alpha)\widehat{f}(\alpha)^2. \]
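     To see concretely what the formula in part (a) says in dimension $n = 1$, here is a numerical sketch (assuming NumPy) that computes expectations by Gauss–Hermite quadrature and the Hermite coefficients $\widehat{f}(j) = \mathop{\bf E}[f\, h_j]$ directly for a sample polynomial:

     ```python
     # Check E[|f'|^2] = sum_j j * fhat(j)^2 in dimension n = 1 for a sample polynomial f,
     # where fhat(j) = E[f * h_j] and h_j = H_j / sqrt(j!).
     import numpy as np
     from math import factorial, sqrt, pi
     from numpy.polynomial import hermite_e as He

     x, w = He.hermegauss(40)                  # nodes/weights for the weight e^{-x^2/2}
     w = w / sqrt(2 * pi)                      # normalize so that sum(w * g(x)) ~ E[g(N(0,1))]

     f = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0])     # f(z) = 1 - 2z + 0.5z^2 + 3z^3
     lhs = np.sum(w * f.deriv()(x) ** 2)                      # E[|f'|^2]

     fhat = [np.sum(w * f(x) * He.hermeval(x, [0] * j + [1]) / sqrt(factorial(j)))
             for j in range(6)]                               # Hermite coefficients fhat(0..5)
     rhs = sum(j * c ** 2 for j, c in enumerate(fhat))
     print(lhs, rhs)                                          # both should equal 212.0
     ```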
  14. Show that for all $j \in {\mathbb N}$ and all $z \in {\mathbb R}$ we have \[ \tbinom{n}{j}^{-1/2} \cdot K^{(n)}_j\left(\frac{n}{2} - z \frac{\sqrt{n}}{2}\right) \xrightarrow{n \to \infty} h_j(z), \] where $K^{(n)}_j$ is the Kravchuk polynomial of degree $j$ from Exercise 2.59 (with its dependence on $n$ indicated in the superscript).
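     A numerical illustration of this limit, under the assumption that the Kravchuk convention from Exercise 2.59 is $K^{(n)}_j(x) = \sum_{i} (-1)^i \binom{x}{i}\binom{n-x}{j-i}$ (with binomial coefficients extended to real arguments); SciPy is assumed available:

     ```python
     # Watch binom(n,j)^{-1/2} * K_j^{(n)}(n/2 - z*sqrt(n)/2) approach h_j(z) as n grows.
     import numpy as np
     from math import factorial, sqrt
     from scipy.special import binom
     from numpy.polynomial import hermite_e as He

     def kravchuk(n, j, x):           # assumed convention; scipy's binom handles real x
         return sum((-1) ** i * binom(x, i) * binom(n - x, j - i) for i in range(j + 1))

     z = np.linspace(-2, 2, 9)
     j = 3
     h_j = He.hermeval(z, [0] * j + [1]) / sqrt(factorial(j))          # h_j(z)
     for n in (10 ** 2, 10 ** 4, 10 ** 6):
         approx = binom(n, j) ** -0.5 * kravchuk(n, j, n / 2 - z * sqrt(n) / 2)
         print(n, np.max(np.abs(approx - h_j)))                        # error should shrink with n
     ```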
  15. Recall the definition 11.4.(1) of the Gaussian Minkowski content of the boundary $\partial A$ of a set $A \subseteq {\mathbb R}^n$. Sometimes the following very similar definition is also proposed for the Gaussian surface area of $A$: \[ M(A) = \liminf_{\epsilon \to 0^+} \frac{\mathrm{vol}_\gamma(\{z : \mathrm{dist}(z, A) < \epsilon\}) - \mathrm{vol}_\gamma(A)}{\epsilon}. \] Consider the following subsets of ${\mathbb R}$: \[ A_1 = \emptyset, \quad A_2 = \{0\}, \quad A_3 = (-\infty, 0), \quad A_4 = (-\infty, 0], \quad A_5 = {\mathbb R} \setminus \{0\}, \quad A_6 = {\mathbb R}. \]

     (a) Show that \begin{align*} \gamma^+(A_1) &= 0, & M(A_1) &= 0, & \text{surf}_\gamma(A_1) &= 0, \\ \gamma^+(A_2) &= \tfrac{1}{\sqrt{2\pi}}, & M(A_2) &= \sqrt{\tfrac{2}{\pi}}, & \text{surf}_\gamma(A_2) &= 0, \\ \gamma^+(A_3) &= \tfrac{1}{\sqrt{2\pi}}, & M(A_3) &= \tfrac{1}{\sqrt{2\pi}}, & \text{surf}_\gamma(A_3) &= \tfrac{1}{\sqrt{2\pi}}, \\ \gamma^+(A_4) &= \tfrac{1}{\sqrt{2\pi}}, & M(A_4) &= \tfrac{1}{\sqrt{2\pi}}, & \text{surf}_\gamma(A_4) &= \tfrac{1}{\sqrt{2\pi}}, \\ \gamma^+(A_5) &= \tfrac{1}{\sqrt{2\pi}}, & M(A_5) &= 0, & \text{surf}_\gamma(A_5) &= 0, \\ \gamma^+(A_6) &= 0, & M(A_6) &= 0, & \text{surf}_\gamma(A_6) &= 0. \end{align*}
     (b) For $A \subseteq {\mathbb R}^n$, the essential boundary (or measure-theoretic boundary) of $A$ is defined to be \[ \partial_* A = \left\{ x \in {\mathbb R}^n : \lim_{\delta \to 0^+} \frac{\mathrm{vol}_\gamma(A \cap B_\delta(x))}{\mathrm{vol}_\gamma(B_\delta(x))} \neq 0, 1\right\}, \] where $B_\delta(x)$ denotes the ball of radius $\delta$ centered at $x$. In other words, $\partial_* A$ is the set of points where the “local density of $A$” is strictly between $0$ and $1$ (or where the limit fails to exist). Show that if we replace $\partial A$ with $\partial_* A$ in the definition 11.4.(1) of the Gaussian Minkowski content of the boundary of $A$, then we have the identity $\gamma^+(\partial_* A_i) = \text{surf}_\gamma(A_i)$ for all $1 \leq i \leq 6$. Remark: In fact, the equality $\gamma^+(\partial_* A) = \text{surf}_\gamma(A)$ is known to hold for every set $A$ such that $\partial_* A$ is “rectifiable”.
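     For intuition on part (a), note that the $\epsilon$-neighborhood of $A_2 = \{0\}$ has Gaussian volume $\Phi(\epsilon) - \Phi(-\epsilon) \approx 2\epsilon \cdot \tfrac{1}{\sqrt{2\pi}}$, which is consistent with the tabulated values $\gamma^+(A_2) = \tfrac{1}{\sqrt{2\pi}}$ and $M(A_2) = \sqrt{2/\pi}$. A quick numerical sketch (assuming SciPy):

     ```python
     # The two notions differ by a factor of 2 on A_2 = {0}: compare the ratio of the
     # neighborhood volume to 2*eps (Minkowski-content style) and to eps (M-style).
     from math import sqrt, pi
     from scipy.stats import norm

     for eps in (1e-1, 1e-3, 1e-5):
         vol = norm.cdf(eps) - norm.cdf(-eps)      # Gaussian volume of (-eps, eps)
         print(eps, vol / (2 * eps), vol / eps)
     print("targets:", 1 / sqrt(2 * pi), sqrt(2 / pi))
     ```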
  16. Justify the formula for the Gaussian surface area of unions of intervals stated in Example 50.
  17. (a) Let $B_r \subset {\mathbb R}^n$ denote the ball of radius $r > 0$ centered at the origin. Show that \begin{equation} \label{eqn:balls-gsa} \text{surf}_\gamma(B_r) = \frac{n}{2^{n/2} (n/2)!} r^{n-1} e^{-r^2/2}. \end{equation}
     (b) Show that \eqref{eqn:balls-gsa} is maximized when $r = \sqrt{n-1}$. (In case $n = 1$, this should be interpreted as $r \to 0^+$.)
     (c) Let $S(n)$ denote this maximizing value, i.e., the value of \eqref{eqn:balls-gsa} with $r = \sqrt{n-1}$. Show that $S(n)$ decreases from $\sqrt{\frac{2}{\pi}}$ to a limit of $\frac{1}{\sqrt{\pi}}$ as $n$ increases from $1$ to $\infty$.
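     A numerical sketch of parts (b) and (c), assuming SciPy and working with the logarithm of \eqref{eqn:balls-gsa} to avoid overflow at large $n$:

     ```python
     # Locate the maximizing radius of surf_gamma(B_r) on a grid and evaluate
     # S(n) at r = sqrt(n-1); S(n) should decrease from sqrt(2/pi) toward 1/sqrt(pi).
     import numpy as np
     from math import log, pi, sqrt
     from scipy.special import gammaln

     def log_surf(n, r):              # log of  n r^{n-1} e^{-r^2/2} / (2^{n/2} (n/2)!)
         return log(n) + (n - 1) * np.log(r) - r ** 2 / 2 - (n / 2) * log(2) - gammaln(n / 2 + 1)

     for n in (2, 3, 10, 100, 1000):
         r = np.linspace(1e-9, 3 * sqrt(n), 300001)
         r_star = r[np.argmax(log_surf(n, r))]                # numerically maximizing radius
         S_n = float(np.exp(log_surf(n, sqrt(n - 1))))        # S(n)
         print(n, round(r_star, 3), round(sqrt(n - 1), 3), round(S_n, 5))
     print("endpoints:", round(sqrt(2 / pi), 5), "->", round(1 / sqrt(pi), 5))
     ```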
  18. (a) For $f \in L^2({\mathbb R}^n, \gamma)$, show that $\mathrm{L} f$ is defined, i.e., \[ \lim_{t \to 0^+} \frac{f - \mathrm{U}_{e^{-t}} f}{t} \] exists in $L^2({\mathbb R}^n,\gamma)$, if and only if $\sum_{\alpha \in {\mathbb N}^n} |\alpha|^2 \widehat{f}(\alpha)^2 < \infty$. (Hint: Proposition 37.)
     (b) Formally justify Proposition 40.
     (c) Let $f \in L^2({\mathbb R}^n, \gamma)$. Show that $\mathrm{U}_\rho f$ is in the domain of $\mathrm{L}$ for any $\rho \in (-1,1)$.

    Remark: It can be shown that the $\mathcal{C}^3$ hypothesis in Propositions 26 and 28 is not necessary (provided the derivatives are interpreted in the distributional sense); see, e.g., Bogachev [Bog98] (Chapter 1) for more details.

  19. This exercise is concerned with (a generalization of) the function appearing in Borell’s Isoperimetric Theorem.

     Definition 76. For $\rho \in [-1,1]$ we define the Gaussian quadrant probability function $\Lambda_\rho : [0,1]^2 \to [0,1]$ by \[ \Lambda_\rho(\alpha,\beta) = \mathop{\bf Pr}_{\substack{(\boldsymbol{z}, \boldsymbol{z}') \text{ $\rho$-correlated} \\ \text{standard Gaussians}}}[\boldsymbol{z} \leq t, \boldsymbol{z}' \leq t'], \] where $t$ and $t'$ are defined by $\Phi(t) = \alpha$, $\Phi(t') = \beta$. This is a slight reparametrization of the bivariate Gaussian cdf. We also use the shorthand notation \[ \Lambda_\rho(\alpha) = \Lambda_\rho(\alpha, \alpha), \] which we encountered in Borell’s Isoperimetric Theorem (and also in Exercises 5.33 and 9.24, with a different, but equivalent, definition).

     (a) Confirm the statement from Borell’s Isoperimetric Theorem that for every halfspace $H \subseteq {\mathbb R}^n$ with $\mathrm{vol}_\gamma(H) = \alpha$ we have $\mathbf{Stab}_\rho[1_H] = \Lambda_\rho(\alpha)$.
     (b) Verify the following formulas: \begin{align*} \Lambda_\rho(\alpha,\beta) &= \Lambda_\rho(\beta,\alpha), \\ \Lambda_0(\alpha,\beta) &= \alpha \beta, \\ \Lambda_1(\alpha, \beta) &= \min(\alpha, \beta), \\ \Lambda_{-1}(\alpha,\beta) &= \max(\alpha + \beta - 1, 0), \\ \Lambda_\rho(\alpha, 0) &= \Lambda_\rho(0, \alpha) = 0, \\ \Lambda_\rho(\alpha, 1) &= \Lambda_\rho(1, \alpha) = \alpha, \\ \Lambda_{-\rho}(\alpha, \beta) &= \alpha - \Lambda_{\rho}(\alpha, 1-\beta) = \beta - \Lambda_{\rho}(1-\alpha, \beta), \\ \Lambda_\rho(\tfrac{1}{2}, \tfrac{1}{2}) &= \tfrac{1}{2} - \tfrac{1}{2}\tfrac{\arccos \rho}{\pi}. \end{align*}
     (c) Prove that $\Lambda_\rho(\alpha, \beta) \gtrless \alpha \beta$ according as $\rho \gtrless 0$, for all $0 < \alpha, \beta < 1$.
     (d) Establish \[ \frac{\partial}{\partial\alpha} \Lambda_\rho(\alpha, \beta) = \Phi\left(\frac{t' - \rho t}{\sqrt{1-\rho^2}}\right), \quad \frac{\partial}{\partial\beta} \Lambda_\rho(\alpha, \beta) = \Phi\left(\frac{t - \rho t'}{\sqrt{1-\rho^2}}\right), \] where $t = \Phi^{-1}(\alpha)$, $t' = \Phi^{-1}(\beta)$ as usual.
     (e) Show that \[ |\Lambda_\rho(\alpha,\beta) - \Lambda_\rho(\alpha', \beta')| \leq |\alpha - \alpha'| + |\beta - \beta'|, \] and hence $\Lambda_\rho(\alpha)$ is a $2$-Lipschitz function of $\alpha$.
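     A numerical spot-check of a few of the identities in parts (b) and (d), as a sketch assuming SciPy (whose bivariate normal CDF gives $\Lambda_\rho$ directly from Definition 76):

     ```python
     # Check Sheppard's formula, one reflection identity, and the partial-derivative formula.
     import numpy as np
     from math import acos, pi, sqrt
     from scipy.stats import norm, multivariate_normal

     def Lam(rho, a, b):
         t, tp = norm.ppf(a), norm.ppf(b)
         return multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([t, tp])

     rho, a, b = 0.6, 0.3, 0.7
     print(Lam(rho, 0.5, 0.5), 0.5 - acos(rho) / (2 * pi))          # Lambda_rho(1/2, 1/2)
     print(Lam(-rho, a, b), a - Lam(rho, a, 1 - b))                 # Lambda_{-rho} identity
     t, tp = norm.ppf(a), norm.ppf(b)
     h = 1e-3                                                       # finite difference in alpha
     print((Lam(rho, a + h, b) - Lam(rho, a - h, b)) / (2 * h),
           norm.cdf((tp - rho * t) / sqrt(1 - rho ** 2)))           # should agree to ~3 decimals
     ```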
  20. Show that the general-$n$ case of Bobkov’s Inequality follows by induction from the $n = 1$ case.
  21. Let $f : \{-1,1\}^n \to \{-1,1\}$ and let $\alpha = \min\{\mathop{\bf Pr}[f = 1], \mathop{\bf Pr}[f=-1]\}$. Deduce $\mathbf{I}[f] \geq 4 \mathcal{U}(\alpha)^2$ from Bobkov’s Inequality. Show that this recovers the edge-isoperimetric inequality for the Boolean cube (Theorem 2.38) up to a constant factor. (Hint: For the latter problem, use Proposition 5.24.)
  22. Let $d_1, d_2 \in {\mathbb N}$. Suppose we take a simple random walk on ${\mathbb Z}$, starting from the origin and moving by $\pm 1$ at each step with equal probability. Show that the expected time it takes to first reach either $-d_1$ or $+d_2$ is $d_1d_2$.
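     A quick Monte Carlo check of this fact, as a plain-Python sketch:

     ```python
     # Estimate the expected time for a simple random walk started at 0 to first hit -d1 or +d2.
     import random

     def hitting_time(d1, d2):
         pos, steps = 0, 0
         while -d1 < pos < d2:
             pos += random.choice((-1, 1))
             steps += 1
         return steps

     d1, d2, trials = 3, 5, 20000
     avg = sum(hitting_time(d1, d2) for _ in range(trials)) / trials
     print(avg, "vs", d1 * d2)        # the average should be close to 15
     ```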
  23. Prove Claim 54. (Hint: For the function $V_y(\tau)$ appearing in the proof of Bobkov’s Two-Point Inequality, you’ll want to establish that $V_y^{\prime \prime \prime}(0) = 0$ and that $V_y^{\prime \prime \prime \prime}(0) = \frac{2+10\mathcal{U}'(y)^2}{\mathcal{U}(y)^3} > 0$.)
  24. Prove Theorem 55. (Hint: Have the random walk start at $\boldsymbol{y}_0 = a \pm \rho b$ with equal probability, and define $\boldsymbol{z}_t = \|(\mathcal{U}(\boldsymbol{y}_t), \rho b, \tau \sqrt{t})\|$. You’ll need the full generality of Exercise 22.)
  25. Justify Remark 41 (in the general-volume context) by showing that Borell’s Isoperimetric Theorem for all functions in $K = \{f : {\mathbb R}^n \to [0,1] \mid \mathop{\bf E}[f] = \alpha\}$ can be deduced from the case of functions in $\partial K = \{ f : {\mathbb R}^n \to \{0,1\} \mid \mathop{\bf E}[f] = \alpha\}$. (Hint: As stated in the remark, the intuition is that $\sqrt{\mathbf{Stab}_\rho[f]}$ is a norm and that $K$ is a convex set whose extreme points are $\partial K$. To make this precise, you may want to use Exercise 1.)
  26. The goal of this exercise and Exercises 27–29 is to give the proof of Borell’s Isoperimetric Theorem due to Mossel and Neeman [MN12]. In fact, their proof gives the following natural “two-set” generalization of the theorem (Borell’s original work [Bor85] proved something even more general):

     Two-Set Borell Isoperimetric Theorem. Fix $\rho \in (0,1)$ and $\alpha, \beta \in [0,1]$. Then for any $A, B \subseteq {\mathbb R}^n$ with $\mathrm{vol}_\gamma(A) = \alpha$, $\mathrm{vol}_\gamma(B) = \beta$, \begin{equation} \label{eqn:two-set-borell1} \mathop{\bf Pr}_{\substack{(\boldsymbol{z}, \boldsymbol{z}') \text{ $\rho$-correlated} \\ \text{$n$-dimensional Gaussians}}}[\boldsymbol{z} \in A, \boldsymbol{z}' \in B] \leq \Lambda_\rho(\alpha,\beta). \end{equation}

     By definition of $\Lambda_\rho(\alpha,\beta)$, equality holds if $A$ and $B$ are parallel halfspaces. Taking $\beta = \alpha$ and $B = A$ in this theorem gives Borell’s Isoperimetric Theorem as stated in Section 3 (in the case of range $\{0,1\}$, at least, which is equivalent by Exercise 25). It’s quite natural to guess that parallel halfspaces should maximize the “joint Gaussian noise stability” quantity on the left of \eqref{eqn:two-set-borell1}, especially in light of Remark 2 from Chapter 10.1 concerning the analogous Generalized Small-Set Expansion Theorem. Just as our proof of the Small-Set Expansion Theorem passed through the Two-Function Hypercontractivity Theorem to facilitate induction, so too does the Mossel–Neeman proof pass through the following “two-function version” of Borell’s Isoperimetric Theorem:

     Two-Function Borell Isoperimetric Theorem. Fix $\rho \in (0,1)$ and let $f, g \in L^2({\mathbb R}^n, \gamma)$ have range $[0,1]$. Then \[ \mathop{\bf E}_{\substack{(\boldsymbol{z}, \boldsymbol{z}') \text{ $\rho$-correlated} \\ \text{$n$-dimensional Gaussians}}}[\Lambda_\rho(f(\boldsymbol{z}),g(\boldsymbol{z}'))] \leq \Lambda_\rho\left(\mathop{\bf E}[f], \mathop{\bf E}[g]\right). \]

     (a) Show that the Two-Function Borell Isoperimetric Theorem implies the Two-Set Borell Isoperimetric Theorem and the Borell Isoperimetric Theorem (for functions with range $[0,1]$). (Hint: You may want to use facts from Exercise 19.)
     (b) Show conversely that the Two-Function Borell Isoperimetric Theorem (in dimension $n$) is implied by the Two-Set Borell Isoperimetric Theorem (in dimension $n+1$). (Hint: Given $f : {\mathbb R}^n \to [0,1]$, define $A \subseteq {\mathbb R}^{n+1}$ by $(z,t) \in A \iff f(z) \geq \Phi(t)$.)
     (c) Let $\ell_1,\ell_2 : {\mathbb R}^n \to {\mathbb R}$ be defined by $\ell_i(z) = \langle a, z \rangle + b_i$ for some $a \in {\mathbb R}^n$, $b_1, b_2 \in {\mathbb R}$. Show that equality occurs in the Two-Function Borell Isoperimetric Theorem if $f(z) = 1_{\ell_1(z) \geq 0}$, $g(z) = 1_{\ell_2(z) \geq 0}$ or if $f(z) = \Phi(\ell_1(z))$, $g(z) = \Phi(\ell_2(z))$.
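     A Monte Carlo illustration of the equality case in part (c) with indicator functions (a sketch assuming NumPy and SciPy; the particular $a$, $b_1$, $b_2$, $\rho$ below are arbitrary choices): since $\Lambda_\rho(0,\cdot) = \Lambda_\rho(\cdot,0) = 0$ and $\Lambda_\rho(1,1) = 1$, the left-hand side reduces to $\mathop{\bf Pr}[\ell_1(\boldsymbol{z}) \geq 0,\, \ell_2(\boldsymbol{z}') \geq 0]$, which should match $\Lambda_\rho(\mathop{\bf E}[f], \mathop{\bf E}[g])$:

     ```python
     # For parallel halfspaces, E[Lambda_rho(f(z), g(z'))] = Pr[l1(z) >= 0, l2(z') >= 0]
     # should equal Lambda_rho(E[f], E[g]); here E[f] = Phi(b1/||a||), E[g] = Phi(b2/||a||).
     import numpy as np
     from scipy.stats import norm, multivariate_normal

     rng = np.random.default_rng(0)
     n, rho, N = 3, 0.4, 400000
     a = np.array([1.0, -2.0, 0.5]); b1, b2 = 0.3, -0.8       # arbitrary illustrative choices

     z = rng.standard_normal((N, n))                          # rho-correlated Gaussian pairs
     zp = rho * z + np.sqrt(1 - rho ** 2) * rng.standard_normal((N, n))
     lhs = np.mean((z @ a + b1 >= 0) & (zp @ a + b2 >= 0))

     alpha, beta = norm.cdf(b1 / np.linalg.norm(a)), norm.cdf(b2 / np.linalg.norm(a))
     t, tp = norm.ppf(alpha), norm.ppf(beta)
     rhs = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([t, tp])
     print(lhs, rhs)                                          # agree up to Monte Carlo error
     ```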
  27. Show that the inequality in the Two-Function Borell Isoperimetric Theorem “tensorizes” in the sense that if it holds for $n = 1$, then it holds for all $n$. Your proof should not use any property of the function $\Lambda_\rho$, nor any property of the $\rho$-correlated $n$-dimensional Gaussian distribution besides the fact that it’s a product distribution. (Hint: Induction by restrictions as in the proof of the Two-Function Hypercontractivity Induction Theorem from Chapter 9.4.)
