Chapter 11 exercises, continued

  1. Let $I_1, I_2 \subseteq {\mathbb R}$ be open intervals and let $\mathcal{F} : I_1 \times I_2 \to {\mathbb R}$ be $\mathcal{C}^2$. For $\rho \in {\mathbb R}$, define the matrix \[ H_\rho\mathcal{F} = (H\mathcal{F}) \circ \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}, \] where $H \mathcal{F}$ denotes the Hessian of $\mathcal{F}$ and $\circ$ is the entrywise (Hadamard) product. We say that $\mathcal{F}$ is $\rho$-concave if $H_\rho \mathcal{F}$ is everywhere negative semidefinite. Note that the $\rho = 1$ case corresponds to the usual notion of concavity, and the $\rho = 0$ case corresponds to concavity separately along the two coordinates. The goal of this exercise is to show that the Gaussian quadrant probability function $\Lambda_\rho$ is $\rho$-concave for all $\rho \in (0,1)$.
    1. Extending Exercise 19(d), show that for any $\rho \in (-1,1)$, \begin{align*} \frac{d^2}{d\alpha^2} \Lambda_\rho(\alpha, \beta) &= -\frac{\rho}{\sqrt{1-\rho^2}} \cdot \frac{1}{\varphi(t)} \cdot \varphi\left(\frac{t' - \rho t}{\sqrt{1-\rho^2}}\right), \end{align*} where $t = \Phi^{-1}(\alpha)$ and $t' = \Phi^{-1}(\beta)$ as in Exercise 19, and deduce a similar formula for $\frac{d^2}{d\beta^2} \Lambda_\rho(\alpha, \beta)$.
    2. Show that \begin{align*} \frac{d^2}{d\alpha\,d\beta} \Lambda_\rho(\alpha, \beta) &= \frac{1}{\sqrt{1-\rho^2}} \cdot \frac{1}{\varphi(t')} \cdot \varphi\left(\frac{t' - \rho t}{\sqrt{1-\rho^2}}\right), \end{align*} and deduce a similar (in fact, equal) formula for $\frac{d^2}{d\beta\,d\alpha} \Lambda_\rho(\alpha, \beta)$.
    3. Show that $\det (H_\rho \Lambda_\rho) = 0$ on all of $(0,1)^2$.
    4. Show that if $\rho \in (0,1)$, then $\frac{d^2}{d\alpha^2} \Lambda_\rho,\ \frac{d^2}{d\beta^2} \Lambda_\rho < 0$ on $(0,1)^2$. Deduce that $\Lambda_\rho$ is $\rho$-concave.
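The formulas above are easy to spot-check numerically. Here is a rough sanity check (my own addition, not part of the exercise; it assumes Python with NumPy/SciPy): it compares the closed form for $\frac{d^2}{d\alpha^2} \Lambda_\rho$ against a finite difference of $\Lambda_\rho$, then verifies that $H_\rho \Lambda_\rho$ has zero determinant and negative diagonal, hence is negative semidefinite.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def Lambda(rho, alpha, beta):
    # Lambda_rho(alpha, beta) = Pr[z <= t, z' <= t'] for rho-correlated standard
    # Gaussians, with t = Phi^{-1}(alpha), t' = Phi^{-1}(beta); computed by
    # conditioning on z and doing 1-D quadrature.
    t, tp = norm.ppf(alpha), norm.ppf(beta)
    s = np.sqrt(1 - rho**2)
    return quad(lambda z: norm.pdf(z) * norm.cdf((tp - rho*z) / s),
                -np.inf, t, epsabs=1e-13)[0]

rho, alpha, beta = 0.6, 0.3, 0.7
t, tp = norm.ppf(alpha), norm.ppf(beta)
s = np.sqrt(1 - rho**2)

# The closed forms claimed in parts 1 and 2:
d_aa = -(rho / s) * norm.pdf((tp - rho*t) / s) / norm.pdf(t)
d_bb = -(rho / s) * norm.pdf((t - rho*tp) / s) / norm.pdf(tp)
d_ab = (1 / s) * norm.pdf((tp - rho*t) / s) / norm.pdf(tp)

# Check d_aa against a central second difference of Lambda in alpha:
h = 1e-3
num = (Lambda(rho, alpha + h, beta) - 2 * Lambda(rho, alpha, beta)
       + Lambda(rho, alpha - h, beta)) / h**2
print(d_aa, num)          # agree to several decimal places

# Parts 3 and 4: the Hadamard product with [[1, rho], [rho, 1]] kills the
# determinant, while the diagonal stays strictly negative.
H_rho = np.array([[d_aa, rho * d_ab], [rho * d_ab, d_bb]])
print(np.linalg.det(H_rho), d_aa < 0, d_bb < 0)   # ~0, True, True
```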
  2. This exercise is devoted to Mossel and Neeman’s proof [MN12] of the Two-Function Borell Isoperimetric Theorem in the case $n = 1$. For another approach, see Exercise 30. By Exercise 27, this is sufficient to establish the case of general $n$. (Actually, the proof in this exercise works essentially verbatim in the general $n$ case, but we stick to $n = 1$ for simplicity.)
    1. More generally, we intend to prove that for $f, g : {\mathbb R} \to [0,1]$ (with $\rho \in (0,1)$ fixed), \[ \lambda(s) = \mathop{\bf E}_{\substack{(\boldsymbol{z}, \boldsymbol{z}') \text{ $\rho$-correlated} \\ \text{standard Gaussians}}}[\Lambda_\rho(\mathrm{U}_s f(\boldsymbol{z}),\mathrm{U}_s g(\boldsymbol{z}'))] \] is a nonincreasing function of $0 < s < 1$ (cf. Theorem 55). Obtain the desired conclusion by taking $s \to 0^+$ and $s \to 1^-$. (Hint: You’ll need Exercises 6 and 19(e).)
    2. Write $f_s = \mathrm{U}_s f$, $g_s = \mathrm{U}_s g$ for brevity, and write $\partial_i \Lambda_\rho$ ($i = 1,2$) for the partial derivatives of $\Lambda_\rho$. Also let $\boldsymbol{h}_1, \boldsymbol{h}_2$ denote independent standard Gaussians. Use the Chain Rule and Proposition 27 to establish \begin{align} s \cdot \lambda'(s) = &\mathop{\bf E}[(\partial_1 \Lambda_\rho)(f_s(\boldsymbol{h}_1), g_s (\rho \boldsymbol{h}_1 + \sqrt{1-\rho^2} \boldsymbol{h}_2)) \cdot \mathrm{L} f_s (\boldsymbol{h}_1)] \label{eqn:first-mn-deriv}\\ {}+ &\mathop{\bf E}[(\partial_2 \Lambda_\rho)(f_s(\rho \boldsymbol{h}_2 + \sqrt{1-\rho^2} \boldsymbol{h}_1), g_s (\boldsymbol{h}_2)) \cdot \mathrm{L} g_s (\boldsymbol{h}_2)]. \label{eqn:second-mn-deriv} \end{align}
    3. Use Proposition 28 to show that the first expectation \eqref{eqn:first-mn-deriv} equals \[ \mathop{\bf E}[(\partial_{11} \Lambda_\rho)(f_s, g_s) \cdot (f_s')^2 + \rho \cdot (\partial_{21} \Lambda_\rho)(f_s, g_s) \cdot f_s' \cdot g_s'], \] where $f_s, f_s'$ are evaluated at $\boldsymbol{h}_1$ and $g_s, g_s'$ are evaluated at $\rho \boldsymbol{h}_1 + \sqrt{1-\rho^2}\boldsymbol{h}_2$. Give a similar formula for \eqref{eqn:second-mn-deriv}.
    4. Deduce that \[ s \cdot \lambda'(s) = \mathop{\bf E}_{\substack{(\boldsymbol{z}, \boldsymbol{z}') \text{ $\rho$-correlated} \\ \text{standard Gaussians}}}\left[\begin{bmatrix} f_s'(\boldsymbol{z}) & g_s'(\boldsymbol{z}') \end{bmatrix} \cdot (H_\rho \Lambda_\rho)(f_s(\boldsymbol{z}), g_s(\boldsymbol{z}')) \cdot \begin{bmatrix} f_s'(\boldsymbol{z}) \\ g_s'(\boldsymbol{z}') \end{bmatrix}\right], \] where $H_\rho$ is as in Exercise 28, and conclude that indeed $\lambda$ is a nonincreasing function.
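For readers who like to experiment, here is a small Monte Carlo sketch of part 1's claim that $\lambda$ is nonincreasing in $s$ (my own addition). The test pair $f(z) = \Phi(z)$, $g(z) = \Phi(-z)$ is an illustrative choice, not one from the text; it is convenient because $\mathrm{U}_s \Phi(z) = \Phi(sz/\sqrt{2-s^2})$ in closed form, so no inner integral is needed.

```python
import numpy as np
from scipy.stats import multivariate_normal

rho = 0.5
rng = np.random.default_rng(0)
N = 4000
h1, h2 = rng.standard_normal(N), rng.standard_normal(N)   # reused for every s
z, zp = h1, rho*h1 + np.sqrt(1 - rho**2)*h2                # (z, z') rho-correlated
mvn = multivariate_normal([0, 0], [[1, rho], [rho, 1]])

def lam(s):
    # For f = Phi, g = Phi(-.): Phi^{-1}(U_s f(z)) = s*z/sqrt(2 - s^2), etc.
    t, tp = s*z/np.sqrt(2 - s**2), -s*zp/np.sqrt(2 - s**2)
    # Lambda_rho(alpha, beta) is the bivariate normal cdf evaluated at (t, t'):
    return mvn.cdf(np.column_stack([t, tp])).mean()

for s in [0.05, 0.25, 0.5, 0.75, 0.95]:
    print(s, round(lam(s), 4))   # starts near Lambda_rho(1/2,1/2) and decreases
```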
    1. Suppose the Two-Function Borell Isoperimetric Theorem were to hold for $1$-bit functions, i.e., for $f, g : \{-1,1\} \to [0,1]$. Then the easy induction of Exercise 27 would extend the result to $n$-bit functions $f, g : \{-1,1\}^n \to [0,1]$; in turn, this would yield the Two-Function Borell Isoperimetric Theorem for $1$-dimensional Gaussian functions (i.e., Exercise 29), by the usual Central Limit Theorem argument. Show, however, that dictator functions provide a counterexample to a potential “$1$-bit Two-Function Borell Isoperimetric Theorem” (see the numerical check following this exercise).
    2. Nevertheless, the idea can be salvaged by proving a weakened version of the inequality for $1$-bit functions that has an “error term” that is a superlinear function of $f$ and $g$’s “influences”. Fix $\rho \in (0,1)$ and some small $\epsilon > 0$. Let $f, g : \{-1,1\} \to [\epsilon, 1-\epsilon]$. Show that \[ \mathop{\bf E}_{\substack{({\boldsymbol{x}}, {\boldsymbol{x}}') \\ \text{$\rho$-correlated}}}[\Lambda_\rho(f({\boldsymbol{x}}), g({\boldsymbol{x}}'))] \leq \Lambda_\rho(\mathop{\bf E}[f], \mathop{\bf E}[g]) + C_{\rho, \epsilon}\cdot (\mathop{\bf E}[|\mathrm{D}_1 f|^3] + \mathop{\bf E}[|\mathrm{D}_1 g|^3]), \] where $C_{\rho,\epsilon}$ is a constant depending only on $\rho$ and $\epsilon$. (Hint: Perform a $2$nd-order Taylor expansion of $\Lambda_\rho$ around $(\mathop{\bf E}[f], \mathop{\bf E}[g])$; in expectation, the quadratic term should be \[ \begin{bmatrix} \mathrm{D}_1 f & \mathrm{D}_1 g \end{bmatrix} \cdot (H_\rho \Lambda_\rho)(\mathop{\bf E}[f],\mathop{\bf E}[g]) \cdot \begin{bmatrix} \mathrm{D}_1 f \\ \mathrm{D}_1 g \end{bmatrix}. \] As in Exercise 29, show this quantity is nonpositive.)
    3. Extend the previous result by induction to obtain the following theorem of De, Mossel, and Neeman [DMN13]:

      Theorem 77 For each $\rho \in (0,1)$ and $\epsilon > 0$, there exists a constant $C_{\rho, \epsilon}$ such that the following holds: If $f, g : \{-1,1\}^n \to [\epsilon, 1-\epsilon]$, then \[ \mathop{\bf E}_{\substack{({\boldsymbol{x}}, {\boldsymbol{x}}') \\ \text{$\rho$-correlated}}}[\Lambda_\rho(f({\boldsymbol{x}}), g({\boldsymbol{x}}'))] \leq \Lambda_\rho(\mathop{\bf E}[f], \mathop{\bf E}[g]) + C_{\rho, \epsilon}\cdot (\Delta_n[f] + \Delta_n[g]). \] Here we use the following inductive notation: $\Delta_1[f] = \mathop{\bf E}[|f - \mathop{\bf E}[f]|^3]$, and \[ \Delta_n[f] = \mathop{\bf E}_{{\boldsymbol{x}}_n \sim \{-1,1\}}\left[\Delta_{n-1}[f_{\mid {\boldsymbol{x}}_n}]\right] + \Delta_1[f^{\subseteq \{n\}}]. \]

    4. Prove by induction that $\Delta_n[f] \leq 8\sum_{i=1}^n \|\mathrm{D}_i f\|_3^3$.
    5. Suppose that $f,g \in L^2({\mathbb R}, \gamma)$ have range $[\epsilon, 1-\epsilon]$ and are $c$-Lipschitz. Show that for any $M \in {\mathbb N}^+$, the Two-Function Borell Isoperimetric Theorem holds for $f, g$ with an additional additive error of $O(M^{-1/2})$, where the constant in the $O(\cdot)$ depends only on $\rho$, $\epsilon$, and $c$. (Hint: Use $\text{BitsToGaussians}_{M}$.)
    6. By an approximation argument, deduce the Two-Function Borell Isoperimetric Theorem for general $f, g \in L^2({\mathbb R}, \gamma)$ with range $[0,1]$; i.e., prove Exercise 29.
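As promised in part 1 above, here is the arithmetic behind the dictator counterexample. Taking $f = g = \frac{1+x}{2}$, the left side of the would-be $1$-bit inequality is $\mathop{\bf Pr}[{\boldsymbol{x}} = {\boldsymbol{x}}' = 1] = \frac{1+\rho}{4}$ (since $\Lambda_\rho(a,b) = ab$ when $a, b \in \{0,1\}$), while the right side is $\Lambda_\rho(\tfrac12, \tfrac12) = \tfrac14 + \frac{\arcsin \rho}{2\pi}$ by Sheppard’s formula; the former is strictly larger for all $\rho \in (0,1)$ because $\arcsin \rho < \frac{\pi}{2}\rho$ there. A three-line check (my own addition):

```python
import numpy as np

for rho in [0.2, 0.5, 0.8]:
    lhs = (1 + rho) / 4                        # E[Lambda_rho(f(x), g(x'))], dictators
    rhs = 1/4 + np.arcsin(rho) / (2 * np.pi)   # Lambda_rho(1/2, 1/2), Sheppard
    print(rho, lhs, rhs, lhs > rhs)            # lhs > rhs throughout (0,1)
```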
  3. Fix $0 < \rho < 1$ and suppose $f \in L^1({\mathbb R},\gamma)$ is nonnegative and satisfies $\mathop{\bf E}[f] = 1$. Note that $\mathop{\bf E}[\mathrm{U}_\rho f] = 1$ as well. The goal of this problem is to show that $\mathrm{U}_\rho f$ satisfies an improved Markov inequality: $\mathop{\bf Pr}[\mathrm{U}_\rho f > t] = O(\frac{1}{t \sqrt{\ln t}}) = o(\frac{1}{t})$ as $t \to \infty$. This gives a quantitative sense in which $\mathrm{U}_\rho$ is a “smoothing operator”: $\mathrm{U}_\rho f$ can never look too much like a step function (the tight example for Markov’s inequality).
    1. For simplicity, let’s first assume $\rho = 1/\sqrt{2}$. Given $t > \sqrt{2}$, select $h > 0$ such that $\varphi(h) = \frac{1}{\sqrt{\pi}\,t}$ (possible precisely because $t > \sqrt{2}$). Show that $h \sim \sqrt{2\ln t}$ as $t \to \infty$. (A numerical check appears after this exercise.)
    2. Let $H = \{z : \mathrm{U}_\rho f(z) > t\}$. Show that if $H \subseteq (-\infty, -h] \cup [h, \infty)$, then we have $\mathop{\bf Pr}[\mathrm{U}_\rho f > t] \lesssim \frac{\sqrt{2/\pi}}{t \sqrt{\ln t}}$, as desired. (Hint: You’ll need $\overline{\Phi}(u) < \varphi(u)/u$. )
    3. Otherwise, we wish to get a contradiction. First, show that there exist $y \in (-h,h)$ and $\delta_0 > 0$ such that $\mathrm{U}_\rho f(z) > t$ for all $z \in (y-\delta_0, y+\delta_0)$. (Hint: You’ll need that $\mathrm{U}_\rho f$ is continuous; see Exercise 5.)
    4. For $0 < \delta < \delta_0$, define $g \in L^1({\mathbb R},\gamma)$ by $g = \frac{1}{2\delta} 1_{(y-\delta, y+\delta)}$. Show that $0 \leq \mathrm{U}_\rho g \leq \frac{1}{\sqrt{\pi}}$ pointwise. (Hint: Why is $\mathrm{U}_\rho g(z)$ maximized at $z = \sqrt{2}y$?)
    5. Show that $\frac{1}{\sqrt{\pi}} \geq \langle f, \mathrm{U}_\rho g \rangle > t \mathop{\bf E}[g]$.
    6. Derive a contradiction by taking $\delta \to 0$, thereby showing that indeed $\mathop{\bf Pr}[\mathrm{U}_\rho f > t] \lesssim \frac{\sqrt{2/\pi}}{t \sqrt{\ln t}}$.
    7. Show that this result is tight by constructing an appropriate $f$.
    8. Generalize the above to show that for any fixed $0 < \rho < 1$ we have $\mathop{\bf Pr}[\mathrm{U}_\rho f > t] \lesssim \frac{1}{\sqrt{\pi(1-\rho^2)}}\frac{1}{t \sqrt{\ln t}}$.
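A quick numerical look at parts 1 and 2 under the reading $\varphi(h) = \frac{1}{\sqrt{\pi} t}$ above (my own addition; assumes SciPy): both the asymptotic $h \sim \sqrt{2 \ln t}$ and the resulting tail bound check out, with the printed ratios tending to $1$ from below.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

for t in [10.0, 1e3, 1e6, 1e9]:
    # Solve phi(h) = 1/(sqrt(pi) * t) for h > 0 (solvable since t > sqrt(2)):
    h = brentq(lambda x: norm.pdf(x) - 1/(np.sqrt(np.pi)*t), 0, 100)
    print(t,
          h / np.sqrt(2*np.log(t)),                                    # -> 1
          2*norm.sf(h) / (np.sqrt(2/np.pi) / (t*np.sqrt(np.log(t)))))  # <= 1, -> 1
```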

  4. As described in Example 72, show that $\mathrm{SDPOpt}({\mathbb Z}_5) \geq \tfrac{1}{2} - \tfrac{1}{2} \cos \frac{4\pi}{5} = \frac{5}{8} + \frac{\sqrt{5}}{8}$.
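To verify the claimed value, one can take the embedding suggested there: place the five vertices of the cycle at consecutive angles of $\frac{4\pi}{5}$ on the unit circle, so every edge contributes $\tfrac12 - \tfrac12 \cos\frac{4\pi}{5}$. A minimal check (my own addition):

```python
import numpy as np

theta = 4 * np.pi / 5
U = {k: np.array([np.cos(k * theta), np.sin(k * theta)]) for k in range(5)}
edges = [(k, (k + 1) % 5) for k in range(5)]
sdp = np.mean([0.5 - 0.5 * U[u] @ U[v] for (u, v) in edges])
print(sdp, 5/8 + np.sqrt(5)/8)   # both ~0.9045
```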
  5. Prove Theorem 71.
  6. Consider the generalization of the Max-Cut CSP in which the variable set is $V$, the domain is $\{-1,1\}$, and each constraint is an equality of two literals, i.e., it’s of the form $b F(v) = b' F(v')$ for some $v, v' \in V$ and $b, b' \in \{-1,1\}$. This CSP is traditionally called Max-E$2$-Lin. Given an instance $\mathcal{P}$, write $(\boldsymbol{v}, \boldsymbol{v}', \boldsymbol{b}, \boldsymbol{b}') \sim \mathcal{P}$ to denote a uniformly chosen constraint. The natural SDP relaxation (which can also be solved efficiently) is the following: \[ \begin{aligned} \text{maximize}& \quad \mathop{\bf E}_{(\boldsymbol{v},\boldsymbol{v}',\boldsymbol{b}, \boldsymbol{b}') \sim \mathcal{P}}\left[\tfrac{1}{2} + \tfrac{1}{2} \langle \boldsymbol{b}\vec{U}(\boldsymbol{v}), \boldsymbol{b}'\vec{U}(\boldsymbol{v}') \rangle\right]\\ \text{subject to}& \quad \vec{U} : V \to S^{n-1}. \end{aligned} \] Show that the Goemans–Williamson algorithm, when using this SDP, is a $(c_{\text{GW}} \beta, \beta)$-approximation algorithm for Max-E$2$-Lin, and that it also has the same refined guarantee as in Theorem 71.
  7. This exercise builds on Exercise 34. Consider the following instance $\mathcal{P}$ of Max-E$2$-Lin: The variable set is ${\mathbb Z}_4$ and the constraints are \[ F(0) = F(1), \quad F(1) = F(2), \quad F(2) = F(3), \quad F(3) = -F(0). \]

    1. Show that $\mathrm{Opt}(\mathcal{P}) = \frac34$.
    2. Show that $\mathrm{SDPOpt}(\mathcal{P}) \geq \frac{1}{2} + \frac{1}{2\sqrt{2}}$. (Hint: Very similar to Exercise 32; you can use four unit vectors at $45^\circ$ angles in ${\mathbb R}^2$.)
    3. Deduce that $\mathrm{SDPOpt}(\mathcal{P}) = \frac{1}{2} + \frac{1}{2\sqrt{2}}$ and that this is an optimal SDP integrality gap for Max-E$2$-Lin. (Cf. Remark 75.)
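For part 2's hint, the four unit vectors at $45^\circ$ increments indeed give every constraint, including the negated one, the same value $\tfrac12 + \frac{1}{2\sqrt{2}}$; a minimal check (my own addition):

```python
import numpy as np

U = {k: np.array([np.cos(k*np.pi/4), np.sin(k*np.pi/4)]) for k in range(4)}
# Constraints (v, v', b, b') encode b F(v) = b' F(v'); the last is F(3) = -F(0).
constraints = [(0, 1, 1, 1), (1, 2, 1, 1), (2, 3, 1, 1), (3, 0, 1, -1)]
sdp = np.mean([0.5 + 0.5 * (b*U[v]) @ (bp*U[vp]) for (v, vp, b, bp) in constraints])
print(sdp, 0.5 + 1/(2*np.sqrt(2)))   # both ~0.8536
```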
  8. In our proof of Theorem 73 it’s stated that showing the $\beta$-Noise Sensitivity Test to be a $(\theta/\pi, \tfrac{1}{2} - \tfrac{1}{2} \cos \theta)$-Dictator-vs.-No-Notables test implies the desired UG-hardness of $(\theta/\pi + \delta, \tfrac{1}{2} - \tfrac{1}{2} \cos \theta)$-approximating Max-Cut (for any constant $\delta > 0$). There are two minor technical problems with this: First, the test can only actually be implemented when $\beta$ is a rational number. Second, even ignoring this, Theorem 7.41 only directly yields hardness of $(\theta/\pi + \delta, \tfrac{1}{2} - \tfrac{1}{2} \cos \theta - \delta)$-approximation. Show how to overcome both technicalities. (Hint: Continuity.)

  9. Use Corollary 59 (and 11.5.(7)) to show that in the setting of the Berry–Esseen Theorem, $|\|\boldsymbol{S}\|_1 -\sqrt{2/\pi}| \leq O(\gamma^{1/3})$. (Cf. Exercise 5.32.)
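As an illustration of the statement (my own addition, with the Rademacher case as the test): for $\boldsymbol{S} = ({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)/\sqrt{n}$ with uniformly random bits we have $\gamma = O(n^{-1/2})$, and the exactly computable $\|\boldsymbol{S}\|_1$ sits far inside the claimed $O(\gamma^{1/3}) = O(n^{-1/6})$ window around $\sqrt{2/\pi}$.

```python
import numpy as np
from math import comb

def E_abs_S(n):
    # Exact E|x_1 + ... + x_n|/sqrt(n) for uniformly random bits x_i in {-1,1}.
    return sum(comb(n, k) * abs(n - 2*k) for k in range(n + 1)) / (2**n * np.sqrt(n))

for n in [10, 100, 1000]:
    err = abs(E_abs_S(n) - np.sqrt(2 / np.pi))
    print(n, err, n**(-1/6))   # err is far below the n^{-1/6} benchmark
```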
  10. The goal of this exercise is to prove Proposition 58.

    1. Reduce to the case $c = 1$.
    2. Reduce to the case $\eta = 1$. (Hint: Dilate the input by a factor of $\eta$.)
    3. Assuming henceforth that $c = \eta = 1$, we define $\widetilde{\psi}(s) = \mathop{\bf E}[\psi(s+\boldsymbol{g})]$ for $\boldsymbol{g} \sim \mathrm{N}(0,1)$ as suggested; i.e., $\widetilde{\psi} = \psi \ast \varphi$, where $\varphi$ is the Gaussian pdf. Show that indeed $\|\widetilde{\psi} - \psi\|_\infty \leq \sqrt{2/\pi} \leq 1$.
    4. To complete the proof we need to show that for all $s \in {\mathbb R}$ and $k \in {\mathbb N}^+$ we have $|\widetilde{\psi}^{(k)}(s)| \leq C_k$. Explain why, in proving this, we may assume $\psi(s) = 0$. (Hint: This requires $k \geq 1$.)
    5. Assuming $\psi(s) = 0$, show $|\widetilde{\psi}^{(k)}(s)| = |(\psi \ast \varphi^{(k)})(s)| \leq C_k$. (Hint: Show that $\varphi^{(k)}(s) = p(s) \varphi(s)$ for some polynomial $p(s)$ and use the fact that Gaussians have finite absolute moments.)
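To see that $\sqrt{2/\pi}$ is exactly what Gaussian smoothing costs, here is a check with the illustrative $1$-Lipschitz choice $\psi(s) = |s|$ (my choice, not the text’s), for which $\widetilde{\psi}(s) = \mathop{\bf E}|s + \boldsymbol{g}|$ has the closed form $s(2\Phi(s) - 1) + 2\varphi(s)$ and the sup-distance is attained at $s = 0$:

```python
import numpy as np
from scipy.stats import norm

s = np.linspace(-6, 6, 2001)
psi = np.abs(s)
psi_tilde = s * (2 * norm.cdf(s) - 1) + 2 * norm.pdf(s)     # E|s + g|, g ~ N(0,1)
print(np.max(np.abs(psi_tilde - psi)), np.sqrt(2 / np.pi))  # both ~0.79788
```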
  11. Establish the following multidimensional generalization of Proposition 58:

    Proposition 78 Let $\psi : {\mathbb R}^d \to {\mathbb R}$ be $c$-Lipschitz. Then for any $\eta > 0$ there exists $\widetilde{\psi}_\eta : {\mathbb R}^d \to {\mathbb R}$ satisfying $\|\psi - \widetilde{\psi}_\eta\|_\infty \leq c \sqrt{d} \eta$ and $\|\partial^\beta \widetilde{\psi}_\eta\|_\infty \leq C_{|\beta|} c \sqrt{d}/\eta^{|\beta|-1}$ for each multi-index $\beta \in {\mathbb N}^{d}$ with $|\beta| = \sum_i \beta_i \geq 1$, where $C_k$ is a constant depending only on $k$.

  12. In Exercise 38 we “mollified” a function $\psi$ by convolving it with the (smooth) pdf of a Gaussian random variable. It’s sometimes helpful to instead use a random variable with bounded support (but still with a smooth pdf on all of ${\mathbb R}$). Here we construct such a random variable. Define $b : {\mathbb R} \to {\mathbb R}$ by \[ b(x) = \begin{cases} \exp\left(-\tfrac{1}{1-x^2}\right) & \text{if $-1 < x < 1$,} \\ 0 & \text{else.} \end{cases} \]
    1. Verify that $b(x) \geq 0$ for all $x$ and that $b(-x) = b(x)$.
    2. Prove the following statement by induction on $k \in {\mathbb N}$: On $(-1,1)$, the $k$th derivative of $b$ at $x$ is of the form $p(x)(1-x^2)^{-2k}\cdot b(x)$, where $p(x)$ is a polynomial. (A symbolic check for small $k$ appears after this exercise.)
    3. Deduce that $b$ is a smooth ($\mathcal{C}^\infty$) function on ${\mathbb R}$.
    4. Verify that $C = \int_{-1}^{1} b(x)\,dx$ satisfies $0 < C < \infty$ and that we can therefore define a real random variable $\boldsymbol{y}$, symmetric and supported on $(-1,1)$, with the smooth pdf $\widetilde{b}(y) = b(y)/C$. Show also that for $k \in {\mathbb N}$, the numbers $c_k = \|\widetilde{b}^{(k)}\|_\infty$ are finite and positive, where $\widetilde{b}^{(k)}$ denotes the $k$th derivative of $\widetilde{b}$.
    5. Give an alternate proof of Exercise 38 using $\boldsymbol{y}$ in place of $\boldsymbol{g}$.
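A symbolic spot-check of parts 2 and 3 for small $k$ (my own addition; uses sympy): it recovers the polynomial $p(x)$ with $b^{(k)}(x) = p(x)(1-x^2)^{-2k} b(x)$ on $(-1,1)$, and confirms that every such derivative tends to $0$ at the endpoint, which is why $b$ glues smoothly to the zero function.

```python
import sympy as sp

x = sp.symbols('x')
b = sp.exp(-1 / (1 - x**2))
for k in range(1, 4):
    dk = sp.diff(b, x, k)
    p = sp.cancel(sp.simplify(dk / b) * (1 - x**2)**(2*k))   # the polynomial p(x)
    print(k, sp.expand(p))
    print('   b^(k) at 1 from the left:', sp.limit(dk, x, 1, '-'))   # 0
```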
  13. Fix $u \in {\mathbb R}$, $\psi(s) = 1_{s \leq u}$, and $0 < \eta < 1/2$.
    1. Suppose we approximate $\psi$ by a smooth function $\widetilde{\psi}_\eta$ as in Exercise 38, i.e., we define $\widetilde{\psi}_\eta(s) = \mathop{\bf E}[\psi(s + \eta \boldsymbol{g})]$ for $\boldsymbol{g} \sim \mathrm{N}(0,1)$. Show that $\widetilde{\psi}_\eta$ satisfies the following properties (cf. the numerical sketch following this exercise):

      • $\widetilde{\psi}_\eta$ is a decreasing function with $\widetilde{\psi}_\eta(s) < \psi(s)$ for $s < u$ and $\widetilde{\psi}_\eta(s) > \psi(s)$ for $s > u$.
      • $|\widetilde{\psi}_\eta(s) - \psi(s)| \leq \eta$ provided $|s - u| \geq O(\eta \sqrt{\log(1/\eta)})$.
      • $\|\widetilde{\psi}_\eta^{(k)}\|_\infty \leq C_k/\eta^k$ for each $k \in {\mathbb N}$, where $C_k$ depends only on $k$.
    2. Suppose we instead approximate $\psi$ by the function $\widetilde{\psi}_\eta(s) = \mathop{\bf E}[\psi(s + \eta \boldsymbol{y})]$, where $\boldsymbol{y}$ is the random variable from Exercise 40. Show that $\widetilde{\psi}_\eta$ satisfies the following slightly nicer properties:
      • $\widetilde{\psi}_\eta$ is a nonincreasing function which agrees with $\psi$ on $(-\infty, u-\eta]$ and on $[u+\eta, \infty)$.
      • $\widetilde{\psi}_\eta$ is smooth and satisfies $\|\widetilde{\psi}_\eta^{(k)}\|_\infty \leq C_k/\eta^{k}$ for each $k \in {\mathbb N}$, where $C_k$ depends only on $k$.
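Both mollifiers here are concrete enough to inspect directly. For the Gaussian one in part 1, the definition unwinds to the closed form $\widetilde{\psi}_\eta(s) = \mathop{\bf Pr}[\boldsymbol{g} \leq (u-s)/\eta] = \Phi((u-s)/\eta)$, from which the bullet-point properties can be read off; a small numerical look (my own addition, assumes SciPy):

```python
import numpy as np
from scipy.stats import norm

u, eta = 0.0, 0.1
s = np.linspace(-1, 1, 11)
psi = (s <= u).astype(float)                 # the step function 1_{s <= u}
psi_tilde = norm.cdf((u - s) / eta)          # its Gaussian mollification
print(np.round(psi_tilde, 4))                # decreasing, equals 1/2 at s = u
# |psi~ - psi| <= eta once |s - u| >= eta * sqrt(2 ln(1/eta)):
d = eta * np.sqrt(2 * np.log(1 / eta))
print(np.all(np.abs(psi_tilde - psi)[np.abs(s - u) >= d] <= eta))   # True
```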
  14. Prove Corollary 61 by first proving \[ \mathop{\bf Pr}[\boldsymbol{S}_Y \leq u-2\eta] - O(\eta^{-3})\gamma_{XY} \leq \mathop{\bf Pr}[\boldsymbol{S}_X \leq u] \leq \mathop{\bf Pr}[\boldsymbol{S}_Y \leq u+2\eta] + O(\eta^{-3})\gamma_{XY}. \] (Hint: Obtain $\mathop{\bf Pr}[\boldsymbol{S}_X \leq u-\eta] \leq \mathop{\bf E}[\widetilde{\psi}_\eta(\boldsymbol{S}_X)] \approx \mathop{\bf E}[\widetilde{\psi}_\eta(\boldsymbol{S}_Y)] \leq \mathop{\bf Pr}[\boldsymbol{S}_Y \leq u+\eta]$ using properties from Exercise 41. Then replace $u$ with $u+\eta$ and also interchange $\boldsymbol{S}_X$ and $\boldsymbol{S}_Y$.)
    1. Fix $q \in {\mathbb N}$. Establish the existence of a smooth function $f_q : {\mathbb R} \to {\mathbb R}$ that is $0$ on $(-\infty, -\tfrac{1}{2}]$ and that agrees with some polynomial of degree exactly $q$ on $[\tfrac{1}{2}, \infty)$. (Hint: Induction on $q$; the base case $q = 0$ is essentially Exercise 41, and the induction step can be achieved by integration.)
    2. Deduce that for any prescribed sequence $a_0, a_1, a_2, \dots$ that is eventually constantly $0$, there is a smooth function $g : {\mathbb R} \to {\mathbb R}$ that is $0$ on $(-\infty, -\tfrac{1}{2}]$ and has $g^{(k)}(\tfrac{1}{2}) = a_k$ for all $k \in {\mathbb N}$.
    3. Fix a univariate polynomial $p : {\mathbb R} \to {\mathbb R}$. Show that there is a smooth function $\widetilde{\psi} : {\mathbb R} \to {\mathbb R}$ that agrees with $p$ on $[-1,1]$ and is identically $0$ on $(-\infty, -2] \cup [2, \infty)$.
  15. Establish Corollary 69.
  16. Prove Theorem 70.
    1. By following our proof of the $d = 1$ case and using the multivariate Taylor theorem, establish the following:

      Invariance Principle for Sums of Random Vectors Let $\vec{\boldsymbol{X}}_1, \dots, \vec{\boldsymbol{X}}_n$, $\vec{\boldsymbol{Y}}_1, \dots, \vec{\boldsymbol{Y}}_n$ be independent ${\mathbb R}^d$-valued random variables with matching means and covariance matrices; i.e., $\mathop{\bf E}[\vec{\boldsymbol{X}}_t] = \mathop{\bf E}[\vec{\boldsymbol{Y}}_t]$ and $\mathop{\bf Cov}[\vec{\boldsymbol{X}}_t] = \mathop{\bf Cov}[\vec{\boldsymbol{Y}}_t]$ for all $t \in [n]$. (Note that the $d$ individual components of a particular $\vec{\boldsymbol{X}}_t$ or $\vec{\boldsymbol{Y}}_t$ are not required to be independent.) Write $\vec{\boldsymbol{S}}_X = \sum_{t=1}^n \vec{\boldsymbol{X}}_t$ and $\vec{\boldsymbol{S}}_Y = \sum_{t=1}^n \vec{\boldsymbol{Y}}_t$. Then for any $\mathcal{C}^3$ function $\psi : {\mathbb R}^d \to {\mathbb R}$ satisfying $\|\partial^\beta \psi\|_\infty \leq C$ for all $|\beta| = 3$, \[ \left| \mathop{\bf E}[\psi(\vec{\boldsymbol{S}}_X)] - \mathop{\bf E}[\psi(\vec{\boldsymbol{S}}_Y)] \right| \leq C \gamma_{\vec{X}\vec{Y}}, \] where \[ \gamma_{\vec{X}\vec{Y}} = \sum_{\substack{\beta \in {\mathbb N}^d\\|\beta| = 3}} \frac{1}{\beta!} \sum_{t=1}^n \bigl(\mathop{\bf E}[|\vec{\boldsymbol{X}}_t^\beta|] + \mathop{\bf E}[|\vec{\boldsymbol{Y}}_t^\beta|]\bigr). \]

    2. Show that $\gamma_{\vec{X}\vec{Y}}$ satisfies \[ \gamma_{\vec{X}\vec{Y}} \leq \frac{d^2}{6}\sum_{t = 1}^n \sum_{i=1}^d \bigl(\mathop{\bf E}[|\vec{\boldsymbol{X}}_t^{3e_i}|] + \mathop{\bf E}[|\vec{\boldsymbol{Y}}_t^{3e_i}|] \bigr). \] Here $\vec{\boldsymbol{X}}_t^{3e_i}$ denotes the cube of the $i$th component of vector $\vec{\boldsymbol{X}}_t$, and similarly for $\vec{\boldsymbol{Y}}_t$. (Hint: $abc \leq \frac13(a^3+b^3+c^3)$ for $a,b,c\geq 0$.)
    3. Deduce multivariate analogues of the Variant Berry–Esseen Theorem, Remark 56, and Corollary 59 (using Proposition 78).
  17. Justify Remark 65. (Hint: You’ll need Exercise 10.29.)
    1. Prove the following:

      Multifunction Invariance Principle Let $F^{(1)}$, \dots, $F^{(d)}$ be formal $n$-variate multilinear polynomials each of degree at most $k \in {\mathbb N}$. Let $\vec{{\boldsymbol{x}}}_1, \dots, \vec{{\boldsymbol{x}}}_n$ and $\vec{\boldsymbol{y}}_1, \dots, \vec{\boldsymbol{y}}_n$ be independent ${\mathbb R}^d$-valued random variables such that $\mathop{\bf E}[\vec{{\boldsymbol{x}}}_t] = \mathop{\bf E}[\vec{\boldsymbol{y}}_t] = 0$ and $M_t = \mathop{\bf Cov}[\vec{{\boldsymbol{x}}}_t] = \mathop{\bf Cov}[\vec{\boldsymbol{y}}_t]$ for each $t \in [n]$. Assume each $M_t$ has all its diagonal entries equal to $1$ (i.e., each of the $d$ components of $\vec{{\boldsymbol{x}}}_t$ has variance $1$, and similarly for $\vec{\boldsymbol{y}}_t$). Further assume each component random variable $\vec{{\boldsymbol{x}}}_t^{(j)}$ and $\vec{\boldsymbol{y}}_t^{(j)}$ is $(2,3,\rho)$-hypercontractive ($t \in [n]$, $j \in [d]$). Then for any $\mathcal{C}^3$ function $\psi : {\mathbb R}^d \to {\mathbb R}$ satisfying $\|\partial^\beta \psi\|_\infty \leq C$ for all $|\beta| = 3$, \[ \left|\mathop{\bf E}[\psi(\vec{F}(\vec{{\boldsymbol{x}}}))] - \mathop{\bf E}[\psi(\vec{F}(\vec{\boldsymbol{y}}))]\right| \leq \tfrac{Cd^2}{3} \cdot (1/\rho)^{3k} \cdot \sum_{t=1}^n \sum_{j=1}^d \mathbf{Inf}_t[F^{(j)}]^{3/2}. \] Here we are using the following notation: If $\vec{\boldsymbol{z}} = (\vec{\boldsymbol{z}}_1, \dots, \vec{\boldsymbol{z}}_n)$ is a sequence of ${\mathbb R}^d$-valued random variables, $\vec{F}(\vec{\boldsymbol{z}})$ denotes the vector in ${\mathbb R}^d$ whose $j$th component is $F^{(j)}(\vec{\boldsymbol{z}}_1^{(j)}, \dots, \vec{\boldsymbol{z}}_n^{(j)})$.

      (Hint: Combine the proofs of the Basic Invariance Principle and the Invariance Principle for Sums of Random Vectors, Exercise 46. The only challenging part should be notation.)

    2. Show that if we further have $\mathop{\bf Var}[F^{(j)}] \leq 1$ and $\mathbf{Inf}_t[F^{(j)}] \leq \epsilon$ for all $j \in [d]$, $t \in [n]$, then \[ \left|\mathop{\bf E}[\psi(\vec{F}(\vec{{\boldsymbol{x}}}))] - \mathop{\bf E}[\psi(\vec{F}(\vec{\boldsymbol{y}}))]\right| \leq \tfrac{Cd^3}{3} \cdot k(1/\rho)^{3k} \cdot \epsilon^{1/2}. \]
    1. Prove the following:

      Invariance Principle in general product spaces
      Let $(\Omega, \pi)$ be a finite probability space, $|\Omega| = m \geq 2$, in which every outcome has probability at least $\lambda$. Suppose $f \in L^2(\Omega^n, \pi^{\otimes n})$ has degree at most $k$; thus, fixing some Fourier basis $\phi_0, \dots, \phi_{m-1}$ for $L^2(\Omega, \pi)$, we have \[ f = \sum_{\substack{\alpha \in {\mathbb N}^n_{<m} \\ \#\alpha \leq k}} \widehat{f}(\alpha)\phi_\alpha. \] Introduce indeterminates $x = (x_{i,j})_{i \in [n], j \in [m-1]}$ and let $F$ be the formal $(m-1)n$-variate polynomial of degree at most $k$ defined by \[ F(x) = \sum_{\#\alpha \leq k} \widehat{f}(\alpha) \prod_{i \in \mathrm{supp}(\alpha)} x_{i, \alpha_i}. \] Then for any $\psi : {\mathbb R} \to {\mathbb R}$ that is $\mathcal{C}^3$ and satisfies $\|\psi^{\prime \prime \prime}\|_\infty \leq C$ we have \[ \left|\mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^{(m-1)n}}[\psi(F({\boldsymbol{x}}))] - \mathop{\bf E}_{\boldsymbol{\omega} \sim \pi^{\otimes n}}[\psi(f(\boldsymbol{\omega}))]\right| \leq \tfrac{C}{3} \cdot (2\sqrt{2/\lambda})^k \cdot \sum_{i=1}^n \mathbf{Inf}_i[f]^{3/2}. \]

      (Hint: For $0 \leq t \leq n$, define the function $h_t \in L^2(\Omega^t \times \{-1,1\}^{(m-1)(n-t)}, \pi^{\otimes t} \otimes \pi_{1/2}^{\otimes (m-1)(n-t)})$ via \[ h_t(\boldsymbol{\omega}_1, \dots, \boldsymbol{\omega}_t, {\boldsymbol{x}}_{t+1,1}, \dots, {\boldsymbol{x}}_{n,m-1}) = \sum_{\#\alpha \leq k} \widehat{f}(\alpha) \prod_{\substack{i \in \mathrm{supp}(\alpha) \\ i \leq t}} \phi_{\alpha_i}(\boldsymbol{\omega}_i) \prod_{\substack{i \in \mathrm{supp}(\alpha) \\ i > t}} {\boldsymbol{x}}_{i,\alpha_i}. \] Express \[ h_{t} = \mathrm{E}_t h_{t} + \mathrm{L}_t h_{t} = \mathrm{E}_t h_t + \sum_{j = 1}^{m-1} D_j \cdot \phi_j(\boldsymbol{\omega}_t) \] where \[ D_j = \sum_{\alpha : \alpha_t = j} \widehat{f}(\alpha) \prod_{\substack{i \in \mathrm{supp}(\alpha) \\ i < t}} \phi_{\alpha_i}(\boldsymbol{\omega}_i) \prod_{\substack{i \in \mathrm{supp}(\alpha) \\ i > t}} {\boldsymbol{x}}_{i,\alpha_i}, \] and note that $h_{t-1} = \mathrm{E}_t h_t + \sum_{j = 1}^{m-1} D_j \cdot {\boldsymbol{x}}_{t,j}$.)

    2. In the setting of the previous theorem, show also that \[ \left|\mathop{\bf E}_{\boldsymbol{g} \sim \mathrm{N}(0,1)^{(m-1)n}}[\psi(F(\boldsymbol{g}))] - \mathop{\bf E}_{\boldsymbol{\omega} \sim \pi^{\otimes n}}[\psi(f(\boldsymbol{\omega}))]\right| \leq \tfrac{2C}{3} \cdot (2\sqrt{2/\lambda})^k \cdot \sum_{i=1}^n \mathbf{Inf}_i[f]^{3/2}. \] (Hint: Apply the Basic Invariance Principle in the form of Exercise 47. How can you bound the $(m-1)n$ influences of $F$ in terms of the $n$ influences of $f$?)
  18. Prove the following version of the General-Volume Majority Is Stablest Theorem in the setting of general product spaces:

    Theorem 79 Let $(\Omega, \pi)$ be a finite probability space in which each outcome has probability at least $\lambda$. Let $f \in L^2(\Omega^n, \pi^{\otimes n})$ have range $[0,1]$. Suppose that $f$ has no $(\epsilon, \frac{1}{\log(1/\epsilon)})$-notable coordinates. Then for any ${0 \leq \rho < 1}$, \[ \mathbf{Stab}_\rho[f] \leq \Lambda_\rho(\mathop{\bf E}[f]) + O\bigl(\tfrac{\log \log (1/\epsilon)}{\log(1/\epsilon)}\bigr) \cdot \tfrac{\log(1/\lambda)}{1-\rho}. \]

    (Hint: Naturally, you’ll need Exercise 49(b).)
