Chapter 9 exercises

  1. For every $1 < b < B$ show that there is a $b$-reasonable random variable $\boldsymbol{X}$ such that $1+\boldsymbol{X}$ is not $B$-reasonable.
  2. For $k = 1$, improve the $9$ in the Bonami Lemma to $3$. More precisely, suppose $f : \{-1,1\}^n \to {\mathbb R}$ has degree at most $1$ and that ${\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n$ are independent $3$-reasonable random variables satisfying $\mathop{\bf E}[{\boldsymbol{x}}_i] = \mathop{\bf E}[{\boldsymbol{x}}_i^3] = 0$. (E.g., the ${\boldsymbol{x}}_i$’s may be uniform $\pm 1$ bits.) Show that $f({\boldsymbol{x}})$ is also $3$-reasonable. (Hint: by direct computation, or by running through the Bonami Lemma proof with $k = 1$ more carefully.)
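Not part of the original exercise, but the $k=1$ claim is easy to sanity-check by exact enumeration over the hypercube. The sketch below (the random coefficients and the choice $n=4$ are arbitrary) verifies $\mathop{\bf E}[f({\boldsymbol{x}})^4] \leq 3\mathop{\bf E}[f({\boldsymbol{x}})^2]^2$ for a degree-1 polynomial in uniform $\pm 1$ bits:

```python
# Sanity check (illustration only): a random degree-1 polynomial over uniform
# +/-1 bits is 3-reasonable, i.e. E[f^4] <= 3 E[f^2]^2.  Both expectations are
# computed exactly by enumerating all of {-1,1}^n.
import itertools, random

random.seed(0)
n = 4
coeffs = [random.uniform(-1, 1) for _ in range(n + 1)]  # a_0, a_1, ..., a_n

def f(x):
    return coeffs[0] + sum(a * xi for a, xi in zip(coeffs[1:], x))

pts = list(itertools.product([-1, 1], repeat=n))
E2 = sum(f(x) ** 2 for x in pts) / len(pts)
E4 = sum(f(x) ** 4 for x in pts) / len(pts)
assert E4 <= 3 * E2 ** 2 + 1e-12
```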
  3. Let $k$ be a positive multiple of $3$ and let $n \geq 2k$ be an integer. Define $f : \{-1,1\}^n \to {\mathbb R}$ by \[ f(x) = \sum_{\substack{S \subseteq [n] \\ |S| = k}} x^S. \]

    1. Show that \[ \mathop{\bf E}[f^4] \geq \frac{\binom{n}{k/3,\,k/3,\,k/3,\,k/3,\,k/3,\,k/3,\,n-2k}}{\binom{n}{k}^2} \mathop{\bf E}[f^2]^2, \] where the numerator of the fraction is a multinomial coefficient — specifically, the number of ways of choosing six disjoint size-$k/3$ subsets of $[n]$. (Hint: given such size-$k/3$ subsets, consider quadruples of size-$k$ subsets that hit each size-$k/3$ subset twice.)
    2. Using Stirling’s formula, show that \[ \lim_{n \to \infty} \frac{\binom{n}{k/3,\,k/3,\,k/3,\,k/3,\,k/3,\,k/3,\,n-2k}}{\binom{n}{k}^2} = \Theta(k^{-2} 9^k). \] Deduce the following lower bound for the Bonami Lemma: $\|f\|_4 \geq \Omega(k^{-1/2}) \cdot \sqrt{3}^k \|f\|_2$. (In fact, $\|f\|_4 = \Theta(k^{-1/4}) \cdot \sqrt{3}^k \|f\|_2$ and such an upper bound holds for all $f$ homogeneous of degree $k$; see Exercise 35(f).)
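As a quick machine check of part (a), not part of the original text, the smallest admissible case $k=3$ can be verified exhaustively; the choice $n=8$ below is arbitrary:

```python
# Exhaustive check of part (a)'s lower bound for k = 3, n = 8:
# E[f^4] >= [multinomial(n; k/3 six times, n-2k) / binom(n,k)^2] * E[f^2]^2,
# where f(x) = sum over all size-k subsets S of x^S.
import itertools, math

k, n = 3, 8
subsets = list(itertools.combinations(range(n), k))

def f(x):
    return sum(math.prod(x[i] for i in S) for S in subsets)

pts = list(itertools.product([-1, 1], repeat=n))
vals = [f(x) for x in pts]
E2 = sum(v ** 2 for v in vals) / len(pts)   # = binom(n,k) by Parseval
E4 = sum(v ** 4 for v in vals) / len(pts)

# number of ways to choose six disjoint size-(k/3) subsets of [n]
multinom = math.factorial(n) // (math.factorial(k // 3) ** 6
                                 * math.factorial(n - 2 * k))
bound = multinom / math.comb(n, k) ** 2 * E2 ** 2
assert E4 >= bound - 1e-9
```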
  4. Prove Corollary 6.
  5. Let $0 \leq \delta \leq \frac{1}{1600}$ and let $f$, $\ell$ be real numbers satisfying $|\ell^2 - 1| > 39\sqrt{\delta}$ and $|f| = 1$. Show that $|f - \ell|^2 \geq 169\delta$. (This is a loose estimate; stronger ones are possible.)
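A numerical scan (not a proof, and not part of the original exercise) makes the claimed implication easy to believe before proving it; the grid ranges below are arbitrary:

```python
# Grid scan of the claim: if |f| = 1, 0 < delta <= 1/1600, and
# |l^2 - 1| > 39*sqrt(delta), then (f - l)^2 >= 169*delta.
import math

violations = 0
for di in range(1, 201):
    delta = di / 200 / 1600            # delta in (0, 1/1600]
    thr = 39 * math.sqrt(delta)
    for li in range(-3000, 3001):
        l = li / 1000                  # l in [-3, 3]
        if abs(l * l - 1) > thr:
            for f in (-1.0, 1.0):
                if (f - l) ** 2 < 169 * delta:
                    violations += 1
assert violations == 0
```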
  6. Theorem 21 shows that the $(2,4)$-Hypercontractivity Theorem implies the Bonami Lemma. In this exercise you will show the reverse implication.

    1. Let $f : \{-1,1\}^n \to {\mathbb R}$. For a fixed $\delta \in (0,1)$, use the Bonami Lemma to show that \[ \|\mathrm{T}_{(1-\delta)/\sqrt{3}} f\|_4 \leq \sum_{k=0}^\infty (1-\delta)^k \|f^{=k}\|_2 \leq \tfrac{1}{\delta} \|f\|_2. \]
    2. For $g : \{-1,1\}^n \to {\mathbb R}$ and $d \in {\mathbb N}^+$, let $g^{\oplus d} : \{-1,1\}^{dn} \to {\mathbb R}$ be the function defined by $g^{\oplus d}(x^{(1)}, \dots, x^{(d)}) = g(x^{(1)})g(x^{(2)}) \cdots g(x^{(d)})$ (where each $x^{(i)} \in \{-1,1\}^n$). Show that $\|\mathrm{T}_\rho(g^{\oplus d})\|_p = \|\mathrm{T}_\rho g\|_p^d$ holds for every $p \in {\mathbb R}^+$ and $\rho \in [-1,1]$. Note the special case $\rho = 1$.
    3. Deduce from parts (a) and (b) that in fact $\|\mathrm{T}_{(1-\delta)/\sqrt{3}} f\|_4 \leq \|f\|_2$. (Hint: apply part (a) to $f^{\oplus d}$ for larger and larger $d$.)
    4. Deduce that in fact $\|\mathrm{T}_{1/\sqrt{3}} f\|_4 \leq \|f\|_2$; i.e., the $(2,4)$-Hypercontractivity Theorem follows from the Bonami Lemma. (Hint: take the limit as $\delta \to 0^+$.)
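The conclusion of part (d), the $(2,4)$-Hypercontractivity Theorem, can be spot-checked numerically by exact enumeration; this sketch (not part of the original exercise; the random $f$ and $n=3$ are arbitrary) computes both sides from the Fourier expansion:

```python
# Spot-check of ||T_{1/sqrt(3)} f||_4 <= ||f||_2 for a random f on n = 3 bits,
# with Fourier coefficients and norms computed exactly by enumeration.
import itertools, math, random

random.seed(1)
n = 3
pts = list(itertools.product([-1, 1], repeat=n))
fvals = {x: random.uniform(-1, 1) for x in pts}

def chi(S, x):                       # the character x^S
    return math.prod(x[i] for i in S)

subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
fhat = {S: sum(fvals[x] * chi(S, x) for x in pts) / len(pts) for S in subsets}

rho = 1 / math.sqrt(3)
Tf = {x: sum(rho ** len(S) * fhat[S] * chi(S, x) for S in subsets) for x in pts}

norm4 = (sum(v ** 4 for v in Tf.values()) / len(pts)) ** 0.25
norm2 = (sum(v ** 2 for v in fvals.values()) / len(pts)) ** 0.5
assert norm4 <= norm2 + 1e-12
```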
  7. Suppose we wish to show that $\|\mathrm{T}_\rho f\|_q \leq \|f\|_p$ for all $f : \{-1,1\}^n \to {\mathbb R}$. Show that it suffices to show this for all nonnegative $f$. (Hint: Exercise 2.29.)
  8. Fix $k \in {\mathbb N}$. The goal of this exercise is to show that “projection to degree $k$ is a bounded operator in all $L^p$ norms, $p > 1$”. Let $f : \{-1,1\}^n \to {\mathbb R}$.

    1. Let $q \geq 2$. Show that $\|f^{\leq k}\|_q \leq \sqrt{q-1}^k \|f\|_q$. (Hint: use Theorem 21 to show the stronger statement $\|f^{\leq k}\|_q \leq \sqrt{q-1}^k \|f\|_2$.)
    2. Let $1 < q \leq 2$. Show that $\|f^{\leq k}\|_q \leq (1/\sqrt{q-1})^k \|f\|_q$. (Hint: either give a similar direct proof using the $(p,2)$-Hypercontractivity Theorem, or explain how this follows from part (a) using the dual norm Proposition 19.)
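The hinted stronger statement in part (a) is also easy to spot-check numerically; here is an illustrative sketch (not part of the original exercise) for $q=4$, $k=1$ on a random $f$ with $n=3$:

```python
# Spot-check of the hinted bound ||f^{<=k}||_4 <= sqrt(3)^k ||f||_2 for k = 1,
# by exact enumeration of the Fourier expansion on n = 3 bits.
import itertools, math, random

random.seed(2)
n, k = 3, 1
pts = list(itertools.product([-1, 1], repeat=n))
fvals = {x: random.uniform(-1, 1) for x in pts}

subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
fhat = {S: sum(fvals[x] * math.prod(x[i] for i in S) for x in pts) / len(pts)
        for S in subsets}

# f^{<= k}: keep only Fourier coefficients of degree at most k
flow = {x: sum(fhat[S] * math.prod(x[i] for i in S)
               for S in subsets if len(S) <= k) for x in pts}

norm4 = (sum(v ** 4 for v in flow.values()) / len(pts)) ** 0.25
norm2 = (sum(v ** 2 for v in fvals.values()) / len(pts)) ** 0.5
assert norm4 <= math.sqrt(3) ** k * norm2 + 1e-12
```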
  9. Let $\boldsymbol{X}$ be $(p,q,\rho)$-hypercontractive.
    1. Show that $c\boldsymbol{X}$ is $(p,q,\rho)$-hypercontractive for any $c \in {\mathbb R}$.
    2. Show that $\rho \leq \frac{\|\boldsymbol{X}\|_p}{\|\boldsymbol{X}\|_q}$.
  10. Let $\boldsymbol{X}$ be $(p,q,\rho)$-hypercontractive. (For simplicity you may want to assume $\boldsymbol{X}$ is a discrete random variable.)
    1. Show that $\mathop{\bf E}[\boldsymbol{X}]$ must be $0$. (Hint: Taylor expand $\|1 + \rho \epsilon \boldsymbol{X}\|_r$ to one term around $\epsilon = 0$; note that $\rho < 1$ by definition.)
    2. Show that $\rho \leq \sqrt{\frac{p-1}{q-1}}$. (Hint: Taylor expand $\|1 + \rho \epsilon \boldsymbol{X}\|_r$ to two terms around $\epsilon = 0$.)
    3. Suppose now only that $\mathop{\bf E}[\boldsymbol{X}] = 0$. Show that $\boldsymbol{X}$ is $(q, q, 0)$-hypercontractive for all $q \geq 1$. (Hint: use monotonicity of norms to reduce to the case $q = 1$.)
    4. Show further that $\boldsymbol{X}$ is $(q, q, \rho)$-hypercontractive for all $0 \leq \rho < 1$. (Hint: write $(a + \rho \boldsymbol{X}) = (1-\rho) a + \rho(a+\boldsymbol{X})$ and employ the triangle inequality for $\| \cdot \|_q$.)
    5. Show that if $\boldsymbol{X}$ is $(p,q,\rho)$-hypercontractive then it is also $(p, q, \rho')$-hypercontractive for all $0 \leq \rho' < \rho$. (Hint: use the previous part along with part (a).)
  11. Let $\boldsymbol{X}$ be a (nonconstant) $(2,4,\rho)$-hypercontractive random variable. The goal of this exercise is to show the following anticoncentration result: For all $\theta \in {\mathbb R}$ and $0 < t < 1$, \[ \mathop{\bf Pr}[|\boldsymbol{X} - \theta| > t \|\boldsymbol{X}\|_2] \geq (1-t^2)^2\rho^4. \]
    1. Reduce to the case $\|\boldsymbol{X}\|_2 = 1$.
    2. Letting $\boldsymbol{Y} = (\boldsymbol{X} - \theta)^2$, show that $\mathop{\bf E}[\boldsymbol{Y}] = 1+\theta^2$ and $\mathop{\bf E}[\boldsymbol{Y}^2] \leq (\rho^{-2} + \theta^2)^2$.
    3. Using the Paley–Zygmund inequality, show that \[ \mathop{\bf Pr}[|\boldsymbol{X} - \theta| > t] \geq \left(\frac{\rho^2(1-t^2) + \rho^2 \theta^2}{1+\rho^2\theta^2}\right)^2. \]
    4. Show that the right-hand side above is minimized for $\theta = 0$, thereby completing the proof.
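The steps above can be grid-checked for the simplest example, a uniform $\pm 1$ bit, which is $(2,4,1/\sqrt{3})$-hypercontractive (so $\rho^4 = 1/9$). This sketch is an illustration, not part of the original exercise:

```python
# Grid check for X a uniform +/-1 bit (rho = 1/sqrt(3)): part (b)'s moment
# computations for Y = (X - theta)^2, and the final anticoncentration bound
# Pr[|X - theta| > t] >= (1 - t^2)^2 * rho^4  (note ||X||_2 = 1).
import math

rho = 1 / math.sqrt(3)
violations = 0
for ti in range(1, 100):              # t in (0, 1)
    t = ti / 100
    for thi in range(-300, 301):      # theta in [-3, 3]
        theta = thi / 100
        EY = 0.5 * ((1 - theta) ** 2 + (-1 - theta) ** 2)    # E[(X-theta)^2]
        EY2 = 0.5 * ((1 - theta) ** 4 + (-1 - theta) ** 4)   # E[(X-theta)^4]
        if abs(EY - (1 + theta ** 2)) > 1e-9:
            violations += 1
        if EY2 > (rho ** -2 + theta ** 2) ** 2 + 1e-9:
            violations += 1
        pr = 0.5 * (abs(1 - theta) > t) + 0.5 * (abs(-1 - theta) > t)
        if pr < (1 - t ** 2) ** 2 * rho ** 4 - 1e-12:
            violations += 1
assert violations == 0
```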
  12. Let $m \in {\mathbb N}^+$ and let $f : \{-1,1\}^n \to [m]$ be “unbiased”, meaning $\mathop{\bf Pr}[f({\boldsymbol{x}}) = i] = \frac{1}{m}$ for all $i \in [m]$. Let $0 \leq \rho \leq 1$ and let $({\boldsymbol{x}}, \boldsymbol{y})$ be a $\rho$-correlated pair. Show that $\mathop{\bf Pr}[f({\boldsymbol{x}}) = f(\boldsymbol{y})] \leq (1/m)^{(1-\rho)/(1+\rho)}$. (More generally, you might show that this is an upper bound on $\mathbf{Stab}_\rho[f]$ for all $f : \{-1,1\}^n \to \triangle_m$ with $\mathop{\bf E}[f] = (\frac1m, \dots, \frac1m)$; see Exercise 8.29.)
    1. Let $f : \{-1,1\}^n \to {\mathbb R}$ have degree at most $k$. Prove that $\|f\|_2 \leq (1/\sqrt{p-1})^k \|f\|_p$ for any $1 \leq p \leq 2$ using the Hölder inequality strategy from our proof of the $(4/3,2)$-Hypercontractivity Theorem, together with Theorem 21.
    2. Verify that $\exp(\frac{2}{p}-1) < 1/\sqrt{p-1}$ for all $1 \leq p < 2$; i.e., the trickier Theorem 22 strictly improves on the bound from part (a).
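For the unbiased-range bound of Exercise 12, the simplest instance is $m = 2$ with $f(x) = x_1$ (relabeled to take values in $\{1,2\}$), where $\mathop{\bf Pr}[f({\boldsymbol{x}}) = f(\boldsymbol{y})] = \frac{1+\rho}{2}$ exactly. The following grid check (an illustration, not part of the original exercises) confirms the bound in this case:

```python
# Check (1 + rho)/2 <= (1/2)^((1 - rho)/(1 + rho)) on a grid of rho in [0, 1];
# the left side is Pr[f(x) = f(y)] for the dictator f(x) = x_1 with m = 2.
violations = 0
for i in range(101):
    rho = i / 100
    lhs = (1 + rho) / 2
    rhs = 0.5 ** ((1 - rho) / (1 + rho))
    if lhs > rhs + 1e-12:
        violations += 1
assert violations == 0
```

At $\rho = 0$ and $\rho = 1$ the two sides coincide, so the tolerance matters only at the endpoints.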
  13. Prove Theorem 22 in full generality. (Hint: let $\theta$ be the solution of $\frac12 = \frac{\theta}{p} + \frac{1-\theta}{2+\epsilon}$. You will need to show that $\frac{1-\theta}{2\theta} = (\frac2p-1)\frac{1}{\epsilon} + (\frac1p - \frac12)$.)
  14. As mentioned, it’s possible to deduce the $(2,q)$-Hypercontractivity Theorem from the $n = 1$ case using induction by derivatives. From this one can also obtain the $(p,2)$-Hypercontractivity Theorem via Proposition 19. Employing the notation ${\boldsymbol{x}} = ({\boldsymbol{x}}', {\boldsymbol{x}}_n)$, $\mathrm{T} = \mathrm{T}_{1/\sqrt{q-1}}$, $\boldsymbol{d} = \mathrm{D}_nf({\boldsymbol{x}}')$, and $\boldsymbol{e} = \mathrm{E}_nf({\boldsymbol{x}}')$, fill in details and justifications for the following proof sketch: \begin{multline*} \|\mathrm{T}_{1/\sqrt{q-1}} f\|_q^2 = \mathop{\bf E}_{{\boldsymbol{x}}'} \Bigl[\mathop{\bf E}_{{\boldsymbol{x}}_n}\bigl[|\mathrm{T} \boldsymbol{e} + (1/\sqrt{q-1}) {\boldsymbol{x}}_n \mathrm{T} \boldsymbol{d}|^q\bigr]\Bigr]^{2/q} \leq \mathop{\bf E}_{{\boldsymbol{x}}'} \bigl[((\mathrm{T} \boldsymbol{e})^2 + (\mathrm{T} \boldsymbol{d})^2)^{q/2}\bigr]^{2/q} \\ = \|(\mathrm{T} \boldsymbol{e})^2 + (\mathrm{T} \boldsymbol{d})^2\|_{q/2} \leq \|(\mathrm{T} \boldsymbol{e})^2\|_{q/2} + \|(\mathrm{T} \boldsymbol{d})^2\|_{q/2} = \|\mathrm{T} \boldsymbol{e}\|_{q}^2 + \|\mathrm{T} \boldsymbol{d}\|_{q}^2 \leq \|\boldsymbol{e}\|_2^2 + \|\boldsymbol{d}\|_2^2 = \|f\|_2^2. \end{multline*}
  15. Deduce the $p < 2 < q$ cases of the Hypercontractivity Theorem from the $(2,q)$- and $(p,2)$-Hypercontractivity Theorems. (Hint: use the semigroup property of $\mathrm{T}_\rho$, Exercise 2.27.)
  16. Let $f : \{-1,1\}^n \to \{0,1\}$ have $\mathop{\bf E}[f] = \alpha$.

    1. Show that $\mathbf{W}^{1}[f] \leq \frac{1}{\rho}(\alpha^{2/(1+\rho)} – \alpha^2)$ for any $0 < \rho \leq 1$.
    2. Deduce the sharp Level-1 Inequality $\mathbf{W}^{1}[f] \leq 2\alpha^2 \ln(1/\alpha)$. (Hint: take the limit $\rho \to 0^+$.)
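The sharp Level-1 Inequality of part (b) can be verified exhaustively in a tiny case; this check (an illustration, not part of the original exercise) runs over every nonconstant $f : \{-1,1\}^3 \to \{0,1\}$:

```python
# Exhaustive check of W^1[f] <= 2 alpha^2 ln(1/alpha) over all nonconstant
# f : {-1,1}^3 -> {0,1}, with W^1[f] = sum_i fhat({i})^2 computed exactly.
import itertools, math

n = 3
pts = list(itertools.product([-1, 1], repeat=n))
violations = 0
for vals in itertools.product([0, 1], repeat=len(pts)):
    alpha = sum(vals) / len(pts)
    if alpha in (0.0, 1.0):
        continue                      # skip constant functions
    W1 = sum((sum(v * x[i] for v, x in zip(vals, pts)) / len(pts)) ** 2
             for i in range(n))
    if W1 > 2 * alpha ** 2 * math.log(1 / alpha) + 1e-12:
        violations += 1
assert violations == 0
```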
  17. In this exercise you will prove the second statement of the Level-$k$ Inequalities.
    1. Show that choosing $k = k_\epsilon$ in the theorem yields \[ \mathbf{W}^{\leq k_\epsilon}[f] \leq \alpha^{2\epsilon - (2-2\epsilon)\ln(1/(1-\epsilon))}. \]
    2. Show that $2\epsilon - (2-2\epsilon)\ln(1/(1-\epsilon)) \geq \epsilon^2$ for all $0 \leq \epsilon \leq 1$.
  18. Show that the KKL Theorem fails for functions $f : \{-1,1\}^n \to [-1,1]$, even under the assumption $\mathop{\bf Var}[f] \geq \Omega(1)$. (Hint: $f(x) = \mathrm{trunc}_{[-1,1]}(\frac{x_1 + \cdots + x_n}{\sqrt{n}})$.)
    1. Show that $\mathcal{C} = \{f : \{-1,1\}^n \to \{-1,1\} \mid \mathbf{I}[f] \leq O(\sqrt{\log n})\}$ is learnable from queries to any constant error $\epsilon > 0$ in time $\mathrm{poly}(n)$. (Hint: Theorem 28.)
    2. Show that $\mathcal{C} = \{\text{monotone } f : \{-1,1\}^n \to \{-1,1\} \mid \mathbf{I}[f] \leq O(\sqrt{\log n})\}$ is learnable from random examples to any constant error $\epsilon > 0$ in time $\mathrm{poly}(n)$.
    3. Show that $\mathcal{C} = \{\text{monotone } f : \{-1,1\}^n \to \{-1,1\} \mid {\mathrm{DT}_{\mathrm{size}}}(f) \leq \mathrm{poly}(n)\}$ is learnable from random examples to any constant error $\epsilon > 0$ in time $\mathrm{poly}(n)$. (Hint: Exercise 8.39 and the OS Inequality.)
  19. Deduce the following generalization of the $(2,q)$-Hypercontractivity Theorem: Let $f : \{-1,1\}^n \to {\mathbb R}$, $q \geq 2$, and assume $0 \leq \rho \leq 1$ satisfies $\rho^\lambda \leq 1/\sqrt{q-1}$ for some $0 \leq \lambda \leq 1$. Then \[ \|\mathrm{T}_\rho f\|_q \leq \|\mathrm{T}_\rho f\|_2^{1-\lambda} \|f\|_2^\lambda. \] (Hint: show $\|\mathrm{T}_\rho f\|_q^2 \leq \sum_S (\rho^{2|S|}\widehat{f}(S)^2)^{1-\lambda} \cdot (\widehat{f}(S)^2)^{\lambda}$ and use Hölder.)
  20. Let $f : \{-1,1\}^n \to [-1,1]$, let $0 \leq \epsilon \leq 1$, and assume $q \geq 2+2\epsilon$. Show that \[ \|\mathrm{T}_{1-\epsilon} f\|_q^q \leq \|\mathrm{T}_{\frac{1}{\sqrt{1+2\epsilon}}} f\|_q^q \leq (\|f\|_2^2)^{1+\epsilon}. \]
  21. Recall the Gaussian quadrant probability $\Lambda_\rho(\mu)$ defined in Exercise 5.33 by $\Lambda_\rho(\mu) = \mathop{\bf Pr}[\boldsymbol{z}_1 > t, \boldsymbol{z}_2 > t]$, where $\boldsymbol{z}_1, \boldsymbol{z}_2$ are standard Gaussians with correlation $\mathop{\bf E}[\boldsymbol{z}_1\boldsymbol{z}_2] = \rho$ and $t$ is defined by $\overline{\Phi}(t) = \mu$. The goal of this exercise is to show that for fixed $0 < \rho < 1$ we have the estimate \begin{equation} \label{eqn:gaussquad-goal} \Lambda_\rho(\mu) = \widetilde{\Theta}(\mu^{\frac{2}{1+\rho}}) \end{equation} as $\mu \to 0$. In light of Exercise 5.33, this will show that the Small-Set Expansion Theorem for the $\rho$-stable hypercube graph is essentially sharp due to the example of Hamming balls of volume $\mu$.

    1. First let’s do an imprecise “heuristic” calculation. We have $\mathop{\bf Pr}[\boldsymbol{z}_1 > t] = \mathop{\bf Pr}[\boldsymbol{z}_1 \geq t] = \mu$ by definition. Conditioned on a Gaussian being at least $t$ it is unlikely to be much more than $t$, so let’s just pretend that $\boldsymbol{z}_1 = t$. Then the conditional distribution of $\boldsymbol{z}_2$ is $\rho t + \sqrt{1-\rho^2} \boldsymbol{y}$, where $\boldsymbol{y} \sim \mathrm{N}(0,1)$ is an independent Gaussian. Using the fact that $\overline{\Phi}(u) \sim \phi(u)/u$ as $u \to \infty$, deduce that $\mathop{\bf Pr}[\boldsymbol{z}_2 > t \mid \boldsymbol{z}_1 = t] = \widetilde{\Theta}(\mu^{\frac{1-\rho}{1+\rho}})$ and “hence” \eqref{eqn:gaussquad-goal} holds.
    2. Let’s now be rigorous. Recall that we are treating $0 < \rho < 1$ as fixed and letting $\mu \to 0$ (hence $t \to \infty$). Let $\phi_\rho(z_1,z_2)$ denote the joint pdf of $\boldsymbol{z}_1, \boldsymbol{z}_2$ so that \[ \Lambda_\rho(\mu) = \int_{t}^\infty \int_t^\infty \phi_{\rho}(z_1, z_2)\, dz_1\,dz_2. \] Derive the following similar-looking integral: \begin{equation} \label{eqn:gauss-quad-extra} \int_{t}^\infty \int_t^\infty (z_2 - \rho z_1)(z_1 - \rho t) \phi_{\rho}(z_1, z_2)\, dz_1\,dz_2 = \frac{(1-\rho^2)^{3/2}}{2\pi} \exp\left(-\frac{2}{1+\rho} \frac{t^2}{2}\right) \end{equation} and show that the right-hand side is $\widetilde{\Theta}(\mu^{\frac{2}{1+\rho}})$.
    3. Show that \[ \mathop{\bf Pr}\left[\boldsymbol{z}_1 > \tfrac{t-1}{\rho}\right] = \int_{\frac{t-1}{\rho}}^\infty \phi(z_1)\,dz_1 = \widetilde{\Theta}(\mu^{\frac{1}{\rho^2}}) = o(\mu^{\frac{2}{1+\rho}}). \]
    4. Deduce \eqref{eqn:gaussquad-goal}. (Hint: try to arrange that the extraneous factors $(z_2 - \rho z_1)$, $(z_1 - \rho t)$ in \eqref{eqn:gauss-quad-extra} are both at least $1$.)
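The closed form in part (b) can be checked numerically before deriving it; the following midpoint-rule integration (an illustration, not part of the original exercise; the values $\rho = 0.5$, $t = 1$ and the truncation at $t+8$ are arbitrary) agrees with the right-hand side to well under 1%:

```python
# Numerical check of the identity: the integral of
# (z2 - rho*z1)(z1 - rho*t) * phi_rho(z1, z2) over [t, inf)^2 equals
# (1 - rho^2)^{3/2}/(2 pi) * exp(-(2/(1+rho)) * t^2/2).
import math

rho, t = 0.5, 1.0
N, width = 800, 8.0                    # midpoint rule on [t, t + 8]^2
h = width / N
c = 1 / (2 * math.pi * math.sqrt(1 - rho ** 2))

total = 0.0
for i in range(N):
    z1 = t + (i + 0.5) * h
    for j in range(N):
        z2 = t + (j + 0.5) * h
        pdf = c * math.exp(-(z1 * z1 - 2 * rho * z1 * z2 + z2 * z2)
                           / (2 * (1 - rho ** 2)))
        total += (z2 - rho * z1) * (z1 - rho * t) * pdf * h * h

closed = ((1 - rho ** 2) ** 1.5 / (2 * math.pi)
          * math.exp(-(2 / (1 + rho)) * t * t / 2))
assert abs(total - closed) / closed < 1e-2
```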
  22. Let $f : \{-1,1\}^n \to \{-1,1\}$, let $J \subseteq [n]$, and write $\overline{J} = [n] \setminus J$. Define the coalitional influence of $J$ on $f$ to be \[ \widetilde{\mathbf{Inf}}_J[f] = \mathop{\bf Pr}_{\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}}[f_{J \mid \boldsymbol{z}} \text{ is not constant}]. \] Furthermore, for $b \in \{-1, +1\}$ define the coalitional influence towards $b$ of $J$ on $f$ to be \begin{align*} \widetilde{\mathbf{Inf}}_J^{\,b}[f] &= \mathop{\bf Pr}_{\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}}[f_{J \mid \boldsymbol{z}} \text{ can be made $b$}] - \mathop{\bf Pr}[f = b] \\ &= \mathop{\bf Pr}_{\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}}[f_{J \mid \boldsymbol{z}} \not \equiv -b] - \mathop{\bf Pr}[f = b]. \end{align*} For brevity, we’ll sometimes write $\widetilde{\mathbf{Inf}}_J^{\,\pm}[f]$ rather than $\widetilde{\mathbf{Inf}}_J^{\,\pm 1}[f]$.
    1. Show that for coalitions of size $1$ we have $\mathbf{Inf}_i[f] = \widetilde{\mathbf{Inf}}_{\{i\}}[f] = 2\widetilde{\mathbf{Inf}}_{\{i\}}^{\,\pm}[f]$.
    2. Show that $0 \leq \widetilde{\mathbf{Inf}}_J^{\,\pm}[f] \leq 1$.
    3. Show that $\widetilde{\mathbf{Inf}}_J[f] = \widetilde{\mathbf{Inf}}_J^{\,+}[f] + \widetilde{\mathbf{Inf}}_J^{\,-}[f]$.
    4. Show that if $f$ is monotone then \[ \widetilde{\mathbf{Inf}}^{\,b}_J[f] = \mathop{\bf Pr}[f_{\overline{J} \mid (b, \dots, b)} = b] – \mathop{\bf Pr}[f = b]. \]
    5. Show that $\widetilde{\mathbf{Inf}}_J[\chi_{[n]}] = 1$ for all $J \neq \emptyset$.
    6. Supposing we write $t = |J|/\sqrt{n}$, show that $\widetilde{\mathbf{Inf}}^{\,\pm}_J[\mathrm{Maj}_n] = \Phi(t) - \tfrac{1}{2} \pm o(1)$ and hence $\widetilde{\mathbf{Inf}}_J[\mathrm{Maj}_n] = 2\Phi(t) - 1 \pm o(1)$. Thus $\widetilde{\mathbf{Inf}}_J[\mathrm{Maj}_n] = o(1)$ if $|J| = o(\sqrt{n})$ and $\widetilde{\mathbf{Inf}}_J[\mathrm{Maj}_n] = 1 - o(1)$ if $|J| = \omega(\sqrt{n})$. (Hint: Central Limit Theorem.)
    7. Show that $\max \{\widetilde{\mathbf{Inf}}_J^{\mathsf{True}}[\mathrm{Tribes}_n] : |J| \leq \log n\} = 1/2 + \Theta(\frac{\log n}{n})$. On the other hand, show that $\max \{\widetilde{\mathbf{Inf}}_J^{\mathsf{False}}[\mathrm{Tribes}_n] : |J| \leq k\} \leq k \cdot O(\frac{\log n}{n})$. Deduce that for some positive constant $c$ we have $\max \{\widetilde{\mathbf{Inf}}_J[\mathrm{Tribes}_n] : |J| \leq c n/\log n\} \leq .51$. (Hint: refer to Proposition 4.12.)
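The definitions above are easy to implement by brute force; this sketch (an illustration, not part of the original exercise; coordinates are 0-indexed) computes the coalitional influences of $\mathrm{Maj}_5$ for the coalition $J = \{1,2\}$ and verifies the identity of part (c):

```python
# Brute-force coalitional influences of Maj_5 for J = {0, 1} (0-indexed),
# checking Inf~_J[f] = Inf~_J^+[f] + Inf~_J^-[f].
import itertools

def maj5(x):
    return 1 if sum(x) > 0 else -1

J, rest = (0, 1), (2, 3, 4)
Jassigns = list(itertools.product([-1, 1], repeat=len(J)))
zs = list(itertools.product([-1, 1], repeat=len(rest)))

not_const = can_plus = can_minus = 0
for z in zs:
    vals = set()
    for y in Jassigns:
        x = [0] * 5
        for c, v in zip(J, y):
            x[c] = v
        for c, v in zip(rest, z):
            x[c] = v
        vals.add(maj5(x))
    not_const += (len(vals) == 2)     # restriction is not constant
    can_plus += (1 in vals)           # restriction can be made +1
    can_minus += (-1 in vals)         # restriction can be made -1

pr_f_plus = sum(maj5(x) == 1
                for x in itertools.product([-1, 1], repeat=5)) / 32
inf = not_const / len(zs)
inf_plus = can_plus / len(zs) - pr_f_plus
inf_minus = can_minus / len(zs) - (1 - pr_f_plus)
assert abs(inf - (inf_plus + inf_minus)) < 1e-12
```

For this coalition the restriction is nonconstant exactly when the three remaining bits sum to $\pm 1$, so $\widetilde{\mathbf{Inf}}_J[\mathrm{Maj}_5] = 3/4$.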
  23. Show that the exponential dependence on $\mathbf{I}[f]$ in Friedgut’s Junta Theorem is necessary. (Hint: Tribes.)
  24. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a monotone function with $\mathop{\bf Var}[f] \geq \delta > 0$, and let $0 < \epsilon < 1/2$ be given.

    1. Improve Proposition 27 as follows: Show that there exists $J \subseteq [n]$ with $|J| \leq O(\log \frac{1}{\epsilon \delta}) \cdot \frac{n}{\log n}$ such that $\mathop{\bf E}[f_{\overline{J} \mid (1, \dots, 1)}] \geq 1-\epsilon$. (Hint: how many bribes are required to move $f$’s mean outside the interval $[1-2\eta, 1-\eta]$?)
    2. Show that there exists $J \subseteq [n]$ with $|J| \leq O(\log \frac{1}{\epsilon \delta}) \cdot \frac{n}{\log n}$ such that $\widetilde{\mathbf{Inf}}_J[f] \geq 1 - \epsilon$. (Hint: use Exercise 22(d) and take the union of two influential sets.)
  25. Let $f : \{-1,1\}^n \to \{-1,1\}$.
    1. Let $f^* : \{-1,1\}^n \to \{-1,1\}$ be the “monotonization” of $f$ as defined in Exercise 2.45. Show that $\widetilde{\mathbf{Inf}}_J^{\,b}[f^*] \leq \widetilde{\mathbf{Inf}}_J^{\,b}[f]$ for all $J \subseteq [n]$ and $b \in \{-1,1\}$, and hence also $\widetilde{\mathbf{Inf}}_J[f^*] \leq \widetilde{\mathbf{Inf}}_J[f]$.
    2. Let $\mathop{\bf Var}[f] \geq \delta > 0$ and let $0 < \epsilon < 1/2$ be given. Show that there exists $J \subseteq [n]$ with $|J| \leq O(\log \frac{1}{\epsilon \delta}) \cdot \frac{n}{\log n}$ such that $\widetilde{\mathbf{Inf}}_J[f] \geq 1 - \epsilon$. (Hint: combine part (a) with Exercise 24(b).)
  26. Establish the general-variance case of the KKL Edge-Isoperimetric Theorem. (Hint: you’ll need to replace Equation (2) in Section 9.6 with \[ 3 \sum_{|S| \geq 1} (1/3)^{|S|} \widehat{f}(S)^2 \geq 3 \mathop{\bf Var}[f] \cdot 3^{-\mathbf{I}[f]/\mathop{\bf Var}[f]}. \] Use the same convexity argument, but applied to the random variable $\boldsymbol{S}$ which takes on each outcome $\emptyset \neq S \subseteq [n]$ with probability $\widehat{f}(S)^2/\mathop{\bf Var}[f]$.)
  27. The goal of this exercise is to attain the best known constant factor in the statement of the KKL Theorem.

    1. By using Corollary 25 in place of Corollary 12, obtain the following generalization of the KKL Edge-Isoperimetric Theorem: For any (nonconstant) $f : \{-1,1\}^n \to \{-1,1\}$ and $0 < \delta < 1$, \[ \mathbf{MaxInf}[f] \geq \left(\tfrac{1+\delta}{1-\delta}\right)^{\frac{1}{\delta}} \left(\tfrac{1}{\mathbf{I}'[f]}\right)^{\frac{1}{\delta}} \cdot \left(\tfrac{1-\delta}{1+\delta}\right)^{\frac{1}{\delta}\mathbf{I}'[f]}, \] where $\mathbf{I}'[f]$ denotes $\mathbf{I}[f]/\mathop{\bf Var}[f]$. (Hint: write $\rho = \frac{1-\delta}{1+\delta}$.) Deduce that for any constant $C > e^2$ we have \[ \mathbf{MaxInf}[f] \geq \widetilde{\Omega}(C^{-\mathbf{I}'[f]}). \]
    2. More carefully, show that by taking $\delta = \frac{1}{2\mathbf{I}'[f]^{1/3}}$ we can achieve \[ \mathbf{MaxInf}[f] \geq \exp(-2\mathbf{I}'[f]) \cdot e^2 \cdot \left(\tfrac{1}{\mathbf{I}'[f]}\right)^{2\mathbf{I}'[f]^{1/3}} \cdot \exp(-\tfrac14 \mathbf{I}'[f]^{1/3}). \] (Hint: establish $\left(\tfrac{1-\delta}{1+\delta}\right)^{\frac{1}{\delta}} \geq \exp(-2-\delta^2)$ for $0 < \delta \leq 1/2$.)
    3. By distinguishing whether or not $\mathbf{I}'[f] \geq \tfrac{1}{2} (\ln n - \sqrt{\log n})$, establish the following form of the KKL Theorem: For any $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{MaxInf}[f] \geq \tfrac{1}{2} \mathop{\bf Var}[f] \cdot \frac{\ln n}{n} (1-o_n(1)). \]
  28. Establish the claim in Remark 29.
  29. Show that if $f : \{-1,1\}^n \to \{-1,1\}$ is nonconstant then there exists $S \subseteq [n]$ with $0 < |S| \leq O(\mathbf{I}[f]/\mathop{\bf Var}[f])$ such that $\widehat{f}(S)^2 \geq \exp(-O(\mathbf{I}[f]^2/\mathop{\bf Var}[f]^2))$. (Hint: by mimicking Corollary 32's proof you should be able to establish the lower bound $\Omega(\mathop{\bf Var}[f]) \cdot \exp(-O(\mathbf{I}[f]^2/\mathop{\bf Var}[f]^2))$. To show that this quantity is also $\exp(-O(\mathbf{I}[f]^2/\mathop{\bf Var}[f]^2))$, use Theorem 2.38.)
  30. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a nonconstant monotone function. Improve on Corollary 32 by showing that there exists $S \neq \emptyset$ with $\widehat{f}(S)^2 \geq \exp(-O(\mathbf{I}[f]/\mathop{\bf Var}[f]))$. (Hint: you can even get $|S| \leq 1$; use the Strengthened KKL Theorem and Proposition 2.20.)
  31. Let $f : \{-1,1\}^n \to {\mathbb R}$. Prove that $\|f\|_4 \leq \mathrm{sparsity}(\widehat{f})^{1/4} \|f\|_2$.
  32. Let $q = 2r$ be a positive even integer, let $\rho = 1/\sqrt{q-1}$, and let $f_1, \dots, f_r : \{-1,1\}^n \to {\mathbb R}$. Generalize the $(2,q)$-Hypercontractivity Theorem by showing that \[ \mathop{\bf E}\left[\prod_{i=1}^r (\mathrm{T}_\rho f_i)^2\right] \leq \prod_{i=1}^r \mathop{\bf E}[f_i^2]. \] (Hint: Hölder’s inequality.)
  33. In this exercise you will give a simpler, stronger version of Theorem 17 under the assumption that $q = 2r$ is a positive even integer.

    1. Using the idea of Proposition 16, show that if ${\boldsymbol{x}}$ is a uniformly random $\pm 1$ bit then ${\boldsymbol{x}}$ is $(2,q,\rho)$-hypercontractive if and only if $\rho \leq 1/\sqrt{q-1}$.
    2. Show the same statement for any random variable ${\boldsymbol{x}}$ satisfying $\mathop{\bf E}[{\boldsymbol{x}}^2] = 1$ and \[ \mathop{\bf E}[{\boldsymbol{x}}^{2j-1}] = 0, \quad \mathop{\bf E}[{\boldsymbol{x}}^{2j}] \leq (2r-1)^j \frac{\binom{r}{j}}{\binom{2r}{2j}}\quad \text{for all integers $1 \leq j \leq r$.} \]
    3. Show that none of the even moment conditions in part (b) can be relaxed.
  34. Let $q = 2r$ be a positive even integer and let $f : \{-1,1\}^n \to {\mathbb R}$ be homogeneous of degree $k \geq 1$ (i.e., $f = f^{=k}$). The goal of this problem is to improve slightly on the generalized Bonami Lemma, Theorem 21.
    1. Show that \begin{equation} \label{eqn:super-bonami-1} \mathop{\bf E}[f^q] = \sum \widehat{f}(S_1) \cdots \widehat{f}(S_q) \leq \sum |\widehat{f}(S_1)| \cdots |\widehat{f}(S_q)|, \end{equation} where the sum is over all tuples $S_1, \dots, S_q$ satisfying $S_1 \triangle \cdots \triangle S_q = \emptyset$.
    2. Let $G$ denote the complete $q$-partite graph over vertex sets $V_1, \dots, V_q$, each of cardinality $k$. Let $\mathcal{M}$ denote the set of all perfect matchings in $G$. Show that the right-hand side of \eqref{eqn:super-bonami-1} is equal to \begin{equation} \label{eqn:super-bonami-2} \frac{1}{(k!)^q}\sum_{M \in \mathcal{M}}\ \sum_{\ell : M \to [n]} |\widehat{f}(T_1(M,\ell))| \cdots |\widehat{f}(T_q(M,\ell))|, \end{equation} where $T_j(M,\ell)$ denotes $\bigcup \{\ell(e) : e \in M, e \cap V_j \neq \emptyset\}$.
    3. Show that \eqref{eqn:super-bonami-2} is equal to \begin{equation} \label{eqn:super-bonami-3} \frac{1}{(rk)! \cdot (k!)^q}\sum_{\overline{M} \in \overline{\mathcal{M}}}\ \sum_{i_1 = 1}^n \sum_{i_2 = 1}^n \cdots \sum_{i_{rk} = 1}^n |\widehat{f}(U_1(\overline{M},i_1, \dots, i_{rk}))| \cdots |\widehat{f}(U_q(\overline{M},i_1, \dots, i_{rk}))|, \end{equation} where $\overline{\mathcal{M}}$ is the set of ordered perfect matchings of $G$, and now $U_j(\overline{M}, i_1, \dots, i_{rk})$ denotes $\bigcup \{i_t : \overline{M}(t) \cap V_j \neq \emptyset\}$.
    4. Show that for any $\overline{M} \in \overline{\mathcal{M}}$ we have \begin{multline*} \sum_{i_1 = 1}^n \sum_{i_2 = 1}^n \cdots \sum_{i_{rk} = 1}^n |\widehat{f}(U_1(\overline{M},i_1, \dots, i_{rk}))| \cdots |\widehat{f}(U_q(\overline{M},i_1, \dots, i_{rk}))| \\ \leq \left(\sum_{j_1, \dots, j_k = 1}^n \widehat{f}(\{j_1, \dots, j_k\})^2\right)^r \end{multline*} (Hint: use Cauchy–Schwarz $rk$ times.)
    5. Deduce that $ \|f\|_q^q \leq \frac{1}{(rk)! \cdot (k!)^q} \cdot |\overline{\mathcal{M}}| \cdot (k!)^r \|f\|_2^{2r} $ and hence \[ \|f\|_q \leq \frac{|\mathcal{M}|^{1/q}}{\sqrt{k!}} \|f\|_2. \]
  35. The goal of this problem is to estimate $|\mathcal{M}|$ from Exercise 34 so as to give a concrete improvement on Theorem 21.
    1. Show that for $q = 4$, $k = 2$ we have $|\mathcal{M}| = 60$.
    2. Show that $|\mathcal{M}| \leq (qk-1)!!$. (Hint: show that $(qk-1)!!$ is the number of perfect matchings in the complete graph on $qk$ vertices.) Deduce $\|f\|_q \leq \sqrt{q}^k \|f\|_2$.
    3. Show that $|\overline{\mathcal{M}}| \leq (\frac{2r-1}{r})^{rk} (rk)!^2$, and thereby deduce \[ \|f\|_q \leq C_{q,k} \cdot \sqrt{q-1}^k \|f\|_2, \] where $C_{q,k} = \left(\frac{(rk)!}{k!^r r^{rk}}\right)^{1/q}$. (Hint: Suppose that the first $t$ edges of the perfect matching have been chosen; show that there are at most $(\frac{2r-1}{r})(rk - t)^2$ choices for the next edge. The worst case is if the vertices used up so far are spread equally among the $q$ parts.)
    4. Give a simple proof that $C_{q,k} \leq 1$, thereby obtaining Theorem 21.
    5. Show that in fact $C_{q,k} = \Theta(1) \cdot k^{-1/4 + 1/(2q)}$. (Hint: Stirling’s formula.)
    6. Can you obtain the improved estimate \[ \frac{|\mathcal{M}|^{1/q}}{\sqrt{k!}} = \Theta_q(1) \cdot k^{-1/4} \cdot \sqrt{q-1}^k? \] (Hint: first exactly count — then estimate — the number of perfect matchings with exactly $e_{ij}$ edges between parts $i$ and $j$. Then sum your estimate over a range of the most likely values for $e_{ij}$.)
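The count in part 1 can be confirmed by brute force; this sketch (an illustration, not part of the original exercises) enumerates the perfect matchings of the complete 4-partite graph with parts of size 2:

```python
# Count perfect matchings of the complete q-partite graph with q = 4 parts of
# size k = 2 (8 vertices; edges join vertices in different parts).  The claim
# in part 1 is that this count is 60.
def count_matchings(verts, part):
    if not verts:
        return 1
    v, rest = verts[0], verts[1:]
    total = 0
    for i, u in enumerate(rest):
        if part[u] != part[v]:        # only cross-part edges exist in G
            total += count_matchings(rest[:i] + rest[i + 1:], part)
    return total

part = [0, 0, 1, 1, 2, 2, 3, 3]       # vertex -> part label
m = count_matchings(list(range(8)), part)
assert m == 60
```

The same number follows by inclusion-exclusion over the four forbidden intra-part edges in $K_8$: $105 - 4\cdot 15 + 6\cdot 3 - 4 + 1 = 60$.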
