
Chapter 6 exercises
 Let $\boldsymbol{f}$ be chosen as in Proposition 1. Compute $\mathop{\bf Var}[\widehat{\boldsymbol{f}}(S)]$ for each $S \subseteq [n]$.
 Prove Fact 8.
 Show that any nonconstant $k$-junta $f$ has $\mathbf{Inf}_i^{(1-\delta)}[f] \geq (1/2 - \delta/2)^{k-1}/k$ for at least one coordinate $i$.
 Let $\varphi : {\mathbb F}_2^n \to {\mathbb R}^{\geq 0}$ be an $\epsilon$-biased density. For each $d \in {\mathbb N}^+$ show that the $d$-fold convolution $\varphi^{* d}$ is an $\epsilon^d$-biased density.

 Show that if $f : \{-1,1\}^n \to {\mathbb R}$ has $\epsilon$-small influences then it is $\sqrt{\epsilon}$-regular.
 Show that for all even $n$ there exists $f : \{-1,1\}^n \to \{-1,1\}$ which is $2^{-n/2}$-regular but does not have $\epsilon$-small influences for any $\epsilon < 1/2$.
 Show that there is a function $f : \{-1,1\}^n \to \{-1,1\}$ with $((1-\delta)^{n-1}, \delta)$-small stable influences which is not $\epsilon$-regular for any $\epsilon < 1$.
 Verify that the function $f(x) = x_0 \mathrm{Maj}_n(x_1, \dots, x_n)$ from Examples 10 satisfies $\mathbf{Inf}_0^{(1-\delta)}[f] = \mathbf{Stab}_{1-\delta}[\mathrm{Maj}_n]$ for $\delta \in (0,1)$, and thus does not have $(\epsilon, \delta)$-small stable influences unless $\epsilon \geq 1 - \sqrt{\delta}$.
 Show that the function $f : \{-1,1\}^{n+1} \to \{-1,1\}$ from part (d) is $\frac{1}{\sqrt{n}}$-regular.
 Suppose $f : \{-1,1\}^n \to {\mathbb R}$ has $(\epsilon,\delta)$-small stable influences. Show that $f$ is $(\eta,k)$-regular for $\eta = \sqrt{\epsilon/(1-\delta)^{k-1}}$.
 Show that $f$ has $(\epsilon,1)$-small stable influences if and only if $f$ is $(\sqrt{\epsilon}, 1)$-regular.
 Let $f : \{-1,1\}^n \to \{-1,1\}$ be monotone. Show that if $f$ is $(\epsilon,1)$-regular then $f$ is $\epsilon$-regular and has $\epsilon$-small influences.

 Let $f : \{-1,1\}^n \to {\mathbb R}$. Let $(J,\overline{J})$ be a partition of $[n]$ and let $z \in \{-1,1\}^{\overline{J}}$. For $\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}$ uniformly random, give a formula for $\mathop{\bf Var}_{\boldsymbol{z}}[\mathop{\bf E}[f_{J \mid \boldsymbol{z}}]]$ in terms of $f$'s Fourier coefficients. (Hint: direct application of Corollary 3.22.)
 Using the above formula and the probabilistic method, give an alternate proof of the second statement of Proposition 12.
 Let $\varphi : {\mathbb F}_2^n \to {\mathbb R}^{\geq 0}$ be the density corresponding to the uniform distribution on the support of $\mathrm{IP}_n : {\mathbb F}_2^n \to \{0,1\}$. Show that $\varphi$ is $\epsilon$-biased for $\epsilon = 2^{-n/2}/(1-2^{-n/2})$, but not for any smaller $\epsilon$.
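For small even $n$ this is easy to confirm by brute force (in fact every nonzero Fourier coefficient of $\varphi$ turns out to have the same magnitude, since $(-1)^{\mathrm{IP}_n}$ is bent); a minimal Python sketch, with all function names ours:

```python
# Brute-force check, for small even n, of the bias of the uniform density
# on the support of IP_n.  Function names here are ours.
from itertools import product

def ip(x):
    # IP_n(x) = x_1 x_2 + x_3 x_4 + ... (mod 2), n even
    return sum(x[i] * x[i + 1] for i in range(0, len(x), 2)) % 2

def fourier_magnitudes(n):
    # |phi-hat(gamma)| for every nonzero character gamma
    support = [x for x in product([0, 1], repeat=n) if ip(x) == 1]
    mags = []
    for gamma in product([0, 1], repeat=n):
        if any(gamma):
            s = sum((-1) ** (sum(g * xi for g, xi in zip(gamma, x)) % 2)
                    for x in support)
            mags.append(abs(s) / len(support))
    return mags

mags4 = fourier_magnitudes(4)
eps4 = 2 ** -2 / (1 - 2 ** -2)   # claimed bias for n = 4, i.e. 1/3
```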
 Prove Proposition 13.
 Compute the ${\mathbb F}_2$polynomial representation of the equality function $\mathrm{Equ}_n : \{0,1\}^n \to \{0,1\}$, defined by $\mathrm{Equ}_n(x) = 1$ if and only if $x_1 = x_2 = \cdots = x_n$.
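A brute-force harness is handy for checking a candidate answer here; the candidate hard-coded below is our own guess at the representation, included only to show how one can be tested:

```python
# Brute-force harness for a candidate F_2-polynomial representation of
# Equ_n.  The candidate Equ_n(x) = prod_{i >= 2} (1 + x_1 + x_i) mod 2
# is our own guess, not taken from the text.
from itertools import product

def equ(x):
    # Equ_n(x) = 1 iff all coordinates are equal
    return int(all(b == x[0] for b in x))

def candidate(x):
    out = 1
    for i in range(1, len(x)):
        out &= (1 + x[0] + x[i]) % 2   # factor is 1 iff x_i = x_1 over F_2
    return out
```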

 Let $f : \{0,1\}^n \to {\mathbb R}$ and let $q(x) = \sum_{S \subseteq [n]} c_S x^S$ be the (unique) multilinear polynomial representation of $f$ over ${\mathbb R}$. Show that $c_S = \sum_{R \subseteq S} (-1)^{|S| - |R|} f(R)$, where we identify $R \subseteq [n]$ with its $0$-$1$ indicator string. This formula is sometimes called Möbius inversion.
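The inversion formula can be sanity-checked numerically; a minimal sketch (helper names ours, arbitrary test values for $f$):

```python
# Sanity check of Möbius inversion: recover the multilinear coefficients
# c_S from f's values on 0/1 inputs, then confirm q reproduces f.
from itertools import product, combinations

n = 3
# arbitrary test function, chosen so the true coefficients are obvious
f = {x: x[0] + 2 * x[1] + 5 * x[1] * x[2] for x in product([0, 1], repeat=n)}

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

def indicator(R):
    # identify R ⊆ [n] with its 0-1 indicator string
    return tuple(int(i in R) for i in range(n))

c = {S: sum((-1) ** (len(S) - len(R)) * f[indicator(R)] for R in subsets(S))
     for S in subsets(range(n))}

def q(x):
    # evaluate the recovered multilinear polynomial on a 0/1 input
    return sum(cS * all(x[i] for i in S) for S, cS in c.items())
```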
 Prove Proposition 21.
 (Cf. Lemma 3.5.) Let $f : {\mathbb F}_2^n \to {\mathbb F}_2$ be nonzero and suppose $\deg_{{\mathbb F}_2}(f) \leq k$. Show that $\mathop{\bf Pr}[f({\boldsymbol{x}}) \neq 0] \geq 2^{-k}$. (Hint: as in the similar Exercise 3.4, use induction on $n$.)
 Let $f : \{-1,1\}^n \to \{0,1\}$.
 Show that $\deg_{{\mathbb F}_2}(f) \leq \log(\mathrm{sparsity}(\widehat{f}))$. (Hint: you will need Exercise 3.6, Corollary 22, and Exercise 1.4.)
 Suppose $\widehat{f}$ is $2^{-k}$-granular. Show that $\deg_{{\mathbb F}_2}(f) \leq k$. (This is a stronger result than part (a), by Exercise 3.31.)
 Let $f : \{-1,1\}^n \to \{-1,1\}$ be bent, $n > 2$. Show that $\deg_{{\mathbb F}_2}(f) \leq n/2$. (Note that the upper bound $n/2+1$ follows from Exercise 12(b).)
 In this exercise you will prove Theorem 25.
 Suppose $p(x) = c_0 + c_S x^S + r(x)$ is a real multilinear polynomial over $x_1, \dots, x_n$ with $c_0, c_S \neq 0$, $|S| > \frac23 n$, and $|T| > \frac23 n$ for all monomials $x^T$ appearing in $r(x)$. Show that after expansion and multilinear reduction (meaning $x_i^2 \mapsto 1$), $p(x)^2$ contains the term $2c_0c_S x^S$.
 Deduce Theorem 25.
 In this exercise you will explore the sharpness of Siegenthaler’s Theorem and Theorem 25.
 For all $n$ and $k < n-1$, find an $f : \{0,1\}^n \to \{0,1\}$ which is $k$-resilient and has $\deg_{{\mathbb F}_2}(f) = n-k-1$.
 For all $n \geq 3$, find an $f : \{0,1\}^n \to \{0,1\}$ which is $1$st-order correlation immune and has $\deg_{{\mathbb F}_2}(f) = n-1$.
 For all $n$ divisible by $3$, find a biased $f : \{0,1\}^n \to \{0,1\}$ which is $(\frac23 n - 1)$th-order correlation immune.
 Prove Proposition 27.
 Bent functions come in pairs: show that if $f : {\mathbb F}_2^n \to \{-1,1\}$ is bent then $2^{n/2} \widehat{f}$ is also a bent function (with domain ${\widehat{{\mathbb F}_2^n}}$).
 Extend Proposition 29 to show that if $\pi$ is any permutation on ${\mathbb F}_2^n$ then $f(x,y) = \mathrm{IP}_{2n}(x, \pi(y)) g(y)$ is bent.
 Dickson's Theorem says the following: Any polynomial $p : {\mathbb F}_2^n \to {\mathbb F}_2$ of degree at most $2$ can be expressed as \begin{equation} \label{eqn:dickson} p(x) = \ell_0(x) + \sum_{j = 1}^k \ell_j(x) \ell_j'(x), \end{equation} where $\ell_0$ is an affine function and $\ell_1, \ell_1', \dots, \ell_k, \ell_k'$ are linearly independent linear functions. Here $k$ depends only on $p$ and is called the ``rank'' of $p$. Show that for $n$ even, $g : {\mathbb F}_2^n \to \{-1,1\}$ defined by $g(x) = \chi(p(x))$ is bent if and only if $k = n/2$, if and only if $g$ arises from $\mathrm{IP}_n$ as in Proposition 28.
 Without appealing to Dickson’s Theorem, prove that the complete quadratic $x \mapsto \sum_{1 \leq i < j \leq n} x_i x_j$ can be expressed as in \eqref{eqn:dickson}, with $k = \lfloor n/2 \rfloor$. (Hint: induction on $n$, with different steps depending on the parity of $n$.)
 Define $\mathrm{mod}_3 : \{-1,1\}^n \to \{0,1\}$ by $\mathrm{mod}_3(x) = 1$ if and only if $\sum_{j=1}^n x_j$ is divisible by $3$. Derive the Fourier expansion \[ \mathrm{mod}_3(x) = \tfrac13 + \tfrac23 (-1/2)^n \sum_{\substack{S \subseteq [n] \\ |S| \text{ even}}} (-1)^{(|S| \bmod 4)/2} \sqrt{3}^{\,|S|} x^S \] and conclude that $\mathrm{mod}_3$ is $\frac23(\frac{\sqrt{3}}{2})^n$-regular. (Hint: consider $\prod_{j=1}^n (-\tfrac{1}{2} + \frac{\sqrt{3}}{2} i x_j)$.)
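The expansion can be verified by brute force for small $n$; the sketch below takes the leading factor to be $(-\tfrac12)^n$, which is our reading of the sign convention:

```python
# Numerical check of the mod_3 Fourier expansion for small n, taking the
# leading factor to be (-1/2)^n (our reading of the sign convention).
from itertools import product, combinations
from math import isclose, sqrt

def mod3(x):
    return int(sum(x) % 3 == 0)

def expansion(x):
    n = len(x)
    total = 0.0
    for size in range(0, n + 1, 2):                 # even |S| only
        for S in combinations(range(n), size):
            xS = 1
            for i in S:
                xS *= x[i]
            total += (-1) ** ((size % 4) // 2) * sqrt(3) ** size * xS
    return 1 / 3 + (2 / 3) * (-0.5) ** n * total
```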
 In Theorem 30, show that given $\boldsymbol{r}, \boldsymbol{s}$, any fixed bit $\boldsymbol{y}_i$ can be obtained in deterministic $\mathrm{poly}(\ell)$ time.

 Slightly modify the construction in Theorem 30 to obtain a $(2^{-t} - 2^{-\ell})$-biased density. (Hint: arrange for $p_\gamma$ to have degree at most $n-1$.)
 Since ${\mathbb F}_{2^\ell}$ is a dimension-$\ell$ vector space over ${\mathbb F}_2$, it has some basis $v_1, \dots, v_\ell$. Suppose we modify the construction in Theorem 30 so that $\varphi$ is a density on ${\mathbb F}_2^{n\ell}$, with $\boldsymbol{y}_{ij} = \langle \mathrm{enc}(v_j \boldsymbol{r}^i), \mathrm{enc}(\boldsymbol{s}) \rangle$ for $i \in [n], j \in [\ell]$. Show that $\varphi$ remains $2^{-t}$-biased.
 Fix $\epsilon \in (0,1)$ and $n \in {\mathbb N}$. Let $A \subseteq {\mathbb F}_2^n$ be a randomly chosen multiset in which $\lceil C n/\epsilon^2 \rceil$ elements are included, independently and uniformly. Show that if $C$ is a large enough constant then $A$ is $\epsilon$-biased except with probability at most $2^{-n}$.
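The claim is probabilistic, but one can get a feel for it empirically; a small sketch (the constant $C = 8$ and the parameters are illustrative, not the ones the exercise requires):

```python
# Empirical look at the bias of a random multiset of F_2^n samples.
# Small n only; the constant C = 8 is illustrative, not from the text.
import random
from itertools import product

random.seed(0)                        # fixed seed for reproducibility
n, eps, C = 6, 0.5, 8
m = int(C * n / eps ** 2)             # ~ C n / eps^2 samples
A = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(m)]

def bias(A, gamma):
    # |E_{x ~ A}[(-1)^{<gamma, x>}]|
    return abs(sum((-1) ** (sum(g * a for g, a in zip(gamma, x)) % 2)
                   for x in A)) / len(A)

max_bias = max(bias(A, g) for g in product([0, 1], repeat=n) if any(g))
```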
 Consider the problem of computing the matrix multiplication $C = AB$, where $A, B \in {\mathbb F}_2^{n \times n}$. There is an algorithm [Sto10,Vas12] for solving this problem in time $O(n^{\omega})$, where $\omega < 2.373$; however, the algorithm is very complicated. Suppose you are given $A$, $B$, and the outcome $C'$ of running this algorithm; you want to test that indeed $C' = AB$.
 Give an algorithm using $n$ random bits and time $O(n^2)$ with the following property: if $C' = AB$ then the algorithm ``accepts'' with probability $1$; if $C' \neq AB$ then the algorithm ``accepts'' with probability at most $1/2$. (Hint: compute $C'{\boldsymbol{x}}$ and $AB{\boldsymbol{x}}$ for a random ${\boldsymbol{x}} \in {\mathbb F}_2^n$.)
 Show how to reduce the number of random bits used to $O(\log n)$ at the expense of making the false acceptance probability $2/3$, while keeping the running time $O(n^2)$. (You may use the fact that in Theorem 30, the time required to compute $\boldsymbol{y}$ given $\boldsymbol{r}$ and $\boldsymbol{s}$ is $n \cdot \mathrm{polylog}(\ell)$.)
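Part (a) is the classical Freivalds-style check; a minimal sketch over ${\mathbb F}_2$, with rows stored as integer bitmasks (an encoding choice of ours):

```python
# Freivalds-style verification over F_2 (a sketch): accept iff
# C'x == A(Bx) for a uniformly random x.  Matrices are lists of n
# integer bitmasks, bit i of row j being entry (j, i) -- our encoding.
import random

def parity(v):
    return bin(v).count("1") % 2

def matvec(M, x):
    # (Mx)_j = <row_j, x> over F_2, with vectors as bitmasks
    out = 0
    for j, row in enumerate(M):
        out |= parity(row & x) << j
    return out

def matmul(A, B):
    # Row j of AB is the XOR of the rows of B selected by row j of A.
    out = []
    for row in A:
        acc, k = 0, 0
        while row:
            if row & 1:
                acc ^= B[k]
            row >>= 1
            k += 1
        out.append(acc)
    return out

def freivalds_accepts(A, B, C, n, rng):
    x = rng.getrandbits(n)
    return matvec(C, x) == matvec(A, matvec(B, x))
```

With $C' = AB$ every $x$ is accepted; a single corrupted entry of $C'$ is caught with probability $1/2$ per trial, so independent repetitions drive the false-acceptance probability down exponentially.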
 Simplify the exposition and analysis of Theorem 32 and Corollary 33 in the case of $k=2$, and show that you can take $m$ to be one less (i.e., $m = \ell$).
 Consider the matrix $H' \in {\mathbb F}_{2^\ell}^{k \times n}$ constructed in Theorem 32, and suppose we delete all rows corresponding to even (nonzero) powers of the $\alpha_j$'s. Show that $H'$ retains the property that any sum of at most $k$ columns of $H'$ is nonzero in ${\mathbb F}_{2^\ell}^k$. (Hint: prove and use that $(\sum_j \beta_j)^2 = \sum_j \beta_j^2$ for any sequence of $\beta_j \in {\mathbb F}_{2^\ell}$.) Deduce that the cardinality of $A$ in Corollary 33 can be decreased to $2(2n)^{\lfloor k/2 \rfloor}$.
 Let $A \subseteq {\mathbb F}_2^n$ be a multiset and suppose that the probability density $\varphi_A$ is $k$-wise independent. In this exercise you will prove the lower bound $|A| \geq \Omega(n^{\lfloor k/2 \rfloor})$ (for $k$ constant).
 Suppose $\mathcal{F} \subseteq 2^{[n]}$ is a collection of subsets of $[n]$ such that $|S \cup T| \leq k$ for all $S,T \in \mathcal{F}$. For each $S \in \mathcal{F}$ define $\chi_S^A \in \{-1,1\}^{A} \subseteq {\mathbb R}^{A}$ to be the real vector with entries indexed by $A$ whose $a$th entry is $a^{S} = \prod_{i \in S} a_i$. Show that the set of vectors $\{\frac{1}{\sqrt{|A|}} \chi_S^A : S \in \mathcal{F}\}$ is orthonormal and hence $|A| \geq |\mathcal{F}|$.
 Show that we can find $\mathcal{F}$ satisfying $|\mathcal{F}| \geq \sum_{j = 0}^{k/2} \binom{n}{j}$ if $k$ is even and $|\mathcal{F}| \geq \sum_{j = 0}^{(k-1)/2} \binom{n}{j} + \binom{n-1}{(k-1)/2}$ if $k$ is odd.
 Let $\mathcal{C}$ be a class of functions ${\mathbb F}_2^n \to {\mathbb R}$ which is closed under translation; i.e., $f^{+z} \in \mathcal{C}$ whenever $f \in \mathcal{C}$ and $z \in {\mathbb F}_2^n$. An example is the class of functions of ${\mathbb F}_2$-degree at most $d$. Show that if $\psi$ is a density which $\epsilon$-fools $\mathcal{C}$ then $\psi * \varphi$ also $\epsilon$-fools $\mathcal{C}$ for any density $\varphi$.
 Fix an integer $\ell \geq 1$. In this exercise you will generalize Exercise 3.42 by showing how to exactly learn ${\mathbb F}_2$-polynomials of degree at most $\ell$.
 Fix $p : {\mathbb F}_2^n \to {\mathbb F}_2$ with $\deg_{{\mathbb F}_2}(p) \leq \ell$ and suppose that ${\boldsymbol{x}}^{(1)}, \dots, {\boldsymbol{x}}^{(m)}$ are drawn uniformly and independently from ${\mathbb F}_2^n$. Assume that $m \geq C \cdot 2^\ell (n^\ell + \log(1/\delta))$ for $0 < \delta \leq 1/2$ and $C$ a sufficiently large constant. Show that except with probability at most $\delta$, the only $q : {\mathbb F}_2^n \to {\mathbb F}_2$ with $\deg_{{\mathbb F}_2}(q) \leq \ell$ which satisfies $q({\boldsymbol{x}}^{(i)}) = p({\boldsymbol{x}}^{(i)})$ for all $i \in [m]$ is $q = p$. (Hint: Exercise 6.11 with $q - p$.)
 Show that the concept class of all polynomials ${\mathbb F}_2^n \to {\mathbb F}_2$ of degree at most $\ell$ can be learned from random examples only, with error $0$, in time $O(n)^{3\ell}$. (Remark: as in Exercise 3.42, since the key step is solving a linear system, the learning algorithm can also be done in $O(n)^{\omega \ell}$ time, assuming matrix multiplication can be done in $O(n^\omega)$ time.)
 Extend this learning algorithm so that in running time $O(n)^{3\ell}\cdot \log(1/\delta)$ it achieves success probability at least $1 – \delta$. (Hint: similar to Exercise 3.39.)
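The heart of part (b) is solving a linear system over ${\mathbb F}_2$ in the $\sum_{j \leq \ell} \binom{n}{j}$ monomial coefficients; a self-contained sketch (all names ours, with plain Gaussian elimination standing in for a fast solver):

```python
# Sketch of the exercise's learner: interpolate an F_2-polynomial of
# degree <= l from examples by solving for its monomial coefficients.
# All names are ours; Gaussian elimination is the costly step.
import itertools, random

def monomials(n, l):
    # all S ⊆ [n] with |S| <= l
    return [S for d in range(l + 1)
            for S in itertools.combinations(range(n), d)]

def solve_gf2(rows, rhs):
    # One solution of rows·x = rhs over F_2 (free variables set to 0),
    # or None if the system is inconsistent.
    m = [row[:] + [b] for row, b in zip(rows, rhs)]
    ncols, pivots, r = len(rows[0]), [], 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    if any(m[i][-1] for i in range(r, len(m))):
        return None
    x = [0] * ncols
    for i, col in enumerate(pivots):
        x[col] = m[i][-1]
    return x

def learn(examples, n, l):
    # examples: list of (x, y) with x ∈ {0,1}^n, y ∈ {0,1}
    mons = monomials(n, l)
    rows = [[int(all(x[i] for i in S)) for S in mons] for x, _ in examples]
    coeffs = solve_gf2(rows, [y for _, y in examples])
    return lambda x: sum(c * all(x[i] for i in S)
                         for c, S in zip(coeffs, mons)) % 2
```

With enough random examples the system has a unique degree-$\leq \ell$ solution with high probability, which is the exact-learning guarantee of part (a).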
 In this exercise you will prove Lemma 37.
 Give a $\mathrm{poly}(n, 2^k)\cdot \log(1/\delta)$-time learning algorithm which, given random examples from a $k$-junta $f : {\mathbb F}_2^n \to {\mathbb F}_2$, determines (except with probability at most $\delta$) whether $f$ is a constant function, and if so, which one.
 Given access to random examples from a $k$-junta $f : {\mathbb F}_2^n \to {\mathbb F}_2$, let $P \subseteq [n]$ be a set of relevant coordinates for $f$ and let $z \in {\mathbb F}_2^P$. Show how to obtain $M$ independent random examples from the $(k-|P|)$-junta $f_{\overline{P} \mid z}$ in time $\mathrm{poly}(n, 2^k)\cdot M \cdot \log(1/\delta)$ (except with probability at most $\delta$).
 Complete the proof of Lemma 37. (Hint: build a depth-$k$ decision tree for $f$.)

 Improve the bound in Lemma 38 to $\hat{\lVert} f \hat{\rVert}_1\epsilon - |\widehat{f}(\emptyset)|\epsilon$ and the bound in Corollary 39 to $\hat{\lVert} f \hat{\rVert}_1^2\epsilon - \lVert f \rVert_2^2\epsilon$.
 Improve the bound in Theorem 44 to $\sqrt{\theta^2 - \epsilon}/\sqrt{1-\epsilon}$.
 Improve on Theorem 44 by a factor of roughly $2$ in the case of acceptance probability near $1$. Specifically, show that if $f$ passes the Derandomized BLR Test with probability $1-\delta$ then there exists $\gamma^* \in {\widehat{{\mathbb F}_2^n}}$ with $\widehat{f}(\gamma^*) \geq \sqrt{1-2\delta - \epsilon}/\sqrt{1-\epsilon}$.
 Fix an integer $k \in {\mathbb N}^+$. Let $(f_s)_{s \in \{0,1\}^k}$ be a collection of functions indexed by length-$k$ binary sequences, each $f_s : {\mathbb F}_2^n \to {\mathbb R}$. Define the $k$th Gowers ``inner product'' $\langle (f_s)_s \rangle_{U^k} \in {\mathbb R}$ by \[ \langle (f_s)_s \rangle_{U^k} = \mathop{\bf E}_{{\boldsymbol{x}}, \boldsymbol{y}_1, \dots, \boldsymbol{y}_k}\left[\prod_{s \in \{0,1\}^k} f_s({\boldsymbol{x}} + \mathop{{\textstyle \sum}}_{i : s_i = 1} \boldsymbol{y}_i)\right], \] where the $k+1$ random vectors ${\boldsymbol{x}}, \boldsymbol{y}_1, \dots, \boldsymbol{y}_k$ are independent and uniformly distributed on ${\mathbb F}_2^n$. Define the $k$th Gowers norm of a function $f : {\mathbb F}_2^n \to {\mathbb R}$ by \[ \lVert f \rVert_{U^k} = \langle (f, f, \dots, f) \rangle_{U^k}^{1/2^{k}}, \] where $(f, f, \dots, f)$ denotes that all $2^k$ functions in the collection equal $f$. (You will later verify that $\langle (f, f, \dots, f) \rangle_{U^k}$ is always nonnegative.)
 Check that $\langle f_0, f_1 \rangle_{U^1} = \mathop{\bf E}[f_0]\mathop{\bf E}[f_1]$ and therefore $\lVert f \rVert_{U^1}^2 = \mathop{\bf E}[f]^2$.
 Check that \[ \langle f_{00}, f_{10}, f_{01}, f_{11} \rangle_{U^2} = \sum_{\gamma \in {\widehat{{\mathbb F}_2^n}}} \widehat{f_{00}}(\gamma)\widehat{f_{10}}(\gamma)\widehat{f_{01}}(\gamma)\widehat{f_{11}}(\gamma) \] and therefore $\lVert f \rVert_{U^2}^4 = \hat{\lVert} f \hat{\rVert}_4^4$.
 Show that \begin{equation} \label{eqn:gowersinter} \langle (f_s)_s \rangle_{U^k} = \mathop{\bf E}_{\boldsymbol{y}_1, \dots, \boldsymbol{y}_{k-1}}\left[\mathop{\bf E}_{{\boldsymbol{x}}} \left[\prod_{s : s_k = 0} f_s({\boldsymbol{x}} + \mathop{{\textstyle \sum}}_{i : s_i = 1} \boldsymbol{y}_i)\right]\cdot \mathop{\bf E}_{{\boldsymbol{x}}'} \left[\prod_{s : s_k = 1} f_{s}({\boldsymbol{x}}' + \mathop{{\textstyle \sum}}_{i : s_i = 1} \boldsymbol{y}_i)\right]\right], \end{equation} where ${\boldsymbol{x}}'$ is independent of ${\boldsymbol{x}}, \boldsymbol{y}_1, \dots, \boldsymbol{y}_{k-1}$ and uniformly distributed.
 Show that $\langle (f, f, \dots, f) \rangle_{U^k}$ is always nonnegative, as promised.
 Using \eqref{eqn:gowersinter} and Cauchy–Schwarz, show that \[ \langle (f_s)_s \rangle_{U^k} \leq \sqrt{\langle (f_{(s_1, \dots, s_{k-1}, 0)})_s \rangle_{U^k}} \sqrt{\langle (f_{(s_1, \dots, s_{k-1}, 1)})_s \rangle_{U^k}}. \]
 Show that \begin{equation} \label{eqn:gowerstriangle} \langle (f_s)_s \rangle_{U^k} \leq \prod_{s \in \{0,1\}^k} \lVert f_s \rVert_{U^k}. \end{equation}
 Fixing $f : {\mathbb F}_2^n \to {\mathbb R}$, show that $\lVert f \rVert_{U^k} \leq \lVert f \rVert_{U^{k+1}}$. (Hint: consider $(f_s)_{s \in \{0,1\}^{k+1}}$ defined by $f_s = f$ if $s_{k+1}=0$ and $f_s = 1$ if $s_{k+1} = 1$.)
 Show that $\lVert \cdot \rVert_{U^k}$ satisfies the triangle inequality and is therefore a seminorm. (Hint: first show that \[ \lVert f_0 + f_1 \rVert_{U^k}^{2^k} = \sum_{S \subseteq \{0,1\}^k} \langle (f_{\boldsymbol{1}[s \in S]})_{s \in \{0,1\}^k} \rangle_{U^k} \] and then use \eqref{eqn:gowerstriangle}.)
 Show that $\lVert \cdot \rVert_{U^k}$ is in fact a norm for all $k \geq 2$; i.e., $\lVert f \rVert_{U^k} = 0 \Rightarrow f = 0$.

