The fact that the Fourier characters $\chi_{\gamma} : {\mathbb F}_2^n \to \{-1,1\}$ form a group isomorphic to ${\mathbb F}_2^n$ is not a coincidence; the analogous result holds for any finite abelian group and is a special case of the theory of Pontryagin duality in harmonic analysis. We will see further examples of this in Chapter 8.
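As a concrete check of this group structure, a small brute-force computation (a Python sketch purely for illustration; the helper names `chi` and `add` are ours) verifies that pointwise multiplication of characters corresponds to addition of their indices in ${\mathbb F}_2^n$:

```python
from itertools import product

n = 3

def chi(gamma, x):
    # Fourier character: chi_gamma(x) = (-1)^{<gamma, x>}, arithmetic over F_2
    return (-1) ** (sum(g * xi for g, xi in zip(gamma, x)) % 2)

def add(a, b):
    # coordinatewise addition in F_2^n
    return tuple((ai + bi) % 2 for ai, bi in zip(a, b))

# Pointwise multiplication of characters matches addition of their indices:
# chi_g * chi_h = chi_{g+h}, so gamma -> chi_gamma is a group isomorphism.
for g, h in product(product((0, 1), repeat=n), repeat=2):
    assert all(chi(g, x) * chi(h, x) == chi(add(g, h), x)
               for x in product((0, 1), repeat=n))
```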

Regarding spectral structure, Karpovsky **[Kar76]** proposed $\mathrm{sparsity}(\widehat{f})$ as a measure of complexity for the function $f$. Brandman’s thesis **[Bra87]** (see also **[BOH90]**) is an early work connecting decision tree and subcube partition complexity to Fourier analysis. The notation introduced for restrictions in Section 3 is not standard; unfortunately, there is no standard notation. The uncertainty principle from Exercise 14 dates back to **[MS73]**. The result of Exercise 12 is due to Green and Sanders **[GS08]**, with inspiration from Saeki **[Sae68]**. The main result of **[GS08]** is the sophisticated theorem that any $f : {\mathbb F}_2^n \to \{0,1\}$ with $\lVert \widehat{f} \rVert_1 \leq s$ can be expressed as $\sum_{i = 1}^L \pm 1_{H_i}$, where $L \leq 2^{2^{\mathrm{poly}(s)}}$ and each $H_i \leq {\mathbb F}_2^n$.
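To make the sparsity measure concrete, the following brute-force sketch (illustrative only; `fourier_coeffs` and `sparsity` are our own names, not standard routines) computes $\mathrm{sparsity}(\widehat{f})$ for two small examples in the $\pm 1$ convention:

```python
from itertools import product

def fourier_coeffs(f, n):
    """Brute-force Fourier expansion of f : {0,1}^n -> {-1,1}.
    Returns a dict mapping gamma to f_hat(gamma)."""
    pts = list(product((0, 1), repeat=n))
    coeffs = {}
    for gamma in pts:
        # f_hat(gamma) = E_x[ f(x) * (-1)^{<gamma, x>} ]
        s = sum(f(x) * (-1) ** (sum(g * xi for g, xi in zip(gamma, x)) % 2)
                for x in pts)
        coeffs[gamma] = s / len(pts)
    return coeffs

def sparsity(coeffs, eps=1e-9):
    # number of nonzero Fourier coefficients
    return sum(1 for c in coeffs.values() if abs(c) > eps)

# Parity of 2 bits is a single character (sparsity 1), while the 2-bit AND
# in the +/-1 convention has all four coefficients nonzero (sparsity 4).
parity = lambda x: (-1) ** ((x[0] + x[1]) % 2)
and2 = lambda x: -1 if (x[0] and x[1]) else 1
assert sparsity(fourier_coeffs(parity, 2)) == 1
assert sparsity(fourier_coeffs(and2, 2)) == 4
```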

Theorem 4 is due to Nisan and Szegedy **[NS94]**. That work also showed a nontrivial kind of converse to the first statement in Proposition 16: any $f : \{-1,1\}^n \to \{-1,1\}$ is computable by a decision tree of depth at most $\mathrm{poly}(\deg(f))$. The best upper bound currently known is $2\deg(f)^3$, due to Midrijānis **[Mid04]**. Nisan and Szegedy also gave the example in Exercise 27 showing that the dependence of decision tree depth on degree cannot be linear.
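For intuition on the quantity $\deg(f)$ appearing in these bounds, here is a brute-force computation of the real degree via the Fourier expansion (an illustrative sketch; the helper `fourier_degree` is ours):

```python
from itertools import product

def fourier_degree(f, n, eps=1e-9):
    """Degree of f : {0,1}^n -> {-1,1} as a real multilinear polynomial:
    the largest |gamma| with f_hat(gamma) != 0 (brute force over all gammas)."""
    pts = list(product((0, 1), repeat=n))
    deg = 0
    for gamma in pts:
        fhat = sum(f(x) * (-1) ** (sum(g * xi for g, xi in zip(gamma, x)) % 2)
                   for x in pts) / len(pts)
        if abs(fhat) > eps:
            deg = max(deg, sum(gamma))
    return deg

# AND_3 in the +/-1 convention has full degree 3, and is computed by a
# decision tree of depth 3 -- consistent with DT(f) <= 2*deg(f)^3.
and3 = lambda x: -1 if all(x) else 1
assert fourier_degree(and3, 3) == 3
```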

The field of computational learning theory was introduced by Valiant in 1984 **[Val84]**; for a good survey with focus on learning under the uniform distribution, see the thesis of Jackson **[Jac95]**. Linial, Mansour, and Nisan **[LMN93]** pioneered the Fourier approach to learning, developing the Low-Degree Algorithm. We present their strong results on constant-depth circuits in the next chapter. The noise sensitivity approach to the Low-Degree Algorithm is from **[KOS04]**. Corollary 33 is due to Bshouty and Tamon **[BT96]**, who also gave certain matching lower bounds. Goldreich and Levin’s work is from **[GL89]**. Besides its applications to cryptography and learning, it is important in coding theory and complexity as a *local list-decoding algorithm* for the Hadamard code. The Kushilevitz–Mansour algorithm is from **[KM93]**; they are also responsible for the results of Exercises 36(b) and 37. The results of Exercises 31 and 36(c) are from **[GOS+11]**.
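The Low-Degree Algorithm itself is simple to sketch: empirically estimate every Fourier coefficient of degree at most $d$ from random examples, then output the sign of the resulting low-degree polynomial as the hypothesis. A minimal illustrative implementation follows (the function names are ours, and no attempt is made at the sample-complexity bookkeeping of **[LMN93]**):

```python
import random
from itertools import product, combinations

def low_degree_learn(examples, n, d):
    """Sketch of the Low-Degree Algorithm: estimate f_hat(S) for every
    |S| <= d from uniform random examples (x, f(x)), then return the sign
    of the resulting low-degree approximation as the hypothesis."""
    m = len(examples)
    est = {}
    for k in range(d + 1):
        for S in combinations(range(n), k):
            # empirical estimate of f_hat(S) = E[f(x) * (-1)^{sum_{i in S} x_i}]
            est[S] = sum(y * (-1) ** (sum(x[i] for i in S) % 2)
                         for x, y in examples) / m
    def hypothesis(x):
        val = sum(c * (-1) ** (sum(x[i] for i in S) % 2)
                  for S, c in est.items())
        return 1 if val >= 0 else -1
    return hypothesis

# Toy usage: the 2-bit AND (in the +/-1 convention) has degree 2, so a
# degree-2 hypothesis should agree with it exactly given enough samples.
random.seed(0)
target = lambda x: -1 if (x[0] and x[1]) else 1
examples = []
for _ in range(2000):
    x = tuple(random.randint(0, 1) for _ in range(2))
    examples.append((x, target(x)))
h = low_degree_learn(examples, 2, 2)
assert all(h(x) == target(x) for x in product((0, 1), repeat=2))
```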
