Mansour’s Conjecture is from **[Man94]**. Even the weaker version would imply that the Kushilevitz–Mansour algorithm learns the class of $\mathrm{poly}(n)$-size DNF with any constant error, using membership queries, in time $\mathrm{poly}(n)$. In fact, this learning result was subsequently obtained in a celebrated work of Jackson **[Jac97]**, using a different method (which begins with Exercise 4). Nevertheless, Mansour’s Conjecture remains important for learning theory, since Gopalan, Kalai, and Klivans **[GKK08]** have shown that it implies the same learning result in the more challenging and realistic model of “agnostic learning”. Theorems 24 and 25 are also due to Mansour **[Man95]**.

The method of random restrictions dates back to Subbotovskaya **[Sub61]**. Håstad’s Switching Lemma (from **[Has87]**) is the culmination of a line of work due to Furst, Saxe, and Sipser **[FSS84]**, Ajtai **[Ajt83]**, and Yao **[Yao85]**. Linial, Mansour, and Nisan proved Lemma 21, which allowed them to deduce the LMN Theorem and its consequences **[LMN89, LMN93]**. An additional cryptographic application of the LMN Theorem is found in **[GR00a]**. The strongest lower bound currently known for approximately computing parity in $\mathsf{AC}^0$ is due to Impagliazzo, Matthews, and Paturi **[IMP12]** and, independently, to Håstad.

Theorem 20 and its generalization Theorem 30 are due to Boppana **[Bop97]**; Linial, Mansour, and Nisan had given the weaker bound $O(\log s)^d$. Exercise 10 is due to Amano **[Ama11]**, and Exercise 15 is due to Talagrand **[Tal96]**.
