# Chapter 4 exercises

1. Show that every function $f : \{0,1\}^n \to \{0,1\}$ can be represented by a DNF formula of size at most $2^n$ and width at most $n$.
2. Suppose $f : \{0,1\}^n \to \{0,1\}$ is computed by some CNF. Swap the ANDs and ORs in this CNF. Show that the resulting formula is a DNF computing the boolean dual $f^\dagger : \{0,1\}^n \to \{0,1\}$.
3. A DNF formula is said to be monotone if its terms contain only unnegated variables. Show that monotone DNFs compute monotone functions and that any monotone function can be computed by a monotone DNF, but that a non-monotone DNF may compute a monotone function.
4. Let $f : \{-1,1\}^n \to \{-1,1\}$ be computable by a DNF of size $s$.
1. Show there exists $S \subseteq [n]$ with $|S| \leq \log(s) + O(1)$ and $|\widehat{f}(S)| \geq \Omega(1/s)$. (Hint: use Proposition 9 and Exercise 3.29.)
2. Let $\mathcal{C}$ be the concept class of functions $f : \{-1,1\}^n \to \{-1,1\}$ computable by DNF formulas of size at most $s$. Show that $\mathcal{C}$ is learnable using queries with error $\tfrac{1}{2} - \Omega(1/s)$ in time $\mathrm{poly}(n, s)$. (Such a result, with error bounded away from $\tfrac{1}{2}$, is called weak learning.)
5. Verify Proposition 12.
6. Verify Proposition 14.
7. For each $n$ which is an input length for $\mathrm{Tribes}_n$, show that there exists a function $f : \{-1,1\}^n \to \{-1,1\}$ which is truly unbiased ($\mathop{\bf E}[f] = 0$) and has $\mathbf{Inf}_i[f] \leq O\bigl(\frac{\log n}{n}\bigr)$ for all $i \in [n]$.
8. Suppose $f : \{-1,1\}^n \to \{-1,1\}$ is computed by a read-once DNF (meaning no variable is involved in more than one term) in which all terms have width exactly $w$. Compute $\hat{\lVert} f \hat{\rVert}_1$ exactly. Deduce that $\hat{\lVert} \mathrm{Tribes}_n \hat{\rVert}_1 = 2^{\frac{n}{\log n}(1\pm o(1))}$ and that there are $n$-variable width-$2$ DNFs with Fourier $1$-norm $\Omega(\sqrt{3/2}^n)$.
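For small $n$ the width-$2$ case can be checked by brute force. The following sketch (my own encoding of a read-once DNF, not part of the exercise) computes the Fourier $1$-norm of a read-once width-$w$ DNF whose terms are consecutive blocks of variables; with $w = 2$ the values grow like $(3/2)^{n/2}$, consistent with the stated bound.

```python
from itertools import product

def chi(S, x):
    """Character chi_S(x) = product of x_i over i in S (S is a 0/1 mask)."""
    p = 1
    for s, xi in zip(S, x):
        if s:
            p *= xi
    return p

def read_once_dnf(n, w):
    """f : {-1,1}^n -> {-1,1} computed by a read-once DNF whose terms are
    the ANDs of consecutive blocks of w variables (here -1 means True)."""
    def f(x):
        return -1 if any(all(xi == -1 for xi in x[k:k + w])
                         for k in range(0, n, w)) else 1
    return f

def fourier_one_norm(f, n):
    """Sum of |f-hat(S)| over all 2^n sets S, computed by brute force."""
    return sum(abs(sum(f(x) * chi(S, x) for x in product([-1, 1], repeat=n)))
               for S in product([0, 1], repeat=n)) / 2 ** n
```

For instance, `fourier_one_norm(read_once_dnf(2, 2), 2)` recovers the $1$-norm of a single AND of two variables, and increasing $n$ with $w = 2$ exhibits the exponential growth.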
9. Give a direct (Fourier-free) proof of Corollary 18. (Hint: condition on whether $i \in \boldsymbol{J}$.)
10. Tighten the constant factor on $\log s$ in Theorem 20 as much as you can (avenues of improvement include the argument in Lemma 19, the choice of $\delta$, and Exercise 14).
11. Prove Lemma 23.
1. Show that the parity function $\chi_{[n]} : \{-1,1\}^n \to \{-1,1\}$ can be computed by a DNF (or a CNF) of size $2^{n-1}$.
2. Show that the bound $2^{n-1}$ above is exactly tight. (Hint: show that every term must have width exactly $n$.)
3. Show that there is a depth-$3$ circuit of size $O(n^{1/2}) \cdot 2^{n^{1/2}}$ computing $\chi_{[n]}$. (Hint: break up the input into $n^{1/2}$ blocks of size $n^{1/2}$ and use part (a) twice. How can you compress the result from depth $4$ to depth $3$?)
4. More generally, show there is a depth-$d$ circuit of size $O(n^{1-1/(d-1)}) \cdot 2^{n^{1/(d-1)}}$ computing $\chi_{[n]}$.
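As a sanity check on parts (a) and (b), here is a small sketch (my own, not part of the text) that builds the canonical DNF for parity — one width-$n$ term per odd-weight input — and verifies it has exactly $2^{n-1}$ terms and computes parity.

```python
from itertools import product

def parity_dnf(n):
    """Terms of the canonical DNF for parity on n bits: one width-n term
    for each input with an odd number of 1s (True bits)."""
    return [a for a in product([0, 1], repeat=n) if sum(a) % 2 == 1]

def eval_dnf(terms, x):
    # Each term here has width exactly n, so it is satisfied only by the
    # one input that matches it in every coordinate.
    return any(t == x for t in terms)

n = 4
terms = parity_dnf(n)
assert len(terms) == 2 ** (n - 1)
assert all(eval_dnf(terms, x) == (sum(x) % 2 == 1)
           for x in product([0, 1], repeat=n))
```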
12. Show that computing $\mathrm{Tribes}_{w,s}$ by a CNF formula requires size at least $w^s$.
13. Let $f : \{\mathsf{True}, \mathsf{False}\}^n \to \{\mathsf{True}, \mathsf{False}\}$ be computable by a CNF $C$ of width $w$. In this exercise you will show that $\mathbf{I}[f] \leq w$.

Consider the following randomized algorithm which tries to produce an input ${\boldsymbol{x}} \in f^{-1}(\mathsf{True})$. First choose a random permutation $\boldsymbol{\pi} \in S_n$. Then for $i = 1, \dots, n$: If the single-literal clause $x_{\boldsymbol{\pi}(i)}$ appears in $C$ then set $x_{\boldsymbol{\pi}(i)} = \mathsf{True}$, syntactically simplify $C$ under this setting, and say that coordinate $\boldsymbol{\pi}(i)$ is “forced”. Similarly, if the single-literal clause $\overline{x}_{\boldsymbol{\pi}(i)}$ appears in $C$ then set $x_{\boldsymbol{\pi}(i)} = \mathsf{False}$, syntactically simplify $C$, and say that $\boldsymbol{\pi}(i)$ is “forced”. If neither holds, set $x_{\boldsymbol{\pi}(i)}$ uniformly at random. If $C$ ever contains two single-literal clauses $x_j$ and $\overline{x}_j$, the algorithm “gives up” and outputs ${\boldsymbol{x}} = \bot$.

1. Show that if ${\boldsymbol{x}} \neq \bot$ then $f({\boldsymbol{x}}) = \mathsf{True}$.
2. For $x \in f^{-1}(\mathsf{True})$ let $p(x) = \mathop{\bf Pr}[{\boldsymbol{x}} = x]$. For $j \in [n]$ let $\boldsymbol{I}_j$ be the indicator random variable for the event that coordinate $j \in [n]$ is forced. Show that $p(x) = \mathop{\bf E}[\prod_{j=1}^n (1/2)^{1-\boldsymbol{I}_j}]$.
3. Deduce $2^n p(x) \geq 2\sum_{j=1}^n \mathop{\bf E}[\boldsymbol{I}_j]$.
4. Show that for every $x$ with $f(x) = \mathsf{True}$ and $f(x^{\oplus j}) = \mathsf{False}$, it holds that $\mathop{\bf E}[\boldsymbol{I}_j \mid {\boldsymbol{x}} = x] \geq 1/w$.
5. Deduce $\mathbf{I}[f] \leq w$.
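A direct implementation may help when checking part (a). This Python sketch is my own reading of the procedure above (the clause representation and names are assumptions, not from the text):

```python
import random

def sample_true_input(clauses, n, rng=random):
    """One run of the randomized algorithm from Exercise 13.  A clause is
    a frozenset of literals; a literal (j, s) means x_j if s is True and
    the negation of x_j if s is False.  Returns an assignment (a list of
    bools), or None in place of the "give up" output bottom."""
    clauses = [set(c) for c in clauses]
    x = [None] * n
    for j in rng.sample(range(n), n):          # random permutation of [n]
        unit_signs = {s for c in clauses if len(c) == 1
                      for (v, s) in c if v == j}
        if len(unit_signs) == 2:               # both x_j and its negation forced
            return None
        # forced value if some unit clause mentions j, else a fair coin flip
        x[j] = unit_signs.pop() if unit_signs else (rng.random() < 0.5)
        # syntactically simplify C under the setting of x_j: drop satisfied
        # clauses, and delete the falsified literal from the rest
        clauses = [c - {(j, not x[j])} for c in clauses if (j, x[j]) not in c]
    return x
```

Running this on a small satisfiable CNF, every non-`None` output is indeed a satisfying assignment, as part (a) asserts.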
14. Given boolean variables $x_1, \dots, x_n$, a “random monotone term of width $w \in {\mathbb N}^+$” is defined to be the logical AND of $x_{\boldsymbol{i}_{1}}, \dots, x_{\boldsymbol{i}_{w}}$, where $\boldsymbol{i}_1, \dots, \boldsymbol{i}_w$ are chosen independently and uniformly at random from $[n]$. (If the $\boldsymbol{i}_j$’s are not all distinct then the resulting term will in fact have width strictly less than $w$.) A “random monotone DNF of width $w$ and size $s$” is defined to be the logical OR of $s$ independent random monotone terms. For this exercise we assume $n$ is a sufficiently large perfect square, and we let $\boldsymbol{\varphi}$ be a random monotone DNF of width $\sqrt{n}$ and size $2^{\sqrt{n}}$.
1. Fix an input $x \in \{-1,1\}^n$ and define $u = (\sum_{i=1}^n x_i)/\sqrt{n} \in [-\sqrt{n}, \sqrt{n}]$. Let $\boldsymbol{V}_j$ be the event that the $j$th term of $\boldsymbol{\varphi}$ is made $1$ (logical $\mathsf{False}$) by $x$. Compute $\mathop{\bf Pr}[\boldsymbol{V}_j]$ and $\mathop{\bf Pr}[\boldsymbol{\varphi}(x) = 1]$, and show that the latter is at least $10^{-9}$ assuming $|u| \leq 2$.
2. Let $\boldsymbol{U}_j$ be the event that the $j$th term of $\boldsymbol{\varphi}$ has exactly one $1$ on input $x$. Show that $\mathop{\bf Pr}[\boldsymbol{U}_j \mid \boldsymbol{V}_j] \geq \Omega(w 2^{-w})$ assuming $|u| \leq 2$.
3. Suppose we condition on $\boldsymbol{\varphi}(x) = 1$; i.e., on $\cap_{j} \boldsymbol{V}_j$. Argue that the events $\boldsymbol{U}_j$ are independent. Further, argue that for the $\boldsymbol{U}_j$’s which do occur, the indices of their uniquely-$1$ variables are independent and uniformly random among the $1$’s of $x$.
4. Show that $\mathop{\bf Pr}[\mathrm{sens}_{\boldsymbol{\varphi}}(x) \geq c \sqrt{n} \mid \boldsymbol{\varphi}(x) = 1] \geq 1- 10^{-10}$ for $c > 0$ a sufficiently small constant.
5. Show that $\mathop{\bf Pr}_{{\boldsymbol{x}}}[|(\sum_{i=1}^n {\boldsymbol{x}}_i)/\sqrt{n}| \leq 2] \geq \Omega(1)$.
6. Deduce that there exists a monotone function $f : \{-1,1\}^n \to \{-1,1\}$ with the property that $\mathop{\bf Pr}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}}) \geq c' \sqrt{n}] \geq c'$ for some universal constant $c' > 0$.
7. Both $\mathrm{Maj}_n$ and the function $f$ from the previous part have average sensitivity $\Theta(\sqrt{n})$. Contrast the “way” in which this occurs for the two functions.
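To experiment with these quantities at small scale, here is a sketch (parameter choices and names are mine, not from the exercise) that samples a random monotone DNF as defined above and measures sensitivity at an input:

```python
import random

def random_monotone_dnf(n, w, s, rng=random):
    """s independent random monotone terms of width w: each term is the
    AND of w coordinates drawn uniformly with replacement from [n]."""
    return [frozenset(rng.randrange(n) for _ in range(w)) for _ in range(s)]

def dnf_value(dnf, x):
    # x[i] is True/False; a monotone term is satisfied iff all of its
    # coordinates are True.
    return any(all(x[i] for i in t) for t in dnf)

def sensitivity(dnf, x):
    """Number of coordinates i such that flipping x_i changes the value."""
    base = dnf_value(dnf, x)
    count = 0
    for i in range(len(x)):
        y = list(x)
        y[i] = not y[i]
        if dnf_value(dnf, y) != base:
            count += 1
    return count
```

With $n = 16$, width $w = 4$, and size $s = 16$ (small stand-ins for $\sqrt{n}$ and $2^{\sqrt{n}}$), one can sample many formulas and balanced inputs and tabulate the conditional sensitivity distribution from part (d).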
15. In this exercise you will prove the Baby Switching Lemma with constant $3$ in place of $5$. Let $\phi = T_1 \vee T_2 \vee \cdots \vee T_s$ be a DNF of width $w \geq 1$ over variables $x_1, \dots, x_n$. We may assume $\delta \leq 1/3$, since otherwise the theorem is trivial.
1. Suppose $R = (J \mid z)$ is a “bad” restriction, meaning that $\phi_{J \mid z}$ is not a constant function. Let $i$ be minimal such that $(T_i)_{J \mid z}$ is neither constantly $\mathsf{True}$ nor constantly $\mathsf{False}$, and let $j$ be minimal such that $x_j$ or $\overline{x}_j$ appears in this restricted term. Show there is a unique restriction $R’ = (J \setminus \{j\} \mid z’)$ extending $R$ which does not falsify $T_i$.
2. Suppose we enumerate all bad restrictions $R$, and for each we write down the associated $R’$ as in (a). Show that no restriction is written more than $w$ times.
3. If $(\boldsymbol{J} \mid \boldsymbol{z})$ is a $\delta$-random restriction and $R$ and $R’$ are as in (a), show that $\mathop{\bf Pr}[(\boldsymbol{J} \mid \boldsymbol{z}) = R] = \frac{2\delta}{1-\delta} \mathop{\bf Pr}[(\boldsymbol{J} \mid \boldsymbol{z}) = R']$.
4. Complete the proof by showing $\mathop{\bf Pr}[(\boldsymbol{J} \mid \boldsymbol{z}) \text{ is bad}] \leq 3\delta w$.
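The $3\delta w$ bound can also be checked empirically. In this sketch (my own encoding: a term is a dict mapping a variable to its required bit), a $\delta$-random restriction leaves each coordinate free with probability $\delta$ and otherwise fixes it to a uniform bit:

```python
import random

def random_restriction(n, delta, rng=random):
    """delta-random restriction (J | z): coordinate i is left free (None)
    with probability delta, otherwise fixed to a uniformly random bit."""
    return [None if rng.random() < delta else rng.random() < 0.5
            for i in range(n)]

def restrict_is_constant(terms, rho):
    """True iff the DNF becomes a constant function under rho.
    A term is a dict {variable: required bit}."""
    surviving = False
    for t in terms:
        if any(rho[v] is not None and rho[v] != b for v, b in t.items()):
            continue                  # term falsified by rho
        if all(rho[v] is not None for v in t):
            return True               # term satisfied: the DNF is constantly True
        surviving = True              # term still alive and non-constant
    return not surviving              # no live term left: constantly False
```

Sampling many restrictions of a small width-$2$ DNF, the empirical fraction of bad (non-constant) restrictions stays below $3\delta w$, as the lemma predicts.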
16. In this exercise you will prove Theorem 30. Say that a “$(d, w, s’)$-circuit” is a depth-$d$ circuit with width at most $w$ and with at most $s’$ nodes at layers $2$ (not $1$) through $d$.
1. Show by induction on $d \geq 2$ that any $f : \{-1,1\}^n \to \{-1,1\}$ computable by a $(d,w,s’)$-circuit satisfies $\mathbf{I}[f] \leq w \cdot O(\log s’)^{d-2}$.
2. Deduce Theorem 30.

### 12 comments to Chapter 4 exercises

• Yu Zhao

In 16(a), you have a redundant i at the end of the sentence.

• Thanks

• Yu Zhao

I think here in 16(a), it is better to use T_1 instead of T_i in the definition of 1(a). T_i makes me feel confused that i is not fixed.

• I’m not sure what you mean — I think we need to have i depend on R.

• The link to Lemma 23 in question 11 goes to the wrong page.

• Thanks, fixed!

• Noam Lifshitz

In exercise 15 (Ex 18 in the book) is it true that $V_j=T_j$?

• Yes, thanks!

• Ohad Klein

I think there is a bug in ex 4 (also in the book): take for example $f=x_1\wedge\ldots\wedge x_n$. It has size 1, but you evidently can’t find a big Fourier coefficient in it. (The problem is that $f$ might be too close to the $0$ function)

• Ohad Klein

Oops, my bad.

• Ohad Klein

In 15c (18c in the book), I think it should be $\cap_j{V_j}$ instead of $\cup_j{V_j}$

• Yep, thanks!