# §4.3: Random restrictions

In this section we describe the method of applying random restrictions. This is a very “Fourier-friendly” way of simplifying a boolean function.

As motivation, let’s consider the problem of bounding total influence for size-$s$ DNFs. One plan is to use the results from Section 4.1: size-$s$ DNFs are $.01$-close to width-$O(\log s)$ DNFs, which in turn have total influence $O(\log s)$. This suggests that size-$s$ DNFs themselves have total influence $O(\log s)$. To prove this, though, we’ll need to reverse the steps of the plan: instead of truncating DNFs to a fixed width and arguing that a random input is unlikely to notice, we’ll first pick a random (partial) input and argue that this is likely to make the width small.

Let’s formalize the notion of a random partial input, or restriction:

Definition 15 For $\delta \in [0,1]$, we say that $\boldsymbol{J}$ is a $\delta$-random subset of $N$ if it is formed by including each element of $N$ independently with probability $\delta$. We define a $\delta$-random restriction on $\{-1,1\}^n$ to be a pair $(\boldsymbol{J} \mid \boldsymbol{z})$, where first $\boldsymbol{J}$ is chosen to be a $\delta$-random subset of $[n]$ and then $\boldsymbol{z} \sim \{-1,1\}^{\overline{\boldsymbol{J}}}$ is chosen uniformly at random. We say that coordinate $i \in [n]$ is free if $i \in \boldsymbol{J}$ and is fixed if $i \notin \boldsymbol{J}$. An equivalent definition is that each coordinate $i$ is (independently) free with probability $\delta$ and fixed to $\pm 1$ with probability $(1-\delta)/2$ each.
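The second, coordinate-by-coordinate description in Definition 15 is the more convenient one to implement. Here is a minimal Python sketch (the function name `random_restriction` is ours, not from the text) that samples a $\delta$-random restriction as a pair $(J, z)$:

```python
import random

def random_restriction(n, delta, rng=random):
    """Sample a delta-random restriction (J | z) on {-1,1}^n.

    Returns (J, z): J is the set of free coordinates; z maps each
    fixed coordinate (those not in J) to its value in {-1,1}.
    """
    J, z = set(), {}
    for i in range(n):
        if rng.random() < delta:
            J.add(i)                    # free with probability delta
        else:
            z[i] = rng.choice((-1, 1))  # fixed to +1 or -1, probability (1-delta)/2 each
    return J, z
```

Passing a seeded `random.Random` instance as `rng` makes the draw reproducible.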

Given $f : \{-1,1\}^n \to {\mathbb R}$ and a random restriction $(\boldsymbol{J} \mid \boldsymbol{z})$, we can form the restricted function $f_{\boldsymbol{J} \mid \boldsymbol{z}} : \{-1,1\}^{\boldsymbol{J}} \to {\mathbb R}$ as usual. However, it’s inconvenient that the domain of this function depends on the random restriction. Thus when dealing with random restrictions we usually invoke the following convention:

Definition 16 Given $f : \{-1,1\}^n \to {\mathbb R}$, $I \subseteq [n]$, and $z \in \{-1,1\}^{\overline{I}}$, we may identify the restricted function $f_{I \mid z} : \{-1,1\}^I \to {\mathbb R}$ with its extension $f_{I \mid z} : \{-1,1\}^n \to {\mathbb R}$ in which the input coordinates in $\overline{I}$ are ignored.

As mentioned, random restrictions interact nicely with Fourier expansions:

Proposition 17 Fix $f : \{-1,1\}^n \to {\mathbb R}$ and $S \subseteq [n]$. Then if $(\boldsymbol{J} \mid \boldsymbol{z})$ is a $\delta$-random restriction on $\{-1,1\}^n$, $\mathop{\bf E}[\widehat{f_{\boldsymbol{J} \mid \boldsymbol{z}}}(S)] = \mathop{\bf Pr}[S \subseteq \boldsymbol{J}] \cdot \widehat{f}(S) = \delta^{|S|} \widehat{f}(S),$ and $\mathop{\bf E}[\widehat{f_{\boldsymbol{J} \mid \boldsymbol{z}}}(S)^2] = \sum_{U \subseteq [n]} \mathop{\bf Pr}[U \cap \boldsymbol{J} = S]\cdot\widehat{f}(U)^2 = \sum_{U \supseteq S} \delta^{|S|} (1-\delta)^{|U \setminus S|} \widehat{f}(U)^2,$ where we are treating $f_{\boldsymbol{J} \mid \boldsymbol{z}}$ as a function $\{-1,1\}^n \to {\mathbb R}$.

Proof: Suppose first that $J \subseteq [n]$ is fixed. When we think of restricted functions $f_{J \mid \boldsymbol{z}}$ as having domain $\{-1,1\}^n$, Corollary 3.22 may be stated as saying that for any $S \subseteq [n]$, \begin{align*} \mathop{\bf E}_{\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}}[\widehat{f_{J \mid \boldsymbol{z}}}(S)] &= \widehat{f}(S) \cdot \boldsymbol{1}_{S \subseteq J},\\ \mathop{\bf E}_{\boldsymbol{z} \sim \{-1,1\}^{\overline{J}}}[\widehat{f_{J \mid \boldsymbol{z}}}(S)^2] &= \sum_{U \subseteq [n]} \widehat{f}(U)^2 \cdot \boldsymbol{1}_{U \cap J = S}. \end{align*} The proposition now follows by taking the expectation over $\boldsymbol{J}$. $\Box$
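Proposition 17 can be sanity-checked by brute force on small examples. The following Python sketch (all helper names are ours) computes the left-hand side of the first identity exactly, enumerating every free set $\boldsymbol{J}$ with its probability and every fixing $\boldsymbol{z}$, and extending the restricted function to $\{-1,1\}^n$ as in Definition 16; we test it on $\mathrm{Maj}_3$:

```python
from itertools import product

def fourier(f, n, S):
    """Exact Fourier coefficient: \\hat{f}(S) = E_x[f(x) * prod_{i in S} x_i]."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

def expected_restricted_coeff(f, n, S, delta):
    """E[\\hat{f_{J|z}}(S)] over delta-random restrictions, by exact
    enumeration of all free sets J (with probability delta^|J| (1-delta)^{n-|J|})
    and all uniformly random fixings z."""
    acc = 0.0
    for mask in product((0, 1), repeat=n):
        free = sum(mask)
        pJ = delta ** free * (1 - delta) ** (n - free)
        fixed = [i for i in range(n) if not mask[i]]
        for zvals in product((-1, 1), repeat=len(fixed)):
            z = dict(zip(fixed, zvals))
            # restricted function, extended to {-1,1}^n (fixed coords ignored)
            g = lambda x, z=z: f(tuple(z.get(i, x[i]) for i in range(n)))
            acc += pJ * fourier(g, n, S) / 2 ** len(fixed)
    return acc

# Check E[\hat{f_{J|z}}(S)] = delta^{|S|} \hat{f}(S) on Maj_3, S = {0}, delta = 1/2:
maj3 = lambda x: 1 if sum(x) > 0 else -1
lhs = expected_restricted_coeff(maj3, 3, (0,), 0.5)
rhs = 0.5 ** 1 * fourier(maj3, 3, (0,))
assert abs(lhs - rhs) < 1e-12
```

The second identity of the proposition can be checked the same way by replacing `fourier(g, n, S)` with its square.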

Corollary 18 Fix $f : \{-1,1\}^n \to {\mathbb R}$ and $i \in [n]$. If $(\boldsymbol{J} \mid \boldsymbol{z})$ is a $\delta$-random restriction then $\mathop{\bf E}[\mathbf{Inf}_i[f_{\boldsymbol{J} \mid \boldsymbol{z}}]] = \delta \mathbf{Inf}_i[f]$. Hence also $\mathop{\bf E}[\mathbf{I}[f_{\boldsymbol{J} \mid \boldsymbol{z}}]] = \delta \mathbf{I}[f]$.

Proof: We have \begin{multline*} \mathop{\bf E}[\mathbf{Inf}_i[f_{\boldsymbol{J} \mid \boldsymbol{z}}]] = \mathop{\bf E}\left[\sum_{S \ni i} \widehat{f_{\boldsymbol{J} \mid \boldsymbol{z}}}(S)^2\right] = \sum_{S \ni i} \sum_{U \subseteq [n]} \mathop{\bf Pr}[U \cap \boldsymbol{J} = S] \widehat{f}(U)^2 \\ = \sum_{U \subseteq [n]} \mathop{\bf Pr}[U \cap \boldsymbol{J} \ni i] \widehat{f}(U)^2 = \sum_{U \ni i} \delta \widehat{f}(U)^2 = \delta \mathbf{Inf}_i[f], \end{multline*} where the second equality used Proposition 17. $\Box$
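Corollary 18 admits the same kind of exhaustive check, this time using the combinatorial description $\mathbf{Inf}_i[f] = \mathop{\bf Pr}_{\boldsymbol{x}}[f(\boldsymbol{x}) \neq f(\boldsymbol{x}^{\oplus i})]$ rather than Fourier coefficients. A Python sketch (helper names ours):

```python
from itertools import product

def influence(f, n, i):
    """Inf_i[f] = Pr_x[f(x) != f(x with coordinate i flipped)], x uniform."""
    count = 0
    for x in product((-1, 1), repeat=n):
        y = list(x)
        y[i] = -y[i]
        count += f(x) != f(tuple(y))
    return count / 2 ** n

def expected_restricted_influence(f, n, i, delta):
    """E[Inf_i[f_{J|z}]] over delta-random restrictions, by exact enumeration.
    If i is fixed, the extension of f_{J|z} ignores x_i, so Inf_i is 0."""
    acc = 0.0
    for mask in product((0, 1), repeat=n):
        if not mask[i]:
            continue  # i is not free: this restriction contributes 0
        free = sum(mask)
        pJ = delta ** free * (1 - delta) ** (n - free)
        fixed = [j for j in range(n) if not mask[j]]
        for zvals in product((-1, 1), repeat=len(fixed)):
            z = dict(zip(fixed, zvals))
            g = lambda x, z=z: f(tuple(z.get(j, x[j]) for j in range(n)))
            acc += pJ * influence(g, n, i) / 2 ** len(fixed)
    return acc

# Check E[Inf_1[f_{J|z}]] = delta * Inf_1[f] on Maj_3 with delta = 1/2:
maj3 = lambda x: 1 if sum(x) > 0 else -1
assert abs(expected_restricted_influence(maj3, 3, 0, 0.5)
           - 0.5 * influence(maj3, 3, 0)) < 1e-12
```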

(Proving Corollary 18 via Proposition 17 is a bit more elaborate than necessary; see the exercises for a simpler method.)

Corollary 18 lets us bound the total influence of a function $f$ by bounding the (expected) total influence of a random restriction of $f$. This is useful if $f$ is computable by a DNF formula of small size, since a random restriction is very likely to make this DNF have small width. This is a consequence of the following lemma:

Lemma 19 Let $T$ be a DNF term over $\{-1,1\}^n$ and fix $w \in {\mathbb N}^+$. Let $(\boldsymbol{J} \mid \boldsymbol{z})$ be a $(1/2)$-random restriction on $\{-1,1\}^n$. Then $\mathop{\bf Pr}[\text{width}(T_{\boldsymbol{J} \mid \boldsymbol{z}}) \geq w] \leq (3/4)^w$.

Proof: We may assume the initial width of $T$ is at least $w$, as otherwise its restriction under $(\boldsymbol{J} \mid \boldsymbol{z})$ cannot have width at least $w$. Now if any literal appearing in $T$ is fixed to $\mathsf{False}$ by the random restriction, the restricted term $T_{\boldsymbol{J} \mid \boldsymbol{z}}$ will be constantly $\mathsf{False}$ and thus have width $0 < w$. Each literal is independently fixed to $\mathsf{False}$ with probability $1/4$; hence the probability that no literal in $T$ is fixed to $\mathsf{False}$ is at most $(3/4)^w$. $\Box$
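Since the per-literal outcomes in this proof are independent, the probability in question can also be computed exactly for small widths, confirming (and exhibiting the slack in) the $(3/4)^w$ bound. A Python sketch, with helper names of our choosing:

```python
from fractions import Fraction
from itertools import product

def prob_width_at_least(k, w):
    """Exact Pr[width(T_{J|z}) >= w] for a term T of width k under a
    (1/2)-random restriction.  Independently for each literal:
    free (prob 1/2), fixed True (prob 1/4, the literal drops out),
    fixed False (prob 1/4, the whole term becomes constant False)."""
    p = {"free": Fraction(1, 2), "true": Fraction(1, 4), "false": Fraction(1, 4)}
    total = Fraction(0)
    for outcome in product(p, repeat=k):
        if "false" in outcome:
            continue  # restricted term is constantly False: width 0
        if outcome.count("free") >= w:
            prob = Fraction(1)
            for o in outcome:
                prob *= p[o]
            total += prob
    return total

# Lemma 19's bound holds for all small widths:
for k in range(1, 8):
    for w in range(1, k + 1):
        assert prob_width_at_least(k, w) <= Fraction(3, 4) ** w
```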

We can now bound the total influence of small DNF formulas.

Theorem 20 Let $f : \{-1,1\}^n \to \{-1,1\}$ be computable by a DNF of size $s$. Then $\mathbf{I}[f] \leq O(\log s)$.

Proof: Let $(\boldsymbol{J} \mid \boldsymbol{z})$ be a $(1/2)$-random restriction on $\{-1,1\}^n$. Let $\boldsymbol{w} = {\mathrm{DNF}_{\mathrm{width}}}(f_{\boldsymbol{J} \mid \boldsymbol{z}})$. By a union bound and Lemma 19 we have that $\mathop{\bf Pr}[\boldsymbol{w} \geq w] \leq s (3/4)^w$. Hence \begin{multline*} \mathop{\bf E}[\boldsymbol{w}] = \sum_{w = 1}^\infty \mathop{\bf Pr}[\boldsymbol{w} \geq w] \leq 3 \log s + \sum_{w > 3 \log s} s (3/4)^w \\ \leq 3 \log s + 4s (3/4)^{3 \log s} \leq 3\log s + 4/s^{0.2} = O(\log s). \end{multline*} Since Proposition 7 gives $\mathbf{I}[f_{\boldsymbol{J} \mid \boldsymbol{z}}] \leq 2\boldsymbol{w}$, we obtain $\mathop{\bf E}[\mathbf{I}[f_{\boldsymbol{J} \mid \boldsymbol{z}}]] \leq 2\mathop{\bf E}[\boldsymbol{w}] = O(\log s)$. And so from Corollary 18 we conclude $\mathbf{I}[f] = 2\mathop{\bf E}[\mathbf{I}[f_{\boldsymbol{J} \mid \boldsymbol{z}}]] \leq O(\log s)$. $\Box$
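The tail estimate in the display (reading $\log$ as $\log_2$, and summing the geometric series in closed form) can be sanity-checked numerically; the helper name `tail` below is ours:

```python
import math

def tail(s):
    """s * sum_{w > 3 log2 s} (3/4)^w, via the geometric series:
    the sum starting at w0 equals (3/4)^{w0} / (1 - 3/4) = 4 * (3/4)^{w0}."""
    w0 = math.floor(3 * math.log2(s)) + 1  # first integer w exceeding 3 log2 s
    return 4 * s * (3 / 4) ** w0

# The proof's chain: tail(s) <= 4s(3/4)^{3 log2 s} = 4 s^{-0.245...} <= 4 / s^{0.2}
for s in [2, 4, 16, 256, 10 ** 6]:
    assert tail(s) <= 4 / s ** 0.2
```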

### 6 comments to §4.3: Random restrictions

• igor

A minor typo in the third line: Section 1 should be Section 4.1 (the link is correct).

• I guess it’s Section 1 of the “current chapter”… Should be clearer in the final pdf… Thanks.

• bireswar

Definition 15: N-> [n] , included-> including

• Fixed included. N is supposed to just represent a general set there.

Thanks!

• Magnus Find

Reference to 3.21 should probably be 3.22

• Yep, thanks Magnus!