# §2.2: Influences and derivatives

Given a voting rule $f : \{-1,1\}^n \to \{-1,1\}$, it’s natural to try to measure the “influence” or “power” of the $i$th voter. One can define this to be the “probability that the $i$th vote affects the outcome”.

Definition 12 We say that coordinate $i \in [n]$ is pivotal for $f : \{-1,1\}^n \to \{-1,1\}$ on input $x$ if $f(x) \neq f(x^{\oplus i})$. Here we have used the notation $x^{\oplus i}$ for the string $(x_1, \dots, x_{i-1}, -x_i, x_{i+1}, \dots, x_n)$.

Definition 13 The influence of coordinate $i$ on $f : \{-1,1\}^n \to \{-1,1\}$ is defined to be the probability that $i$ is pivotal for a random input: $\mathbf{Inf}_i[f] = \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})].$
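For small $n$ this definition can be checked directly by brute force over all $2^n$ inputs. Here is a minimal sketch in Python (the function names are my own, not from the text):

```python
from itertools import product

def influence(f, i, n):
    """Pr over uniform x in {-1,1}^n that flipping coordinate i changes f(x)."""
    count = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]                    # y = x with coordinate i flipped
        if f(x) != f(tuple(y)):         # is coordinate i pivotal on x?
            count += 1
    return count / 2**n

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(influence(maj3, 0, 3))            # 0.5
```

The value $1/2$ for $\mathrm{Maj}_3$ agrees with the worked example later in this section.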

Influences can be equivalently defined in terms of “geometry” of the Hamming cube:

Fact 14 For $f : \{-1,1\}^n \to \{-1,1\}$, the influence $\mathbf{Inf}_i[f]$ equals the fraction of dimension-$i$ edges in the Hamming cube which are boundary edges. Here $(x,y)$ is a dimension-$i$ edge if $x$ and $y$ agree in all coordinates but the $i$th; it is a boundary edge if $f(x) \neq f(y)$.

Examples 15 For the $i$th dictator function $\chi_i$ we have that coordinate $i$ is pivotal for every input $x$; hence $\mathbf{Inf}_i[\chi_i] = 1$. On the other hand, if $j \neq i$ then coordinate $j$ is never pivotal; hence $\mathbf{Inf}_j[\chi_i] = 0$ for $j \neq i$. Note that the same two statements are true about the negated-dictator functions. For the constant functions $\pm 1$, all influences are $0$. For the $\mathrm{OR}_n$ function, coordinate $1$ is pivotal for exactly two inputs, $(-1, 1, 1, \dots, 1)$ and $(1, 1, 1, \dots, 1)$; hence $\mathbf{Inf}_1[\mathrm{OR}_n] = 2^{1-n}$. Similarly, $\mathbf{Inf}_i[\mathrm{OR}_n] = \mathbf{Inf}_i[\mathrm{AND}_n] = 2^{1-n}$ for all $i \in [n]$. The $\mathrm{Maj}_3$ function is depicted in Figure 2; the points where it’s $+1$ are coloured grey and the points where it’s $-1$ are coloured white. Its boundary edges are highlighted in black; there are $2$ of them in each of the $3$ dimensions. Since there are $4$ total edges in each dimension, we conclude $\mathbf{Inf}_i[\mathrm{Maj}_3] = 2/4 = 1/2$ for all $i \in [3]$. For majority in higher dimensions, $\mathbf{Inf}_i[\mathrm{Maj}_n]$ equals the probability that among $n-1$ random bits, exactly half of them are $1$. This is roughly $\frac{\sqrt{2/\pi}}{\sqrt{n}}$ for large $n$; we will see this in the exercises and in a future chapter.
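The last claim about majority can be previewed numerically. The sketch below (my own, for odd $n$) computes the exact pivot probability from the stated binomial formula and compares it with the asymptotic $\sqrt{2/\pi}/\sqrt{n}$:

```python
from math import comb, pi, sqrt

def maj_influence(n):
    """Inf_i[Maj_n] = Pr[exactly (n-1)/2 of n-1 uniform bits are 1], for odd n."""
    return comb(n - 1, (n - 1) // 2) / 2**(n - 1)

for n in [3, 11, 101, 1001]:
    print(n, maj_influence(n), sqrt(2 / (pi * n)))
```

Already at $n = 101$ the two values agree to about three decimal places.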

Influences can also be defined more “analytically” by introducing the derivative operators.

Definition 16 The $i$th (discrete) derivative operator $\mathrm{D}_i$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\mathrm{D}_i f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{D}_i f (x) = \frac{f(x^{(i\mapsto 1)}) - f(x^{(i \mapsto -1)})}{2}.$ Here we have used the notation $x^{(i \mapsto b)} = (x_1, \dots, x_{i-1}, b, x_{i+1}, \dots, x_n)$. Notice that $\mathrm{D}_if(x)$ does not actually depend on $x_i$. The operator $\mathrm{D}_i$ is a linear operator: i.e., $\mathrm{D}_i(f+g) = \mathrm{D}_i f + \mathrm{D}_i g$.

If $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued then \begin{equation} \mathrm{D}_if(x) = \begin{cases} 0 & \text{if coordinate $i$ is not pivotal for $x$,} \\ \pm 1 & \text{if coordinate $i$ is pivotal for $x$.} \end{cases} \label{eqn:derivative-of-boolean} \end{equation} Thus $\mathrm{D}_if(x)^2$ is the $0$-$1$ indicator for whether $i$ is pivotal for $x$ and we conclude that $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_if({\boldsymbol{x}})^2]$. We take this formula as a definition for the influences of real-valued boolean functions.

Definition 17 We generalize Definition 13 to functions $f : \{-1,1\}^n \to {\mathbb R}$ by defining the influence of coordinate $i$ on $f$ to be $\mathbf{Inf}_i[f] = \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[\mathrm{D}_if({\boldsymbol{x}})^2] = \|\mathrm{D}_i f\|_2^2.$
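Definitions 16 and 17 translate directly into code. A small sketch (helper names are my own): build $\mathrm{D}_i f$ pointwise and average its square over the cube.

```python
from itertools import product

def D(f, i):
    """Discrete derivative: D_i f(x) = (f(x with x_i=1) - f(x with x_i=-1)) / 2."""
    def Df(x):
        hi, lo = list(x), list(x)
        hi[i], lo[i] = 1, -1
        return (f(tuple(hi)) - f(tuple(lo))) / 2
    return Df

def influence_via_derivative(f, i, n):
    """Definition 17: Inf_i[f] = E[(D_i f)(x)^2] over uniform x in {-1,1}^n."""
    Df = D(f, i)
    return sum(Df(x) ** 2 for x in product([-1, 1], repeat=n)) / 2**n

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(influence_via_derivative(maj3, 0, 3))  # 0.5, agreeing with Definition 13
```

Note that `Df(x)` never reads `x[i]`, mirroring the observation that $\mathrm{D}_i f(x)$ does not depend on $x_i$.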

The discrete derivative operators are quite analogous to the usual partial derivatives. For example, $f : \{-1,1\}^n \to {\mathbb R}$ is monotone if and only if $\mathrm{D}_i f(x) \geq 0$ for all $i$ and $x$. Further, $\mathrm{D}_i$ acts like formal differentiation on Fourier expansions:

Proposition 18 Let $f : \{-1,1\}^n \to {\mathbb R}$ have the multilinear expansion $f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,x^S$. Then \begin{equation} \label{eqn:deriv-formula} \mathrm{D}_i f(x) = \sum_{\substack{S \subseteq [n] \\ S \ni i} } \widehat{f}(S)\,x^{S \setminus \{i\}}. \end{equation}

Since $\mathrm{D}_i$ is a linear operator, the proof follows immediately from the observation that $\mathrm{D}_i x^S = \begin{cases} x^{S \setminus \{i\}} & \text{if } S \ni i, \\ 0 & \text{if } S \not\ni i. \end{cases}$

By applying Parseval’s Theorem to the Fourier expansion \eqref{eqn:deriv-formula}, we obtain a Fourier formula for influences:

Theorem 19 For $f : \{-1,1\}^n \to {\mathbb R}$ and $i \in [n]$, $\mathbf{Inf}_i[f] = \sum_{S \ni i} \widehat{f}(S)^2.$

In other words, the influence of coordinate $i$ on $f$ equals the sum of $f$’s Fourier weights on sets containing $i$. This is another good example of being able to “read off” an interesting combinatorial property of a boolean function from its Fourier expansion. In the special case that $f : \{-1,1\}^n \to \{-1,1\}$ is monotone there is a much simpler way to read off its influences: they are the degree-$1$ Fourier coefficients. In what follows, we write $\widehat{f}(i)$ in place of $\widehat{f}(\{i\})$.
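Theorem 19 is easy to sanity-check numerically for small $n$: estimate each coefficient via $\widehat{f}(S) = \mathop{\bf E}[f({\boldsymbol{x}})\,{\boldsymbol{x}}^S]$ and sum the squares over sets containing $i$. A sketch (my own function names, with subsets encoded as bitmasks):

```python
from itertools import product

def fourier_coeff(f, S, n):
    """f_hat(S) = E[f(x) * x^S], where x^S = prod over j in S of x_j."""
    total = 0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for j in S:
            chi *= x[j]
        total += f(x) * chi
    return total / 2**n

def influence_fourier(f, i, n):
    """Theorem 19: Inf_i[f] = sum of f_hat(S)^2 over subsets S containing i."""
    total = 0.0
    for mask in range(2**n):
        if mask >> i & 1:               # the subset encoded by mask contains i
            S = [j for j in range(n) if mask >> j & 1]
            total += fourier_coeff(f, S, n) ** 2
    return total

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(influence_fourier(maj3, 0, 3))    # 0.5 = (1/2)^2 + (-1/2)^2
```

For $\mathrm{Maj}_3$ the only nonzero coefficients on sets containing $1$ are $\widehat{f}(\{1\}) = 1/2$ and $\widehat{f}(\{1,2,3\}) = -1/2$.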

Proposition 20 If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone then $\mathbf{Inf}_i[f] = \widehat{f}(i)$.

Proof: By monotonicity, the $\pm 1$ in \eqref{eqn:derivative-of-boolean} is always $1$; i.e., $\mathrm{D}_if(x)$ is the $0$-$1$ indicator that $i$ is pivotal for $x$. Hence $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_i f] = \widehat{\mathrm{D}_if}(\emptyset) = \widehat{f}(i)$, where the third equality used Proposition 18. $\Box$
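Proposition 20 can be confirmed on a concrete monotone function; here a quick sketch (my own names) compares the pivot probability with the degree-$1$ coefficient for $\mathrm{Maj}_5$:

```python
from itertools import product

def influence(f, i, n):
    """Pivot probability of coordinate i over uniform inputs."""
    count = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        count += f(x) != f(tuple(y))
    return count / 2**n

def degree_one_coeff(f, i, n):
    """f_hat(i) = E[f(x) * x_i]."""
    return sum(f(x) * x[i] for x in product([-1, 1], repeat=n)) / 2**n

maj5 = lambda x: 1 if sum(x) > 0 else -1   # monotone
assert influence(maj5, 0, 5) == degree_one_coeff(maj5, 0, 5)  # both equal 6/16
```

For a non-monotone function such as parity the two quantities differ (influence $1$, coefficient $0$), so monotonicity is essential.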

This formula gives a neat proof that for any $2$-candidate voting rule that is monotone and transitive-symmetric, all of the voters have small influence:

Proposition 21 Let $f : \{-1,1\}^n \to \{-1,1\}$ be transitive-symmetric and monotone. Then $\mathbf{Inf}_i[f] \leq 1/\sqrt{n}$ for all $i \in [n]$.

Proof: Transitive-symmetry of $f$ implies that $\widehat{f}(i) = \widehat{f}(i')$ for all $i, i' \in [n]$ (using Exercise 1.29(a)); thus by monotonicity, $\mathbf{Inf}_i[f] = \widehat{f}(i) = \widehat{f}(1)$ for all $i \in [n]$. But by Parseval, $1 = \sum_S \widehat{f}(S)^2 \geq \sum_{i=1}^n \widehat{f}(i)^2 = n \widehat{f}(1)^2$; hence $\widehat{f}(1) \leq 1/\sqrt{n}$. $\Box$

This bound is slightly improved in the exercises.

The derivative operators are very convenient for functions defined on $\{-1,1\}^n$ but they are less natural if we think of the Hamming cube as $\{\mathsf{True}, \mathsf{False}\}^n$; for the more general domains we’ll look at in later chapters they don’t even make sense. We end this section by introducing some useful definitions that will generalize better later on.

Definition 22 The $i$th expectation operator $\mathrm{E}_i$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{E}_i f (x) = \mathop{\bf E}_{{\boldsymbol{x}}_i}[f(x_1, \dots, x_{i-1}, {\boldsymbol{x}}_i, x_{i+1}, \dots, x_n)].$

Whereas $\mathrm{D}_i f$ isolates the part of $f$ depending on the $i$th coordinate, $\mathrm{E}_i f$ isolates the part not depending on the $i$th coordinate. In the exercises you are asked to verify the following:

Proposition 23 For $f : \{-1,1\}^n \to {\mathbb R}$,

• $\displaystyle \mathrm{E}_i f (x) = \frac{f(x^{(i \mapsto 1)}) + f(x^{(i \mapsto -1)})}{2}$,
• $\displaystyle \mathrm{E}_i f (x) = \sum_{S \not \ni i} \widehat{f}(S)\,x^{S}$,
• $\displaystyle f(x) = x_i \mathrm{D}_i f(x) + \mathrm{E}_i f(x)$.

Note that in the decomposition $f = x_i \mathrm{D}_i f + \mathrm{E}_i f$, neither $\mathrm{D}_i f$ nor $\mathrm{E}_i f$ depends on $x_i$. This decomposition is very useful for proving facts about boolean functions by induction on $n$.
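The decomposition $f = x_i \mathrm{D}_i f + \mathrm{E}_i f$ can be verified pointwise by brute force; a sketch (my own, using the two-point formulas from Definition 16 and Proposition 23):

```python
from itertools import product

def check_decomposition(f, i, n):
    """Verify f(x) = x_i * D_i f(x) + E_i f(x) on every input x."""
    for x in product([-1, 1], repeat=n):
        hi, lo = list(x), list(x)
        hi[i], lo[i] = 1, -1
        Df = (f(tuple(hi)) - f(tuple(lo))) / 2   # D_i f(x): ignores x_i
        Ef = (f(tuple(hi)) + f(tuple(lo))) / 2   # E_i f(x): ignores x_i
        assert abs(f(x) - (x[i] * Df + Ef)) < 1e-12
    return True

maj3 = lambda x: 1 if sum(x) > 0 else -1
parity = lambda x: x[0] * x[1] * x[2]
assert check_decomposition(maj3, 1, 3) and check_decomposition(parity, 0, 3)
```

The check works for any real-valued $f$, not just boolean-valued ones, matching the generality of Proposition 23.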

Finally, we will also define an operator very similar to $\mathrm{D}_i$ called the $i$th Laplacian:

Definition 24 The $i$th directional Laplacian operator $\mathrm{L}_i$ is defined by $\mathrm{L}_i f = f - \mathrm{E}_i f.$ Notational warning: many authors use the negated definition, $\mathrm{E}_i f - f$.

In the exercises you are asked to verify the following:

Proposition 25 For $f : \{-1,1\}^n \to {\mathbb R}$,

• $\displaystyle \mathrm{L}_i f (x) = \frac{f(x)- f(x^{\oplus i})}{2}$,
• $\displaystyle \mathrm{L}_i f (x) = x_i \mathrm{D}_i f(x) = \sum_{S \ni i} \widehat{f}(S)\,x^{S}$,
• $\displaystyle \langle f, \mathrm{L}_i f \rangle = \langle \mathrm{L}_i f, \mathrm{L}_i f \rangle = \mathbf{Inf}_i[f]$.
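The last identity, $\langle f, \mathrm{L}_i f \rangle = \mathbf{Inf}_i[f]$, can also be confirmed numerically; a sketch (my own, using the first formula of Proposition 25 for $\mathrm{L}_i f$ and the inner product $\langle f,g \rangle = \mathop{\bf E}[f({\boldsymbol{x}})g({\boldsymbol{x}})]$):

```python
from itertools import product

def laplacian_inner(f, i, n):
    """<f, L_i f> = E[f(x) * (f(x) - f(x with coordinate i flipped)) / 2]."""
    total = 0.0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        Lf = (f(x) - f(tuple(y))) / 2            # L_i f(x)
        total += f(x) * Lf
    return total / 2**n

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(laplacian_inner(maj3, 0, 3))               # 0.5 = Inf_0[Maj3]
```

For boolean-valued $f$, each term $f(x)\mathrm{L}_i f(x)$ is exactly the $0$-$1$ pivotality indicator, which is why the average recovers the influence.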

### 10 comments to §2.2: Influences and derivatives

• syz

line before Def 22: chpaters -> chapters

• Ryan O'Donnell

Fixed, thanks!

• Albert

In the proof of Proposition 20, the equality that uses Proposition 18 is the third, not the second.

• Ryan O'Donnell

Thanks Albert, fixed.

• Jon Awbrey

See the following pages for another approach to boolean derivatives.

Differential Logic : Introduction

Differential_Logic_and_Dynamic_Systems

• Jon Awbrey

There’s another exposition of differential logic, leading up to the logical analogue of Taylor series, at the following address:

Differential Propositional Calculus

• Yi Zhang

I wonder what happens with cross derivatives, for example how to express $D_i(D_j f(x))$ in terms of the Fourier coefficients.

• Yi Zhang

I got it now!!

• Johan Sivertsen

Line after Thm. 19 should be the sum of f’s \emph{squared} Fourier weights.

• Avishay Yanay

In the equation of Theorem 19, the sum limits are $S\subseteq [n], i\in S$ rather than $i\in S$.