In this section we study the *tribes* DNF formulas, which serve as important examples and counterexamples in the analysis of Boolean functions. Perhaps the most notable feature of the tribes function is that (for a suitable choice of parameters) it is essentially unbiased and yet all of its influences are quite tiny.

Recall from Chapter 2.1 that the function $\mathrm{Tribes}_{w,s} : \{-1,1\}^{sw} \to \{-1,1\}$ is defined by its width-$w$, size-$s$ DNF representation: \begin{multline*} \mathrm{Tribes}_{w,s}(x_1, \dots, x_w, \dots, x_{(s-1)w+1}, \dots, x_{sw}) \\ = (x_1 \wedge \cdots \wedge x_w)\ \vee\ \cdots\ \vee\ (x_{(s-1)w+1} \wedge \cdots \wedge x_{sw}). \end{multline*} (We are using the notation where $-1$ represents logical $\mathsf{True}$ and $1$ represents logical $\mathsf{False}$.)

**Fact 10** $\mathop{\bf Pr}_{\boldsymbol{x}}[\mathrm{Tribes}_{w,s}({\boldsymbol{x}}) = -1] = 1 - (1-2^{-w})^s$.
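Fact 10 holds because each tribe is unanimously $\mathsf{True}$ with probability $2^{-w}$, independently across the $s$ tribes. For small parameters it can also be verified by brute force; here is a minimal Python sketch (the `tribes` helper and its calling convention are mine, not from the text):

```python
from itertools import product

def tribes(w, s, x):
    """Evaluate Tribes_{w,s} on x in {-1,+1}^{sw}, with -1 = True, +1 = False.

    The output is -1 (True) iff some tribe of w consecutive coordinates
    is unanimously -1.
    """
    for i in range(s):
        if all(b == -1 for b in x[i*w:(i+1)*w]):
            return -1
    return 1

# Each tribe is unanimously True with probability 2^-w, independently,
# so Pr[Tribes = -1] = 1 - (1 - 2^-w)^s (Fact 10).  Check exhaustively:
w, s = 2, 3
count = sum(1 for x in product([-1, 1], repeat=s*w) if tribes(w, s, x) == -1)
assert count / 2**(s*w) == 1 - (1 - 2.0**-w)**s
print(count, "of", 2**(s*w))  # 37 of 64, i.e. 1 - (3/4)^3
```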

The most interesting setting of parameters makes this probability as close to $1/2$ as possible (a slightly different choice than the one in Exercise 2.11):

**Definition 11** For $w \in {\mathbb N}^+$, let $s = s_w$ be the largest integer such that $1 - (1-2^{-w})^s \leq 1/2$. Then for $n = n_w = sw$ we define $\mathrm{Tribes}_n : \{-1,1\}^n \to \{-1,1\}$ to be $\mathrm{Tribes}_{w,s}$. Note this is defined only for certain $n$: $1$, $4$, $15$, $40$, $\dots$
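Definition 11 is easy to instantiate numerically; the following sketch (the function name `tribes_params` is mine) recovers the sequence $n = 1, 4, 15, 40, \dots$:

```python
def tribes_params(w):
    """Return (s, n) from Definition 11: the largest s with
    1 - (1 - 2^-w)^s <= 1/2, and n = s*w."""
    s = 1  # s = 1 always qualifies, since 2^-w <= 1/2 for w >= 1
    while 1 - (1 - 2.0**-w)**(s + 1) <= 0.5:
        s += 1
    return s, s * w

print([tribes_params(w)[1] for w in range(1, 5)])  # [1, 4, 15, 40]
```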

Here $s \approx \ln(2) 2^w$, hence $n \approx \ln(2) w 2^w$ and therefore $w \approx \log n - \log \ln n$ and $s \approx n/\log n$. A slightly more careful accounting (see the exercises) yields:

**Proposition 12** For the $\mathrm{Tribes}_n$ function as in Definition 11:

- $s = \ln(2) 2^w - \Theta_w(1)$;
- $n = \ln(2) w 2^w - \Theta(w)$, thus $n_{w+1} = (2 + o(1)) n_w$;
- $w = \log n - \log \ln n + o_n(1)$, and $2^w = \frac{n}{\ln n} (1+o_n(1))$;
- $\mathop{\bf Pr}[\mathrm{Tribes}_{n}({\boldsymbol{x}}) = -1] = 1/2 – O\left(\frac{\log n}{n}\right)$.
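The bias bound in the last item can be sanity-checked numerically. In this sketch (the helper name `largest_s` is mine), the gap $1/2 - \mathop{\bf Pr}[\mathrm{Tribes}_n = -1]$ is printed alongside $\frac{\log n}{n}$, and indeed stays below it:

```python
import math

def largest_s(w):
    """Largest s with 1 - (1 - 2^-w)^s <= 1/2 (Definition 11)."""
    s = 1
    while 1 - (1 - 2.0**-w)**(s + 1) <= 0.5:
        s += 1
    return s

for w in range(2, 12):
    s = largest_s(w)
    n = s * w
    gap = 0.5 - (1 - (1 - 2.0**-w)**s)   # 1/2 - Pr[Tribes_n = -1] >= 0
    print(f"n = {n:6d}   gap = {gap:.2e}   (log n)/n = {math.log(n)/n:.2e}")
```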

Thus with this setting of parameters $\mathrm{Tribes}_n$ is essentially unbiased. Regarding its influences:

**Proposition 13** $\mathbf{Inf}_i[\mathrm{Tribes}_n] = \frac{\ln n}{n}(1\pm o(1))$ for each $i \in [n]$ and hence $\mathbf{I}[\mathrm{Tribes}_n] = (\ln n)(1\pm o(1))$.

*Proof:* Thinking of $\mathrm{Tribes}_n = \mathrm{Tribes}_{w,s}$ as a voting rule, voter $i$ is pivotal if and only if: a) all other voters in $i$’s “tribe” vote $-1$ ($\mathsf{True}$); b) all other tribes produce the outcome $1$ ($\mathsf{False}$). The probability of this is indeed \[ 2^{-(w-1)} \cdot (1-2^{-w})^{s-1} = \tfrac{2}{2^w - 1} \cdot \mathop{\bf Pr}[\mathrm{Tribes}_n = 1] = \tfrac{\ln n}{n}(1\pm o(1)), \] where we used Fact 10 and then Proposition 12. $\Box$
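For small parameters the pivotality computation in the proof can be confirmed by exhaustive enumeration. A sketch (the `tribes` and `influence` helpers are my names, not from the text):

```python
from itertools import product

def tribes(w, s, x):
    """Tribes_{w,s} on x in {-1,+1}^{sw}; -1 = True, +1 = False."""
    for i in range(s):
        if all(b == -1 for b in x[i*w:(i+1)*w]):
            return -1
    return 1

def influence(w, s, i):
    """Inf_i[Tribes_{w,s}] = Pr[flipping coordinate i changes the output]."""
    n = s * w
    pivotal = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        if tribes(w, s, x) != tribes(w, s, y):
            pivotal += 1
    return pivotal / 2**n

# Matches 2^{-(w-1)} * (1 - 2^{-w})^{s-1} from the proof, for every coordinate:
w, s = 2, 3
for i in range(s * w):
    assert abs(influence(w, s, i) - 2.0**-(w - 1) * (1 - 2.0**-w)**(s - 1)) < 1e-12
```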

Thus if we are interested in (essentially) unbiased voting rules in which every voter has small influence, $\mathrm{Tribes}_n$ is a much stronger example than $\mathrm{Maj}_n$ where each voter has influence $\Theta(1/\sqrt{n})$. You may wonder if the maximum influence can be even *smaller* than $\Theta\bigl(\frac{\ln n}{n}\bigr)$ for unbiased voting rules. Certainly it can’t be smaller than $\frac{1}{n}$, since the Poincaré inequality says that $\mathbf{I}[f] \geq 1$ for unbiased $f$. In fact the famous KKL Theorem shows that the $\mathrm{Tribes}_n$ example is tight up to constants:

**Kahn–Kalai–Linial (KKL) Theorem** For any $f : \{-1,1\}^n \to \{-1,1\}$, \[ \max_{i \in [n]} \{\mathbf{Inf}_i[f] \} \geq \mathop{\bf Var}[f] \cdot \Omega\Bigl(\frac{\log n}{n}\Bigr). \]

We prove the KKL Theorem in Chapter 10.

We conclude this section by recording a formula for the Fourier coefficients of $\mathrm{Tribes}_{w,s}$. The proof is an exercise.

**Proposition 14** Suppose we index the Fourier coefficients of $\mathrm{Tribes}_{w,s} : \{-1,1\}^{sw} \to \{-1,1\}$ by sets $T \subseteq [sw]$, writing $T = (T_1, \dots, T_s)$ where $T_i$ is the intersection of $T$ with the $i$th “tribe”. Then \[ \widehat{\mathrm{Tribes}_{w,s}}(T) = \begin{cases} 2(1-2^{-w})^s - 1 & \text{if $T = \emptyset$,} \\ 2(-1)^{k + |T|}2^{-kw} (1-2^{-w})^{s-k} & \text{if $k = \#\{i : T_i \neq \emptyset\} > 0$.} \end{cases} \]
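The formula can be checked against the definition $\widehat{f}(T) = \mathop{\bf E}[f({\boldsymbol{x}})\,\chi_T({\boldsymbol{x}})]$ by brute force for small parameters. A sketch (helper names are mine; coordinates are 0-indexed, so coordinate $j$ belongs to tribe $j \,\mathrm{div}\, w$):

```python
from itertools import combinations, product

def tribes(w, s, x):
    """Tribes_{w,s} on x in {-1,+1}^{sw}; -1 = True, +1 = False."""
    for i in range(s):
        if all(b == -1 for b in x[i*w:(i+1)*w]):
            return -1
    return 1

def fourier_coeff(w, s, T):
    """hat{f}(T) = E[f(x) * prod_{j in T} x_j], by summing over all inputs."""
    n = s * w
    total = 0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for j in T:
            chi *= x[j]
        total += tribes(w, s, x) * chi
    return total / 2**n

def formula(w, s, T):
    """The closed form from Proposition 14 (0-indexed coordinates)."""
    if not T:
        return 2 * (1 - 2.0**-w)**s - 1
    k = len({j // w for j in T})  # number of tribes that T intersects
    return 2 * (-1)**(k + len(T)) * 2.0**(-k * w) * (1 - 2.0**-w)**(s - k)

# Compare the two on every subset T of [sw] for w = s = 2:
w, s = 2, 2
for r in range(s * w + 1):
    for T in combinations(range(s * w), r):
        assert abs(fourier_coeff(w, s, T) - formula(w, s, T)) < 1e-12
```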
