In this section we will collect some applications of the General Hypercontractivity Theorem, including generalizations of the facts from Section 9.5.

We begin by bounding the $q$-norms of low-degree functions. The proof is essentially the same as that of Theorem 9.21; see the exercises.

Theorem 20. In the setting of the General Hypercontractivity Theorem, if $f$ has degree at most $k$ then \[ \|f\|_q \leq (\sqrt{q-1} \cdot \lambda^{1/q-1/2})^k \|f\|_2. \]

Next we turn to an analogue of Theorem 9.22, getting a relationship between the $2$-norm and the $1$-norm for low-degree functions. The proof (an exercise) needs $(2,q,\rho)$-hypercontractivity with $q$ tending to $2$, so to get the most elegant statement requires appealing to the sharp bound from Theorem 17:

Theorem 21. In the setting of the General Hypercontractivity Theorem, if $f$ has degree at most $k$ then \[ \|f\|_2 \leq c(\lambda)^k \|f\|_1, \quad \text{where } c(\lambda) = \sqrt{\tfrac{1-\lambda}{\lambda}}^{1/(1-2\lambda)}. \] We have $c(\lambda) \sim 1/\sqrt{\lambda}$ as $\lambda \to 0$, $c(\lambda) \to e$ as $\lambda \to \tfrac{1}{2}$, and in general, $c(\lambda) \leq e/\sqrt{2\lambda}$.
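These asymptotics are easy to confirm numerically. The following throwaway check (my addition, not part of the text) tabulates $c(\lambda)$, approaching the removable singularity at $\lambda = 1/2$ from below:

```python
import math

def c(lam):
    """c(lambda) = sqrt((1-lam)/lam) raised to the 1/(1 - 2*lam) power."""
    return math.sqrt((1 - lam) / lam) ** (1 / (1 - 2 * lam))

# c(lambda) ~ 1/sqrt(lambda) as lambda -> 0:
for lam in [1e-3, 1e-6]:
    assert abs(c(lam) / (1 / math.sqrt(lam)) - 1) < 0.1

# c(lambda) -> e as lambda -> 1/2 (removable singularity at 1/2):
assert abs(c(0.5 - 1e-6) - math.e) < 1e-3

# The uniform bound c(lambda) <= e / sqrt(2*lambda):
for lam in [0.01, 0.1, 0.25, 0.4, 0.49]:
    assert c(lam) <= math.e / math.sqrt(2 * lam)
```

For instance $c(1/4) = \sqrt{3}^{\,2} = 3$, comfortably below $e/\sqrt{1/2} \approx 3.84$.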

Just as in Chapter 9.5 we obtain (exercise) the following as a corollary:

Theorem 22. In the setting of the General Hypercontractivity Theorem, if $f$ is a nonconstant function of degree at most $k$ then \[ \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi^{\otimes n}}\bigl[f({\boldsymbol{x}}) > \mathop{\bf E}[f]\bigr] \geq \tfrac14 (e^2/(2\lambda))^{-k} \geq (15/\lambda)^{-k}. \]
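The final inequality in the display is elementary arithmetic: it reduces to $(30/e^2)^k \geq 4$, which holds for all $k \geq 1$ since $30/e^2 \approx 4.06$. A throwaway numeric confirmation (my addition):

```python
import math

# (1/4)*(e^2/(2*lam))**(-k) >= (15/lam)**(-k) reduces to (30/e^2)**k >= 4,
# and 30/e^2 ~ 4.06 > 4, so it holds for every k >= 1 and every lam > 0.
assert 30 / math.e**2 > 4

for k in range(1, 10):
    for lam in [0.01, 0.1, 0.5]:
        assert 0.25 * (math.e**2 / (2 * lam)) ** (-k) >= (15 / lam) ** (-k)
```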

Extending Theorem 9.23, the concentration bound for degree-$k$ functions, is straightforward (see the exercises). We again get that the probability of exceeding $t$ standard deviations decays like $\exp(-\Theta(t^{2/k}))$, though the constant in the $\Theta(\cdot)$ is linear in $\lambda$:

Theorem 23. In the setting of the General Hypercontractivity Theorem, if $f$ has degree at most $k$ then for any $t \geq \sqrt{2e/\lambda}^{k}$, \[ \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi^{\otimes n}}[|f({\boldsymbol{x}})| \geq t \|f\|_2] \leq \lambda^k \exp\left(-\tfrac{k}{2e} \lambda t^{2/k}\right). \]
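As an illustration (mine, with arbitrarily chosen parameters), here is a quick Monte Carlo check of this tail bound in the simplest case: uniform $\pm 1$ bits, so $\lambda = 1/2$, and the degree-$1$ function $f(x) = (x_1 + \cdots + x_n)/\sqrt{n}$, which satisfies $\|f\|_2 = 1$:

```python
import math
import random

random.seed(0)  # deterministic, for reproducibility

lam, k, n = 0.5, 1, 64
t = 3.3  # must satisfy t >= sqrt(2e/lam)**k ~ 3.30
assert t >= math.sqrt(2 * math.e / lam) ** k

bound = lam**k * math.exp(-(k / (2 * math.e)) * lam * t ** (2 / k))

trials = 50_000
exceed = sum(
    abs(sum(random.choice((-1, 1)) for _ in range(n)) / math.sqrt(n)) >= t
    for _ in range(trials)
)
empirical = exceed / trials
# By the CLT the true tail is ~1e-3, far below the bound (~0.18).
assert empirical <= bound
```

The bound is far from tight here (a Gaussian tail decays like $e^{-t^2/2}$ versus the theorem's $e^{-t^2/(4e)}$), but that slack is expected for the general statement.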

Next, we give a generalization of the Small-Set Expansion Theorem, the proof being left for the exercises.

Theorem 24. Let $(\Omega, \pi)$ be a finite probability space, $|\Omega| \geq 2$, in which every outcome has probability at least $\lambda$. Let $A \subseteq \Omega^n$ have “volume” $\alpha$; i.e., suppose $\mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi^{\otimes n}}[{\boldsymbol{x}} \in A] = \alpha$. Let $q \geq 2$. Then for any \[ 0 \leq \rho \leq \tfrac{1}{q-1} \cdot \lambda^{1 - 2/q} \] (or even $\rho$ as large as the square of the quantity in Theorem 17) we have \[ \mathbf{Stab}_{\rho}[1_A] = \mathop{\bf Pr}_{\substack{{\boldsymbol{x}} \sim \pi^{\otimes n} \\ \boldsymbol{y} \sim N_{\rho}({\boldsymbol{x}})}}[{\boldsymbol{x}} \in A, \boldsymbol{y} \in A] \leq \alpha^{2-2/q}. \]
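For intuition, consider the simplest nontrivial case (my illustration): $\Omega = \{1, \dots, m\}$ with $\pi$ uniform, so $\lambda = 1/m$, and the dictator set $A = \{x : x_1 = a\}$ of volume $\alpha = 1/m$. Since $\boldsymbol{y} \sim N_\rho({\boldsymbol{x}})$ keeps each coordinate with probability $\rho$ and rerandomizes it from $\pi$ otherwise, here $\mathbf{Stab}_\rho[1_A] = \alpha(\rho + (1-\rho)\alpha)$ exactly, and the bound can be checked in closed form:

```python
# Exact check of the small-set expansion bound for the dictator set
# A = {x : x_1 = a} over a uniform m-letter alphabet (lambda = alpha = 1/m).
for m in [2, 3, 5, 10]:
    lam = alpha = 1.0 / m
    for q in [2, 3, 4, 8]:
        rho = (1.0 / (q - 1)) * lam ** (1 - 2.0 / q)
        stab = alpha * (rho + (1 - rho) * alpha)   # Stab_rho[1_A], exactly
        assert stab <= alpha ** (2 - 2.0 / q) + 1e-12
```

Note that at $q = 2$ the permitted $\rho$ is $1$ and the bound $\alpha^{2-2/q} = \alpha$ is attained with equality, consistent with the theorem being sharp there.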

Similarly, we can generalize Corollary 9.25, bounding the stable influence of a coordinate by a power of the usual influence:

Theorem 25. In the setting of Theorem 24, if $f : \Omega^n \to \{-1,1\}$ then \[ \rho \mathbf{Inf}_i^{(\rho)}[f] \leq \mathbf{Inf}_i[f]^{2-2/q} \] for all $i \in [n]$. In particular, by selecting $q = 4$ we get \begin{equation} \label{eqn:gen-stab-infl-bound4} \sum_{S \ni i} (\sqrt{\lambda}/3)^{|S|} \|f^{=S}\|_2^2 \leq \mathbf{Inf}_i[f]^{3/2}. \end{equation}

*Proof:* Applying the General Hypercontractivity Theorem to $\mathrm{L}_i f$ and squaring we get \[ \|\mathrm{T}_{\sqrt{\rho}} \mathrm{L}_i f\|^2_2 \leq \|\mathrm{L}_i f\|_{q'}^2. \] By definition, the left-hand side is $\rho \mathbf{Inf}_i^{(\rho)}[f]$. The right-hand side is $(\|\mathrm{L}_i f\|_{q'}^{q'})^{2-2/q}$ (using $2/q' = 2 - 2/q$), and $\|\mathrm{L}_i f\|_{q'}^{q'} \leq \mathbf{Inf}_i[f]$ by Exercise 8.9½. $\Box$
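As a concrete instance of \eqref{eqn:gen-stab-infl-bound4} (my addition), take uniform $\pm 1$ bits, so $\lambda = 1/2$ and $\|f^{=S}\|_2^2 = \widehat{f}(S)^2$. For $f = \mathrm{Maj}_3$ we have $\widehat{f}(\{j\})^2 = \widehat{f}(\{1,2,3\})^2 = 1/4$ and $\mathbf{Inf}_1[f] = 1/2$, and the inequality holds with plenty of room:

```python
import math

# q = 4 case over uniform +-1 bits (lambda = 1/2): check
# sum_{S ni 1} (sqrt(lam)/3)^|S| * fhat(S)^2 <= Inf_1[f]^{3/2} for Maj_3.
lam = 0.5
rho = math.sqrt(lam) / 3
fhat_sq = {frozenset({1}): 0.25, frozenset({2}): 0.25,
           frozenset({3}): 0.25, frozenset({1, 2, 3}): 0.25}
inf_1 = sum(w for S, w in fhat_sq.items() if 1 in S)        # = 1/2
lhs = sum(rho ** len(S) * w for S, w in fhat_sq.items() if 1 in S)
assert abs(inf_1 - 0.5) < 1e-12
assert lhs <= inf_1 ** 1.5       # ~0.062 <= ~0.354
```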

The KKL Edge-Isoperimetric Theorem in this setting now follows by an almost verbatim repetition of the proof from Chapter 9.6.

KKL Isoperimetric Theorem for general product space domains. In the setting of the General Hypercontractivity Theorem, suppose $f$ has range $\{-1,1\}$ and is nonconstant. Let $\mathbf{I}'[f] = \mathbf{I}[f]/\mathop{\bf Var}[f] \geq 1$. Then \[ \mathbf{MaxInf}[f] \geq \tfrac{1}{\mathbf{I}'[f]^2}\cdot (9/\lambda)^{-\mathbf{I}'[f]}. \] As a consequence, $\mathbf{MaxInf}[f] \geq \Omega(\frac{1}{\log(1/\lambda)}) \cdot \mathop{\bf Var}[f] \cdot \frac{\log n}{n}$.

*Proof:* The proof is essentially identical to the one in Chapter 9.6, but using \eqref{eqn:gen-stab-infl-bound4} from Theorem 25. Summing this inequality over all $i \in [n]$ yields \begin{equation} \label{eqn:gen-KKL} \sum_{S \subseteq [n]} |S| (\sqrt{\lambda}/3)^{|S|} \|f^{=S}\|_2^2 \leq \sum_{i=1}^n \mathbf{Inf}_i[f]^{3/2} \leq \mathbf{MaxInf}[f]^{1/2} \cdot \mathbf{I}[f]. \end{equation} On the left-hand side above we will drop the factor of $|S|$ for $|S| > 0$. We also introduce the set-valued random variable $\boldsymbol{S}$ defined by $\mathop{\bf Pr}[\boldsymbol{S} = S] = \|f^{=S}\|_2^2/\mathop{\bf Var}[f]$ for $S \neq \emptyset$. Note that $\mathop{\bf E}[|\boldsymbol{S}|] = \mathbf{I}'[f]$. Thus \[ \text{LHS}\eqref{eqn:gen-KKL} \geq \mathop{\bf Var}[f] \cdot \mathop{\bf E}_{\boldsymbol{S}}[(\sqrt{\lambda}/3)^{|\boldsymbol{S}|}] \geq \mathop{\bf Var}[f] \cdot (\sqrt{\lambda}/3)^{\mathop{\bf E}[|\boldsymbol{S}|]} = \mathop{\bf Var}[f] \cdot (\sqrt{\lambda}/3)^{\mathbf{I}'[f]}, \] where we used that $s \mapsto (\sqrt{\lambda}/3)^s$ is convex. The first statement of the theorem now follows after rearrangement. As for the second statement, there is some universal $c > 0$ such that \[ \mathbf{I}'[f] \leq c \cdot \tfrac{1}{\log(1/\lambda)} \cdot \log n \quad\implies\quad \tfrac{1}{\mathbf{I}'[f]^2}\cdot (9/\lambda)^{-\mathbf{I}'[f]} = O(1/\lambda)^{-\mathbf{I}'[f]} \geq \frac{1}{\sqrt{n}}, \] say, in which case our lower bound for $\mathbf{MaxInf}[f]$ is $\frac{1}{\sqrt{n}} \gg \frac{\log n}{n}$. On the other hand, \[ \mathbf{I}'[f] \geq c \cdot \tfrac{1}{\log(1/\lambda)} \cdot \log n \quad\implies\quad \mathbf{I}[f] \geq \Omega(\tfrac{1}{\log(1/\lambda)}) \cdot \mathop{\bf Var}[f] \cdot \log n, \] in which case even the average influence of $f$ is $\Omega(\frac{1}{\log(1/\lambda)}) \cdot \mathop{\bf Var}[f] \cdot \frac{\log n}{n}$. $\Box$

Similarly, essentially no extra work is required to generalize Theorem 9.28 and Friedgut’s Junta Theorem to general product space domains; see the exercises. For example, we have:

Friedgut’s Junta Theorem for general product space domains. In the setting of the General Hypercontractivity Theorem, if $f$ has range $\{-1,1\}$ and $0 < \epsilon \leq 1$, then $f$ is $\epsilon$-close to a $(1/\lambda)^{O(\mathbf{I}[f]/\epsilon)}$-junta $h : \Omega^n \to \{-1,1\}$ (i.e., $\mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi^{\otimes n}}[f({\boldsymbol{x}}) \neq h({\boldsymbol{x}})] \leq \epsilon$).

We conclude this section by establishing “sharp thresholds” — in the sense of Chapter 8.4 — for monotone transitive-symmetric functions with critical probability in the range $[1/n^{o(1)}, 1-1/n^{o(1)}]$. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a nonconstant monotone function and define the (strictly increasing) curve $F : [0,1] \to [0,1]$ by $F(p) = \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi_p^{\otimes n}}[f({\boldsymbol{x}}) = -1]$. Recall that the critical probability $p_c$ is defined to be the value such that $F(p_c) = 1/2$; equivalently, such that $\mathop{\bf Var}[f^{(p_c)}] = 1$. Recall also the Margulis–Russo Formula which says that \[ \frac{d}{dp} F(p) = \frac{1}{\sigma^2} \cdot \mathbf{I}[f^{(p)}], \] where \[ \sigma^2 = \sigma^2(p) = \mathop{\bf Var}_{\pi_p}[{\boldsymbol{x}}_i] = 4p(1-p) = \Theta(\min(p, 1-p)). \]

Remark 26. Since we will not be concerned with constant factors, it’s helpful in the following discussion to mentally replace $\sigma^2$ with $\min(p, 1-p)$. In fact it’s even more helpful to always assume $p \leq 1/2$ and replace $\sigma^2$ with $p$.
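As a sanity check on the Margulis–Russo Formula (my illustration, using the convention $\mathop{\bf Pr}_{\pi_p}[{\boldsymbol{x}}_i = -1] = p$, under which $F$ is increasing for monotone $f$), one can verify both sides by exact enumeration for $\mathrm{Maj}_5$, computing $\mathbf{I}[f^{(p)}] = \sum_i \mathop{\bf E}[\mathop{\bf Var}_{{\boldsymbol{x}}_i}[f]]$ directly:

```python
import itertools
import math

def maj(x):                      # majority of +-1 bits, n odd
    return 1 if sum(x) > 0 else -1

def F(p, n=5):                   # Pr_{pi_p}[f = -1], with Pr[x_i = -1] = p
    total = 0.0
    for x in itertools.product((-1, 1), repeat=n):
        pr = math.prod(p if xi == -1 else 1 - p for xi in x)
        if maj(x) == -1:
            total += pr
    return total

def total_influence(p, n=5):     # I[f^{(p)}] = sum_i E[ Var_{x_i}[f] ]
    I = 0.0
    for i in range(n):
        for rest in itertools.product((-1, 1), repeat=n - 1):
            pr = math.prod(p if xi == -1 else 1 - p for xi in rest)
            a = maj(rest[:i] + (-1,) + rest[i:])
            b = maj(rest[:i] + (1,) + rest[i:])
            # Var over x_i of a {a, b}-valued bit is p(1-p)(a-b)^2:
            I += pr * p * (1 - p) * (a - b) ** 2
    return I

p, h = 0.3, 1e-6
lhs = (F(p + h) - F(p - h)) / (2 * h)         # dF/dp, numerically
rhs = total_influence(p) / (4 * p * (1 - p))  # I[f^{(p)}] / sigma^2
assert abs(lhs - rhs) < 1e-4
```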

Now suppose $f$ is a transitive-symmetric function; e.g., a graph property. This means that all of its influences are the same; i.e., \[ \mathbf{Inf}_i[f^{(p)}] = \mathbf{MaxInf}[f^{(p)}] = \frac{1}{n} \mathbf{I}[f^{(p)}] \] for all $i \in [n]$. It thus follows from the KKL Theorem for general product spaces that \[ \mathbf{I}[f^{(p)}] \geq \Omega\bigl(\tfrac{1}{\log(1/\min(p,1-p))}\bigr) \cdot \mathop{\bf Var}[f^{(p)}] \cdot \log n; \] hence \begin{equation} \label{eqn:every-mono-fcn-sharp-thresh} \frac{d}{dp} F(p) \geq \mathop{\bf Var}[f^{(p)}] \cdot \Omega\bigl(\tfrac{1}{\sigma^2 \ln(e/\sigma^2)}\bigr) \cdot \log n. \end{equation} (As mentioned in Remark 26, assuming $p \leq 1/2$ you can read $\sigma^2 \ln(e/\sigma^2)$ as $p \log(1/p)$.)

If we take $p = p_c$ in inequality \eqref{eqn:every-mono-fcn-sharp-thresh} we conclude that $F(p)$ has a large derivative at its critical probability: $F'(p_c) \geq \Omega(\tfrac{1}{p_c \log(1/p_c)}) \cdot \log n$, assuming $p_c \leq 1/2$. In particular if $\log(1/p_c) \ll \log n$ — that is, $p_c > 1/n^{o(1)}$ — then $F'(p_c) = \omega(\tfrac{1}{p_c})$. This suggests that $f$ has a “sharp threshold”; i.e., $F(p)$ jumps from near $0$ to near $1$ in an interval of the form $p_c(1\pm o(1))$. However, largeness of $F'(p_c)$ is not quite enough to establish a sharp threshold (see Exercise 8.26); we need $F'(p)$ to be large *throughout* the range of $p$ near $p_c$ where $\mathop{\bf Var}[f^{(p)}]$ is large. Happily, inequality \eqref{eqn:every-mono-fcn-sharp-thresh} provides precisely this.

Remark 27. Even if we are only concerned about monotone functions $f$ with $p_c = 1/2$, we still need the KKL Theorem for general product spaces to establish a sharp threshold. Though $F'(1/2) \geq \Omega(\log n)$ can be derived using just the uniform-distribution KKL Theorem from Chapter 9.6, we also need to know that $F'(p) \geq \Omega(\log n)$ continues to hold for $p = 1/2 \pm O(1/\log n)$.

Making the above ideas precise, we can establish the following result of Friedgut and Kalai **[FK96b]** (cf. Exercises 8.24, 8.25):

Theorem 28. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a nonconstant, monotone, transitive-symmetric function and let $F : [0,1] \to [0,1]$ be the strictly increasing function defined by $F(p) = \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \pi_p^{\otimes n}}[f({\boldsymbol{x}}) = -1]$. Let $p_c$ be the critical probability such that $F(p_c) = 1/2$ and assume without loss of generality that $p_c \leq 1/2$. Fix $0 < \epsilon < 1/4$ and let \[ \eta = B \log(1/\epsilon) \cdot \frac{\log(1/p_c)}{\log n}, \] where $B > 0$ is a certain universal constant. Then assuming $\eta \leq 1/2$, \[ F(p_c\cdot (1-\eta)) \leq \epsilon, \qquad F(p_c\cdot(1+\eta)) \geq 1 - \epsilon. \]

*Proof:* Let $p$ be in the range $p_c\cdot (1\pm\eta)$. By the assumption $\eta \leq 1/2$ we also have $\tfrac12 p_c \leq p \leq \tfrac32 p_c \leq \tfrac34$. It follows that the quantity $\sigma^2\ln(e/\sigma^2)$ in the KKL corollary \eqref{eqn:every-mono-fcn-sharp-thresh} is within a universal constant factor of $p_c\log(1/p_c)$. Thus for all $p$ in the range $p_c\cdot (1\pm\eta)$ we obtain \[ F'(p) \geq \mathop{\bf Var}[f^{(p)}] \cdot \Omega\bigl(\tfrac{1}{p_c\log(1/p_c)}\bigr) \cdot \log n. \] Using $\mathop{\bf Var}[f^{(p)}] = 4 F(p)(1-F(p))$, the definition of $\eta$, and a suitable choice of $B$, this yields \begin{equation} \label{eqn:fk-ineq} F'(p) \geq \frac{2\ln(1/(2\epsilon))}{\eta p_c} F(p)(1-F(p)). \end{equation} We now show that \eqref{eqn:fk-ineq} implies that $F(p_c-\eta p_c) \leq \epsilon$ and leave the implication $F(p_c +\eta p_c) \geq 1-\epsilon$ as an exercise. For $p \leq p_c$ we have $1-F(p) \geq 1/2$ and hence \[ F'(p) \geq \frac{\ln(1/(2\epsilon))}{\eta p_c} F(p) \quad\implies\quad \frac{d}{dp} \ln F(p) = \frac{F'(p)}{F(p)} \geq \frac{\ln(1/(2\epsilon))}{\eta p_c}. \] Integrating this bound over the interval $[p_c - \eta p_c, p_c]$, which has length $\eta p_c$, gives \[ \ln F(p_c - \eta p_c) \leq \ln F(p_c) - \ln(1/(2\epsilon)) = \ln(1/2) - \ln(1/(2\epsilon)) = \ln \epsilon; \] i.e., $F(p_c-\eta p_c) \leq \epsilon$ as claimed. $\Box$
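To see how little room inequality \eqref{eqn:fk-ineq} leaves, note (my observation, not from the text) that the curve satisfying it with equality and $F(p_c) = 1/2$ is the logistic $F(p) = 1/(1+e^{-C(p - p_c)})$ with $C = 2\ln(1/(2\epsilon))/(\eta p_c)$; for this curve $F(p_c - \eta p_c) = 4\epsilon^2/(1+4\epsilon^2) \leq \epsilon$. A throwaway numeric confirmation:

```python
import math

# Extremal curve for F' = C * F * (1-F), C = 2*ln(1/(2*eps))/(eta*p_c),
# with F(p_c) = 1/2: the logistic.  Its value at p = p_c*(1 - eta) is
# exactly 4*eps^2/(1 + 4*eps^2), which is <= eps for every eps.
def logistic_at_left_edge(eps, eta, p_c):
    C = 2 * math.log(1 / (2 * eps)) / (eta * p_c)
    p = p_c * (1 - eta)
    return 1 / (1 + math.exp(-C * (p - p_c)))

for eps in [0.01, 0.1, 0.24]:
    for (eta, p_c) in [(0.1, 0.3), (0.5, 0.05)]:
        val = logistic_at_left_edge(eps, eta, p_c)
        assert abs(val - 4 * eps**2 / (1 + 4 * eps**2)) < 1e-12
        assert val <= eps
```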

This proof establishes that every monotone transitive-symmetric function with critical probability at least $1/n^{o(1)}$ (and at most $1 - 1/n^{o(1)}$) has a sharp threshold. Unfortunately, the restriction on the critical probability can’t be removed. The simplest example illustrating this is the logical OR function $\mathrm{OR}_n : \{\mathsf{True}, \mathsf{False}\}^{n} \to \{\mathsf{True}, \mathsf{False}\}$ (equivalently, the graph property of containing an edge), which has critical probability $p_c \sim \frac{\ln 2}{n}$. Even though $\mathrm{OR}_n$ is transitive-symmetric, it has constant total influence at its critical probability, $\mathbf{I}[\mathrm{OR}_n^{(p_c)}] \sim 2\ln2$. Indeed $\mathrm{OR}_n$ doesn’t have a sharp threshold; i.e., it’s not true that $\mathop{\bf Pr}_{\pi_p}[\mathrm{OR}_n({\boldsymbol{x}}) = \mathsf{True}] = 1-o(1)$ for $p = p_c(1+o(1))$. For example, if ${\boldsymbol{x}}$ is drawn from the $(2p_c)$-biased distribution we still just have $\mathop{\bf Pr}[\mathrm{OR}_n({\boldsymbol{x}}) = \mathsf{True}] \approx 3/4$. On the other hand, most “interesting” monotone transitive-symmetric functions *do* have a sharp threshold; in Section 5 of this chapter we’ll derive a more sophisticated method for establishing this.
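All the $\mathrm{OR}_n$ facts just cited are closed-form and easy to confirm numerically (a throwaway check, my addition):

```python
import math

n = 10**6
p_c = 1 - 2 ** (-1 / n)          # solves Pr[OR_n = False] = (1 - p_c)^n = 1/2
assert abs(p_c * n / math.log(2) - 1) < 1e-3       # p_c ~ ln(2)/n

def pr_true(p):                  # Pr_{pi_p}[OR_n = True] = 1 - (1 - p)^n
    return 1 - (1 - p) ** n

# At twice the critical probability, OR_n is still far from certain:
assert abs(pr_true(2 * p_c) - 0.75) < 1e-3         # ~3/4: no sharp threshold

# Total influence at p_c: a coordinate is pivotal iff all others are
# False, and Inf_i = 4p(1-p) * Pr[pivotal], giving I ~ 2*ln(2).
I = n * 4 * p_c * (1 - p_c) * (1 - p_c) ** (n - 1)
assert abs(I - 2 * math.log(2)) < 1e-3
```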
