**[Editor's note -- just a reminder that these daily updates are almost entirely thanks to Li-Yang Tan.]**

The first speaker of the day was Sergey Bobkov, who spoke about concentration on the cube and its relationship with various isoperimetric problems, including highlights of his own work.

Suppose we endow the continuous unit cube $[0,1]^n$ with the uniform (Lebesgue) probability measure $\mathbf{P}$; we will be interested in concentration inequalities like the following: for all measurable $A\subseteq [0,1]^n$ of density $\mathbf{P}(A) \ge 1/2$ and $r > 0$,

\begin{equation}\mathbf{P}(A + rB_2) \ge 1-e^{-cr^2}, \label{bobkov}\end{equation}

where $c > 0$ is a universal constant ($c$ can be taken to be $\pi$ in this case), $B_2$ is the open Euclidean ball of unit radius centered at the origin, and $+$ is the Minkowski sum. (The analogous statement is *false* for the discrete cube $\{0,1\}^n$, though Talagrand has shown that it holds when $A$ or $\overline{A}$ is convex.) Here $A+rB_2$ is the enlargement of $A$ obtained by taking every point $x\in A$ and also including its $r$-neighborhood, i.e. all $y$ at distance $\| x-y\|_2 < r$ from $x$, and so (\ref{bobkov}) says that $A$ along with its $r$-neighborhood fills up almost all of $[0,1]^n$. Concentration inequalities like these are well known to be intimately related to various isoperimetric problems: measures $\mu$ that satisfy the Poincaré and log-Sobolev inequalities enjoy good concentration properties, and implications in the reverse direction have been established as well. (Incidentally, Sergey mentions that the isoperimetric problem of determining the sets $A\subseteq [0,1]^n$ of measure $1/2$ that minimize $\mathbf{P}(A+rB_2)$ remains open.)

Sergey presented two concentration inequalities of this flavor with respect to other measures and notions of enlargement/neighborhood, both of which appear in a 2010 paper of his. First, if $A\subseteq [0,1]^n$ is *permutation invariant* (i.e. if $x\in A$ then $(x_{\pi(1)},\ldots,x_{\pi(n)})\in A$ for all permutations $\pi \in S_n)$ and $\mathbf{P}(A) \ge 1/2$, then for all $r > 0$,

\[ \mathbf{P}(A + rB_\infty) \ge 1-e^{-cnr^2}, \]

where again $c > 0$ is a universal constant and now $B_\infty$ is the unit $\ell_\infty$-ball. Note that since

\[ A + \frac{r}{\sqrt{n}} B_\infty \subset A + rB_2, \]

this is a sharpening of (\ref{bobkov}) for permutation invariant sets $A$. An equivalent functional formulation is the following: let $f:[0,1]^n\to\mathbb{R}$ be permutation invariant and suppose $|f(x)-f(y)| \le \|x-y\|_\infty$ for all $x,y\in[0,1]^n$. Then for all $r > 0$,

\[ \mathbf{P}\{|f-\mathbb{E}[f]|> r\}\le 2e^{-cnr^2}.\]
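To see this functional formulation in action, here is a quick Monte Carlo sketch (my own illustration, not from the talk): the coordinate average $f(x) = (x_1+\cdots+x_n)/n$ is permutation invariant and $1$-Lipschitz with respect to $\ell_\infty$, so the theorem predicts a tail decaying like $e^{-cnr^2}$.

```python
import random
from statistics import fmean

def empirical_tail(n, r, trials=10000, seed=0):
    """Estimate P(|f - E[f]| > r) for f(x) = (x_1 + ... + x_n)/n with x
    uniform on [0,1]^n.  This f is permutation invariant and 1-Lipschitz
    in the l_infinity norm, so the theorem gives a bound of 2*exp(-c*n*r^2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        f = fmean(rng.random() for _ in range(n))
        hits += abs(f - 0.5) > r  # E[f] = 1/2
    return hits / trials

# The tail should shrink rapidly as n grows, consistent with exp(-c*n*r^2).
for n in (5, 20, 80):
    print(n, empirical_tail(n, r=0.15))
```

The observed tails drop off sharply with $n$, as the inequality predicts (of course this particular $f$ also concentrates by elementary means; the theorem's content is that *every* permutation-invariant $\ell_\infty$-Lipschitz function behaves this way).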

The second inequality concerns subsets of the simplex

\[ \Delta_n := \{ y \in \mathbb{R}^n_+\colon y_1 + \cdots + y_n \le 1 \}, \]

endowed with the uniform measure $\mu_n$, and states the following: for all $r > 0$ and $A\subseteq \Delta_n$ with $\mu_n(A) \ge 1/2$, we have that

\begin{equation} \mu_n(A+rB_1) \ge 1-e^{-cnr^2}. \label{bobkov2} \end{equation}

This inequality was proved by Arias-de-Reyna and Villa with a weaker bound of $1-ne^{-cnr^2}$, and the unnecessary dependence on $n$ was subsequently removed by Schechtman. Both of these proofs are based on Talagrand’s isoperimetric theorem for the product exponential measure; Sergey gives a direct inductive proof in his paper. (A result of Sodin establishes an incomparable inequality $\mu_n(A+rB_2) \ge 1-e^{-cnr}$. Since $B_2\subset \sqrt{n}B_1$ this implies that $\mu_n(A+rB_1) \ge 1-e^{-c\sqrt{n}r}$, but this is a weaker statement than (\ref{bobkov2}).)

Our next speaker was Almut Burchard, who spoke about polarization, symmetrization, and rearrangement inequalities. (For an introduction to all of these ideas, see Almut’s excellent survey.) The setting for these ideas can be either the sphere, Euclidean space, or hyperbolic space. Almut used Euclidean space as her running example but I’ll use the sphere here. Let $A$ be a subset of (the surface of) the sphere (in any dimension). Its *(symmetric decreasing) rearrangement* is the set $A^*$ which is a spherical cap (centered at the north pole, say) with the same volume as $A$. More generally, for a function $f : S^n \to \mathbb{R}$ you can define its rearrangement $f^* : S^n \to \mathbb{R}$ to be the function which: a) is constant on all circles centered at the north pole; b) decreases as you move away from the north pole; c) is equimeasurable with $f$; i.e., for each $t$, the volume of points where $f > t$ is the same as the volume of points where $f^* > t$.
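As a toy illustration (my own sketch, not from the talk), here is the discrete one-dimensional analogue of symmetric decreasing rearrangement, with the grid point $0$ playing the role of the north pole. Properties a)–c) above become: values are placed symmetrically-ish around $0$, decrease outward, and form the same multiset as the original.

```python
def rearrangement(f):
    """Symmetric decreasing rearrangement of a list f of nonnegative values,
    viewed as a function on the integer grid -m, ..., m (len(f) odd).
    The largest value is placed at 0 (the 'north pole'), and values
    decrease outward, filling positions in the order 0, -1, 1, -2, 2, ..."""
    m = len(f) // 2
    positions = sorted(range(-m, m + 1), key=lambda x: (abs(x), x))
    values = sorted(f, reverse=True)
    g = [0.0] * len(f)
    for pos, val in zip(positions, values):
        g[pos + m] = val
    return g

f = [0.2, 0.9, 0.1, 0.5, 0.4, 0.8, 0.3]   # a function on {-3, ..., 3}
fstar = rearrangement(f)
print(fstar)  # -> [0.2, 0.4, 0.8, 0.9, 0.5, 0.3, 0.1]
```

Note $f^*$ has the same sorted list of values as $f$ (equimeasurability) and decreases as you move away from the center in either direction.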

There are many inequalities which say that certain functionals of $f$ (or of multiple functions) increase or decrease when you change $f$ to its rearrangement $f^*$. For example, it’s possible to show that the perimeter of $A$ is always at least the perimeter of $A^*$ — this gives the Isoperimetric Inequality on the sphere. More generally, you can show that for any $0 < \rho < 1$, the “spherical $\rho$-noise stability” of $A$ is always at most the spherical $\rho$-noise stability of $A^*$. This is like “Borell’s Isoperimetric Theorem”, except for the sphere. In fact, it’s strictly better because you can very straightforwardly deduce Borell’s Theorem (in Gaussian space) by projecting the sphere down to a lower dimension. (See the appendix of this paper.)

How do you show such rearrangement inequalities? All you have to do is show that a *much simpler* symmetrization operation, called *polarization*, appropriately increases or decreases your functional. Then you can get the result for the full rearrangement straightforwardly just by “polarizing many times, in every possible way” (roughly speaking!). And what is polarization? It’s super-simple: Pick any great circle $H$ on the sphere, dividing it into the hemisphere $H^+$ containing the north pole and the hemisphere $H^-$ containing the south pole. Now “polarizing” $A$ across $H$ just means that for each point of $A$ in $H^-$, you move it to its reflection across $H$ in $H^+$ — provided that point is not already occupied by $A$. More generally, if $f : S^n \to \mathbb{R}$, its polarization across $H$ is defined by looking at each pair of mirror-image points $x^\pm \in H^\pm$ and switching $f$’s values on these two points so that $f(x^+) \geq f(x^-)$ holds.
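Polarization is easy to write down in the same toy one-dimensional setting as above (again my own sketch): the "hyperplane" is a point $c/2$ on the line, the half containing the pole $0$ plays the role of $H^+$, and each mirror pair of points swaps values so the larger one sits on the pole's side.

```python
def polarize(f, c):
    """Polarize f (a dict from integer grid points to nonnegative values,
    implicitly 0 elsewhere) across the reflection x -> c - x, for c > 0.
    The fixed 'hyperplane' is {x = c/2}; the half containing the pole 0
    is {x < c/2}, and each mirror pair is swapped so that the larger
    value ends up on that half."""
    points = set(f) | {c - x for x in f}
    g = {}
    for x in points:
        y = c - x                       # mirror image of x
        fx, fy = f.get(x, 0.0), f.get(y, 0.0)
        if x < y:                       # x on the pole's side: keep the max
            g[x] = max(fx, fy)
        elif x > y:                     # x on the far side: keep the min
            g[x] = min(fx, fy)
        else:                           # x lies on the hyperplane itself
            g[x] = fx
    return g

f = {-2: 0.7, -1: 0.2, 0: 0.1, 1: 0.9, 2: 0.4}
g = polarize(f, 1)   # reflect across {x = 1/2}: pairs (0,1), (-1,2), (-2,3)
print(g)
```

Each two-point switch just permutes values within a mirror pair, so polarization is automatically equimeasurable — which is exactly why it is such a convenient elementary step.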

OK, so this reduces your job to showing that any polarization improves your functional (say, increases your noise stability). How do you show that? Essentially, you just have to show that each little two-point switch only helps you. Using this, one can get such basic results as the following: For any functions $u, v$,

\[ \int \int u(x) v(y) K(\mathrm{dist}(x,y))\, dx\, dy \leq \int \int u^*(x) v^*(y) K(\mathrm{dist}(x,y))\, dx\, dy \]

whenever $K$ is a decreasing “kernel” function. (The fact that $K$ is decreasing is all you need to show that the two-point switches always work in your favor.) The above inequality holds, e.g., in Euclidean space (Riesz ’30s) and on the sphere (Baernstein-Taylor, ’70s). “Borell’s Isoperimetric Inequality on the Sphere” is automatic from it, even in the $2$-function or $2$-set version, just because $\rho$-noisy pairs of points $x, y$ are more likely to be close together than far apart (i.e., the appropriate kernel function is indeed decreasing).

Almut spoke about how these kinds of results can be used to show various results about Brownian motion. For example, in a work of hers with Schmuckenschlager, she shows that if $A$ is any subset of the sphere and $t \in \mathbb{R}^+$ is any time, then the probability a Brownian motion on the sphere will exit $A$ by time $t$ is at least the probability that it will exit the cap $A^*$ by time $t$. (This generalizes Borell’s similar result in Gaussian space.)

Almut also described versions of these rearrangement inequalities with more than two functions. One nice one (proved by Draghici using a reduction to functions on $\{-1,1\}^n$, and then strengthened by Almut and Hajaiej) is the following: Let $F : \mathbb{R}^n \to \mathbb{R}$ be supermodular. Then for any functions $u_1, \dots, u_n$ and any decreasing kernels $K_{ij}$ ($1 \leq i < j \leq n$) it holds that the following only increases when the $u_i$’s are replaced by their rearrangements $u_i^*$:

\[ \int \cdots \int F(u_1(x_1), \dots, u_n(x_n)) \prod_{i < j} K_{ij}(\mathrm{dist}(x_i,x_j))\, dx_1 \cdots dx_n. \]

Next up was Alexandra Kolla, who presented joint work with Aram Harrow and Leonard Schulman on a maximal inequality for spherical means on the hypercube. She began with the following motivating example: suppose a criminal on the run in $\{0,1\}^n$ would like to choose a vertex among the $2^n$ many at which to reside, and there are cops located on a certain number of these vertices. Needless to say the criminal wouldn’t want to reside on a vertex on which a cop is located; presumably he would want to be as far away from as many cops as possible. To quantify this, let us say that a vertex $v$ is safe if for every radius $r$, the fraction of cops located at distance $r$ from $v$ is small. The question is: how many safe spots are there in $\{0,1\}^n$ as a function of the density of cops?

Let $f: \{0,1\}^n \to \{0,1\}$, which we should think of as an indicator for the vertices on which a cop is located. For each $k \in [n]$, consider the $k$-th spherical means operator

\[ (S_kf)(x) := \mathop{\mathbb{E}}_{\mathrm{dist}(x,\boldsymbol{y}) = k}[f(\boldsymbol{y})],\]

or in words, the fraction of the ${n\choose k}$ many vertices $y$ at distance $k$ from $x$ on which a cop is located. As the title of their paper promises, Aram, Alexandra, and Leonard prove a maximal inequality for these spherical means operators. More precisely, if we define

\[ (M_S f)(x) := \max_{0\le k\le n} (S_kf)(x), \]

then their main theorem states that there exists a universal constant $A_H$ such that $\| M_S \|_{2\to 2} \le A_H$. The key point is that $A_H$ is independent of the dimension $n$. Equivalently, for all $n \in \mathbb{N}$ and $f: \{0,1\}^n \to \{0,1\}$ we have that

\[ \| M_Sf \|_2 \le A_H \| f \|_2. \]

Therefore, assuming the overall density of cops (i.e. $\|f \|_2^2 = \mathbb{E}[f(\boldsymbol{x})]$) is sufficiently small, there is a large fraction of vertices that are relatively safe for the criminal.

The proof proceeds via spectral approximation. Note that the distribution induced by the $S_k$ operator (i.e. uniform over all vertices at a fixed distance $k$ away from $x$) is not a product distribution, which makes it less amenable to analysis using spectral techniques. We will therefore use as a proxy for $S_k$ the $T_\rho$ noise operator with rate $\rho = 1-(2k)/n$, where each coordinate is flipped with probability $k/n$. Intuitively these two operators should be somewhat similar, since the $S_k$ operator corresponds to flipping a random subset of *exactly* $k$ coordinates of $x$, whereas the $T_\rho$ operator corresponds to flipping $k$ coordinates *in expectation*. And indeed this intuition can be made precise by comparing the eigenstructures of $S_k$ and $T_\rho$: the spectrum of $T_\rho$ can be analyzed via Fourier analysis, and that of $S_k$ is determined by the Krawtchouk polynomials.
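To make the comparison concrete, here is a small numerical sketch (my own, with the standard normalization conventions assumed): the eigenvalue of $S_k$ on Fourier level $j$ is $K_k(j)/\binom{n}{k}$, where $K_k$ is a Krawtchouk polynomial, while the corresponding eigenvalue of $T_\rho$ with $\rho = 1-2k/n$ is simply $\rho^j$.

```python
from math import comb

def krawtchouk(n, k, j):
    """Krawtchouk polynomial K_k(j) = sum_i (-1)^i C(j, i) C(n - j, k - i)."""
    return sum((-1) ** i * comb(j, i) * comb(n - j, k - i) for i in range(k + 1))

def spherical_eigenvalue(n, k, j):
    # Eigenvalue of the spherical means operator S_k on Fourier level j.
    return krawtchouk(n, k, j) / comb(n, k)

def noise_eigenvalue(n, k, j):
    # Eigenvalue of the noise operator T_rho on level j, rho = 1 - 2k/n.
    return (1 - 2 * k / n) ** j

# Compare the two spectra on low Fourier levels.
n = 30
for k in (1, 5, 10):
    print([round(spherical_eigenvalue(n, k, j), 3) for j in range(4)],
          [round(noise_eigenvalue(n, k, j), 3) for j in range(4)])
```

On level $0$ both eigenvalues are exactly $1$, and on level $1$ they agree exactly (both equal $1-2k/n$); on higher levels they agree only approximately, which is where the real work in the proof lies.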

Alexandra concluded by mentioning recent follow-up work with Greenblatt, Krause, and Schulman establishing similar maximal inequalities for $\mathbb{Z}_{m+1}^n$, the Cartesian product of $(m+1)$-cliques, for operators that average over a ball of radius $k$ instead of a sphere of radius $k$. (The analogous inequalities for the sphere appear to be harder.) The eigenstructure of these operators is again given by the Krawtchouk polynomials, though in this case the analysis is significantly more challenging.

For more details, see this blog post or watch Alexandra’s talk at the Simons Institute.

Our fourth speaker was Joe Neeman, who discussed some very nice recent work of his on noise stability. The *Gaussian noise stability* of a set $A\subseteq \mathbb{R}^n$ is the quantity $\Pr[\mathbf{X}\in A, \mathbf{Y}\in A]$, where $\mathbf{X}$ and $\mathbf{Y}$ are a pair of $\rho$-correlated Gaussians. While there are numerous interesting questions concerning Gaussian noise stability, Joe began his talk by addressing the basic one of how large this quantity can be relative to the Gaussian measure $\gamma_n(A)$ of $A$. A fundamental result in this direction is Borell’s Isoperimetric Theorem, which states that halfspaces are the most noise stable. More precisely, if $B = \{x\cdot a \le b\}$ is a halfspace such that $\gamma_n(B) = \gamma_n(A)$ (by rotational invariance of the Gaussian all such halfspaces are essentially one and the same), then

\[ \Pr[\mathbf{X}\in A, \mathbf{Y}\in A] \le \Pr[\mathbf{X}\in B, \mathbf{Y}\in B]. \]

The discrete analogue of Borell’s theorem for the hypercube (of which Borell’s theorem may be viewed as the “Gaussian special case”) is the Majority Is Stablest theorem of our three organizers Elchanan, Ryan, and Krzysztof: if the origin-centered Hamming ball $B = \big\{\sum x_i \le b\big\}$ satisfies $|B| = |A|$ then

\[ \Pr[\mathbf{X}\in A, \mathbf{Y}\in A] \le \Pr[\mathbf{X}\in B, \mathbf{Y}\in B] + \tau_A, \]

where now $\mathbf{X}, \mathbf{Y} \in \{-1,1\}^n$ are $\rho$-correlated bitstrings (i.e. $\mathbf{X}$ and $\mathbf{Y}$ are each uniform and $\mathbb{E}[\mathbf{X}_i\mathbf{Y}_j] = \delta_{ij}\cdot \rho$), and the error term $\tau_A$ depends on the maximum influence of $\mathbb{1}_A$.

To state these theorems in a slightly different language, we define

\[ J(a,b;\rho) := \Pr[\mathbf{X}\in B_1, \mathbf{Y}\in B_2], \]

where $B_1$ and $B_2$ are parallel halfspaces with $\gamma_n(B_1) = a$ and $\gamma_n(B_2) = b$, and $\mathbf{X},\mathbf{Y}$ form a pair of $\rho$-correlated Gaussians. (For brevity we write $J(a,b)$ when the parameter $\rho$ is clear from context.) In this notation, (the two-function generalization of) Borell’s theorem may be stated as

\[ \Pr[\mathbf{X}\in A_1, \mathbf{Y}\in A_2] \le J(\gamma_n(A_1), \gamma_n(A_2))\quad \text{for all $A_1,A_2 \subseteq \mathbb{R}^n$}, \]

and likewise, the Mossel-O’Donnell-Oleszkiewicz theorem as

\[ \Pr[\mathbf{X} \in A_1, \mathbf{Y} \in A_2] \le J(|A_1|/2^n, |A_2|/2^n) + \tau. \]
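Since $B_1$ and $B_2$ are parallel halfspaces, only one coordinate matters, and $J$ is easy to estimate numerically. Here is a quick stdlib-only Monte Carlo sketch (my own sanity check, not from the talk); the thresholds $\Phi^{-1}(a), \Phi^{-1}(b)$ just express the halfspace measures.

```python
import random
from statistics import NormalDist

def estimate_J(a, b, rho, trials=100000, seed=0):
    """Monte Carlo estimate of J(a, b; rho) = Pr[X in B1, Y in B2] for the
    parallel halfspaces B1 = {x : x_1 <= Phi^{-1}(a)} and
    B2 = {x : x_1 <= Phi^{-1}(b)}, where (X_1, Y_1) is a rho-correlated
    standard Gaussian pair.  Requires 0 < a, b < 1."""
    nd = NormalDist()
    ta, tb = nd.inv_cdf(a), nd.inv_cdf(b)
    rng = random.Random(seed)
    s = (1 - rho * rho) ** 0.5
    hits = 0
    for _ in range(trials):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + s * rng.gauss(0.0, 1.0)   # y is rho-correlated with x
        hits += (x <= ta) and (y <= tb)
    return hits / trials

# rho = 0 gives independence (J = ab); positive rho increases J(a, a).
print(estimate_J(0.5, 0.5, 0.0), estimate_J(0.5, 0.5, 0.8))
```

As sanity checks: at $\rho=0$ the estimate is close to $ab$, at $\rho=1$ it is close to $\min(a,b)$, and $J(a,a;\rho)$ grows with $\rho$, as one expects.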

It may seem slightly strange that the $J(\cdot,\cdot)$ function, defined in terms of the Gaussian distribution, appears here in the case when $\mathbf{X},\mathbf{Y}\in \{-1,1\}^n$; however, note that this is in fact okay because of the central limit theorem. A recent result of Joe and Elchanan’s states the following functional inequality: if $f,g:\mathbb{R}^n\to [0,1]$, and $K : [0,1]^2\to\mathbb{R}$ is a function satisfying the second derivative condition

\[ \left(\begin{array}{cc} K_{xx} & \rho K_{xy} \\ \rho K_{xy} & K_{yy} \end{array}\right) \le 0,\]

then $\mathbb{E}[K(f(\mathbf{X}),g(\mathbf{Y}))] \le K(\mathbb{E}f,\mathbb{E}g)$. To see that this implies Borell’s theorem, we simply check that the derivatives of $J$ do in fact satisfy the condition above, and that $J(0,0)=J(0,1)=J(1,0)=0$ and $J(1,1)=1$, and therefore

\[ \mathbb{E}[J(\mathbb{1}_{A_1}(\mathbf{X}),\mathbb{1}_{A_2}(\mathbf{Y}))] = \Pr[\mathbf{X}\in A_1,\mathbf{Y}\in A_2] \le J(\gamma_n(A_1),\gamma_n(A_2)). \]

A key advantage of Joe and Elchanan’s new proof of Borell’s inequality is that it allows them to establish both *uniqueness* and *stability*: halfspaces are uniquely the most noise stable sets, and furthermore, sets that are almost optimally noise stable are close to some halfspace. In exciting follow-up work with Anindya De, Joe and Elchanan extended this technique to give a new proof of the Majority Is Stablest theorem by induction on dimension. Yet again this approach has a key advantage: in this case this new “discrete proof” can be shown to be captured by low-degree SOS proofs (see Boaz’s talk yesterday), and as a consequence, rules out the Khot-Vishnoi MaxCut instance as a candidate integrality gap instance for the SOS/Lasserre hierarchy.

Joe concluded with a couple of intriguing directions and open problems. First, instead of just two correlated random variables $\mathbf{X}$ and $\mathbf{Y}$, we may consider $k$ of them $\mathbf{X}^1,\ldots,\mathbf{X}^k$. In this case the analogous question would be that of determining the sets $A_1,\ldots,A_k\subseteq \mathbb{R}^n$ that maximize the probability

\[ \Pr[\mathbf{X}^1 \in A_1,\ldots,\mathbf{X}^k \in A_k]. \]

Second, let $\mathbf{X}$ and $\mathbf{Y}$ be $\rho$-correlated Gaussians as in Borell’s isoperimetric theorem. Can we determine the *three* sets $A_1,A_2$, and $A_3$ that partition $\mathbb{R}^n$ and maximize the probability

\[ \Pr[\exists\, i \colon \mathbf{X}\in A_i, \mathbf{Y}\in A_i],\]

subject to volume constraints $\gamma_n(A_i) = a_i$? Joe and Elchanan have made progress on this question in very recent work with Steven Heilman.

For more details check out these videos of Joe and Elchanan speaking at the Simons Institute.

The final scheduled speaker of the workshop was Yuval Peres, who spoke about joint work with Yael Dekel and Ori Gurel-Gurevich on the planted clique problem, an important unresolved problem in average-case complexity. (The worst-case complexity of determining the size of the maximum clique in an *arbitrary* graph is well known to be NP-complete, and assuming $\mathsf{P}\ne \mathsf{NP}$, this quantity is even hard to approximate.) Suppose we are given a graph $G \leftarrow \mathcal{G}(n,1/2)$ and are asked to find a large clique in $G$; a straightforward probabilistic argument shows that the largest clique has size $(2+o(1))\log n$ with high probability. A simple greedy algorithm which runs in polynomial time finds a clique of size $(1+o(1))\log n$, and an exhaustive search running in time $n^{O(\log n)}$ will successfully find the largest clique. Despite our best efforts we have not been able to get the best of both these worlds, finding a clique of size say $1.01\log n$ in polynomial time.

Given this state of affairs Jerrum and Kučera introduced the planted clique problem, where the algorithm designer’s job is made seemingly easier with the promise that a clique of size $k = n^{\epsilon}$ for some constant $\epsilon > 0$ is planted somewhere in $G \leftarrow \mathcal{G}(n,1/2)$ — now can we find it efficiently? (Note that the size of this planted clique is *huge* compared to the one of size $2\log n$ that “shows up naturally”.) Kučera noted that when $k \ge c_0\sqrt{n\log n}$ the clique vertices are the ones with the highest degrees w.h.p., and so a simple algorithm that makes a pass over the vertex set recovers it. Alon, Krivelevich and Sudakov subsequently improved on this with a spectral algorithm that works when $k\ge c\sqrt{n}$. This remains the best we can do, and designing algorithms that find (or even simply detect the existence of) planted cliques of size $k = o(\sqrt{n})$ is an outstanding open problem.
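Kučera's degree observation is simple enough to demonstrate in a few lines. Below is a toy implementation of my own (with the constant $c_0$ generously set to $3$, and a small $n$, purely for illustration): plant a clique of size $3\sqrt{n\log n}$ in $\mathcal{G}(n,1/2)$ and read it off from the $k$ highest degrees.

```python
import math
import random

def planted_clique_instance(n, k, rng):
    """Sample G(n, 1/2) and plant a clique on k random vertices.
    Returns (adjacency sets, planted clique)."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 0.5:
                adj[u].add(v)
                adj[v].add(u)
    clique = set(rng.sample(range(n), k))
    for u in clique:
        for v in clique:
            if u != v:
                adj[u].add(v)
    return adj, clique

def top_degree_vertices(adj, k):
    # Kucera's observation: for k >= c0 * sqrt(n log n), the k highest-degree
    # vertices are exactly the planted clique, with high probability.
    order = sorted(range(len(adj)), key=lambda v: len(adj[v]), reverse=True)
    return set(order[:k])

rng = random.Random(42)
n = 300
k = int(3 * math.sqrt(n * math.log(n)))
adj, clique = planted_clique_instance(n, k, rng)
print(k, top_degree_vertices(adj, k) == clique)
```

The separation here is comfortable: clique vertices have expected degree about $n/2 + k/2$, while the degrees of the other vertices fluctuate around $n/2$ by only $O(\sqrt{n\log n})$, which is why $k = c_0\sqrt{n\log n}$ suffices.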

As an alternative to the spectral algorithm of Alon et al., Feige and Ron proposed a simple and elegant algorithm that also works for $k \ge c\sqrt{n}$: in a sentence, it proceeds by iteratively removing the vertex of lowest degree (i.e. weeding out the losers instead of keeping the winners as in Kučera’s algorithm). The one downside is that their analysis only yielded an error probability of $\Theta(1)$, where this constant depends on $c$; ideally we would like the error probability to tend to zero as $n \to \infty$. Yuval and his coauthors remedied this with an algorithm that has error probability $\exp(-n^{\epsilon})$ for some $\epsilon > 0$. Their algorithm is equally simple: first pick a random subset $S$ of vertices by independently including each one with probability $1/4$, and let $V^*$ be the vertices in $V \setminus S$ of degree at least $n/8$ into $S$. It can be shown that the density of the hidden clique within $V^*$ increases significantly, and so by iterating this pruning procedure sufficiently many times one can finish by running Kučera’s simple “take the vertices of highest degree” algorithm.

Hi,

Reading the first part of the blog post: is it obvious how to get the equivalent functional formulation of the inequality

$$\mathbb{P}(A+r B_\infty) \geq 1-e^{-cnr^2}$$

for $A$ permutation invariant such that $\mathbb{P}(A) \geq \frac{1}{2}$? (In particular, why isn’t there any such “density requirement” on $f$?)

Hi Clément, I think one way to translate isoperimetric statements for subsets $A\subseteq [0,1]^n$ into large deviation inequalities for functions $f:[0,1]^n\to \mathbb{R}$ is to take $A_f := \{x \in [0,1]^n \colon f(x) \le M_f\}$, where $M_f$ is the median of $f$. (This way $A_f$ satisfies $\mathbf{P}(A_f) \ge 1/2$ by definition.)