Our first speaker on Thursday was Tali Kaufman, who talked about ongoing work with Irit Dinur exploring the connection between locally testable codes and expanders.

A typical way to define an error-correcting code $C \subseteq \{0,1\}^n$ is to specify a collection of “local constraints” (e.g., ${\mathbb F}_2$-linear constraints over sets of $q$ coordinates); the code is then the set of strings satisfying all the constraints. For “LDPC” codes of this form, there seems to be a close connection between the graph/hypergraph formed by the constraints and the distance of the code; e.g., we know we can get good codes by taking the constraint graph to be an expander. Tali considered the extent to which this philosophy extends to $\rho$-*robust* codes, meaning those with the property that relative distance $\mathrm{dist}(x,C) \geq \gamma$ implies that $x$ violates at least a $\rho\gamma$ fraction of the constraints. Tali speculated about the precise connection between expansion and robustness, and also stated a new theorem in one direction, which I’ll state roughly: if $G$ is the constraint graph of a $\rho$-robust code (with linear constraints) with distance $\delta$, then, subject to some minor nondegeneracy conditions, $G$ must be a small-set expander: every set $S$ of density at most $\delta$ (or so) has expansion at least $\rho$ (or so).
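To make the robustness parameter concrete, here is a toy numerical sketch (my own illustration, not from the talk): for the $[7,4]$ Hamming code, defined by $3$ parity checks, we can exhaustively compute the worst-case ratio of (fraction of violated constraints) to (relative distance from the code).

```python
from itertools import product

# Toy illustration (not from the talk): measure the robustness ratio of a
# small code defined by local constraints, here the [7,4] Hamming code
# given by 3 parity checks over F_2.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n = 7

def violated_fraction(x):
    """Fraction of the local parity constraints that x violates."""
    return sum(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H) / len(H)

# The code is the set of strings satisfying every constraint.
code = [x for x in product((0, 1), repeat=n) if violated_fraction(x) == 0]

def rel_dist_to_code(x):
    """Relative Hamming distance from x to the nearest codeword."""
    return min(sum(a != b for a, b in zip(x, c)) for c in code) / n

# rho = worst-case (violated fraction) / (relative distance) over all
# non-codewords; a robust code keeps this ratio bounded away from 0.
rho = min(violated_fraction(x) / rel_dist_to_code(x)
          for x in product((0, 1), repeat=n) if x not in code)
min_dist = min(sum(c) for c in code if any(c))
```

Since the Hamming code is perfect (covering radius $1$), every non-codeword is at relative distance $1/7$, and the ratio bottoms out at $(1/3)/(1/7) = 7/3$ for errors that trip only one check.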

Next up was Parikshit Gopalan, who talked about his new work with Barak, Hastad, Meka, Raghavendra, and Steurer on the “Short Code”. The motivation for the work is the Small Set Expansion Conjecture (related to the UGC), which states that $\forall \delta > 0, \exists \epsilon > 0$ such that, given an undirected graph containing a vertex set of volume at most $\delta$ and expansion at most $\epsilon$, it’s $\mathsf{NP}$-hard to find a vertex set of volume at most $\delta$ and expansion at most $1-\epsilon$. The Arora–Barak–Steurer algorithm for this problem runs in time roughly $2^{n^{\Theta(\epsilon)}}$; if this could be made truly subexponential, the Small Set Expansion Conjecture would be essentially disproved. And the algorithm *would* run in truly subexponential time if one could establish the following hypothesis: the adjacency operator of any small-set expander has at most $n^{o(1)}$ eigenvalues that are at least $1-\epsilon$. Parikshit’s work goes in the direction of refuting this hypothesis: it shows how to construct a small-set expander with at least quasipolynomially many eigenvalues above $1-\epsilon$. The construction is based on locally testable codes of **constant** distance (not constant fraction!). In the simplest case, it can be viewed as a “derandomization” of the noisy-hypercube graph; a key idea is that the proof of small-set expansion for the noisy-hypercube graph (which relies on hypercontractivity) requires only $k$-wise independence, not full independence.
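The spectral picture for the noisy hypercube is easy to check directly (my own numerical sanity check, not the paper's construction): the $\delta$-noisy hypercube on $\{0,1\}^n$ has eigenvalues $(1-2\delta)^k$ with multiplicity $\binom{n}{k}$, so the number of eigenvalues above $1-\epsilon$ grows polynomially in $n$; the Short Code's contribution is achieving this with far fewer vertices.

```python
import numpy as np
from itertools import product
from math import comb

# Sanity check (mine, not the paper's): the delta-noisy hypercube on
# {0,1}^n has eigenvalues (1-2*delta)^k with multiplicity C(n,k).
n, delta = 4, 0.1
pts = list(product((0, 1), repeat=n))
N = len(pts)

# Transition matrix: each coordinate is flipped independently w.p. delta,
# so the entry depends only on the Hamming distance d between x and y.
M = np.empty((N, N))
for i, x in enumerate(pts):
    for j, y in enumerate(pts):
        d = sum(a != b for a, b in zip(x, y))
        M[i, j] = delta**d * (1 - delta)**(n - d)

eigs = np.sort(np.linalg.eigvalsh(M))
theory = np.sort([(1 - 2 * delta)**k
                  for k in range(n + 1) for _ in range(comb(n, k))])

# Count eigenvalues above 1 - eps: here (0.8)^k >= 0.7 forces k <= 1,
# giving 1 + C(4,1) = 5 of them.
eps = 0.3
num_large = int(np.sum(eigs >= 1 - eps))
```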

The morning closed with another open problems session, which I will blog about later.

In the afternoon, Guy Kindler told us about (in his words) “some new results, some interesting results, and a little bit which is both new and interesting.” More specifically, suppose we “reparameterize” noise sensitivity for Gaussian functions $f : {\mathbb R}^n \to \{-1,1\}$ by defining $\mathbf{RS}_f(\theta) = \mathbf{Pr}[f(\mathbf{X}) \neq f(\mathbf{Y})]$, where $\mathbf{X}, \mathbf{Y}$ are $\cos(\theta)$-correlated standard Gaussians. Guy showed a three-line proof that $\mathbf{RS}_f(\theta)$ is always a subadditive function of $\theta$. From this one can immediately deduce “Borell’s isoperimetric inequality” for sets of volume $1/2$ (and hence $.8787$-UG-hardness for Max-Cut, close to the optimal $.8786$-hardness). Guy also mentioned that from this, another very short proof lets one conclude “Bourgain’s Junta Theorem” for Gaussian functions: if $f : {\mathbb R}^n \to \{-1,1\}$ then $f$ has Hermite weight at least $\Omega(1/\sqrt{k})$ above degree $k$. One can further deduce Bourgain’s Junta Theorem in the boolean domain, using Invariance.
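As a quick Monte-Carlo sanity check of the reparameterized quantity (my sketch, not Guy's proof): for the halfspace $f(x) = \mathrm{sgn}(x_1)$, Sheppard's classical formula gives $\mathbf{RS}_f(\theta) = \theta/\pi$, which is linear in $\theta$ and hence subadditive with equality.

```python
import numpy as np

# Monte-Carlo check (my sketch): for f(x) = sign(x_1), Sheppard's formula
# gives RS_f(theta) = theta/pi, which is (weakly) subadditive in theta.
rng = np.random.default_rng(0)

def RS(theta, samples=200_000):
    """Estimate Pr[f(X) != f(Y)] for cos(theta)-correlated standard
    Gaussians, with f = sign of the first coordinate."""
    x = rng.standard_normal(samples)
    z = rng.standard_normal(samples)
    y = np.cos(theta) * x + np.sin(theta) * z  # cos(theta)-correlated with x
    return np.mean(np.sign(x) != np.sign(y))

t1, t2 = np.pi / 6, np.pi / 4
lhs = RS(t1 + t2)           # should be about (t1 + t2) / pi
rhs = RS(t1) + RS(t2)       # subadditivity: lhs <= rhs (equality here)
```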

The final talk of the day was by David Ellis, who told us about his work (with Yuval Filmus and Ehud Friedgut) on triangle-intersecting families. Here is the problem: What is the largest collection $\mathcal{F}$ of $n$-vertex graphs such that every $G,H \in \mathcal{F}$ share a triangle (i.e., $G \cap H$ contains a triangle)? In 1976 Simonovits and Sós made the natural conjecture that such a family can contain at most a $1/8$ fraction of graphs; this would be sharp by considering the family of all graphs containing one fixed triangle. The best previous upper bound was $1/4$, but David’s work proves the conjecture. Actually, David proves something slightly stronger — that every “odd-cycle-agreeing” family has density at most $1/8$, where odd-cycle-agreeing means that for each pair $G, H$ in the family, the complement of $G \oplus H$ (this is the “agreeing” part) should contain an odd cycle. One can view the problem as looking for the largest independent set in a certain explicit Cayley graph, and David’s solution involves constructing an explicit dual solution to a linear programming relaxation, arising from the Hoffman–Delsarte bound. David left us with a very intriguing open problem: what is the largest density of a $P_3$-intersecting family ($P_3$ is the path of length $3$)? It is *not* the family of graphs containing a fixed $P_3$, but the conjecture is that the maximum density is still $1/2 - \Omega(1)$.
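The extremal example is easy to verify by brute force on small graphs (my own toy check, of course not David's proof): on $n = 4$ vertices, the family of all graphs containing one fixed triangle is triangle-intersecting and has density exactly $1/8$.

```python
from itertools import combinations, product

# Toy verification (mine, not David's proof): on n = 4 vertices, the family
# of all graphs containing a fixed triangle is triangle-intersecting and
# has density exactly 1/8 among all 2^6 = 64 graphs.
n = 4
edges = list(combinations(range(n), 2))  # the 6 possible edges
all_graphs = [frozenset(e for e, b in zip(edges, bits) if b)
              for bits in product((0, 1), repeat=len(edges))]

fixed_triangle = {(0, 1), (0, 2), (1, 2)}
family = [g for g in all_graphs if fixed_triangle <= g]

def has_triangle(g):
    return any({(a, b), (a, c), (b, c)} <= g
               for a, b, c in combinations(range(n), 3))

# Every pair in the family shares a triangle, since each pairwise
# intersection still contains the fixed triangle.
pairwise_ok = all(has_triangle(g & h) for g, h in combinations(family, 2))
density = len(family) / len(all_graphs)
```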
