Today was our last day. The first speaker was Jelani Nelson, who talked about applications of “FT (Fourier transform)-mollification” in CS Theory. The basic idea behind this technique (which I think dates back to Berry’s proof of the Berry–Esseen Theorem) is to smooth out functions $g : {\mathbb R} \to {\mathbb R}$ by convolving them with the Fourier transform of a compactly supported Dirac-delta-like function. This has the feature of nearly preserving $g$, making it smooth, and most importantly, making its $k$-th-order derivatives quite small (at most $2^{O(k)}$ everywhere).
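To see the smoothing step in action, here is a minimal numerical sketch (my own illustration, not Jelani's actual construction): convolve the indicator of $[0,\infty)$ with a normalized, compactly supported smooth bump of width $c$. Genuine FT-mollification convolves with the Fourier transform of such a bump; any smooth compactly supported kernel already exhibits the "nearly preserves $g$, smooths it out" behavior. The names `bump` and `mollify` are mine.

```python
import math

def bump(t):
    # Smooth bump supported on (-1, 1); all derivatives vanish at the endpoints.
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def mollify(g, c, x, n=2001):
    # Discrete convolution (g * k_c)(x), where k_c is the bump rescaled to
    # width c and normalized to total mass 1 (normalization is done on the
    # same grid, so the weights sum to exactly 1).
    ts = [-c + 2.0 * c * i / (n - 1) for i in range(n)]
    ws = [bump(t / c) for t in ts]
    z = sum(ws)
    return sum(g(x - t) * w for t, w in zip(ts, ws)) / z

# g = indicator of [0, infinity), as in the LTF application.
g = lambda u: 1.0 if u >= 0 else 0.0

# Away from the discontinuity the mollification agrees with g exactly
# (the kernel's support never crosses 0); near 0 it transitions smoothly.
print(mollify(g, 0.1, 1.0), mollify(g, 0.1, -1.0), mollify(g, 0.1, 0.0))
```

Shrinking the width $c$ trades off how well $g$ is preserved against how large the derivatives of the mollification become, which is exactly the tension the $2^{O(k)}$ derivative bound controls.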

As a canonical example, Jelani described an alternate way to prove a 2009 result of Diakonikolas, Gopalan, Jaiswal, Servedio, and Viola, that $O(1/\epsilon^2)$-wise independence fools linear threshold functions. The rough idea is that one wants to compare $\mathbf{E}[g(a_0 + a_1 \mathbf{x}_1 + \cdots + a_n \mathbf{x}_n)]$ to $\mathbf{E}[g(a_0 + a_1 \mathbf{z}_1 + \cdots + a_n \mathbf{z}_n)]$, where the $\mathbf{x}_i$’s are bits, the $\mathbf{z}_i$’s are Gaussians, and $g : {\mathbb R} \to {\mathbb R}$ is the indicator of $[0,\infty)$. One can do this using “Invariance” tools, but a difficulty is that $g$ is not a smooth function. Here is exactly where FT-mollifying $g$ helps, and the error that arises has to do with the size of the mollified function's derivatives. Carefully controlling this error was the key to Jelani's follow-up work (with Diakonikolas and Kane) showing that $\mathrm{poly}(1/\epsilon^2)$-wise independence also fools degree-$2$ polynomial threshold functions.

Our next speaker was Johan Håstad, who told us about certain attempts to improve the known $11/12$-factor NP-hardness for Max-2Lin2 (the boolean CSP where the constraints are equality and disequality). It was a bit of a tragicomic story. In 2001 he had the following idea: do a Label-Cover reduction where $\mathbf{x} \in \{-1,1\}^K$ and $\mathbf{y} \in \{-1,1\}^L$ are uniform, and $\mathbf{z}_i = \begin{cases} \mathbf{y}_i & \textrm{w.p. } p \\ \mathbf{x}_{\pi(i)} & \textrm{w.p. } 1-p \end{cases}$. Then test $g(\mathbf{z}) = f(\mathbf{x})$ with probability $q$ and $g(\mathbf{z}) = g(\mathbf{y})$ with probability $1-q$. (Here $f$ and $g$ are folded.) (Maybe throw in a little noise, too.) It turns out to be natural to ensure $p/(2-p) = q/(1-q)$; then by comparing the probabilities that matching dictators and non-matching dictators pass, one is led to set $q = \sqrt{6}/2 - 1$. If one could indeed show non-matching dictators were worst, this would yield $.9160$-hardness, which is better than $11/12 \approx .9166$. But try as he might, Johan couldn't do the analysis. This proved to be for good reason, since in 2007 he noticed that a dictator and a non-matching Majority-of-3 do better! If you then change $q$ to $1/4$ so that Majority-of-3 and dictator do the same, it seems you get hardness... $11/12$. Johan never analyzed this test, but left it as an open problem. He also described a somewhat similar Label-Cover reduction (appearing in a recent paper of mine with John Wright) which also disappointingly yields only $11/12$-hardness. Johan ended with one more open problem: why do we get stuck on $11/12$? [Ryan: perhaps there is a poly-time or subexponential-time $11/12$-algorithm!]
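To see where these numbers come from, here is a back-of-the-envelope calculation (my own, ignoring folding and the optional noise). A matching dictator pair $f(\mathbf{x}) = \mathbf{x}_{\pi(i)}$, $g = i$-th dictator passes the first check whenever $\mathbf{z}_i = \mathbf{x}_{\pi(i)}$, i.e. with probability $1 - p/2$, and the second whenever $\mathbf{z}_i = \mathbf{y}_i$, i.e. with probability $(1+p)/2$; for a non-matching dictator the first check is a coin flip. The constraint $p/(2-p) = q/(1-q)$ simplifies to $p = 2q$, and making the matching/non-matching ratio stationary in $q$ recovers both $q = \sqrt{6}/2 - 1$ and the $.9160$ figure:

```python
def pass_prob_matching(p, q):
    # Matching dictators: the test g(z) = f(x) is run w.p. q and passes when
    # z_i = x_{pi(i)}, which happens w.p. 1 - p/2; the test g(z) = g(y) is run
    # w.p. 1 - q and passes when z_i = y_i, which happens w.p. (1 + p)/2.
    return q * (1 - p / 2) + (1 - q) * (1 + p) / 2

def pass_prob_nonmatching(p, q):
    # g is a dictator on a coordinate not matched to f's: z_i and x_j are
    # independent uniform bits, so the first check passes w.p. only 1/2.
    return q * 0.5 + (1 - q) * (1 + p) / 2

q = 6 ** 0.5 / 2 - 1       # Johan's choice of q
p = 2 * q                  # the constraint p/(2-p) = q/(1-q) gives p = 2q
c = pass_prob_matching(p, q)
s = pass_prob_nonmatching(p, q)
print(c, s, s / c)         # the ratio s/c is the would-be hardness factor
```

Running this gives a ratio of about $0.91596$, matching the $.9160$ in the post (and confirming it beats $11/12 \approx .91667$, if only the analysis went through).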

The last talk of the morning was by Shubhangi Saraf. She described the general philosophy behind several of her recent works using the “Method of (Polynomial) Multiplicities”, works that led to high-rate locally decodable codes, nearly-optimal Kakeya Set bounds, etc. Shubhangi began by reminding us of the many, many uses of polynomials over finite fields in theoretical computer science, especially in the design of combinatorial objects with nice properties. She also reminded us of what’s so nice about polynomials: a) a linear relationship between their coefficients and evaluations; b) the Schwartz–Zippel (DeMillo–Lipton) Lemma; c) other properties that are so obvious that they are taken for granted! At a high level, the Method of Multiplicities involves not just looking at the zeroes of polynomials, but also their multiplicity. To study this, one needs to introduce the (Hasse) derivatives of polynomials over finite fields. Here for $P \in {\mathbb F}[X_1, \dots, X_n]$, the $\mathbf{i}$th derivative (for multiindex $\mathbf{i} \in {\mathbb N}^n$) is defined by $P^{(\mathbf{i})}(X) =$ coeff of $Z^{\mathbf{i}}$ in $P(X+Z)$. Then we say that $P(X)$ vanishes at $a$ with multiplicity at least $s$ if and only if $P^{(\mathbf{i})}(a) = 0$ for all $\mathbf{i}$ with $\mathbf{i}_1 + \cdots + \mathbf{i}_n < s$. The key tool in the Method of Multiplicities is the multiplicity generalization of Schwartz–Zippel, proved by Shubhangi with Dvir, Kopparty, and Sudan: If $P$ has degree $d$ and $S \subseteq {\mathbb F}$, then $\mathbf{E}_{\mathbf{a} \sim S^n}[\mathrm{mult}(P,\mathbf{a})] \leq d/|S|$. The great thing about this lemma is that it's meaningful even when $d > |{\mathbb F}|$, unlike the usual Schwartz–Zippel Lemma.
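To make the definitions concrete, here is a small sketch (my own illustration): Hasse derivatives of a multivariate polynomial over ${\mathbb F}_q$ represented as a dict from exponent tuples to coefficients, the multiplicity they define, and a check of the Dvir–Kopparty–Saraf–Sudan bound on the example $P(X_1, X_2) = X_1^2 X_2^2$ over ${\mathbb F}_5$ with $S = {\mathbb F}_5$, where the bound $d/|S| = 4/5$ happens to be met with equality.

```python
from itertools import product
from math import comb

def multiindices(n, s):
    # All tuples of n nonnegative integers summing to s.
    if n == 1:
        yield (s,)
        return
    for first in range(s + 1):
        for rest in multiindices(n - 1, s - first):
            yield (first,) + rest

def hasse_derivative(P, i, q):
    # The i-th Hasse derivative is the coefficient of Z^i in P(X + Z):
    # each monomial c * X^e contributes c * prod_j C(e_j, i_j) * X^(e - i).
    D = {}
    for e, c in P.items():
        if all(ej >= ij for ej, ij in zip(e, i)):
            coef = c
            for ej, ij in zip(e, i):
                coef = coef * comb(ej, ij) % q
            ne = tuple(ej - ij for ej, ij in zip(e, i))
            D[ne] = (D.get(ne, 0) + coef) % q
    return {e: c for e, c in D.items() if c}

def evaluate(P, a, q):
    total = 0
    for e, c in P.items():
        t = c
        for ej, aj in zip(e, a):
            t = t * pow(aj, ej, q) % q
        total = (total + t) % q
    return total

def mult(P, a, q, cap=10):
    # Multiplicity of P at a: the largest s such that every Hasse
    # derivative of order < s vanishes at a.
    for s in range(cap):
        if any(evaluate(hasse_derivative(P, i, q), a, q) != 0
               for i in multiindices(len(a), s)):
            return s
    return cap

# Example: P = X1^2 * X2^2 over F_5, degree d = 4, S = F_5.
q, d = 5, 4
P = {(2, 2): 1}
total = sum(mult(P, a, q) for a in product(range(q), repeat=2))
print(total / q**2, d / q)   # average multiplicity vs. the DKSS bound d/|S|
```

Here the average multiplicity is $20/25 = 4/5$: the origin contributes multiplicity $4$, each of the $8$ other points on the two axes contributes $2$, and every point off the axes contributes $0$.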

Rather appropriately, the final talk of the symposium was by the mighty Gil Kalai. He spoke about a long-running project with Jeff Kahn to classify functions on the $p$-biased discrete cube with near-minimal total influence. Gil started off by sketching an analytic proof of a $p$-biased version of Harper’s edge-isoperimetric inequality: If $f : \{0,1\}^n \to \{0,1\}$ has $\mu_p[f] = \delta$ and $\mathbf{Inf}[f] = k$ (where Gil defines $p$-biased total influence as the $p$-biased measure of the boundary vertices), then $pk \geq \delta \log_p(\delta)$. Gil and Jeff’s major open problem is to characterize the set of functions for which this isoperimetric inequality is tight to within $O(\log(1/p))$. The motivation here is that it could lead to some progress on the following “fairly absurd conjecture” (Gil’s words): “For the property of $G(n,p)$ containing a copy of $H$ (whose size may depend on $n$), the following inequality holds: $p_c(H) \leq O(\log n) p_E(H)$, where $p_c$ is the critical probability, and $p_E$ is the minimum $p$ such that you expect at least half a copy of $H'$ in $G(n,p)$ for each $H' \subseteq H$.” In the direction of Gil and Jeff’s open problem, for constant $p$ and $\delta$ bounded away from $0$ and $1$ we have Friedgut’s Junta Theorem. For subconstant $p$ and $\delta$ bounded away from $0$ and $1$, we now have Hatami’s Theorem (discussed on Wednesday). But Gil and Jeff ask about the range of subconstant $p$ and subconstant $\delta$.
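As a sanity check on the isoperimetric inequality, here is a brute-force verification I wrote over all Boolean functions on $3$ bits. A caveat: I use the standard edge-based total influence $\mathbf{Inf}[f] = \sum_i \Pr_{\mathbf{x} \sim \mu_p}[f(\mathbf{x}) \neq f(\mathbf{x}^{\oplus i})]$, which may differ in normalization from Gil's boundary-vertex definition, and I only check $p \leq 1/2$ (for $p > 1/2$ the sparse functions are OR-like and this form of the bound fails, e.g. for an anti-dictator). Note that dictators and, more generally, ANDs of coordinates make the inequality tight.

```python
import math
from itertools import product

def min_slack(n, p):
    # Smallest value of p * Inf[f] - mu_p[f] * log_p(mu_p[f]) over all
    # nonconstant f: {0,1}^n -> {0,1}; nonnegative (up to float error)
    # iff the p-biased edge-isoperimetric inequality holds for this n, p.
    points = list(product((0, 1), repeat=n))
    weight = {x: p**sum(x) * (1 - p)**(n - sum(x)) for x in points}
    worst = float('inf')
    for bits in product((0, 1), repeat=2**n):
        f = dict(zip(points, bits))
        delta = sum(weight[x] for x in points if f[x])
        if delta in (0.0, 1.0):
            continue  # constant f: both sides are 0, inequality is trivial
        # Total influence: p-biased mass of points sensitive in each direction.
        inf = sum(weight[x]
                  for x in points
                  for i in range(n)
                  if f[x] != f[x[:i] + (1 - x[i],) + x[i + 1:]])
        rhs = delta * math.log(delta) / math.log(p)  # delta * log_p(delta)
        worst = min(worst, p * inf - rhs)
    return worst

print(min_slack(3, 0.3))  # small but nonnegative slack
print(min_slack(3, 0.5))  # p = 1/2 recovers classical Harper: Inf >= 2*delta*log2(1/delta)
```

For example, the AND of all three coordinates has $\delta = p^3$ and $\mathbf{Inf} = 3p^2$, giving $pk = 3p^3 = \delta \log_p(\delta)$ exactly.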
