# §11.4: Gaussian surface area and Bobkov’s Inequality

This section is devoted to studying the Gaussian Isoperimetric Inequality. This inequality is a special case of the Borell Isoperimetric Inequality (and hence also a special case of the General-Volume Majority Is Stablest Theorem); in particular, it’s the special case arising from the limit $\rho \to 1^{-}$.

Restating Borell’s theorem in terms of rotation sensitivity, we have that for any $A \subseteq {\mathbb R}^n$, if $H \subseteq {\mathbb R}^n$ is a halfspace with the same Gaussian volume as $A$, then for all $\epsilon$, $\mathbf{RS}_A(\epsilon) \geq \mathbf{RS}_H(\epsilon).$ Since $\mathbf{RS}_A(0) = \mathbf{RS}_H(0) = 0$, it follows that $\mathbf{RS}_A'(0^+) \geq \mathbf{RS}_H'(0^+).$ (Here we are considering the one-sided derivatives at $0$, which can be shown to exist, though $\mathbf{RS}_A'(0^+)$ may equal $+\infty$; see the notes at the end of this chapter.) As will be explained shortly, $\mathbf{RS}_A'(0^+)$ is precisely $\sqrt{2/\pi} \cdot \text{surf}_\gamma(A)$, where $\text{surf}_\gamma(A)$ denotes the “Gaussian surface area” of $A$. Therefore the above inequality is equivalent to the following:

Gaussian Isoperimetric Inequality Let $A \subseteq {\mathbb R}^n$ have $\mathrm{vol}_\gamma(A) = \alpha$ and let $H \subseteq {\mathbb R}^n$ be any halfspace with $\mathrm{vol}_\gamma(H) = \alpha$. Then $\text{surf}_\gamma(A) \geq \text{surf}_\gamma(H)$.

Remark 47 As shown in Proposition 49 below, the right-hand side in this inequality is equal to $\mathcal{U}(\alpha)$, where $\mathcal{U}$ is the Gaussian isoperimetric function, encountered earlier in Definition 5.23 and defined by $\mathcal{U} = \varphi \circ \Phi^{-1}$.
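As a numerical companion to Remark 47, here is a minimal sketch of $\mathcal{U} = \varphi \circ \Phi^{-1}$ using only the Python standard library (`statistics.NormalDist` supplies $\Phi$ and $\Phi^{-1}$; the function and variable names are our own):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()  # standard Gaussian: N.cdf is Phi, N.inv_cdf is Phi^{-1}

def varphi(x):
    """Standard Gaussian pdf."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def U(alpha):
    """Gaussian isoperimetric function U = varphi o Phi^{-1}."""
    return varphi(N.inv_cdf(alpha))

print(U(0.5))           # 1/sqrt(2*pi) ~ 0.3989, the maximum value
print(U(0.2), U(0.8))   # equal, since U is symmetric about 1/2
```

The symmetry $\mathcal{U}(\alpha) = \mathcal{U}(1-\alpha)$ reflects the fact that $A$ and its complement have the same boundary.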

Let’s now discuss the somewhat technical question of how to properly define $\text{surf}_\gamma(A)$, the Gaussian surface area of a set $A$. Perhaps the most natural definition would be to equate it with the Gaussian Minkowski content of the boundary $\partial A$ of $A$, \begin{equation} \label{eqn:heuristic-gsa} \gamma^+(\partial A) = \liminf_{\epsilon \to 0^+} \frac{\mathrm{vol}_\gamma(\{z : \mathrm{dist}(z, \partial A) < \epsilon/2\})}{\epsilon}. \end{equation} (Relatedly, one might also consider the surface integral over $\partial A$ of the Gaussian pdf $\varphi$.) Under the “official” definition of $\text{surf}_\gamma(A)$ we give below in Definition 48, we’ll indeed have $\text{surf}_\gamma(A) = \gamma^+(\partial A)$ whenever $A$ is sufficiently nice — say, a disjoint union of closed, full-dimensional, convex sets. However, the Minkowski content definition is not a good one in general because it’s possible to have $\gamma^+(\partial A_1) \neq \gamma^+(\partial A_2)$ for some sets $A_1$ and $A_2$ that are equivalent up to measure $0$. (For more information, see an exercise and the notes at the end of this chapter.) As mentioned above, one “correct” definition is $\text{surf}_\gamma(A) = \sqrt{\pi/2} \cdot \mathbf{RS}_A'(0^+)$. This definition has the advantage of being insensitive to measure-$0$ changes to $A$. To connect this unusual-looking definition with Minkowski content, let’s heuristically interpret $\mathbf{RS}_A'(0^+)$. We start by thinking of it as $\frac{\mathbf{RS}_A(\epsilon)}{\epsilon}$ for “infinitesimal $\epsilon$”. Now $\mathbf{RS}_A(\epsilon)$ can be thought of as the probability that the line segment $\boldsymbol{\ell}$ joining two $\cos \epsilon$-correlated Gaussians crosses $\partial A$.
Since $\sin \epsilon \approx \epsilon$ and $\cos \epsilon \approx 1$ up to $O(\epsilon^2)$, we can think of these correlated Gaussians as $\boldsymbol{g}$ and $\boldsymbol{g} + \epsilon \boldsymbol{g}'$ for independent $\boldsymbol{g}, \boldsymbol{g}' \sim \mathrm{N}(0,1)^n$. When $\boldsymbol{g}$ lands near $\partial A$, the length of $\boldsymbol{\ell}$ in the direction perpendicular to $\partial A$ will, in expectation, be $\epsilon \mathop{\bf E}[|\mathrm{N}(0,1)|] = \sqrt{2/\pi} \epsilon$. Thus $\mathbf{RS}_A(\epsilon)$ should essentially be $\sqrt{2/\pi} \epsilon \cdot \gamma^+(\partial A) \approx \sqrt{2/\pi} \cdot \mathrm{vol}_\gamma(\{z : \mathrm{dist}(z, \partial A) < \epsilon/2\})$, and we have heuristically justified \begin{equation} \label{eqn:heur-gsa2} \sqrt{\pi/2} \cdot \mathbf{RS}_A'(0^+) = \sqrt{\pi/2} \cdot \lim_{\epsilon \to 0^+} \frac{\mathbf{RS}_A(\epsilon)}{\epsilon}\ \mathop{=}^?\ \gamma^+(\partial A). \end{equation}
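This heuristic is easy to sanity-check by simulation. The sketch below (parameters chosen arbitrarily) estimates $\sqrt{\pi/2} \cdot \mathbf{RS}_H(\epsilon)/\epsilon$ for a one-dimensional halfline $H = (-\infty, t]$ and compares it with $\varphi(t)$, which Proposition 49 below identifies as $\text{surf}_\gamma(H)$:

```python
import random
from math import cos, exp, pi, sin, sqrt

random.seed(0)

def rs_halfline(t, eps, trials=200_000):
    """Monte Carlo estimate of RS_H(eps) for the halfline H = (-inf, t]:
    the probability that two cos(eps)-correlated Gaussians straddle t."""
    cross = 0
    for _ in range(trials):
        g, gp = random.gauss(0, 1), random.gauss(0, 1)
        z, zp = g, cos(eps) * g + sin(eps) * gp  # cos(eps)-correlated pair
        if (z <= t) != (zp <= t):
            cross += 1
    return cross / trials

t, eps = 0.0, 0.05
est = sqrt(pi / 2) * rs_halfline(t, eps) / eps
surf = exp(-t * t / 2) / sqrt(2 * pi)  # varphi(t) = surf_gamma(H)
print(est, surf)  # close for small eps
```

For $t = 0$, Sheppard's formula gives $\mathbf{RS}_H(\epsilon) = \epsilon/\pi$ exactly, so the estimate should land near $1/\sqrt{2\pi} \approx 0.399$ up to Monte Carlo error.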

One more standard idea for the definition of $\text{surf}_\gamma(A)$ is “$\mathop{\bf E}[\|\nabla 1_A\|]$”. This doesn’t quite make sense since $1_A \in L^1({\mathbb R}^n,\gamma)$ is not actually differentiable. However, we might consider replacing it with the limit of $\mathop{\bf E}[\|\nabla f_m\|]$ for a sequence $(f_m)$ of smooth functions approximating $1_A$. To see why this notion should agree with the Gaussian Minkowski content $\gamma^+(\partial A)$ for nice enough $A$, let’s suppose we have a smooth approximator $f$ to $1_A$ that agrees with $1_A$ on ${\{z : \mathrm{dist}(z, \partial A) \geq \epsilon/2\}}$ and is (essentially) a linear function on $\{z : \mathrm{dist}(z, \partial A) < \epsilon/2\}$. Then $\|\nabla f\|$ will be $0$ on the former set and (essentially) constantly $1/\epsilon$ on the latter (since it must climb from $0$ to $1$ over a distance of $\epsilon$). Thus we indeed have $\mathop{\bf E}[\|\nabla f\|] \approx \frac{\mathrm{vol}_\gamma(\{z : \mathrm{dist}(z, \partial A) < \epsilon/2\})}{\epsilon} \approx \gamma^+(\partial A),$ as desired. We summarize the above technical discussion with the following definition/theorem, which is discussed further in the notes at the end of this chapter:

Definition 48 For any $A \subseteq {\mathbb R}^n$, we define its Gaussian surface area to be $\text{surf}_\gamma(A) = \sqrt{\pi/2} \cdot \mathbf{RS}_A'(0^+) \in [0, \infty].$ An equivalent definition is $\text{surf}_\gamma(A) = \inf\left\{\liminf_{m \to \infty} \mathop{\bf E}_{\boldsymbol{z} \sim \mathrm{N}(0,1)^n}[\|\nabla f_m(\boldsymbol{z})\|]\right\},$ where the infimum is over all sequences $(f_m)_{m \in {\mathbb N}}$ of smooth $f_m : {\mathbb R}^n \to [0,1]$ with first partial derivatives in $L^2({\mathbb R}^n,\gamma)$ such that $\|f_m - 1_A\|_1 \to 0$. Furthermore, this infimum is actually achieved by taking $f_m = \mathrm{U}_{\rho_m} 1_A$ for any sequence $\rho_m \to 1^{-}$. Finally, the equality $\text{surf}_\gamma(A) = \gamma^+(\partial A)$ with Gaussian Minkowski content holds if $A$ is a disjoint union of closed, full-dimensional, convex sets.

To get further acquainted with this definition, let’s describe the Gaussian surface area of some basic sets. We start with halfspaces, which as mentioned in Remark 47 have Gaussian surface area given by the Gaussian isoperimetric function.

Proposition 49 Let $H \subseteq {\mathbb R}^n$ be any halfspace (open or closed) with $\mathrm{vol}_\gamma(H) = \alpha \in (0,1)$. Then $\text{surf}_\gamma(H) = \mathcal{U}(\alpha) = \varphi(\Phi^{-1}(\alpha))$. In particular, if $\alpha = 1/2$ — i.e., $H$’s boundary contains the origin — then $\text{surf}_\gamma(H) = \frac{1}{\sqrt{2\pi}}$.

Proof: Just as in the proof of Corollary 20, by rotational symmetry we may assume $H$ is a $1$-dimensional halfline, $H = (-\infty, t]$. Since $\mathrm{vol}_\gamma(H) = \alpha$, we have $t = \Phi^{-1}(\alpha)$. Then $\text{surf}_\gamma(H)$ is equal to $\gamma^+(\partial H) = \lim_{\epsilon \to 0^+} \frac{\mathrm{vol}_\gamma(\{z \in {\mathbb R} : \mathrm{dist}(z,\partial H) < \tfrac{\epsilon}{2}\})}{\epsilon} = \lim_{\epsilon \to 0^+} \frac{\int_{t-\epsilon/2}^{t+\epsilon/2} \varphi(s)\,ds}{\epsilon} = \varphi(t) = \mathcal{U}(\alpha). \quad \Box$

Here are some more Gaussian surface area bounds:

Examples 50 In an exercise, you are asked to generalize the above computation and show that if $A \subseteq {\mathbb R}$ is the union of disjoint nondegenerate intervals $[t_1, t_2], [t_3, t_4], \dots, [t_{2m-1}, t_{2m}]$ then $\text{surf}_\gamma(A) = \sum_{i=1}^{2m} \varphi(t_i)$. Perhaps the next easiest example is when $A \subseteq {\mathbb R}^n$ is an origin-centered ball; Ball [Bal93] gave an explicit formula for $\text{surf}_\gamma(A)$ in terms of the dimension and radius, one which is always less than $\sqrt{\frac{2}{\pi}}$ (see the exercises). This upper bound was extended to non-origin-centered balls in Klivans et al. [KOS08]. Ball also showed that every convex set $A \subseteq {\mathbb R}^n$ satisfies $\text{surf}_\gamma(A) \leq O(n^{1/4})$; Nazarov [Naz03] showed that this bound is tight up to the constant, using a construction highly reminiscent of Talagrand’s Exercise 4.15. As noted in Klivans et al. [KOS08], Nazarov’s work also immediately implies that an intersection of $k$ halfspaces has Gaussian surface area at most $O(\sqrt{\log k})$ (tight for appropriately sized cubes in ${\mathbb R}^k$), and that any cone in ${\mathbb R}^n$ with apex at the origin has Gaussian surface area at most $1$. Finally, by proving the “Gaussian special case” of the Gotsman–Linial Conjecture, Kane [Kan11a] established that if $A \subseteq {\mathbb R}^n$ is a degree-$k$ “polynomial threshold function” — i.e., $A = \{z : p(z) > 0\}$ for $p$ an $n$-variate degree-$k$ polynomial — then $\text{surf}_\gamma(A) \leq \frac{k}{\sqrt{2 \pi}}$. This is tight for every $k$ (even when $n = 1$).
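The intervals formula from the exercise mentioned above is easy to check against the Minkowski-content definition. The following sketch (with arbitrarily chosen endpoints) compares the $\epsilon$-neighborhood volume quotient with $\sum_i \varphi(t_i)$:

```python
from math import erf, exp, pi, sqrt

def Phi(x):
    """Standard Gaussian cdf."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def varphi(x):
    """Standard Gaussian pdf."""
    return exp(-x * x / 2) / sqrt(2 * pi)

# A = [-1.5, -0.5] u [0.3, 1.2]: four boundary points, well separated
endpoints = [-1.5, -0.5, 0.3, 1.2]
exact = sum(varphi(t) for t in endpoints)

def quotient(eps):
    """vol_gamma({z : dist(z, boundary of A) < eps/2}) / eps."""
    return sum(Phi(t + eps / 2) - Phi(t - eps / 2) for t in endpoints) / eps

print(quotient(1e-4), exact)  # quotient -> exact as eps -> 0
```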

Though we’ve shown that the Gaussian Isoperimetric Inequality follows from Borell’s Isoperimetric Theorem, we now discuss some alternative proofs. In the special case of sets of Gaussian volume $\tfrac{1}{2}$, we can again get a very simple proof using the subadditivity property of Gaussian rotation sensitivity, Theorem 43. That result easily yields the following kind of “concavity property” concerning Gaussian surface area:

Theorem 51 Let $A \subseteq {\mathbb R}^n$. Then for any $\delta > 0$, $\sqrt{\pi/2} \cdot \frac{\mathbf{RS}_A(\delta)}{\delta} \leq \text{surf}_\gamma(A).$

Proof: For $\delta > 0$ and $\epsilon = \delta/\ell$, $\ell \in {\mathbb N}^+$, Theorem 43 is equivalent to $\frac{\mathbf{RS}_A(\delta)}{\delta} \leq \frac{\mathbf{RS}_A(\epsilon)}{\epsilon}.$ Taking $\ell \to \infty$, hence $\epsilon \to 0^+$, the right-hand side tends to $\mathbf{RS}_A'(0^+) = \sqrt{2/\pi} \cdot \text{surf}_\gamma(A)$. $\Box$

If we take $\delta = \pi/2$ in this theorem, the left-hand side becomes $\sqrt{2/\pi} \mathop{\bf Pr}_{\substack{\boldsymbol{z}, \boldsymbol{z}' \sim \mathrm{N}(0,1)^n \\ \text{independent}}}[1_A(\boldsymbol{z}) \neq 1_A(\boldsymbol{z}')] = 2\sqrt{2/\pi} \cdot \mathrm{vol}_\gamma(A)(1-\mathrm{vol}_\gamma(A)).$ Thus we obtain a simple proof of the following result, which includes the Gaussian Isoperimetric Inequality in the volume-$\frac12$ case:

Theorem 52 Let $A \subseteq {\mathbb R}^n$. Then $2\sqrt{2/\pi} \cdot \mathrm{vol}_\gamma(A)(1-\mathrm{vol}_\gamma(A)) \leq \text{surf}_\gamma(A).$ In particular, if $\mathrm{vol}_\gamma(A) = \frac12$, then we get the tight Gaussian Isoperimetric Inequality statement $\text{surf}_\gamma(A) \geq \frac{1}{\sqrt{2\pi}} = \mathcal{U}(\tfrac{1}{2})$.
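Theorem 52’s inequality, and its tightness at volume $\tfrac12$, can be spot-checked numerically (a sketch; the grid of volumes is arbitrary):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()

def U(alpha):
    """Gaussian isoperimetric function varphi(Phi^{-1}(alpha))."""
    x = N.inv_cdf(alpha)
    return exp(-x * x / 2) / sqrt(2 * pi)

# Check 2*sqrt(2/pi)*alpha*(1-alpha) <= U(alpha), with equality at alpha = 1/2
gaps = {a / 100: U(a / 100) - 2 * sqrt(2 / pi) * (a / 100) * (1 - a / 100)
        for a in range(1, 100)}
print(min(gaps.values()))  # nonnegative over the whole grid
print(gaps[0.5])           # ~ 0: the bound is tight at volume 1/2
```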

As for the full Gaussian Isoperimetric Inequality, it’s a pleasing fact that it can be derived by pure analysis of Boolean functions. This was shown by Bobkov [Bob97], who proved the following very interesting isoperimetric inequality about Boolean functions:

Bobkov’s Inequality Let $f : \{-1,1\}^n \to [0,1]$. Then \begin{equation} \label{eqn:bobkov} \mathcal{U}(\mathop{\bf E}[f]) \leq \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}\left[\|(\mathcal{U}(f({\boldsymbol{x}})), \nabla f({\boldsymbol{x}}))\|\right]. \end{equation} Here $\nabla f$ is the discrete gradient (as in Definition 2.33) and $\| \cdot \|$ is the usual Euclidean norm (in ${\mathbb R}^{n+1}$). Thus to restate the inequality, $\mathcal{U}(\mathop{\bf E}[f]) \leq \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}\left[\sqrt{\mathcal{U}(f({\boldsymbol{x}}))^2 + \mathop{{\textstyle \sum}}_{i=1}^n \mathrm{D}_if({\boldsymbol{x}})^2}\right].$ In particular, suppose $f = 1_A$ is the $0$-$1$ indicator of a subset $A \subseteq \{-1,1\}^n$. Then since $\mathcal{U}(0) = \mathcal{U}(1) = 0$ we obtain $\mathcal{U}(\mathop{\bf E}[1_A]) \leq \mathop{\bf E}[\|\nabla 1_A\|]$.
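As a concrete check, the following sketch verifies Bobkov’s Inequality exhaustively for the $0$-$1$ indicator of the majority set in $\{-1,1\}^3$ (any small example would do; the discrete gradient is computed directly as in Definition 2.33):

```python
from itertools import product
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()

def U(alpha):
    """Gaussian isoperimetric function, with U(0) = U(1) = 0."""
    if alpha <= 0.0 or alpha >= 1.0:
        return 0.0
    x = N.inv_cdf(alpha)
    return exp(-x * x / 2) / sqrt(2 * pi)

def maj3(x):
    """0-1 indicator of the majority-of-3 set."""
    return 1.0 if sum(x) > 0 else 0.0

n = 3
cube = list(product([-1, 1], repeat=n))

def D(i, x):
    """Discrete i-th derivative of maj3 at x."""
    xp, xm = list(x), list(x)
    xp[i], xm[i] = 1, -1
    return (maj3(tuple(xp)) - maj3(tuple(xm))) / 2

lhs = U(sum(maj3(x) for x in cube) / len(cube))
rhs = sum(sqrt(U(maj3(x)) ** 2 + sum(D(i, x) ** 2 for i in range(n)))
          for x in cube) / len(cube)
print(lhs, rhs)  # Bobkov: lhs <= rhs
```

Here $\mathop{\bf E}[\mathrm{maj}_3] = \tfrac12$, so the left-hand side is $\mathcal{U}(\tfrac12) = 1/\sqrt{2\pi} \approx 0.399$, while the right-hand side works out to $3\sqrt{2}/8 \approx 0.530$.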

As Bobkov noted, by the usual Central Limit Theorem argument one can straightforwardly obtain inequality \eqref{eqn:bobkov} in the setting of functions $f \in L^2({\mathbb R}^n, \gamma)$ with range $[0,1]$, provided $f$ is sufficiently smooth (for example, if $f$ is in the domain of $\mathrm{L}$; see the exercises). Then given $A \subseteq {\mathbb R}^n$, by taking a sequence of smooth approximations to $1_A$ as in Definition 48, the Gaussian Isoperimetric Inequality $\mathcal{U}(\mathop{\bf E}[1_A]) \leq \text{surf}_\gamma(A)$ is recovered. Given $A \subseteq \{-1,1\}^n$ we can write the quantity $\mathop{\bf E}[\|\nabla 1_A\|]$ appearing in Bobkov’s Inequality as \begin{equation} \label{eqn:boolean-surface-area} \mathop{\bf E}[\|\nabla 1_A\|] = \tfrac12 \cdot \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}\bigl[\sqrt{\mathrm{sens}_A({\boldsymbol{x}})}\bigr], \end{equation} using the fact that for $1_A : \{-1,1\}^n \to \{0,1\}$ we have $\mathrm{D}_i1_A({\boldsymbol{x}})^2 = \tfrac14\cdot \boldsymbol{1}[\text{coordinate $i$ is pivotal for $1_A$ on $\boldsymbol{x}$}].$ The quantity in \eqref{eqn:boolean-surface-area} — (half of) the expected square-root of the number of pivotal coordinates — is an interesting possible notion of “Boolean surface area” for sets $A \subseteq \{-1,1\}^n$. It was first essentially proposed by Talagrand [Tal93]. By Cauchy–Schwarz it’s upper-bounded by (half of) the square-root of our usual notion of boundary size, average sensitivity: \begin{equation} \label{eqn:tal-surf-vs-surf} \mathop{\bf E}[\|\nabla 1_A\|] \leq \sqrt{\mathop{\bf E}[\|\nabla 1_A\|^2]} = \sqrt{\mathbf{I}[1_A]}. \end{equation} (Note that $\mathbf{I}[1_A]$ here is actually one quarter of the average sensitivity of $A$, because we’re using $0$-$1$ indicators as opposed to $\pm 1$.) But the inequality in \eqref{eqn:tal-surf-vs-surf} is often far from sharp.
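To make the gap concrete, here is a small exhaustive computation of both sides of \eqref{eqn:tal-surf-vs-surf} for $\mathrm{Maj}_n$ (a sketch for small odd $n$, using the fact that a coordinate is pivotal for majority exactly when the other votes split as evenly as possible):

```python
from itertools import product
from math import sqrt

def sens_maj(x):
    """Number of pivotal coordinates for Maj_n at x (n odd):
    (n+1)/2 when the outcome is decided by a single vote, else 0."""
    n = len(x)
    return (n + 1) // 2 if abs(sum(x)) == 1 else 0

results = []
for n in (3, 5, 7, 9):
    cube = list(product([-1, 1], repeat=n))
    lhs = 0.5 * sum(sqrt(sens_maj(x)) for x in cube) / len(cube)  # E[||grad 1_A||]
    rhs = 0.5 * sqrt(sum(sens_maj(x) for x in cube) / len(cube))  # sqrt(I[1_A])
    results.append((n, lhs, rhs))

for n, lhs, rhs in results:
    print(n, round(lhs, 4), round(rhs, 4))  # lhs stays bounded; rhs grows
```

The left column stabilizes near a constant while the right grows like $n^{1/4}$, matching the $\Theta(1)$ versus $\Theta(\sqrt{n})$ discussion of majority below.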
For example, while the majority function has average sensitivity $\Theta(\sqrt{n})$, the expected square-root of its sensitivity is $\Theta(1)$, because a $\Theta(1/\sqrt{n})$-fraction of strings have sensitivity $\lceil n/2 \rceil$ and the remainder have sensitivity $0$.

Let’s turn to the proof of Bobkov’s Inequality. As you are asked to show in the exercises, the general-$n$ case of Bobkov’s Inequality follows from the $n = 1$ case by a straightforward “induction by restrictions”. Thus, just as in the proof of the Hypercontractivity Theorem, it suffices to prove the $n = 1$ “two-point inequality”, an elementary inequality about two real numbers:

Bobkov’s Two-Point Inequality Let $f : \{-1,1\} \to [0,1]$. Then $\mathcal{U}(\mathop{\bf E}[f]) \leq \mathop{\bf E}[\|(\mathcal{U}(f), \nabla f)\|].$ Writing $f(x) = a + bx$, this is equivalent to saying that provided $a \pm b \in [0,1]$, $\mathcal{U}(a) \leq \tfrac{1}{2} \|(\mathcal{U}(a+b), b)\| + \tfrac{1}{2} \|(\mathcal{U}(a-b), b)\|.$
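The two-point inequality can be spot-checked numerically over a grid of admissible $(a, b)$ pairs (a sketch; the grid resolution is arbitrary):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()

def U(alpha):
    """Gaussian isoperimetric function varphi(Phi^{-1}(alpha))."""
    x = N.inv_cdf(alpha)
    return exp(-x * x / 2) / sqrt(2 * pi)

def gap(a, b):
    """RHS minus LHS of the two-point inequality for f(x) = a + bx."""
    rhs = 0.5 * sqrt(U(a + b) ** 2 + b * b) + 0.5 * sqrt(U(a - b) ** 2 + b * b)
    return rhs - U(a)

# Restrict to 0 < a - b and a + b < 1 so that U is evaluated inside (0,1)
worst = min(gap(a / 20, b / 100)
            for a in range(1, 20) for b in range(0, 101)
            if 0 < a / 20 - b / 100 and a / 20 + b / 100 < 1)
print(worst)  # nonnegative over the whole grid; 0 when b = 0
```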

Remark 53 The only property of $\mathcal{U}$ used in proving this inequality is that it satisfies (Exercise 5.41) the differential equation $\mathcal{U} \mathcal{U}'' = -1$ on $(0,1)$.
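This differential equation is easy to verify numerically with a second difference (a sketch; the step size is ad hoc):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()

def U(alpha):
    """Gaussian isoperimetric function varphi(Phi^{-1}(alpha))."""
    x = N.inv_cdf(alpha)
    return exp(-x * x / 2) / sqrt(2 * pi)

h = 1e-4
vals = []
for y in (0.1, 0.3, 0.5, 0.9):
    u2 = (U(y + h) - 2 * U(y) + U(y - h)) / (h * h)  # ~ U''(y)
    vals.append(U(y) * u2)                            # should be ~ -1
print(vals)
```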

Bobkov’s proof of the two-point inequality was elementary but somewhat long and hard to motivate. In contrast, Barthe and Maurey [BM00a] gave a fairly short proof of the inequality, but it used methods from stochastic calculus, namely Itô’s Formula. We present here an elementary discretization of the Barthe–Maurey proof.

Proof of Bobkov’s Two-Point Inequality: By symmetry and continuity we may assume $\delta \leq a-b < a+b \leq 1-\delta$ for some $\delta > 0$. Let $\tau = \tau(\delta) > 0$ be a small quantity to be chosen later such that $b/\tau$ is an integer. Let $\boldsymbol{y}_0, \boldsymbol{y}_1, \boldsymbol{y}_2, \dots$ be a random walk within $[a-b, a+b]$ that starts at $\boldsymbol{y}_0 = a$, takes independent equally likely steps of $\pm \tau$, and is absorbed at the endpoints $a \pm b$. Finally, for $t \in {\mathbb N}$, define $\boldsymbol{z}_t = \|(\mathcal{U}(\boldsymbol{y}_t), \tau \sqrt{t})\|$. The key claim for the proof is:

Claim 54 Assuming $\tau = \tau(\delta) > 0$ is small enough, $(\boldsymbol{z}_t)_t$ is a submartingale with respect to $(\boldsymbol{y}_t)_t$, i.e., $\mathop{\bf E}[\boldsymbol{z}_{t+1} \mid \boldsymbol{y}_0, \dots, \boldsymbol{y}_t] = \mathop{\bf E}[\boldsymbol{z}_{t+1} \mid \boldsymbol{y}_t] \geq \boldsymbol{z}_t$.

Let’s complete the proof given the claim. Let ${\boldsymbol{T}}$ be the stopping time at which $\boldsymbol{y}_t$ first reaches $a\pm b$. By the Optional Stopping Theorem we have $\mathop{\bf E}[\boldsymbol{z}_0] \leq \mathop{\bf E}[\boldsymbol{z}_{\boldsymbol{T}}]$; i.e., \begin{equation} \label{eqn:submartingale-deduction} \mathcal{U}(a) \leq \mathop{\bf E}[\|(\mathcal{U}(\boldsymbol{y}_{\boldsymbol{T}}), \tau\sqrt{{\boldsymbol{T}}})\|]. \end{equation} In the expectation above we can condition on whether the walk stopped at $a+b$ or $a-b$. By symmetry, both events occur with probability $1/2$ and neither changes the conditional distribution of ${\boldsymbol{T}}$. Thus we get \begin{align*} \mathcal{U}(a) &\leq \tfrac{1}{2}\mathop{\bf E}[\|(\mathcal{U}(a+b), \tau \sqrt{{\boldsymbol{T}}})\|] + \tfrac{1}{2}\mathop{\bf E}[\|(\mathcal{U}(a-b), \tau \sqrt{{\boldsymbol{T}}})\|] \\ &\leq \tfrac{1}{2} \|(\mathcal{U}(a+b), \sqrt{\mathop{\bf E}[\tau^2{\boldsymbol{T}}]})\| + \tfrac{1}{2}\|(\mathcal{U}(a-b), \sqrt{\mathop{\bf E}[\tau^2{\boldsymbol{T}}]})\|, \end{align*} with the second inequality using Jensen’s inequality and concavity of $v \mapsto \sqrt{u^2 + v}$. But it’s a well-known fact (exercise) that $\mathop{\bf E}[{\boldsymbol{T}}] = (b/\tau)^2$. Substituting this into the above completes the proof.
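The stopping-time fact can itself be checked by simulation (a sketch with arbitrary parameters): rescaling by $\tau$, the walk becomes a unit-step walk started at $0$ and absorbed at $\pm k$ with $k = b/\tau$, and the claim is $\mathop{\bf E}[{\boldsymbol{T}}] = k^2$.

```python
import random

random.seed(1)

def absorption_time(k, trials=20_000):
    """Average absorption time of a +-1 random walk from 0, absorbed at +-k."""
    total = 0
    for _ in range(trials):
        pos, t = 0, 0
        while abs(pos) < k:
            pos += random.choice((-1, 1))
            t += 1
        total += t
    return total / trials

k = 10  # plays the role of b/tau
est_T = absorption_time(k)
print(est_T, k * k)  # Monte Carlo estimate vs. the exact value k^2 = 100
```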

It remains to verify Claim 54. Actually, although the claim is true as stated (see the exercises) it will be more natural to prove the following slightly weaker claim: \begin{equation} \label{eqn:weaker-submartingale} \mathop{\bf E}[\boldsymbol{z}_{t+1} \mid \boldsymbol{y}_t] \geq \boldsymbol{z}_t - C_\delta \tau^3 \end{equation} for some constant $C_\delta$ depending only on $\delta$. This is still enough to complete the proof: Applying the Optional Stopping Theorem to the submartingale ${(\boldsymbol{z}_t + C_\delta \tau^3 t)_t}$ we get that \eqref{eqn:submartingale-deduction} holds up to an additive $C_\delta \tau^3 \mathop{\bf E}[{\boldsymbol{T}}] = C_\delta b^2 \tau$. Then continuing with the above we deduce Bobkov’s Inequality up to $C_\delta b^2 \tau$, and we can make $\tau$ arbitrarily small.

Even though we only need to prove \eqref{eqn:weaker-submartingale}, let’s begin a proof of the original Claim 54 anyway. Fix $t \in {\mathbb N}^+$ and condition on $\boldsymbol{y}_t = y$. If $y$ is $a \pm b$, then the walk is stopped and the claim is clear. Otherwise, $\boldsymbol{y}_{t+1}$ is $y \pm \tau$ with equal probability, and we want to verify the following inequality (assuming $\tau > 0$ is sufficiently small as a function of $\delta$, independent of $y$): \begin{align} \|(\mathcal{U}(y), \tau \sqrt{t})\|&\leq \tfrac{1}{2} \|(\mathcal{U}(y+\tau), \tau\sqrt{t+1})\| + \tfrac{1}{2} \|(\mathcal{U}(y-\tau), \tau\sqrt{t+1})\| \label{eqn:bobkov-verify}\\ &= \tfrac{1}{2} \bigl\|\bigl(\sqrt{\mathcal{U}(y+\tau)^2 + \tau^2}, \tau\sqrt{t}\bigr)\bigr\| + \tfrac{1}{2} \bigl\|\bigl(\sqrt{\mathcal{U}(y-\tau)^2 + \tau^2}, \tau\sqrt{t}\bigr)\bigr\|. \nonumber \end{align} By the triangle inequality, it’s sufficient to show $\mathcal{U}(y) \leq \tfrac{1}{2} \sqrt{\mathcal{U}(y+\tau)^2 + \tau^2} + \tfrac{1}{2} \sqrt{\mathcal{U}(y-\tau)^2 + \tau^2},$ and this is actually necessary too, being the $t = 0$ case of \eqref{eqn:bobkov-verify}. Finally, since we actually only need the weakened submartingale statement \eqref{eqn:weaker-submartingale}, we’ll instead establish \begin{equation} \label{eqn:bobkov-taylor} \mathcal{U}(y) - C_\delta \tau^3 \leq \tfrac{1}{2} \sqrt{\mathcal{U}(y+\tau)^2 + \tau^2} + \tfrac{1}{2} \sqrt{\mathcal{U}(y-\tau)^2 + \tau^2} \end{equation} for some constant $C_\delta$ depending only on $\delta$ and for every $\tau \leq \frac{\delta}{2}$. We do this using Taylor’s theorem. Write $V_y(\tau)$ for the function of $\tau$ on the right-hand side of \eqref{eqn:bobkov-taylor}. For any $y \in [a-b,a+b]$ the function $V_y$ is smooth on $[0,\frac{\delta}{2}]$ because $\mathcal{U}$ is a smooth, positive function on $[\frac{\delta}{2}, 1 - \frac{\delta}{2}]$.
Thus $V_y(\tau) = V_y(0) + V_y'(0)\tau + \tfrac12 V_y''(0)\tau^2 + \tfrac16 V_y'''(\xi) \tau^3$ for some $\xi$ between $0$ and $\tau$. The magnitude of $V_y'''(\xi)$ is indeed bounded by some $C_\delta$ depending only on $\delta$, using the fact that $\mathcal{U}$ is smooth and positive on $[\frac{\delta}{2}, 1 - \frac{\delta}{2}]$. But $V_y(0) = \mathcal{U}(y)$, and it’s straightforward to calculate that $V_y'(0) = 0, \qquad V_y''(0) = \mathcal{U}''(y) + 1/\mathcal{U}(y) = 0,$ where the last identity uses the key property $\mathcal{U}'' = -1/\mathcal{U}$ mentioned in Remark 53. Thus we conclude $V_y(\tau) \geq \mathcal{U}(y) - C_\delta \tau^3$, verifying \eqref{eqn:bobkov-taylor} and completing the proof. $\Box$

As a matter of fact, by a minor adjustment (see the exercises) to this random walk argument we can establish the following generalization of Bobkov’s Inequality:

Theorem 55 Let $f : \{-1,1\}^n \to [0,1]$. Then $\mathop{\bf E}[\|(\mathcal{U}(\mathrm{T}_\rho f), \nabla \mathrm{T}_\rho f)\|]$ is an increasing function of $\rho \in [0,1]$. We recover Bobkov’s Inequality by considering $\rho = 0, 1$.
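The generalization can also be checked exhaustively for a small example (a sketch; $\mathrm{T}_\rho$ is computed exactly from its definition as an average over $\rho$-correlated copies, here for the majority indicator on $\{-1,1\}^3$):

```python
from itertools import product
from math import exp, pi, sqrt
from statistics import NormalDist

N = NormalDist()

def U(alpha):
    """Gaussian isoperimetric function, with U(0) = U(1) = 0."""
    if alpha <= 0.0 or alpha >= 1.0:
        return 0.0
    x = N.inv_cdf(alpha)
    return exp(-x * x / 2) / sqrt(2 * pi)

n = 3
cube = list(product([-1, 1], repeat=n))
f = {x: 1.0 if sum(x) > 0 else 0.0 for x in cube}  # Maj_3 indicator

def T_rho(rho):
    """Noise operator T_rho f, computed exactly: each coordinate of the
    noisy copy agrees with x with probability (1 + rho)/2."""
    g = {}
    for x in cube:
        s = 0.0
        for y in cube:
            p = 1.0
            for xi, yi in zip(x, y):
                p *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
            s += p * f[y]
        g[x] = s
    return g

def functional(rho):
    """E[ ||(U(T_rho f), grad T_rho f)|| ] over the cube."""
    g = T_rho(rho)
    total = 0.0
    for x in cube:
        sq = U(g[x]) ** 2
        for i in range(n):
            xp, xm = list(x), list(x)
            xp[i], xm[i] = 1, -1
            sq += ((g[tuple(xp)] - g[tuple(xm)]) / 2) ** 2
        total += sqrt(sq)
    return total / len(cube)

vals = [functional(r / 10) for r in range(11)]
print([round(v, 4) for v in vals])  # increasing in rho
```

At $\rho = 0$ the functional is $\mathcal{U}(\mathop{\bf E}[f]) = 1/\sqrt{2\pi}$, and at $\rho = 1$ it is $\mathop{\bf E}[\|\nabla f\|] = 3\sqrt{2}/8$, so the increasing sequence interpolates between the two sides of Bobkov’s Inequality.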

We end this section by remarking that De, Mossel, and Neeman [DMN13] have given a “Bobkov-style” Boolean inductive proof that yields both Borell’s Isoperimetric Theorem and also the Majority Is Stablest Theorem (albeit with some aspects of the Invariance Principle-based proof appearing in the latter case); see an exercise and the notes at the end of this chapter.