§8.2: Generalized Fourier formulas

In this section we will revisit a number of combinatorial/probabilistic notions and show that for functions $f \in L^2(\Omega^n, \pi^{\otimes n})$, these notions have familiar Fourier formulas which don’t depend on the Fourier basis.

The orthonormality of Fourier bases gives us some formulas almost immediately:

Proposition 16 Let $f, g \in L^2(\Omega^n, \pi^{\otimes n})$. Then for any fixed product Fourier basis, the following formulas hold: \begin{align*} \mathop{\bf E}[f] &= \widehat{f}(0)\\ \mathop{\bf E}[f^2] &= \sum_{\alpha \in {\mathbb N}_{< m}^n} \widehat{f}(\alpha)^2 \tag{Parseval} \\ \mathop{\bf Var}[f] &= \sum_{\alpha \neq 0} \widehat{f}(\alpha)^2 \\ \langle f, g \rangle &= \sum_{\alpha \in {\mathbb N}_{<m}^n} \widehat{f}(\alpha)\widehat{g}(\alpha) \tag{Plancherel} \\ \mathop{\bf Cov}[f,g] &= \sum_{\alpha \neq 0} \widehat{f}(\alpha)\widehat{g}(\alpha). \end{align*}

Proof: We verify Plancherel’s Theorem, from which the other identities follow (see the exercises): \begin{align*} \langle f, g \rangle &= \Bigl\langle \sum_{\alpha \in {\mathbb N}_{< m}^n} \widehat{f}(\alpha) \phi_\alpha, \sum_{\beta \in {\mathbb N}_{< m}^n} \widehat{g}(\beta) \phi_\beta \Bigr\rangle \\ &= \sum_{\alpha, \beta \in {\mathbb N}_{< m}^n} \widehat{f}(\alpha) \widehat{g}(\beta) \langle \phi_\alpha, \phi_\beta \rangle \\ &= \sum_{\alpha \in {\mathbb N}_{< m}^n} \widehat{f}(\alpha) \widehat{g}(\alpha) \end{align*} by orthonormality of $(\phi_\alpha)_{\alpha \in {\mathbb N}_{<m}^n}$. $\Box$
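Though the proposition is basis-independent, it can be instructive to check it numerically in a concrete basis. The following Python sketch (an illustration, not part of the text) specializes to $\Omega = \{-1,1\}$ with uniform $\pi$ and the parity basis $\phi_\alpha(x) = \prod_{i : \alpha_i = 1} x_i$:

```python
import itertools, random

n = 3
points = list(itertools.product([-1, 1], repeat=n))
random.seed(0)
f = {x: random.uniform(-1, 1) for x in points}
g = {x: random.uniform(-1, 1) for x in points}

def phi(alpha, x):
    # parity basis function: phi_alpha(x) = prod over {i : alpha_i = 1} of x_i
    out = 1
    for a, xi in zip(alpha, x):
        if a == 1:
            out *= xi
    return out

def fourier(h):
    # \hat{h}(alpha) = <h, phi_alpha> = E[h(x) phi_alpha(x)], x uniform
    return {alpha: sum(h[x] * phi(alpha, x) for x in points) / len(points)
            for alpha in itertools.product([0, 1], repeat=n)}

fhat, ghat = fourier(f), fourier(g)

# Parseval: E[f^2] equals the sum of squared Fourier coefficients
lhs = sum(f[x] ** 2 for x in points) / len(points)
rhs = sum(c ** 2 for c in fhat.values())
assert abs(lhs - rhs) < 1e-12

# Plancherel: <f, g> equals the coefficient-wise dot product
inner = sum(f[x] * g[x] for x in points) / len(points)
assert abs(inner - sum(fhat[a] * ghat[a] for a in fhat)) < 1e-12
```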

We now give a definition which will be the key for developing basis-independent Fourier expansions. In the case of $L^2(\{-1,1\})$ this definition appeared already in Exercise 3.28.

Definition 17 Let $J \subseteq [n]$ and write ${\overline{J}} = [n] \setminus J$. Given $f \in L^2(\Omega^n, \pi^{\otimes n})$, the projection of $f$ on coordinates $J$ is the function $f^{\subseteq J} \in L^2(\Omega^n, \pi^{\otimes n})$ defined by \[ f^{\subseteq J}(x) = \mathop{\bf E}_{{\boldsymbol{x}}' \sim \pi^{\otimes {\overline{J}}}}[f(x_J, {\boldsymbol{x}}')], \] where $x_J \in \Omega^J$ denotes the values of $x$ in the $J$-coordinates. In other words, $f^{\subseteq J}(x)$ is the expectation of $f$ when the ${\overline{J}}$-coordinates of $x$ are rerandomized. Note that we take $f^{\subseteq J}$ to have $\Omega^n$ as its domain, even though it only depends on the coordinates in $J$.

Forming $f^{\subseteq J}$ is indeed the application of a projection linear operator to $f$, namely the expectation over ${\overline{J}}$ operator, $\mathrm{E}_{{\overline{J}}}$. We take this as the definition of the operator: $\mathrm{E}_{{\overline{J}}} f = f^{\subseteq J}$. When ${\overline{J}} = \{i\}$ is a singleton we write simply $\mathrm{E}_i$.

Remark 18 This definition of $\mathrm{E}_i$ is consistent with Definition 2.22. You are asked to verify that $\mathrm{E}_{{\overline{J}}}$ is indeed a projection, self-adjoint linear operator in the exercises.
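A quick numerical illustration of Remark 18 (assuming, for concreteness, the uniform distribution on a three-element $\Omega$): the sketch below forms $f^{\subseteq J}$ by brute-force averaging and checks that $\mathrm{E}_{\overline{J}}$ is idempotent and self-adjoint.

```python
import itertools, random

Omega = ['a', 'b', 'c']  # uniform pi, chosen for illustration
n = 3
points = list(itertools.product(Omega, repeat=n))
random.seed(1)
f = {x: random.uniform(-1, 1) for x in points}
g = {x: random.uniform(-1, 1) for x in points}

def project(h, J):
    # f^{subseteq J}(x): average h with the J-coordinates of x fixed
    # and the coordinates outside J rerandomized (uniformly)
    out = {}
    for x in points:
        vals = [h[tuple(x[i] if i in J else y[i] for i in range(n))]
                for y in points]
        out[x] = sum(vals) / len(vals)
    return out

def inner(u, v):
    return sum(u[x] * v[x] for x in points) / len(points)

J = {0, 2}
fJ = project(f, J)
# projection: applying E_{bar J} twice equals applying it once
assert all(abs(project(fJ, J)[x] - fJ[x]) < 1e-12 for x in points)
# self-adjoint: <E_{bar J} f, g> = <f, E_{bar J} g>
assert abs(inner(fJ, g) - inner(f, project(g, J))) < 1e-12
```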

Proposition 19 Let $J \subseteq [n]$ and $f \in L^2(\Omega^n, \pi^{\otimes n})$. Then for any fixed product Fourier basis, \[ f^{\subseteq J} = \sum_{\substack{\alpha \in {\mathbb N}_{< m}^n \\ \mathrm{supp}(\alpha) \subseteq J}} \widehat{f}(\alpha)\,\phi_\alpha. \]

Proof: Since $\mathrm{E}_{{\overline{J}}}$ is a linear operator, it suffices to verify for all $\alpha$ that \[ \phi_\alpha^{\subseteq J} = \begin{cases} \phi_\alpha & \text{if } \mathrm{supp}(\alpha) \subseteq J, \\ 0 & \text{otherwise}. \end{cases} \] If $\mathrm{supp}(\alpha) \subseteq J$ then $\phi_\alpha$ does not depend on the coordinates outside $J$; hence indeed $\phi_\alpha^{\subseteq J}= \phi_\alpha$. So suppose $\mathrm{supp}(\alpha) \not \subseteq J$. Since $\phi_\alpha(x) = \bigl(\mathop{{\textstyle \prod}}_{i\in J} \phi_{\alpha_i}(x_i)\bigr)\bigl(\mathop{{\textstyle \prod}}_{i\in {\overline{J}}} \phi_{\alpha_i}(x_i)\bigr)$, we can write $\phi_\alpha = \phi_{\alpha_J} \cdot \phi_{\alpha_{\overline{J}}}$, where $\phi_{\alpha_J}$ depends only on the coordinates in $J$ and $\phi_{\alpha_{{\overline{J}}}}$ depends only on the coordinates in ${\overline{J}}$. Moreover, $\mathop{\bf E}[\phi_{\alpha_{{\overline{J}}}}] = 0$: since $\mathrm{supp}(\alpha) \not\subseteq J$ there is some $i \in {\overline{J}}$ with $\alpha_i \neq 0$, and $\mathop{\bf E}[\phi_{\alpha_i}] = \langle \phi_{\alpha_i}, \phi_0 \rangle = 0$ by orthonormality. Thus for every $x \in \Omega^n$, \[ \phi_{\alpha}^{\subseteq J}(x) = \mathop{\bf E}_{{\boldsymbol{x}}' \sim \pi^{\otimes {\overline{J}}}}[\phi_{\alpha_J}(x_J) \phi_{\alpha_{{\overline{J}}}}({\boldsymbol{x}}')] = \phi_{\alpha_J}(x_J) \cdot \mathop{\bf E}_{{\boldsymbol{x}}' \sim \pi^{\otimes {\overline{J}}}}[ \phi_{\alpha_{{\overline{J}}}}({\boldsymbol{x}}')] = 0 \] as needed. $\Box$

Corollary 20 Let $f \in L^2(\Omega^n,\pi^{\otimes n})$ and fix a product Fourier basis. If $f$ depends only on the coordinates in $J \subseteq [n]$ then $\widehat{f}(\alpha) = 0$ whenever $\mathrm{supp}(\alpha) \not \subseteq J$.

Proof: This follows from Proposition 19 because $f = f^{\subseteq J}$. $\Box$

Corollary 21 Let $i \in [n]$ and $f \in L^2(\Omega^n, \pi^{\otimes n})$. Then for any fixed product Fourier basis, \[ \mathrm{E}_i f = \sum_{\alpha : \alpha_i = 0} \widehat{f}(\alpha)\,\phi_\alpha. \]

Let us now define influences for functions $f \in L^2(\Omega^n, \pi^{\otimes n})$. In the case of $\Omega = \{-1,1\}$, our definition of $\mathbf{Inf}_i[f]$ from Chapter 2.2 was $\mathop{\bf E}[(\mathrm{D}_i f)^2]$. However the notion of a derivative operator does not make sense for more general domains $\Omega$. In fact, even in the case of $\Omega = \{-1,1\}$ it isn’t a basis-invariant notion: the choice of $\frac{f(x^{(i\mapsto 1)}) - f(x^{(i \mapsto -1)})}{2}$ rather than $\frac{f(x^{(i\mapsto -1)}) - f(x^{(i \mapsto 1)})}{2}$ is inherently arbitrary. Instead we can fall back on the Laplacian operators, and take the identity $\mathbf{Inf}_i[f] = \langle f, \mathrm{L}_i f \rangle$ from Proposition 2.25 as a definition.

Definition 22 Let $i \in [n]$ and $f \in L^2(\Omega^n, \pi^{\otimes n})$. The $i$th coordinate Laplacian operator $\mathrm{L}_i$ is the self-adjoint, projection linear operator defined by \[ \mathrm{L}_i f = f - \mathrm{E}_i f. \] The influence of coordinate $i$ on $f$ is defined to be \[ \mathbf{Inf}_i[f] = \langle f, \mathrm{L}_i f \rangle = \langle \mathrm{L}_i f, \mathrm{L}_i f \rangle. \] The total influence of $f$ is defined to be $\mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f]$.

You can think of $\mathrm{L}_i f$ as “the part of $f$ which depends on the $i$th coordinate”.

Proposition 23 Let $i \in [n]$ and $f \in L^2(\Omega^n, \pi^{\otimes n})$. Then for any fixed product Fourier basis, \begin{align*} \mathrm{L}_i f = \sum_{\alpha : \alpha_i \neq 0} \widehat{f}(\alpha)\,\phi_\alpha, \quad \mathbf{Inf}_i[f] = \sum_{\alpha : \alpha_i \neq 0} \widehat{f}(\alpha)^2, \quad \mathbf{I}[f] = \sum_{\alpha} \#\alpha \cdot \widehat{f}(\alpha)^2. \end{align*}

Proof: The first formula is immediate from Corollary 21, the second from Plancherel, and the third from summing over $i$. $\Box$

In the exercises you are asked to verify the following formulas which are often useful for computations:

Proposition 24 Let $i \in [n]$ and $f \in L^2(\Omega^n, \pi^{\otimes n})$. Then \[ \mathbf{Inf}_i[f] = \mathop{\bf E}_{{\boldsymbol{x}} \sim \pi^{\otimes n}}[\mathop{\bf Var}_{{\boldsymbol{x}}_i' \sim \pi}[f({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_{i-1}, {\boldsymbol{x}}_i', {\boldsymbol{x}}_{i+1}, \dots, {\boldsymbol{x}}_n)]]. \] If furthermore $f$’s range is $\{-1,1\}$ then \[ \mathbf{Inf}_i[f] = \mathop{\bf E}[|\mathrm{L}_i f|] = 2\mathop{\bf Pr}_{\substack{{\boldsymbol{x}} \sim \pi^{\otimes n} \\ {\boldsymbol{x}}_i' \sim \pi}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_{i-1}, {\boldsymbol{x}}_i', {\boldsymbol{x}}_{i+1}, \dots, {\boldsymbol{x}}_n)]. \]
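For Boolean-valued functions these formulas are easy to test by enumeration. A small sketch (for illustration only), using $\mathrm{Maj}_3$ on $\{-1,1\}^3$ and the known value $\mathbf{Inf}_i[\mathrm{Maj}_3] = \tfrac12$:

```python
import itertools

# Majority of three bits on {-1,1}^3; each coordinate has influence 1/2
points = list(itertools.product([-1, 1], repeat=3))
maj = {x: 1 if sum(x) > 0 else -1 for x in points}

def influence(f, i):
    # Inf_i[f] = 2 Pr[f(x) != f(x with coordinate i rerandomized)]
    disagree = total = 0
    for x in points:
        for b in (-1, 1):  # rerandomized value x_i', each w.p. 1/2
            xp = x[:i] + (b,) + x[i + 1:]
            disagree += (f[x] != f[xp])
            total += 1
    return 2 * disagree / total

def E_abs_laplacian(f, i):
    # E[|L_i f|], where L_i f = f - E_i f
    tot = 0.0
    for x in points:
        Ei = sum(f[x[:i] + (b,) + x[i + 1:]] for b in (-1, 1)) / 2
        tot += abs(f[x] - Ei)
    return tot / len(points)

assert abs(influence(maj, 0) - 0.5) < 1e-12
assert abs(E_abs_laplacian(maj, 0) - 0.5) < 1e-12
```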

Example 25 Let’s continue Example 15, in which $\{a,b,c\}$ has the uniform distribution and $f : \{a,b,c\}^2 \to \{0,1\}$ is $1$ if and only if both inputs are $c$. We compute $\mathbf{Inf}_1[f]$ two ways. Using Proposition 24 we have $\mathop{\bf Var}[f({\boldsymbol{x}}_1, a)] = \mathop{\bf Var}[f({\boldsymbol{x}}_1, b)] = 0$ and $\mathop{\bf Var}[f({\boldsymbol{x}}_1, c)] = \frac13 \cdot \frac23 = \frac29$ (because $f({\boldsymbol{x}}_1, c)$ is Bernoulli with parameter $\frac13$); thus $\mathbf{Inf}_1[f] = \frac13 \cdot \frac29 = \frac{2}{27}$. Alternatively, using the formula from Proposition 23 as well as the Fourier expansion from Example 15, we can compute $\mathbf{Inf}_1[f] = (-\tfrac{\sqrt{2}}{18})^2 + (-\tfrac{\sqrt{6}}{18})^2 + (\tfrac{1}{18})^2 + (\tfrac{\sqrt{12}}{36})^2 +(\tfrac{\sqrt{12}}{36})^2 +(\tfrac16)^2 = \tfrac{2}{27}$.
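The first computation in Example 25 can be replayed in exact rational arithmetic; a short sketch (again assuming the uniform distribution on $\{a,b,c\}$):

```python
import itertools
from fractions import Fraction

Omega = ['a', 'b', 'c']  # uniform distribution, as in Example 15

def f(x1, x2):
    # f = 1 if and only if both inputs are c
    return 1 if (x1 == 'c' and x2 == 'c') else 0

one_third = Fraction(1, 3)

# Inf_1[f] = E_{x2} [ Var_{x1'}[ f(x1', x2) ] ]  (Proposition 24)
total = Fraction(0)
for x2 in Omega:
    mean = sum(Fraction(f(x1, x2)) for x1 in Omega) * one_third
    var = sum((Fraction(f(x1, x2)) - mean) ** 2 for x1 in Omega) * one_third
    total += var * one_third  # outer expectation over x2
print(total)  # → 2/27
```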

Next, we straightforwardly extend our definitions of the noise operator and noise stability to general product spaces.

Definition 26 Fix a finite product probability space $(\Omega^n, \pi^{\otimes n})$. For $\rho \in [0,1]$ and $x \in \Omega^n$ we write $\boldsymbol{y} \sim N_\rho(x)$ to denote that $\boldsymbol{y} \in \Omega^n$ is randomly chosen as follows: for each $i \in [n]$ independently, \[ \boldsymbol{y}_i = \begin{cases} x_i & \text{with probability $\rho$,}\\ \text{drawn from $\pi$} & \text{with probability $1-\rho$.} \end{cases} \] If ${\boldsymbol{x}} \sim \pi^{\otimes n}$ and $\boldsymbol{y} \sim N_\rho({\boldsymbol{x}})$, we say that $({\boldsymbol{x}}, \boldsymbol{y})$ is a $\rho$-correlated pair under $\pi^{\otimes n}$. (This definition is symmetric in ${\boldsymbol{x}}$ and $\boldsymbol{y}$.)

Definition 27 For a fixed space $L^2(\Omega^n, \pi^{\otimes n})$ and $\rho \in [0,1]$, the noise operator with parameter $\rho$ is the linear operator $\mathrm{T}_\rho$ on functions $f \in L^2(\Omega^n,\pi^{\otimes n})$ defined by \[ \mathrm{T}_\rho f(x) = \mathop{\bf E}_{\boldsymbol{y} \sim N_\rho(x)}[f(\boldsymbol{y})]. \] The noise stability of $f$ at $\rho$ is \[ \mathbf{Stab}_\rho[f] = \langle f, \mathrm{T}_\rho f \rangle = \mathop{\bf E}_{{\substack{({\boldsymbol{x}}, \boldsymbol{y}) \text{ $\rho$-correlated}\\ \text{under } \pi^{\otimes n}}}}[f({\boldsymbol{x}}) f(\boldsymbol{y})]. \]
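Since $\Omega^n$ is finite, $\mathrm{T}_\rho f$ can be computed exactly by summing over the transition probabilities of $N_\rho(x)$. The sketch below (an illustration on an arbitrary three-element $\Omega$ with uniform $\pi$) also checks the endpoint identities $\mathbf{Stab}_1[f] = \mathop{\bf E}[f^2]$ and $\mathbf{Stab}_0[f] = \mathop{\bf E}[f]^2$, which follow from $\mathrm{T}_1 f = f$ and $\mathrm{T}_0 f = \mathop{\bf E}[f]$:

```python
import itertools, random

Omega = [0, 1, 2]  # a 3-element Omega with uniform pi, for illustration
n = 2
points = list(itertools.product(Omega, repeat=n))
random.seed(2)
f = {x: random.uniform(-1, 1) for x in points}

def T(rho, h):
    # T_rho h(x) = E_{y ~ N_rho(x)} h(y), computed exactly:
    # each y_i independently equals x_i w.p. rho, else is uniform on Omega
    out = {}
    for x in points:
        total = 0.0
        for y in points:
            p = 1.0
            for xi, yi in zip(x, y):
                p *= rho * (xi == yi) + (1 - rho) / len(Omega)
            total += p * h[y]
        out[x] = total
    return out

def stab(rho, h):
    # Stab_rho[h] = <h, T_rho h>
    Th = T(rho, h)
    return sum(h[x] * Th[x] for x in points) / len(points)

mean = sum(f.values()) / len(points)
mean_sq = sum(v * v for v in f.values()) / len(points)
assert abs(stab(0, f) - mean ** 2) < 1e-12   # Stab_0[f] = E[f]^2
assert abs(stab(1, f) - mean_sq) < 1e-12     # Stab_1[f] = E[f^2]
```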

Proposition 28 Let $\rho \in [0,1]$ and let $f \in L^2(\Omega^n, \pi^{\otimes n})$. Then for any fixed product Fourier basis, \[ \mathrm{T}_\rho f = \sum_{\alpha \in {\mathbb N}_{<m}^n} \rho^{\#\alpha} \widehat{f}(\alpha)\,\phi_\alpha, \qquad \mathbf{Stab}_\rho[f] = \sum_{\alpha \in {\mathbb N}_{<m}^n} \rho^{\#\alpha} \widehat{f}(\alpha)^2. \]

Proof: Let $\boldsymbol{J}$ denote a $\rho$-random subset of $[n]$; i.e., $\boldsymbol{J}$ is formed by including each $i \in [n]$ independently with probability $\rho$. Then by definition $\mathrm{T}_\rho f (x) = \mathop{\bf E}_{\boldsymbol{J}} [f^{\subseteq \boldsymbol{J}}(x)]$ and so from Proposition 19 we get \[ \mathrm{T}_\rho f (x) = \mathop{\bf E}_{\boldsymbol{J}} [f^{\subseteq \boldsymbol{J}}(x)] = \mathop{\bf E}_{\boldsymbol{J}} \Bigl[\sum_{\substack{\alpha \in {\mathbb N}_{<m}^n \\ \mathrm{supp}(\alpha) \subseteq \boldsymbol{J}}} \widehat{f}(\alpha)\,\phi_\alpha(x)\Bigr]= \sum_{\alpha \in {\mathbb N}_{<m}^n} \rho^{\#\alpha} \widehat{f}(\alpha)\,\phi_\alpha(x), \] since for a fixed $\alpha$, the probability of $\mathrm{supp}(\alpha) \subseteq \boldsymbol{J}$ is $\rho^{\#\alpha}$. The formula for $\mathbf{Stab}_\rho[f]$ now follows from Plancherel. $\Box$
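To see Proposition 28 concretely, one can compare the correlated-pair definition of $\mathbf{Stab}_\rho[f]$ against the Fourier-side sum; the following sketch specializes to the Boolean cube with the parity basis (a special case chosen purely for illustration):

```python
import itertools, random

n = 3
points = list(itertools.product([-1, 1], repeat=n))
random.seed(3)
f = {x: random.uniform(-1, 1) for x in points}
rho = 0.4

def phi(alpha, x):
    out = 1
    for a, xi in zip(alpha, x):
        if a:
            out *= xi
    return out

fhat = {alpha: sum(f[x] * phi(alpha, x) for x in points) / len(points)
        for alpha in itertools.product([0, 1], repeat=n)}

# Stab_rho[f] = E[f(x) f(y)] over a rho-correlated pair, computed exactly
stab = 0.0
for x in points:
    for y in points:
        p = 1.0
        for xi, yi in zip(x, y):
            p *= rho * (xi == yi) + (1 - rho) / 2
        stab += f[x] * f[y] * p / len(points)

# Proposition 28: Stab_rho[f] = sum_alpha rho^{#alpha} fhat(alpha)^2
pred = sum(rho ** sum(a) * c ** 2 for a, c in fhat.items())
assert abs(stab - pred) < 1e-12
```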

Remark 29 The first formula in this proposition may be used to extend the definition of $\mathrm{T}_\rho f$ to values of $\rho$ outside $[0,1]$.

We also define $\rho$-stable influences. The factor of $\rho^{-1}$ in our definition is for consistency with the $L^2(\{-1,1\}^n)$ case.

Definition 30 For $f \in L^2(\Omega^n, \pi^{\otimes n})$, $\rho \in (0,1]$, and $i \in [n]$, the $\rho$-stable influence of $i$ on $f$ is \[ \mathbf{Inf}_i^{(\rho)}[f] = \rho^{-1} \mathbf{Stab}_\rho[\mathrm{L}_i f] = \sum_{\alpha : \alpha_i \neq 0} \rho^{\#\alpha-1} \widehat{f}(\alpha)^2. \] We also define $\mathbf{I}^{(\rho)}[f] = \sum_{i=1}^n \mathbf{Inf}_i^{(\rho)}[f]$.
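The equality of the two expressions in Definition 30 can likewise be checked numerically: the sketch below compares $\rho^{-1}\mathbf{Stab}_\rho[\mathrm{L}_i f]$, computed from the correlated-pair definition, with the Fourier-side sum (again specializing to the Boolean cube for illustration):

```python
import itertools, random

n = 3
points = list(itertools.product([-1, 1], repeat=n))
random.seed(4)
f = {x: random.uniform(-1, 1) for x in points}

def phi(alpha, x):
    out = 1
    for a, xi in zip(alpha, x):
        if a:
            out *= xi
    return out

fhat = {alpha: sum(f[x] * phi(alpha, x) for x in points) / len(points)
        for alpha in itertools.product([0, 1], repeat=n)}

rho, i = 0.7, 0

# L_i f = f - E_i f, computed pointwise
Lf = {}
for x in points:
    Ei = sum(f[x[:i] + (b,) + x[i + 1:]] for b in (-1, 1)) / 2
    Lf[x] = f[x] - Ei

# Stab_rho[L_i f] via the exact correlated-pair computation
stab = 0.0
for x in points:
    for y in points:
        p = 1.0
        for xi, yi in zip(x, y):
            p *= rho * (xi == yi) + (1 - rho) / 2
        stab += Lf[x] * Lf[y] * p / len(points)

# rho^{-1} Stab_rho[L_i f] = sum_{alpha_i != 0} rho^{#alpha - 1} fhat(alpha)^2
pred = sum(rho ** (sum(a) - 1) * c ** 2
           for a, c in fhat.items() if a[i] != 0)
assert abs(stab / rho - pred) < 1e-12
```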

Just as in the case of $L^2(\{-1,1\}^n)$ we can use stable influences to define the “notable” coordinates of a function, of which there can be only boundedly many. A verbatim repetition of the proof of Proposition 2.53 yields the following generalization:

Proposition 31 Suppose $f \in L^2(\Omega^n, \pi^{\otimes n})$ has $\mathop{\bf Var}[f] \leq 1$. Given $0 < \delta < 1$, $0 < \epsilon \leq 1$, let $J = \{ i \in [n] : \mathbf{Inf}_i^{(1-\delta)}[f] \geq \epsilon\}$. Then $|J| \leq \frac{1}{\delta \epsilon}$.

We end this section by discussing the “degree” of functions on general product spaces. For $f \in L^2(\{-1,1\}^n)$ the Fourier expansion is a real polynomial; this yields an obvious definition for degree. But for general $f \in L^2(\Omega^n, \pi^{\otimes n})$ the domain is just an abstract set so we need to look for a more intrinsic definition. We take our cue from Exercise 1.11(b):

Definition 32 Let $f \in L^2(\Omega^n, \pi^{\otimes n})$ be nonzero. The degree of $f$, written $\deg(f)$, is the least $k \in {\mathbb N}$ such that $f$ is a sum of $k$-juntas (functions depending on at most $k$ coordinates).

Proposition 33 Let $f \in L^2(\Omega^n, \pi^{\otimes n})$ be nonzero. Then for any fixed product Fourier basis we have $\deg(f) = \max\{\#\alpha : \widehat{f}(\alpha) \neq 0\}$.

Proof: The inequality $\deg(f) \leq \max\{\#\alpha : \widehat{f}(\alpha) \neq 0\}$ is immediate from the Fourier expansion: \[ f = \sum_{\alpha : \widehat{f}(\alpha) \neq 0} \widehat{f}(\alpha)\,\phi_\alpha \] and each function $\widehat{f}(\alpha)\,\phi_\alpha$ depends on at most $\#\alpha$ coordinates. For the reverse inequality, suppose $f = g_1 + \cdots + g_m$ where each $g_i$ depends on at most $k$ coordinates. By Corollary 20 each $g_i$ has its Fourier support on functions $\phi_\alpha$ with $\#\alpha \leq k$. But $\widehat{f}(\alpha) = \widehat{g_1}(\alpha) + \cdots + \widehat{g_m}(\alpha)$ so the same is true of $f$. $\Box$
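On the Boolean cube, Proposition 33 can be seen in a toy example: $f(x) = x_1 x_2 + x_3$ is a sum of a $2$-junta and a $1$-junta, and its Fourier support indeed tops out at $\#\alpha = 2$ (an illustrative sketch):

```python
import itertools

n = 3
points = list(itertools.product([-1, 1], repeat=n))
# f(x) = x1*x2 + x3: a sum of a 2-junta and a 1-junta, so deg(f) <= 2
f = {x: x[0] * x[1] + x[2] for x in points}

def phi(alpha, x):
    out = 1
    for a, xi in zip(alpha, x):
        if a:
            out *= xi
    return out

fhat = {alpha: sum(f[x] * phi(alpha, x) for x in points) / len(points)
        for alpha in itertools.product([0, 1], repeat=n)}

# deg(f) = max #alpha over the (numerically) nonzero coefficients
deg = max(sum(a) for a, c in fhat.items() if abs(c) > 1e-12)
assert deg == 2
```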
