§5.1: Linear threshold functions and polynomial threshold functions

Recall from Chapter 2.1 that a linear threshold function (abbreviated LTF) is a boolean-valued function $f : \{-1,1\}^n \to \{-1,1\}$ that can be represented as $$\label{eqn:generic-LTF} f(x) = \mathrm{sgn}(a_0 + a_1 x_1 + \cdots + a_n x_n)$$ for some constants $a_0, a_1, \dots, a_n \in {\mathbb R}$.
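
As a concrete illustration, here is a minimal sketch in Python of evaluating an LTF (the weight vector and the tie-breaking convention $\mathrm{sgn}(0) = +1$ are illustrative choices, not fixed by the text):

```python
import itertools

def ltf(a, x):
    """Evaluate sgn(a_0 + a_1 x_1 + ... + a_n x_n) at x in {-1,1}^n,
    using the (illustrative) tie-breaking convention sgn(0) = +1."""
    total = a[0] + sum(ai * xi for ai, xi in zip(a[1:], x))
    return 1 if total >= 0 else -1

# With a = (0, 1, 1, 1) this LTF is the majority function Maj_3:
for x in itertools.product([-1, 1], repeat=3):
    print(x, ltf((0, 1, 1, 1), x))
```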

[...]

§4.5: Highlight: LMN’s work on constant-depth circuits

Having derived strong results about the Fourier spectrum of small DNFs and CNFs, we will now extend to the case of constant-depth circuits. We begin by describing how Håstad applied his Switching Lemma to constant-depth circuits. We then describe some Fourier-theoretic consequences coming from a very early (1989) work in analysis of boolean functions [...]

§4.4: Håstad’s Switching Lemma and the spectrum of DNFs

Let’s further investigate how random restrictions can simplify DNF formulas.

[...]

§4.3: Random restrictions

In this section we describe the method of applying random restrictions. This is a very “Fourier-friendly” way of simplifying a boolean function.
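
As a sketch of what is to come (assuming the standard convention for a $\delta$-random restriction: each coordinate is left free independently with probability $\delta$, and the rest are fixed to uniformly random values in $\{-1,1\}$; the function name below is illustrative):

```python
import random

def delta_random_restriction(n, delta, rng=random):
    """Sample a delta-random restriction of {-1,1}^n: each coordinate i
    is left free with probability delta, independently; otherwise it is
    fixed to a uniformly random value z_i in {-1,1}.
    Returns (J, z), where J is the set of free coordinates."""
    J, z = set(), {}
    for i in range(n):
        if rng.random() < delta:
            J.add(i)
        else:
            z[i] = rng.choice([-1, 1])
    return J, z

# E.g., with n = 10 and delta = 0.3, about 3 coordinates stay free on average.
print(delta_random_restriction(10, 0.3))
```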

[...]

§4.2: Tribes

In this section we study the tribes DNF formulas, which serve as important examples and counterexamples in analysis of boolean functions. Perhaps the most notable feature of the tribes function is that (for a suitable choice of parameters) it is essentially unbiased and yet all of its influences are quite tiny.
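
As a concrete sketch (the width-$w$, $s$-tribes parameterization is standard; the $0/1$ encoding here is an illustrative choice), the tribes function is the OR of $s$ ANDs over disjoint blocks of $w$ variables:

```python
import itertools

def tribes(w, s, x):
    """Tribes_{w,s} on n = w*s inputs: the OR of s disjoint ANDs
    ("tribes") of w variables each; x is a tuple of bools."""
    assert len(x) == w * s
    return any(all(x[t * w:(t + 1) * w]) for t in range(s))

# Check the bias exhaustively for small parameters:
w, s = 2, 2
count = sum(tribes(w, s, x)
            for x in itertools.product([False, True], repeat=w * s))
print(count / 2 ** (w * s))  # 7/16 = 0.4375, already fairly close to unbiased
```

Tuning $w$ and $s$ so that the function is essentially unbiased is the choice of parameters alluded to above.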

[...]

§4.1: DNF formulas

Perhaps the commonest way of representing a boolean function $f : \{0,1\}^n \to \{0,1\}$ is by a DNF formula:

[...]
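
To make the representation concrete, here is a minimal sketch of evaluating a DNF formula (the term encoding below is an illustrative choice, not the text's notation):

```python
def eval_dnf(terms, x):
    """Evaluate a DNF (an OR of terms) at x in {0,1}^n.  Each term is
    an AND of literals, encoded as pairs (i, b) meaning "x_i equals b"."""
    return int(any(all(x[i] == b for (i, b) in term) for term in terms))

# Example: (x_0 AND NOT x_1) OR x_2:
formula = [[(0, 1), (1, 0)], [(2, 1)]]
print(eval_dnf(formula, (1, 0, 0)))  # 1: the first term is satisfied
print(eval_dnf(formula, (0, 0, 0)))  # 0: no term is satisfied
```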

§3.5: Highlight: the Goldreich–Levin Algorithm

We close this chapter by briefly describing a topic which is in some sense the “opposite” of learning theory: cryptography. At the highest level, cryptography is concerned with constructing functions which are easy to compute but computationally difficult to invert. Intuitively, think about the task of encrypting secret messages: you would like a [...]

§3.4: Learning theory

Computational learning theory is an area of algorithms research devoted to the following task: given a source of “examples” $(x, f(x))$ from an unknown function $f$, compute a “hypothesis” function $h$ which is good at predicting $f(y)$ on future inputs $y$. We will focus on just one possible formulation of the task:

[...]
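
For uniformly random examples, one standard primitive (the details below are an assumption-laden sketch, not the section's formulation) is estimating a single Fourier coefficient $\hat{f}(S) = \mathbf{E}[f(x)\chi_S(x)]$ by an empirical average over the examples:

```python
import random

def chi(S, x):
    """The character chi_S(x) = prod_{i in S} x_i, for x in {-1,1}^n."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_coefficient(examples, S):
    """Estimate hat{f}(S) = E[f(x) chi_S(x)] by the empirical average
    over examples (x, f(x)) with x uniform on {-1,1}^n; a Chernoff bound
    gives accuracy eps from O(log(1/delta)/eps^2) examples."""
    return sum(fx * chi(S, x) for (x, fx) in examples) / len(examples)

# Example: f(x) = x_0 * x_1, so hat{f}({0, 1}) = 1:
examples = []
for _ in range(2000):
    x = tuple(random.choice([-1, 1]) for _ in range(3))
    examples.append((x, x[0] * x[1]))
print(estimate_coefficient(examples, [0, 1]))  # close to 1.0
```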

§3.3: Restrictions

A common operation on boolean functions $f : \{-1,1\}^n \to {\mathbb R}$ is restriction to subcubes. Suppose $[n]$ is partitioned into two sets, $J$ and $\overline{J} = [n] \setminus J$. If the input bits in $\overline{J}$ are fixed to constants, the result is a function $\{-1,1\}^J \to {\mathbb R}$. For example, if we [...]
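
A minimal sketch of this operation (names illustrative; the coordinates of $\overline{J}$ are fixed by a point $z$):

```python
def restrict(f, n, J, z):
    """Restrict f : {-1,1}^n -> R by fixing the coordinates outside J
    to the constants z (a dict {i: +-1} for each i not in J), leaving
    a function of the coordinates in J only."""
    J = sorted(J)
    def g(y):  # y gives the values of the free coordinates, in order of J
        x = dict(z)
        x.update(zip(J, y))
        return f(tuple(x[i] for i in range(n)))
    return g

# Example: fix x_2 = -1 in f(x) = x_0 * x_1 * x_2:
f = lambda x: x[0] * x[1] * x[2]
g = restrict(f, 3, J={0, 1}, z={2: -1})
print(g((1, 1)))  # -1, since f(1, 1, -1) = -1
```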

§3.2: Subspaces and decision trees

In this section we treat the domain of a boolean function as ${\mathbb F}_2^n$, an $n$-dimensional vector space over the field ${\mathbb F}_2$. As mentioned earlier, it can be natural to index the Fourier characters $\chi_S : {\mathbb F}_2^n \to \{-1,1\}$ not by subsets $S \subseteq [n]$ but by their $0$-$1$ indicator vectors $\gamma$ [...]
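
Concretely, indexing by the indicator vector $\gamma$ of $S$ gives $\chi_\gamma(x) = (-1)^{\gamma \cdot x}$ for $x \in {\mathbb F}_2^n$, which agrees with $\chi_S(x) = \prod_{i \in S} x_i$ under the usual encoding $0 \mapsto +1$, $1 \mapsto -1$. A small sketch checking this:

```python
import itertools

def chi_subset(S, x):
    """chi_S(x) = prod_{i in S} x_i, with x in {-1,1}^n."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def chi_indicator(gamma, u):
    """chi_gamma(u) = (-1)^(gamma . u), with gamma, u in F_2^n (0/1 tuples)."""
    return (-1) ** (sum(g * b for g, b in zip(gamma, u)) % 2)

# The two indexings agree under u_i -> (-1)^(u_i), S = {i : gamma_i = 1}:
gamma = (1, 0, 1)
S = [i for i, g in enumerate(gamma) if g == 1]
for u in itertools.product([0, 1], repeat=3):
    x = tuple((-1) ** b for b in u)
    assert chi_subset(S, x) == chi_indicator(gamma, u)
print("indexings agree")
```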