§6.3: Constructions of various pseudorandom functions

In this section we give some constructions of boolean functions with strong pseudorandomness properties.

We begin by discussing bent functions:

Definition 26 A function $f : {\mathbb F}_2^n \to \{-1,1\}$ (with $n$ even) is called bent if $|\widehat{f}(\gamma)| = 2^{-n/2}$ for all $\gamma \in {\widehat{{\mathbb F}_2^n}}$.

Bent functions are $2^{-n/2}$-regular. If the definition of $\epsilon$-regularity were changed so that even $|\widehat{f}(0)|$ needed to be at most $\epsilon$, then bent functions would be the most regular possible functions. This is because $\sum_\gamma \widehat{f}(\gamma)^2 = 1$ for any $f : {\mathbb F}_2^n \to \{-1,1\}$ and hence at least one $|\widehat{f}(\gamma)|$ must be at least $2^{-n/2}$. In particular, bent functions are those which are maximally distant from the class of affine functions, $\{\pm \chi_\gamma : \gamma \in {\widehat{{\mathbb F}_2^n}}\}$.
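
In symbols, the argument is
\[ 1 = \sum_{\gamma} \widehat{f}(\gamma)^2 \leq 2^n \cdot \max_{\gamma} \widehat{f}(\gamma)^2 \quad\Longrightarrow\quad \max_{\gamma} |\widehat{f}(\gamma)| \geq 2^{-n/2}, \]
with equality attained simultaneously at every $\gamma$ precisely when $f$ is bent.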

We have encountered some bent functions already. The canonical example is the inner product mod $2$ function, $\mathrm{IP}_{n}(x) = \chi(x_1 x_{n/2+1} + x_2 x_{n/2+2} + \cdots + x_{n/2} x_n)$. (Recall the notation $\chi(b) = (-1)^b$.) For $n = 2$ this is just the $\mathrm{AND}_2$ function $\tfrac{1}{2} + \tfrac{1}{2} x_1 + \tfrac{1}{2} x_2 - \tfrac{1}{2} x_1x_2$, which is bent by inspection. For general $n$, the bentness is a consequence of the following fact (deferred to the exercises):

Proposition 27 Let $f : {\mathbb F}_2^n \to \{-1,1\}$ and $g : {\mathbb F}_2^{n'} \to \{-1,1\}$ be bent. Then $f \oplus g : {\mathbb F}_2^{n+n'} \to \{-1,1\}$ defined by $(f \oplus g)(x, x') = f(x) g(x')$ is also bent.
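
As a brute-force sanity check (a small Python illustration, not one of the exercises), one can confirm that $\mathrm{IP}_4$, which is $\mathrm{IP}_2 \oplus \mathrm{IP}_2$ up to permuting coordinates, is indeed bent:

from itertools import product

def ip(x):
    # IP_n on F_2^n (n even): chi(x_1 x_{n/2+1} + ... + x_{n/2} x_n), output in {-1,+1}.
    half = len(x) // 2
    b = sum(x[i] * x[half + i] for i in range(half)) % 2
    return (-1) ** b

def fourier_coefficient(f, gamma, n):
    # hat{f}(gamma) = E_x[f(x) * (-1)^{<gamma,x>}], x uniform on F_2^n.
    total = sum(f(x) * (-1) ** (sum(g * xi for g, xi in zip(gamma, x)) % 2)
                for x in product((0, 1), repeat=n))
    return total / 2 ** n

n = 4
coeffs = [fourier_coefficient(ip, gamma, n) for gamma in product((0, 1), repeat=n)]
# Bentness: every Fourier coefficient has magnitude 2^{-n/2} = 1/4.
assert all(abs(abs(c) - 2 ** (-n / 2)) < 1e-9 for c in coeffs)
print("IP_4 is bent")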

Another example of a bent function is the complete quadratic function $\mathrm{CQ}_n(x) = \chi(\sum_{1 \leq i < j \leq n} x_ix_j)$ from Exercise 1.1. Actually, in some sense it is the “same” example, as we now explain.

Proposition 28 Let $f : {\mathbb F}_2^n \to \{-1,1\}$ be bent. Then $\pm \chi_\gamma \cdot f$ is bent for any $\gamma \in {\widehat{{\mathbb F}_2^n}}$, as is $f \circ M$ for any invertible linear transformation $M : {\mathbb F}_2^n \to {\mathbb F}_2^n$.


Proof: Multiplying by $-1$ does not change bentness, and both $\chi_\gamma \cdot f$ and $f \circ M$ have the same Fourier coefficients as $f$ up to a permutation. $\Box$

We claim that $\mathrm{CQ}_n$ arises from $f = \mathrm{IP}_n$ as in Proposition 28. In the case $n = 4$, this is because $\mathop{{\textstyle \sum}}_{1 \leq i < j \leq 4} x_ix_j = (x_1 + x_3)(x_2+x_3) + (x_1+x_2+x_3)x_4 + x_3$ over ${\mathbb F}_2$; thus \[ \mathrm{CQ}_4(x) = \mathrm{IP}_4(Mx) \cdot \chi_{(0,0,1,0)}(x), \quad \text{where } M = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \text{ is invertible}. \] The general case is left to the exercises. In fact, every bent $f$ with $\deg_{{\mathbb F}_2}(f) \leq 2$ arises by applying Proposition 28 to the inner product mod $2$ function; again, see the exercises. There are other large families of bent functions; however, the problem of classifying all bent functions is open and seems difficult. We content ourselves by describing one more family:
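
The $n = 4$ identity is easy to check mechanically; here is a short Python sketch (purely illustrative) verifying $\mathrm{CQ}_4(x) = \mathrm{IP}_4(Mx) \cdot \chi_{(0,0,1,0)}(x)$ on all $16$ inputs:

from itertools import combinations, product

def chi(b):
    return (-1) ** (b % 2)

def cq(x):
    # Complete quadratic: CQ_n(x) = chi(sum_{i<j} x_i x_j).
    return chi(sum(x[i] * x[j] for i, j in combinations(range(len(x)), 2)))

def ip4(x):
    # IP_4(x) = chi(x_1 x_3 + x_2 x_4), written 0-indexed.
    return chi(x[0] * x[2] + x[1] * x[3])

# The matrix M from the text, acting on column vectors over F_2.
M = [(1, 0, 1, 0),
     (1, 1, 1, 0),
     (0, 1, 1, 0),
     (0, 0, 0, 1)]

def apply_matrix(M, x):
    return tuple(sum(m * xi for m, xi in zip(row, x)) % 2 for row in M)

for x in product((0, 1), repeat=4):
    # chi_{(0,0,1,0)}(x) = (-1)^{x_3}
    assert cq(x) == ip4(apply_matrix(M, x)) * chi(x[2])
print("CQ_4(x) = IP_4(Mx) * chi_{(0,0,1,0)}(x) for all x")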

Proposition 29 Let $f : {\mathbb F}_2^{2n} \to \{-1,1\}$ be defined by $f(x,y) = \mathrm{IP}_{2n}(x,y)g(y)$ where $g : {\mathbb F}_2^n \to \{-1,1\}$ is arbitrary. Then $f$ is bent.


Proof: We will think of $y \in {\widehat{{\mathbb F}_2^n}}$, so $\mathrm{IP}_{2n}(x,y) = \chi_y(x)$. We’ll also write a generic $\gamma \in \widehat{{\mathbb F}_2^{2n}}$ as $(\gamma_1, \gamma_2)$. Then indeed \begin{multline*} \widehat{f}(\gamma) = \mathop{\bf E}_{{\boldsymbol{x}}, \boldsymbol{y}}[\chi_{\boldsymbol{y}}({\boldsymbol{x}}) g(\boldsymbol{y}) \chi_{(\gamma_1, \gamma_2)}({\boldsymbol{x}}, \boldsymbol{y})] = \mathop{\bf E}_{\boldsymbol{y}} \left[g(\boldsymbol{y}) \chi_{\gamma_2}(\boldsymbol{y}) \mathop{\bf E}_{{\boldsymbol{x}}}[\chi_{\boldsymbol{y}+\gamma_1}({\boldsymbol{x}})]\right] \\= \mathop{\bf E}_{\boldsymbol{y}} [g(\boldsymbol{y}) \chi_{\gamma_2}(\boldsymbol{y}) \boldsymbol{1}_{\{\boldsymbol{y}+\gamma_1 = 0\}}] = 2^{-n} g(\gamma_1)\chi_{\gamma_2}(\gamma_1) = \pm 2^{-n}. \quad \Box\end{multline*}
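
To see Proposition 29 in action, the following sketch (an illustration with $n = 3$ and a randomly chosen $g$) checks by brute force that every Fourier coefficient of $f(x,y) = \mathrm{IP}_{2n}(x,y)\,g(y)$ has magnitude $2^{-n}$:

import random
from itertools import product

n = 3
random.seed(0)
# An arbitrary g : F_2^n -> {-1,+1}, stored as a lookup table.
g = {y: random.choice((-1, 1)) for y in product((0, 1), repeat=n)}

def f(x, y):
    # f(x,y) = IP_{2n}(x,y) * g(y) = (-1)^{<x,y>} * g(y).
    return (-1) ** (sum(a * b for a, b in zip(x, y)) % 2) * g[y]

def fourier_coefficient(gamma1, gamma2):
    total = sum(f(x, y) * (-1) ** (sum(a * b for a, b in zip(gamma1 + gamma2, x + y)) % 2)
                for x in product((0, 1), repeat=n)
                for y in product((0, 1), repeat=n))
    return total / 2 ** (2 * n)

coeffs = [fourier_coefficient(g1, g2)
          for g1 in product((0, 1), repeat=n)
          for g2 in product((0, 1), repeat=n)]
assert all(abs(abs(c) - 2 ** (-n)) < 1e-9 for c in coeffs)
print("f is bent: every |hat f| equals", 2 ** (-n))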

We next discuss explicit constructions of small $\epsilon$-biased sets, which are of considerable use in the field of algorithmic derandomization. The most basic step in a randomized algorithm is drawing a string ${\boldsymbol{x}} \sim {\mathbb F}_2^n$ from the uniform distribution; however this has the “cost” of generating $n$ independent, random bits. But sometimes it’s not necessary that ${\boldsymbol{x}}$ precisely have the uniform distribution; it may suffice that ${\boldsymbol{x}}$ be drawn from an $\epsilon$-biased density. If we can deterministically find an $\epsilon$-biased (multi-)set $A$ of cardinality, say, $2^\ell$, then we can generate ${\boldsymbol{x}} \sim \varphi_A$ using just $\ell$ independent random bits. We will see some example derandomizations of this nature in §6.4; for now we discuss constructions.

Fix $\ell \in {\mathbb N}^+$ and recall that there exists a finite field ${\mathbb F}_{2^\ell}$ with exactly $2^\ell$ elements. It is easy to find an explicit representation for ${\mathbb F}_{2^\ell}$ — a complete addition and multiplication table, say — in time $2^{O(\ell)}$. (In fact, one can compute within ${\mathbb F}_{2^\ell}$ even in deterministic $\mathrm{poly}(\ell)$ time.) The field elements $x \in {\mathbb F}_{2^\ell}$ are naturally encoded by distinct $\ell$-bit vectors; we will write $\mathrm{enc} : {\mathbb F}_{2^\ell} \to {\mathbb F}_2^\ell$ for this encoding. The encoding is linear; i.e., it satisfies $\mathrm{enc}(0) = (0, \dots, 0)$ and $\mathrm{enc}(x+y) = \mathrm{enc}(x) + \mathrm{enc}(y)$ for all $x, y \in {\mathbb F}_{2^\ell}$.
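
For concreteness, here is a minimal Python sketch of arithmetic in ${\mathbb F}_{2^\ell}$, taking $\ell = 3$ and the irreducible modulus $x^3 + x + 1$ purely as example choices. Field elements are encoded as $\ell$-bit integers, addition is bitwise XOR, and the encoding $\mathrm{enc}$ is linear as claimed:

ELL = 3
MODULUS = 0b1011  # x^3 + x + 1, an irreducible polynomial over F_2 (example choice)

def gf_add(a, b):
    # Addition in F_{2^ell} is bitwise XOR of the ell-bit encodings.
    return a ^ b

def gf_mul(a, b):
    # Carry-less multiplication, reducing modulo MODULUS as we go.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a >> ELL:
            a ^= MODULUS
        b >>= 1
    return result

def enc(a):
    # enc : F_{2^ell} -> F_2^ell, the natural bit-vector encoding.
    return tuple((a >> i) & 1 for i in range(ELL))

# Linearity of enc: enc(0) = 0 and enc(x + y) = enc(x) + enc(y).
assert enc(0) == (0,) * ELL
for x in range(2 ** ELL):
    for y in range(2 ** ELL):
        assert enc(gf_add(x, y)) == tuple((u + v) % 2 for u, v in zip(enc(x), enc(y)))

# Multiplication by any nonzero element permutes the nonzero elements,
# so every nonzero element is invertible (as befits a field).
for a in range(1, 2 ** ELL):
    assert sorted(gf_mul(a, b) for b in range(1, 2 ** ELL)) == list(range(1, 2 ** ELL))
print("F_8 arithmetic and the linear encoding check out")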

Theorem 30 There is a deterministic algorithm which, given $n \geq 1$ and $0 < \epsilon \leq 1/2$, runs in $\mathrm{poly}(n/\epsilon)$ time and outputs a multiset $A \subseteq {\mathbb F}_2^n$ of cardinality at most $16(n/\epsilon)^2$ with the property that $\varphi_A$ is an $\epsilon$-biased density.


Proof: It suffices to obtain cardinality $(n/\epsilon)^2$ under the assumption that $\epsilon = 2^{-t}$ and $n = 2^{\ell - t}$ are integer powers of $2$. We will describe a probability density $\varphi$ on ${\mathbb F}_2^n$ by giving a procedure for drawing a string $\boldsymbol{y} \sim \varphi$ which uses $2\ell$ independent random bits. $A$ will be the multiset of $2^{2\ell} = (n/\epsilon)^2$ possible outcomes for $\boldsymbol{y}$. It will be clear that $A$ can be generated in deterministic polynomial time. The goal will be to show that $\varphi$ is $2^{-t}$-biased.

To draw $\boldsymbol{y} \sim \varphi$, first choose $\boldsymbol{r}, \boldsymbol{s} \sim {\mathbb F}_{2^\ell}$ independently and uniformly. This uses $2\ell$ independent random bits. Then define the $i$th coordinate of $\boldsymbol{y}$ by \[ \boldsymbol{y}_i = \langle \mathrm{enc}(\boldsymbol{r}^i), \mathrm{enc}(\boldsymbol{s}) \rangle, \quad i \in [n], \] where the inner product $\langle \cdot, \cdot \rangle$ takes place in ${\mathbb F}_2^\ell$. Fixing $\gamma \in {\widehat{{\mathbb F}_2^n}} \setminus \{0\}$, we need to argue that $|\mathop{\bf E}[\chi_\gamma(\boldsymbol{y})]| \leq 2^{-t}$. Now over ${\mathbb F}_2^\ell$, \begin{align*} \langle \gamma, \boldsymbol{y} \rangle = \sum_{i=1}^n \gamma_i \langle \mathrm{enc}(\boldsymbol{r}^i), \mathrm{enc}(\boldsymbol{s}) \rangle = \Bigl\langle \sum_{i=1}^n \gamma_i \mathrm{enc}(\boldsymbol{r}^i), \mathrm{enc}(\boldsymbol{s}) \Bigr\rangle = \langle \mathrm{enc}(\mathop{{\textstyle \sum}}_{i=1}^n \gamma_i \boldsymbol{r}^i), \mathrm{enc}(\boldsymbol{s}) \rangle, \end{align*} where the last step used linearity of $\mathrm{enc}$. Thus \begin{equation} \label{eqn:aghp} \mathop{\bf E}[\chi_\gamma(\boldsymbol{y})] = \mathop{\bf E}[(-1)^{\langle \gamma, \boldsymbol{y} \rangle}] = \mathop{\bf E}_{\boldsymbol{r}}\left[\mathop{\bf E}_{\boldsymbol{s}}[(-1)^{\langle \mathrm{enc}(p_\gamma(\boldsymbol{r})), \mathrm{enc}(\boldsymbol{s})\rangle}]\right], \end{equation} where $p_\gamma : {\mathbb F}_{2^\ell} \to {\mathbb F}_{2^\ell}$ is the polynomial $a \mapsto \gamma_1 a + \gamma_2 a^2 + \cdots + \gamma_n a^n$. This polynomial is of degree at most $n$, and is nonzero since $\gamma \neq 0$. Hence it has at most $n$ roots (zeroes) over the field ${\mathbb F}_{2^\ell}$. Whenever $\boldsymbol{r}$ is one of these roots, $\mathrm{enc}(p_\gamma(\boldsymbol{r})) = 0$ and the inner expectation in \eqref{eqn:aghp} is $1$. But whenever $\boldsymbol{r}$ is not a root of $p_\gamma$ we have $\mathrm{enc}(p_\gamma(\boldsymbol{r})) \neq 0$ and so the inner expectation is $0$. We deduce that \[ 0 \leq \mathop{\bf E}[\chi_\gamma(\boldsymbol{y})] \leq \mathop{\bf Pr}[\boldsymbol{r} \text{ is a root of } p_\gamma] \leq \frac{n}{2^\ell} = 2^{-t}, \] which is stronger than what we need. $\Box$
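
The construction in this proof is straightforward to implement. The following Python sketch (with the concrete choices $\ell = 4$, $t = 2$, so $n = 4$ and $\epsilon = 1/4$, and the modulus $x^4 + x + 1$ for ${\mathbb F}_{16}$, all of which are illustrative assumptions) generates the multiset $A$ of all $2^{2\ell} = 256$ outcomes for $\boldsymbol{y}$ and verifies the bias bound directly:

from itertools import product

ELL, T = 4, 2
N = 2 ** (ELL - T)      # n = 4 coordinates
EPS = 2.0 ** (-T)       # epsilon = 1/4
MODULUS = 0b10011       # x^4 + x + 1, irreducible over F_2 (example choice)

def gf_mul(a, b):
    # Multiplication in F_{2^ELL}; elements are encoded as ELL-bit integers.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a >> ELL:
            a ^= MODULUS
        b >>= 1
    return result

def inner(a, b):
    # <enc(a), enc(b)> over F_2: parity of the AND of the bit encodings.
    return bin(a & b).count("1") % 2

def draw_y(r, s):
    # y_i = <enc(r^i), enc(s)> for i = 1..N.
    y, power = [], 1
    for _ in range(N):
        power = gf_mul(power, r)
        y.append(inner(power, s))
    return tuple(y)

# A is the multiset of outcomes over all 2^(2*ELL) seeds (r, s).
A = [draw_y(r, s) for r in range(2 ** ELL) for s in range(2 ** ELL)]

max_bias = 0.0
for gamma in product((0, 1), repeat=N):
    if gamma == (0,) * N:
        continue
    corr = sum((-1) ** (sum(g * yi for g, yi in zip(gamma, y)) % 2) for y in A)
    max_bias = max(max_bias, abs(corr) / len(A))
assert max_bias <= EPS + 1e-12
print("max bias over nonzero gamma:", max_bias, "<= epsilon =", EPS)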

The bound of $O(n/\epsilon)^2$ in this theorem is fairly close to being optimally small; see the exercises and notes for this chapter.

Another useful tool in derandomization is that of $k$-wise independent distributions. Sometimes a randomized algorithm using $n$ independent random bits will still work assuming only that every subset of $k$ of the bits is independent. Thus as with $\epsilon$-biased sets, it’s worthwhile to come up with deterministic constructions of small sets $A \subset {\mathbb F}_2^n$ such that the density function $\varphi_A$ is $k$-wise independent (i.e., $(0, k)$-regular). The best known examples have the additional pleasant feature that $A$ is a linear subspace of ${\mathbb F}_2^n$; in this case, $k$-wise independence is easy to characterize:

Proposition 31 Let $H$ be an $m \times n$ matrix over ${\mathbb F}_2$ and let $A \leq {\mathbb F}_2^n$ be the span of $H$’s rows. Then $\varphi_A$ is $k$-wise independent if and only if any sum of at most $k$ columns of $H$ is nonzero in ${\mathbb F}_2^m$. (We exclude the “empty” sum.)


Proof: Since $\varphi_A = \sum_{\gamma \in A^\perp} \chi_\gamma$ (Proposition 3.11), $\varphi_A$ is $k$-wise independent if and only if $|\gamma| > k$ for every $\gamma \in A^\perp \setminus \{0\}$. But $\gamma \in A^\perp$ if and only if $H \gamma = 0$, and $H\gamma$ is precisely the sum of the columns of $H$ indexed by the nonzero coordinates of $\gamma$. $\Box$
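
Proposition 31 can be checked on a toy example. The sketch below (purely illustrative) takes a $2 \times 3$ matrix $H$ whose columns satisfy the condition for $k = 2$ but not $k = 3$, and confirms that the uniform density on the row span is pairwise independent but not $3$-wise independent:

from itertools import combinations, product

# A toy example: H is 2 x 3 over F_2, and A <= F_2^3 is the span of its rows.
H = [(1, 0, 1),
     (0, 1, 1)]
A = sorted({tuple(sum(c * h for c, h in zip(coeffs, col)) % 2 for col in zip(*H))
            for coeffs in product((0, 1), repeat=len(H))})

def kwise_independent(A, k):
    # Directly check that the uniform density on A is k-wise independent:
    # every set of k coordinates takes each of the 2^k patterns equally often.
    n = len(A[0])
    for coords in combinations(range(n), k):
        counts = {}
        for a in A:
            pattern = tuple(a[i] for i in coords)
            counts[pattern] = counts.get(pattern, 0) + 1
        if len(counts) != 2 ** k or set(counts.values()) != {len(A) // 2 ** k}:
            return False
    return True

def column_condition(H, k):
    # Check that no nonempty sum of at most k columns of H is zero.
    cols = list(zip(*H))
    for size in range(1, k + 1):
        for subset in combinations(cols, size):
            if not any(sum(bits) % 2 for bits in zip(*subset)):
                return False
    return True

for k in (1, 2, 3):
    assert kwise_independent(A, k) == column_condition(H, k)
print("this A is 2-wise but not 3-wise independent, matching the column condition")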

Here is a simple construction of such a matrix with $m \sim k \log n$:

Theorem 32 Let $k, \ell \in {\mathbb N}^+$ and assume $n = 2^{\ell} \geq k$. Then for $m = (k-1)\ell +1$, there is a matrix $H \in {\mathbb F}_2^{m \times n}$ such that any sum of at most $k$ columns of $H$ is nonzero in ${\mathbb F}_2^m$.


Proof: Write $\alpha_1, \dots, \alpha_n$ for the elements of the finite field ${\mathbb F}_n$, and consider the following matrix $H' \in {\mathbb F}_{n}^{k \times n}$: \[ H' = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{k-1} & \alpha_2^{k-1} & \cdots & \alpha_n^{k-1} \end{bmatrix}. \] Any submatrix of $H'$ formed by choosing $k$ columns is a Vandermonde matrix and is therefore nonsingular. Hence any subset of $k$ columns of $H'$ is linearly independent in ${\mathbb F}_{n}^k$. In particular, any sum of at most $k$ columns of $H'$ is nonzero in ${\mathbb F}_n^k$. Now form $H \in {\mathbb F}_2^{m \times n}$ from $H'$ by replacing each entry $\alpha_j^i$ ($i > 0$) with $\mathrm{enc}(\alpha_j^i)$, thought of as a column vector in ${\mathbb F}_2^\ell$. Since $\mathrm{enc}$ is a linear map we may conclude that any sum of at most $k$ columns of $H$ is nonzero in ${\mathbb F}_2^{m}$. $\Box$
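
Here is a small Python sketch of this construction (with $\ell = 3$, so $n = 8$, $k = 3$, $m = 7$, and the modulus $x^3 + x + 1$ for ${\mathbb F}_8$, all chosen purely for illustration); it builds $H$ column by column and confirms by brute force that no nonempty sum of at most $k$ columns is zero:

from itertools import combinations

ELL, K = 3, 3
N = 2 ** ELL              # n = 8
M = (K - 1) * ELL + 1     # m = 7
MODULUS = 0b1011          # x^3 + x + 1, irreducible over F_2 (example choice)

def gf_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a >> ELL:
            a ^= MODULUS
        b >>= 1
    return result

def column(alpha):
    # The column of H for field element alpha: a leading 1, then enc(alpha^i) for i = 1..K-1.
    bits, power = [1], 1
    for _ in range(1, K):
        power = gf_mul(power, alpha)
        bits += [(power >> j) & 1 for j in range(ELL)]
    return tuple(bits)    # length m = (K-1)*ELL + 1

H_columns = [column(alpha) for alpha in range(N)]

# No nonempty sum of at most K columns may be zero in F_2^m.
for size in range(1, K + 1):
    for cols in combinations(H_columns, size):
        assert any(sum(bits) % 2 for bits in zip(*cols))
print("no nonempty sum of at most", K, "columns of H is zero")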

Corollary 33 There is a deterministic algorithm which, given integers $1 \leq k \leq n$, runs in $\mathrm{poly}(n^k)$ time and outputs a subspace $A \leq {\mathbb F}_2^n$ of cardinality at most $2^kn^{k-1}$ such that $\varphi_A$ is $k$-wise independent.


Proof: It suffices to assume $n =2^\ell$ is a power of $2$ and then obtain cardinality $2n^{k-1} = 2^{(k-1)\ell+1}$. In this case, the algorithm constructs $H$ as in Theorem 32 and takes $A$ to be the span of its rows. The fact that $\varphi_A$ is $k$-wise independent is immediate from Proposition 31. $\Box$

For constant $k$ this upper bound of $O(n^{k-1})$ is close to optimal. It can be improved to $O(n^{\lfloor k/2 \rfloor})$, and this is tight: there is a matching lower bound of $\Omega(n^{\lfloor k/2 \rfloor})$ for constant $k$; see the exercises. We conclude this section by noting that taking an $\epsilon$-biased density within a $k$-wise independent subspace yields an $(\epsilon,k)$-wise independent density:

Lemma 34 Suppose $H \in {\mathbb F}_2^{m \times n}$ is such that any sum of at most $k$ columns of $H$ is nonzero in ${\mathbb F}_2^m$. Let $\varphi$ be an $\epsilon$-biased density on ${\mathbb F}_2^m$. Consider drawing $\boldsymbol{y} \sim \varphi$ and setting $\boldsymbol{z} = \boldsymbol{y}^\top H \in {\mathbb F}_2^n$. Then the density of $\boldsymbol{z}$ is $(\epsilon,k)$-wise independent.


Proof: Suppose $\gamma \in {\widehat{{\mathbb F}_2^n}}$ has $0 < |\gamma| \leq k$. Then $H\gamma$ is nonzero by assumption and hence $ |\mathop{\bf E}[\chi_\gamma(\boldsymbol{z})]| = |\mathop{\bf E}_{\boldsymbol{y} \sim \varphi}[(-1)^{\boldsymbol{y}^\top H \gamma}]| \leq \epsilon $ since $\varphi$ is $\epsilon$-biased. $\Box$

As a consequence, combining the constructions of Theorem 30 and Theorem 32 gives an $(\epsilon,k)$-wise independent distribution which can be sampled from using only $O(\log k + \log \log(n) + \log(1/\epsilon))$ independent random bits:

Theorem 35 There is a deterministic algorithm which, given integers $1 \leq k \leq n$ and also $0 < \epsilon \leq 1/2$, runs in time $\mathrm{poly}(n/\epsilon)$ and outputs a multiset $A \subseteq {\mathbb F}_2^n$ of cardinality $O(k \log(n)/\epsilon)^2$ (a power of $2$) such that $\varphi_A$ is $(\epsilon,k)$-wise independent.
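
To make Theorem 35 concrete, here is a Python sketch combining the two constructions as in Lemma 34. The parameters ($k = 3$ and $n = 8$ via ${\mathbb F}_8$ with modulus $x^3 + x + 1$; an $\epsilon$-biased set on ${\mathbb F}_2^m$ built over ${\mathbb F}_{32}$ with modulus $x^5 + x^2 + 1$, giving $\epsilon = 7/32$) are illustrative choices only:

from itertools import combinations

def gf_mul(a, b, ell, modulus):
    # Multiplication in F_{2^ell}, elements encoded as ell-bit integers.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a >> ell:
            a ^= modulus
        b >>= 1
    return result

# --- Theorem 32: H in F_2^{m x n} with no zero sum of at most k columns ---
K, ELL_H = 3, 3
N = 2 ** ELL_H               # n = 8
M = (K - 1) * ELL_H + 1      # m = 7

def h_column(alpha):
    bits, power = [1], 1
    for _ in range(1, K):
        power = gf_mul(power, alpha, ELL_H, 0b1011)   # F_8 with x^3 + x + 1
        bits += [(power >> j) & 1 for j in range(ELL_H)]
    return bits

H = [h_column(alpha) for alpha in range(N)]           # stored column by column

# --- Theorem 30: an epsilon-biased multiset of strings y in F_2^m ---
ELL = 5                      # F_32 with modulus x^5 + x^2 + 1
EPS = M / 2 ** ELL           # bias guarantee m / 2^ell = 7/32

def draw_y(r, s):
    y, power = [], 1
    for _ in range(M):
        power = gf_mul(power, r, ELL, 0b100101)
        y.append(bin(power & s).count("1") % 2)
    return y

ys = [draw_y(r, s) for r in range(2 ** ELL) for s in range(2 ** ELL)]

# --- Lemma 34: z = y^T H gives an (eps, k)-wise independent multiset ---
zs = [[sum(y[j] * H[i][j] for j in range(M)) % 2 for i in range(N)] for y in ys]

max_bias = 0.0
for size in range(1, K + 1):
    for support in combinations(range(N), size):
        corr = sum((-1) ** (sum(z[i] for i in support) % 2) for z in zs)
        max_bias = max(max_bias, abs(corr) / len(zs))
assert max_bias <= EPS + 1e-12
print("max bias over 0 < |gamma| <=", K, "is", max_bias, "<=", EPS)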
