2022-12-11
Recursive arguments are a technique for compressing what would be a linear-sized proof into a logarithmic one. They first showed up in the context of electronic voting as a way to prove an honest permutation of ballots, and have since been used to build more general succinct arguments.
Given a polynomial \(t(x) \in \mathbb{F}\_q[x]\) we’d like to argue the validity of a commitment to the evaluation of \(t\) at a given point. Polynomial evaluation can be expressed as an inner product of the form \(t(x) = \sum_{i = 0}^{d-1} t_i x^i = \langle \vec{t}, \vec{x} \rangle\) where \(\vec{t} = (t_0, t_1, t_2, \dots, t_{d-1}) \in \mathbb{F}_{q}^{d}\) is a vector of coefficients and \(\vec{x} = (1, x, x^2, \dots, x^{d-1})\) is a vector of successive powers of \(x\).
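As a quick illustration of this identity, here is a toy check in Sage (with a small illustrative field and degree, not the parameters used below): evaluating the polynomial the usual way agrees with the inner product of its coefficients against the successive powers of the evaluation point.
# Toy check: polynomial evaluation as an inner product (illustrative parameters only)
F = GF(101)
coeffs = [F(3), F(1), F(4), F(1)]            # t_0, t_1, t_2, t_3
z = F(7)                                     # evaluation point
powers = [z^i for i in range(len(coeffs))]   # (1, z, z^2, z^3)
S.<X> = PolynomialRing(F)
assert S(coeffs)(z) == sum([ti*zi for (ti, zi) in zip(coeffs, powers)])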
Fix some cyclic group \(\mathbb{G}\) where the discrete log problem is hard and generate the following public parameters.
# Polynomial degree
d = (1 << 10)

# Curve of prime order
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
E = EllipticCurve(GF(p), [0, 7])
q = E.order()

# Scalar field
K = GF(q)

# Group generators
g = [E.random_element() for _ in range(d)]
h = [E.random_element() for _ in range(d)]
u = E.random_element()
The Pedersen hash of two vectors is the group element satisfying
\[P(\vec{t},\vec{x}) = \langle \vec{t}, \vec{g}\rangle + \langle \vec{x}, \vec{h}\rangle \in \mathbb{G}\]
where \(\vec{g}\) and \(\vec{h}\) are the fixed generator vectors defined above. This hash function is desirable because of its additive properties
\[P(\vec{t},\vec{x}) + P(\vec{v},\vec{y}) = P(\vec{t} + \vec{v}, \vec{x} + \vec{y})\]
\[\alpha P(\vec{t},\vec{x}) = P(\alpha \vec{t}, \alpha \vec{x})\]
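As a quick sanity check of these two properties, here is a throwaway sketch reusing the parameters \(d\), \(K\), \(E\), \(\vec{g}\), \(\vec{h}\) generated above; the helper \(P\) below is illustrative only and is not part of the scheme's code.
# Sanity check of the additive properties (P is an illustrative helper only)
P = lambda a, b: sum([i*j for (i,j) in zip(a, g)], E(0)) + sum([i*j for (i,j) in zip(b, h)], E(0))
t1 = vector(K, [K.random_element() for _ in range(d)])
x1 = vector(K, [K.random_element() for _ in range(d)])
t2 = vector(K, [K.random_element() for _ in range(d)])
x2 = vector(K, [K.random_element() for _ in range(d)])
alpha = K.random_element()
assert P(t1, x1) + P(t2, x2) == P(t1 + t2, x1 + x2)
assert alpha*P(t1, x1) == P(alpha*t1, alpha*x1)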
We can extend this to commit to the inner product of the two vectors
\[\text{com}(\vec{t}, \vec{x}) = P(\vec{t},\vec{x}) + \langle \vec{t}, \vec{x}\rangle u \in \mathbb{G}\]
where \(u\) is the fixed group generator defined above.
# Inner product
ip = lambda t,x: sum([i*j for (i,j) in zip(t, x)])

# Commitment: Pedersen hash plus the inner product term
com = lambda t,x,g,h: ip(t, g) + ip(x, h) + ip(t, x)*u

# Sample coefficient vectors
t = vector(K, [K.random_element() for _ in range(d)])
x = vector(K, [K.random_element() for _ in range(d)])

# Derive commitment
c = com(t, x, g, h)
At this point, we’ve committed to two vectors along with their inner product using a single group element. The next step is to generate a proof of knowledge of this commitment’s pre-image to convince a verifier that \(C\) is of the right form.
# Left half of a vector
L = lambda v: v[:len(v)//2]

# Right half of a vector
R = lambda v: v[len(v)//2:]
Let \(\vec{v}\_{L}\) and \(\vec{v}\_{R}\) denote the left and right halves of a vector \(\vec{v}\). At each step of the argument, the prover sends two intermediate commitments that account for vectors of half the original size.
\[ \begin{equation} \begin{cases} C_{L} &= \text{com}(\vec{t}_{R}, \vec{x}_{L}, \vec{g}_{L}, \vec{h}_{R}) \in \mathbb{G} \\ C_{R} &= \text{com}(\vec{t}_{L}, \vec{x}_{R}, \vec{g}_{R}, \vec{h}_{L}) \in \mathbb{G} \end{cases} \end{equation} \]
The verifier responds with a random challenge \(r \in \mathbb{F}\_{q}\) which is used to update the parameters for the next step
\[ \begin{equation} \begin{cases} \vec{x} &\leftarrow \vec{x}_{L} + r^{-1}\vec{x}_{R}\\ \vec{g} &\leftarrow \vec{g}_{L} + r^{-1}\vec{g}_{R}\\ \vec{t} &\leftarrow \vec{t}_{L} + r\vec{t}_{R}\\ \vec{h} &\leftarrow \vec{h}_{L} + r\vec{h}_{R}\\ C &\leftarrow r C_{L} + C + r^{-1} C_{R}\\ \end{cases} \end{equation} \]
The trick here is that the last step lets the verifier indirectly compute the updated commitment
\[r C_{L} + C + r^{-1} C_{R} = \text{com}(\vec{t}, \vec{x}, \vec{g}, \vec{h})\]
without ever using the vectors themselves. If we continue to halve at each step, we’ll eventually reach our base case where \(\vec{t}\) and \(\vec{x}\) are of length one. At this point, we send over the two scalars and conclude the proof.
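To see why the identity holds, expand \(\text{com}(\vec{t}, \vec{x}, \vec{g}, \vec{h})\) with the updated vectors: the like-half terms reassemble the original \(C\), while the cross terms collect into \(r C_L\) and \(r^{-1} C_R\). The sketch below (reusing \(\vec{t}\), \(\vec{x}\), \(\vec{g}\), \(\vec{h}\), and the commitment \(c\) from above, with a random stand-in for the challenge) checks a single round of this folding:
# One-round sanity check of the folding identity (illustrative only)
r0 = K.random_element()
r0_inv = r0^-1
cL0 = com(R(t), L(x), L(g), R(h))
cR0 = com(L(t), R(x), R(g), L(h))
t0 = vector(K, L(t)) + r0*vector(K, R(t))
x0 = vector(K, L(x)) + r0_inv*vector(K, R(x))
g0 = [i + r0_inv*j for (i,j) in zip(L(g), R(g))]
h0 = [i + r0*j for (i,j) in zip(L(h), R(h))]
assert r0*cL0 + c + r0_inv*cR0 == com(t0, x0, g0, h0)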
Technical note: as is common with interactive proofs, we apply the Fiat-Shamir transform and replace the random challenge with a hash of the intermediate commitments to make the scheme non-interactive.
# Fiat-Shamir transform
def fst(l, r):
    from hashlib import blake2b

    # Integer to bytes
    i2b = lambda i: i.to_bytes((i.bit_length() + 7) >> 3, 'big')

    h = blake2b()
    h.update(b'fiat-shamir-transform')
    payload = b''.join([i2b(int(i)) for i in l[:] + r[:]])
    h.update(payload)
    return K(int(h.hexdigest(), base=16))
def prove(t, x, g, h):
    if len(t) == 1:
        return [t, x]
    else:
        # Decompose
        cL = com(R(t), L(x), L(g), R(h))
        cR = com(L(t), R(x), R(g), L(h))

        # Derive challenge
        r = fst(cL, cR)
        r_inv = r^-1

        # Update parameters
        x = vector(K, L(x)) + r_inv*vector(K, R(x))
        g = [i+j for (i,j) in zip(L(g), [r_inv*k for k in R(g)])]
        t = vector(K, L(t)) + r*vector(K, R(t))
        h = [i+j for (i,j) in zip(L(h), [r*k for k in R(h)])]

        return [cL, cR] + prove(t, x, g, h)

proof = prove(t, x, g, h)
assert(len(proof) == 2*log(d, 2) + 2)
At this point, the verifier has the following view:

- The original commitment: \(C \in \mathbb{G}\)
- The set of intermediate commitments: \(\\{ C_L^{(i)}, C_R^{(i)} \\}_{1 \leq i \leq \log d}\)
- The set of challenge values: \(\\{r_i\\}_{1 \leq i \leq \log d}\)
The verifier proceeds through the set of intermediate commitment pairs, deriving the challenge at each step while keeping track of the updated generator vectors (\(\vec{g}, \vec{h}\)) and commitment \(C\). After reaching the base case, the verifier uses the vectors of length one (\(\vec{t}, \vec{x}\)) to check
\[C \stackrel{?}{=} \text{com}(\vec{t}, \vec{x}, \vec{g}, \vec{h})\]
If this relation holds, the verifier is convinced that (1) the intermediate values were generated honestly and (2) the original commitment is of the correct form.
def verify(c, g, h, proof):
    if len(proof) == 2:
        t, x = proof[0], proof[1]
        return c == com(t, x, g, h)
    else:
        (cL, cR) = proof[:2]

        # Derive challenge
        r = fst(cL, cR)
        r_inv = r^-1

        # Update parameters
        g = [i+j for (i,j) in zip(L(g), [r_inv*k for k in R(g)])]
        h = [i+j for (i,j) in zip(L(h), [r*k for k in R(h)])]
        c = (r*cL) + c + (r_inv*cR)

        return verify(c, g, h, proof[2:])

assert(verify(c, g, h, proof))