CryptoDB
Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning
Authors: Mingshu Cong, Sherman S. M. Chow, Siu Ming Yiu, Tsz Hon Yuen
Conference: ASIACRYPT 2025
Abstract: Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof and verification, and architecture privacy. Hurdles persist because we lack both a succinct commitment to the full neural network and a heterogeneous-model framework, leaving verification dependent on explicit architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy. Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Honed for this framework, we formalize all relations directly in matrix or vector form---a more intuitive representation for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof from folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero-knowledge, succinctness, and architecture privacy in a single VML system.
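The abstract's LiteBullet proof is not specified on this page, but the "inner-product proof from folding" it refers to builds on a well-known halve-and-fold recursion (as in Bulletproofs-style arguments): each round sends two cross terms and halves the vectors, giving a transcript of $O(\log n)$ elements. Below is a minimal, illustrative sketch of that recursion over a toy prime field, omitting commitments and zero-knowledge entirely; all names are hypothetical, and Python's built-in `hash` stands in for a Fiat--Shamir challenge.

```python
# Toy sketch of the folding recursion behind inner-product arguments.
# Illustrative only: no commitments, no zero-knowledge, and hash()
# is a stand-in challenge, not a cryptographic Fiat-Shamir transform.

P = 2**61 - 1  # small prime field for illustration

def inner(a, b):
    """Inner product of two vectors mod P."""
    return sum(x * y for x, y in zip(a, b)) % P

def fold_prove(a, b):
    """Reduce the claim c = <a, b> to a length-1 claim.

    Each round halves the vectors and emits two cross terms (L, R),
    so the transcript has O(log n) entries.
    """
    transcript = []
    while len(a) > 1:
        h = len(a) // 2
        aL, aR = a[:h], a[h:]
        bL, bR = b[:h], b[h:]
        L, R = inner(aL, bR), inner(aR, bL)   # cross terms
        transcript.append((L, R))
        x = hash((L, R, len(a))) % (P - 1) + 1  # nonzero challenge
        xinv = pow(x, P - 2, P)                 # x^{-1} mod P
        # Fold: a' = x*aL + aR, b' = x^{-1}*bL + bR, so that
        # <a', b'> = c + x*L + x^{-1}*R for the old claim c.
        a = [(x * u + v) % P for u, v in zip(aL, aR)]
        b = [(xinv * u + v) % P for u, v in zip(bL, bR)]
    return transcript, a[0], b[0]

def fold_verify(c, transcript, a0, b0):
    """Replay the challenges, update the claim, check the base case."""
    n = 2 ** len(transcript)
    for L, R in transcript:
        x = hash((L, R, n)) % (P - 1) + 1
        xinv = pow(x, P - 2, P)
        c = (c + x * L + xinv * R) % P
        n //= 2
    return c == a0 * b0 % P
```

The key invariant is that folding with challenge $x$ maps a claim $c = \langle a, b \rangle$ to $c' = c + xL + x^{-1}R$, so after $\log_2 n$ rounds the verifier checks a single field multiplication against the two surviving scalars.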
BibTeX
@inproceedings{asiacrypt-2025-36160,
  title={Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning},
  publisher={Springer-Verlag},
  author={Mingshu Cong and Sherman S. M. Chow and Siu Ming Yiu and Tsz Hon Yuen},
  year={2025}
}