International Association for Cryptologic Research

CryptoDB

Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning

Authors:
Mingshu Cong , The University of Hong Kong
Sherman S. M. Chow , The Chinese University of Hong Kong
Siu Ming Yiu , The University of Hong Kong
Tsz Hon Yuen , Monash University
Conference: ASIACRYPT 2025
Abstract: Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. These limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of the proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. A recursive zkSNARK then compresses the transcript into a single proof while preserving architecture privacy.
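The DAG representation underlying the framework can be sketched in plaintext as follows. This is a minimal illustrative sketch only: the `Node` class, operation names, and per-node checks are assumptions for exposition, and the actual protocol replaces the plaintext per-node checks below with succinct arguments composed over the graph.

```python
import numpy as np

# Illustrative sketch: a two-layer MLP expressed as a directed acyclic
# graph (DAG) of atomic matrix operations. All names here are
# hypothetical; they are not the paper's API.

class Node:
    def __init__(self, op, inputs):
        self.op = op          # "input", "matmul", "add", or "relu"
        self.inputs = inputs  # names of parent nodes

def evaluate(dag, leaf_values):
    # Evaluate nodes in topological order (the dict below is listed in
    # such an order); leaves carry the input matrices.
    out = dict(leaf_values)
    for name, node in dag.items():
        if node.op == "input":
            continue
        args = [out[p] for p in node.inputs]
        if node.op == "matmul":
            out[name] = args[0] @ args[1]
        elif node.op == "add":
            out[name] = args[0] + args[1]
        elif node.op == "relu":
            out[name] = np.maximum(args[0], 0.0)
    return out

# y = ReLU(x @ W1 + b1) @ W2, decomposed into atomic matrix operations.
dag = {
    "x": Node("input", []), "W1": Node("input", []),
    "b1": Node("input", []), "W2": Node("input", []),
    "z1": Node("matmul", ["x", "W1"]),
    "a1": Node("add", ["z1", "b1"]),
    "h1": Node("relu", ["a1"]),
    "y":  Node("matmul", ["h1", "W2"]),
}

rng = np.random.default_rng(0)
vals = {"x": rng.normal(size=(1, 4)), "W1": rng.normal(size=(4, 8)),
        "b1": rng.normal(size=(1, 8)), "W2": rng.normal(size=(8, 2))}
trace = evaluate(dag, vals)

# Reduction-style view: the root claim about "y" decomposes into one
# local claim per internal node, each checkable from its parents.
for name, node in dag.items():
    if node.op == "matmul":
        assert np.allclose(trace[name],
                           trace[node.inputs[0]] @ trace[node.inputs[1]])
```

In the framework proper, each local claim would be handled through a standardized proof interface per operation type, and a recursive zkSNARK would compress the resulting transcript without revealing the DAG topology.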
BibTeX
@inproceedings{asiacrypt-2025-36178,
  title={Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning},
  booktitle={ASIACRYPT 2025},
  publisher={Springer-Verlag},
  author={Mingshu Cong and Sherman S. M. Chow and Siu Ming Yiu and Tsz Hon Yuen},
  year={2025}
}