Notes on Collaborative zk-SNARKs
May 17, 2024
Thanks to Franco Nieddu and the TACEO team for the discussions and reviews. You can leave comments on the hackMD version of this document here.
Resources
Introduction
Problem: zk-SNARKs do not work on distributed secrets. One party needs to know the whole witness in order to generate a valid proof. But this might not always be the case. An example mentioned in the paper is when multiple credit bureaus hold the data necessary to compute Alice's credit score. Alice needs her credit score to access a loan. The lender also requires a proof that this credit score was generated correctly starting from public commitments of the credit bureaus' databases. Relying on zk-SNARKs alone, such a proof can only be generated by a party who has access to the data of all the credit bureaus.
In Collaborative zk-SNARKs (co-SNARKs), the three parties $P_1$, $P_2$ and $P_3$ each hold a piece of the secret data (the secret witness $w$). They then interact with each other in an MPC protocol to generate a single proof $\pi$, which is a zk-SNARK.
The main difference is in the prove algorithm, which is now an MPC protocol. You can think of $\pi$, in the collaborative setting, as a proof of a joint property of the secrets. The setup and verify algorithms stay unchanged.
Using co-SNARKs it is possible to solve the problem in the credit score application. Now all the credit bureaus can join the MPC and generate a proof attesting to the correct computation of Alice's credit score. Co-SNARKs guarantee that the data belonging to each credit bureau remains private to the party owning it.
Note that in such a scenario the number of parties (or provers) $N$ is known to the fellow provers (but not to the verifier, who only receives a "simple" proof).
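To make the shape of the protocol concrete, here is a minimal interface sketch; the function names and signatures below are illustrative assumptions, not an API from the paper or from any library. The only algorithm that changes is proving, which each prover runs on its own witness share.

```python
# Illustrative interface sketch only; names and types are assumptions.

def setup(circuit):
    """Unchanged from a plain zk-SNARK: returns (proving_key, verifying_key)."""
    ...

def collaborative_prove(proving_key, public_input, my_witness_share, peers):
    """Run by EACH of the N provers on its own share [w]_i of the witness.
    Internally an interactive MPC protocol with `peers`; every party ends up
    holding the same single proof pi."""
    ...

def verify(verifying_key, public_input, proof) -> bool:
    """Unchanged from a plain zk-SNARK."""
    ...
```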
The paper defines the implementation of co-SNARKs for elliptic-curve and pairing-based zk-SNARK constructions such as:
- Groth16
- Marlin (KZG)
- Plonk (KZG)
And also with hash-based constructions such as Fractal.
Publicly Auditable MPC
MPC protocols conventionally have the problem that a third party cannot verify the correctness of their execution. The easiest way to solve this problem is to let the third party (or auditor) join the MPC themselves.
Publicly Auditable MPC (PA-MPC) covers a set of techniques that allow a third-party auditor to verify the correct execution of an MPC without having to participate. You can build PA-MPC from co-SNARKs. However, one can also build PA-MPC without ZK at all. For example, 2014/075 does so using just Pedersen commitments; however, the "proof" is linear in the size of the circuit instead of sub-linear, as one gets with the above co-SNARKs.
Security Definition
There's no longer a plain definition of zero knowledge. Instead what we now have is $t$-zero-knowledge. A prover not only has to worry about the verifier not learning information about their private $w$, but also about the other provers in the protocol not learning it. Formally, this is captured by the $t$-zero-knowledge definition in the paper.
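As a rough sketch of the shape of that definition (notation mine, not verbatim from the paper): for every set $T$ of corrupted provers with $|T| \le t$, there must exist a simulator $\mathsf{Sim}$ that reproduces the corrupted parties' view of the proving protocol using only the public statement $x$ and their own witness shares:

$$\mathrm{View}_T\big(\text{Prove protocol on } (x, [w])\big) \;\approx\; \mathsf{Sim}\big(x, \{[w]_i\}_{i \in T}\big)$$

In other words, whatever up to $t$ colluding provers see during proof generation, they could have computed by themselves from what they already knew.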
This security definition is inherited from the security definition of secure Multi-Party Computation (MPC). In particular, the security assumption of the MPC protocol used under the hood carries over to the $t$-zero-knowledge definition too. If the MPC protocol is secure against a dishonest majority, the same property applies to zero knowledge: even if all the parties but one are corrupted, they cannot learn the secret of the remaining honest party. The same goes for an honest-majority assumption.
Constraint Definition
As implemented in the paper, co-SNARKs work with R1CS relations to define the arithmetic constraints of the circuit. Note that in theory any other type of constraint relation, such as PLONKish, can be expressed inside a co-SNARK.
Computation -> Arithmetic Circuits -> R1CS constraints. In order to support the MPC protocol, co-SNARKs work with secret shares of the witness fed into the R1CS. In particular, we assume that at the beginning of the protocol the witness $w$ is secret-shared among the parties as $[w]$. For example, this could be constructed starting from additive secret sharing.
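Here is a minimal sketch of additive secret sharing over a prime field, with a toy modulus chosen for illustration; in practice the shares live in the scalar field of the proof system's curve.

```python
# Minimal additive secret sharing sketch; the modulus is a toy choice.
import secrets

P = 2**61 - 1  # illustrative prime, NOT the field used by a real co-SNARK

def share(w: int, n: int) -> list[int]:
    """Split a witness element w into n additive shares whose sum is w (mod P)."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((w - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

w = 123456789
w1, w2, w3 = share(w, 3)          # each party P_i only ever sees w_i
assert reconstruct([w1, w2, w3]) == w
```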
Approaches
Approach #1: Snarkify the MPC
Mentioned here. One approach to the problem might be the following:
If you are able to take your computation statement and transform it into an arithmetic circuit, then you can leverage any traditional MPC machinery or protocol to perform that computation in a distributed manner. Once you are done, you can take the execution trace of the MPC and generate a zk-SNARK out of it. We can define this approach as Snarkify the MPC. You can think of it as performing a zk-SNARK on top of an MPC computation.
Approach #2: MPC the Prover
The approach used in the co-SNARKs paper is different. It leverages generic MPC protocols. Such protocols allow you to take any computation that can be expressed as an arithmetic circuit and securely evaluate it over secret-shared data. In other words, the function $f$ to be run inside the MPC (and to be represented as an arithmetic circuit) is chosen to be the zk-SNARK proof generation function; the computation performed inside the MPC is the zk-SNARK prover. You can think of it as performing a zk-SNARK inside an MPC computation. This approach can be defined as MPC the prover.
Naively implementing this might not be practical and could be very slow, as it composes both techniques and their complexities multiply.
The goal of the paper is to define techniques that achieve this composition between the two protocols without incurring a 1M-times slowdown.
The MPC schemes used in the paper allow performing arithmetic operations over a finite field. In particular these schemes are:
- SPDZ - authenticated additive shares - malicious security, dishonest majority
- GSZ - Shamir shares - semi-honest security (can be tuned to malicious), honest majority
Note that the SPDZ scheme does not support guaranteed output delivery. This means that any party can drop out while the protocol is being run, which will prevent the proof from being generated. Therefore we assume that all the $N$ parties joining the protocol as provers do want to generate the proof, so dropping out of the protocol is against their interest. GSZ, instead, does support guaranteed output delivery.
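Since GSZ builds on Shamir shares, here is a minimal sketch of Shamir secret sharing over a toy prime field; the threshold and modulus are illustrative choices, not the parameters of any real deployment.

```python
# Shamir secret sharing sketch over a toy prime field (illustrative parameters).
import secrets

P = 2**61 - 1

def shamir_share(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Sample a random degree-t polynomial f with f(0) = secret; party i receives (i, f(i))."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]

    def f(x: int) -> int:
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t + 1 shares."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = shamir_share(42, n=5, t=2)
assert shamir_reconstruct(shares[:3]) == 42   # any t + 1 = 3 shares reconstruct the secret
```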
Bottlenecks
Here we identify the main bottlenecks that might lead to a very large slowdown of the computation.
“Traditional” zk prover bottlenecks:
- Elliptic curve operations - the paper explores techniques to perform these efficiently in the MPC setting
- Fourier transforms - not a problem in the MPC setting because these are linear operations
- Polynomial commitments - the paper explores techniques to compute these efficiently in the MPC setting
MPC bottlenecks (this is the class of non-linear operations that are not traditionally a bottleneck in single-prover systems but might become one when translated to MPC):
- Polynomial division - a highly non-linear operation in general, but actually not a problem here because the divisor polynomial is public, so the operation ends up being linear (see the sketch after this list)
- Sequences of products (typical of PLONK) - there's a solution to this explained in the paper
- Merkle tree evaluations and hashing (for hash-based SNARKs) - a partial solution is proposed, with some drawbacks.
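To see why division by a public polynomial is linear, here is a toy sketch: dividing by a public $(x - z)$ (as in a KZG opening; the same reasoning applies to a public vanishing polynomial) uses only additions and multiplications by public constants, so each party can run it locally on its own additive share of the coefficients. The modulus and values are illustrative.

```python
# Division of a secret-shared polynomial by a PUBLIC linear factor (x - z) is linear,
# so it is "free" in MPC: each party divides its own share locally. Toy parameters.
import secrets

P = 2**61 - 1

def divide_by_x_minus_z(coeffs: list[int], z: int) -> list[int]:
    """Synthetic division of f (coefficients in ascending order) by (x - z).
    Returns the quotient coefficients; uses only additions and multiplications
    by the public scalar z, hence it is linear in the (possibly shared) coefficients."""
    d = len(coeffs) - 1
    q = [0] * d
    q[d - 1] = coeffs[d] % P
    for k in range(d - 1, 0, -1):
        q[k - 1] = (coeffs[k] + z * q[k]) % P
    return q

f = [5, 0, 3, 1]                                   # f(x) = x^3 + 3x^2 + 5
z = 2
f1 = [secrets.randbelow(P) for _ in f]             # party 1's additive share of the coefficients
f2 = [(a - b) % P for a, b in zip(f, f1)]          # party 2's additive share

q1 = divide_by_x_minus_z(f1, z)                    # computed locally by party 1
q2 = divide_by_x_minus_z(f2, z)                    # computed locally by party 2
q = [(a + b) % P for a, b in zip(q1, q2)]
assert q == divide_by_x_minus_z(f, z)              # shares of the quotient reconstruct correctly
```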
Elliptic Curve Operations
A naive approach to performing EC operations inside MPC would be to represent a point on the curve as a pair of $x, y$ coordinates. The coordinates are then secret-shared among the parties, for example via additive secret sharing. An EC operation is then carried out as a sequence of additions and multiplications over the secret-shared coordinates.
This approach is not efficient because the EC addition formula involves a lot of multiplications in the field, which is not good in MPC!
The second option is to share the points themselves, using additive secret sharing over the elliptic curve group.
Now elliptic curve addition becomes a linear operation that can be performed locally, at no cost and with no communication between the parties. This also makes elliptic curve scalar multiplication pretty fast, at least as long as the scalar is publicly known.
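A toy sketch of this idea, on a deliberately tiny textbook curve ($y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$); the curve and helper names are illustrative only. A secret point is split into two points that add up to it, and adding two shared points only requires each party to add its own shares, with no interaction.

```python
# Additive secret sharing of elliptic-curve points on a tiny toy curve (illustrative only).
import secrets

P, A = 17, 2                    # curve y^2 = x^3 + 2x + 2 over F_17 (far too small for real use)
G = (5, 1)                      # a point on the curve: 1^2 = 5^3 + 2*5 + 2 (mod 17)

def add(p1, p2):
    """Group law; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def neg(p):
    return None if p is None else (p[0], (-p[1]) % P)

def mul(k, p):
    acc = None
    while k:
        if k & 1: acc = add(acc, p)
        p, k = add(p, p), k >> 1
    return acc

def share_point(q):
    """Split point q into two point-shares that sum (as group elements) to q."""
    r = mul(1 + secrets.randbelow(18), G)
    return r, add(q, neg(r))

Q1, Q2 = mul(3, G), mul(7, G)                   # two "secret" points
a1, a2 = share_point(Q1)
b1, b2 = share_point(Q2)
# Each party adds its OWN shares locally; reconstruction yields Q1 + Q2.
assert add(add(a1, b1), add(a2, b2)) == add(Q1, Q2)
```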
Note that in the paper, the field of the MPC protocol is chosen to be the scalar field of the elliptic curve.
A further note: scalar multiplication when both the scalar and the point are private is definitely a non-linear operation and therefore expensive. Luckily, according to the authors of the paper, this doesn't happen too often.
This also helps when performing FFTs. Indeed, these can be computed locally and without any interaction.
KZG Commitments
The paper considers KZG as a type of polynomial commitment to be computed as part of the Prove algorithm of a zk-SNARK. The authors suggest that this can be done efficiently in an MPC setting by having each party locally commit to their individual share of the polynomial to be committed. The results can be interpreted as shares of the desired commitment. Since the polynomial commitment is required to be public, it can be recovered from the parties' shares, because any linear secret sharing scheme has the linear reconstruction property.
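The key fact is that the commitment is linear in the polynomial's coefficients. The toy sketch below shows just that step, with a plain discrete-log group standing in for the pairing-friendly curve group of real KZG (so no openings, only the commitment); the modulus, generator and "tau" are illustrative.

```python
# Linearity of a KZG-style commitment: commitments to additive shares of a polynomial
# combine into the commitment to the full polynomial. Toy group, illustrative parameters.
import secrets

q = 2**127 - 1                 # toy prime; the group is Z_q^* of order N = q - 1
N = q - 1                      # exponents (coefficients, tau) live modulo N
g, tau = 3, 5                  # in real KZG tau is a secret from the trusted setup
srs = [pow(g, pow(tau, i, N), q) for i in range(4)]   # "powers of tau in the group"

def commit(coeffs: list[int]) -> int:
    c = 1
    for fi, si in zip(coeffs, srs):
        c = c * pow(si, fi, q) % q                    # overall c = g^{f(tau)}
    return c

f  = [7, 1, 0, 4]                                     # polynomial to commit to (ascending coeffs)
f1 = [secrets.randbelow(N) for _ in f]                # party 1's additive share of the coefficients
f2 = [(a - b) % N for a, b in zip(f, f1)]             # party 2's additive share

# Each party commits locally to its share; combining the results (one group operation)
# reconstructs the public commitment to f.
assert commit(f1) * commit(f2) % q == commit(f)
```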
Hashing
To compute a Merkle tree, as required in hash-based prover systems, one has to perform several hashing operations over secret-shared data. Hashing is a non-linear operation and therefore very expensive to compute inside MPC. The paper doesn't provide a solution that eases the MPC operation itself (as happened with EC operations); instead, it proposes that the provers do not collectively compute the Merkle root over secret-shared data. Their solution is for each prover to compute a Merkle root from their own share, and to treat this vector of roots as the vector commitment for the prover system.
They didn't end up implementing it because this solution doesn't seem efficient enough compared to one that relies on EC-based zk-SNARKs.
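A minimal sketch of that fallback idea: each prover hashes its own share vector into a Merkle root locally, with no MPC interaction, and the resulting list of roots plays the role of the vector commitment. The helper names and data are illustrative.

```python
# Per-party Merkle roots as a stand-in for a jointly computed vector commitment (sketch).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])                  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each prover commits to its OWN share vector locally, with no communication.
share_vectors = [
    [b"w1_0", b"w1_1", b"w1_2", b"w1_3"],            # prover 1's shares
    [b"w2_0", b"w2_1", b"w2_2", b"w2_3"],            # prover 2's shares
]
vector_commitment = [merkle_root(shares) for shares in share_vectors]
```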
Performance
Variables inside the benchmark:
Note that the MPC pre-processing is excluded from the benchmarks (because it is considered to have no significant impact on the overall proving time).
From Experiment 1 you can see that, compared to a single-prover system, proving in the dishonest-majority case (SPDZ) takes roughly twice as long as for a traditional single prover. In the honest-majority case, the performance is the same as that of a traditional prover. One reason for this might be that when working with SPDZ, every computation needs to be carried out twice: once for the secret share and once for the MAC secret share!
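A rough sketch of why the MAC shares double the work, with a toy modulus and two parties (illustrative only, not the actual SPDZ implementation): every secret is held as a pair of a value share and a MAC share, and every linear operation is applied to both components.

```python
# Why SPDZ roughly doubles the local work: authenticated shares carry a MAC component.
# Toy modulus, two parties, illustrative only.
import secrets

P = 2**61 - 1
alpha = secrets.randbelow(P)                       # global MAC key (itself secret-shared in real SPDZ)

def auth_share(x: int):
    """Return the two parties' authenticated shares (x_i, m_i) with
    sum(x_i) = x and sum(m_i) = alpha * x (mod P)."""
    x1, m1 = secrets.randbelow(P), secrets.randbelow(P)
    return (x1, m1), ((x - x1) % P, (alpha * x - m1) % P)

def add_local(s, t):
    """Local addition touches BOTH the value share and the MAC share."""
    return ((s[0] + t[0]) % P, (s[1] + t[1]) % P)

(a1, a2), (b1, b2) = auth_share(10), auth_share(32)
c1, c2 = add_local(a1, b1), add_local(a2, b2)      # each party does double the bookkeeping
assert (c1[0] + c2[0]) % P == 42                   # the value reconstructs
assert (c1[1] + c2[1]) % P == alpha * 42 % P       # the MAC reconstructs consistently
```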
In the second experiment, what is measured is the slowdown compared to the single prover as the number of provers increases. You can see that for 2 parties, the slowdown is the one described in the first experiment. Naturally, as the number of parties grows, the co-SNARK proving time slows down as well. The author notes here that there's no obvious reason why the slowdown in the honest-majority case should be higher than the one for the dishonest majority once there are more than 8 parties; something seems off there, and it could be an interesting area of research.
The third and last experiment considers the slowdown encountered as the bandwidth capacity of the network decreases. For very low-capacity connections the slowdown is, as expected, higher.
Important: the benchmarks for Experiment 1 are run on machines with high-capacity links, namely networks built to handle a very high bandwidth of 3 Gb/s. But as you can see in Experiment 3, a very similar slowdown is obtained even on networks with much lower bandwidth (64 Mb/s), which is fairly standard.
As the author of the paper mentioned in a private conversation, Figure 8 uses 3 Gb/s because that was the default for their cloud provider (GCP), and it is probably not needed; they didn't try to measure this, but the high bandwidth might help when the number of constraints is big (2^20) and therefore the amount of communication is big.
On Publicly Auditable MPC
Publicly Auditable MPC is a protocol in which the MPC also produces a proof by which the public can verify that the computation was performed correctly with respect to commitments to the inputs. This feature doesn't come out of the box with co-SNARKs, because they don't perform any check on the inputs used to generate the zk-SNARK proof. Instead, PA-MPC can be achieved as an extension of co-SNARKs.
As proposed by the paper, the missing piece is a commitment by each prover to its input to the co-SNARK. This commitment is crucial to guarantee the integrity of the co-SNARK's input. The proposed solution works as follows (a small sketch is given after the list):
- Each prover provides a short commitment to their input data. It can be a simple hash or a Merkle tree.
- After the proof is generated (and verified), a dispute over the accuracy of the aggregated statistic might arise. In such a case, an investigator might require one of the provers to open their commitment and provide the data. A prover that committed to incorrect data will be penalized.
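A minimal sketch of this commit-then-dispute flow, using a salted hash as the commitment; the function names are illustrative, not from the paper.

```python
# Commit-then-dispute sketch: each prover commits to its input before proving;
# on a dispute, the commitment is opened and checked. Illustrative names only.
import hashlib, secrets

def commit_input(data: bytes) -> tuple[bytes, bytes]:
    """Each prover publishes commitment = H(salt || data) before the co-SNARK runs."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + data).digest(), salt

def audit_opening(commitment: bytes, salt: bytes, data: bytes) -> bool:
    """On a dispute, the investigator checks the revealed data against the commitment."""
    return hashlib.sha256(salt + data).digest() == commitment

com, salt = commit_input(b"bureau 1 records for Alice")
# ... the co-SNARK is generated with respect to this committed input ...
assert audit_opening(com, salt, b"bureau 1 records for Alice")
```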
Note that such a solution will indeed reveal the input data if a dispute is opened. An alternative, which is not part of the paper, is to extend the zk-SNARK to prove correctness of the input. In this scenario another proof is attached to the collaborative zk-SNARK in order to prove attributes about the input being fed into it, for example that it is within a certain range, is one of a pre-defined set of candidates, or is signed by a certain party.
Conclusions
- Collaborative SNARKs suit elliptic-curve-based SNARKs better than hash-based SNARKs, because hashing is a highly non-linear computation that is not efficient inside an MPC
- Wrapping the ZKP inside the MPC makes more sense as a general approach than the opposite
- The measured performance goes against the common sentiment that wrapping a computation in MPC would lead to a 100x slowdown in computation time. In particular, the measurements show a 1x-2x slowdown between single-prover proof generation and multi-party proof generation. This is due to optimization techniques that transformed many of the main ZKP bottlenecks into linear operations, which are very efficient to translate into MPC. This might suggest that such improvements can also apply to other cryptographic primitives that rely on algebraic operations
- An interesting next step would be to build PA-MPC from 2014/075 and snarkify the verification process such that a sub-linear proof can be obtained out of an MPC protocol, therefore achieving the same result as co-SNARKs