In the current era of AI-driven applications, language models like GPT and BERT are pervasive. These models are the engines behind tasks ranging from chatbots to recommendation systems. However, these AI systems are increasingly run opaquely, for example behind closed APIs.

Because these models sit behind closed APIs, users have no guarantees about which models or weights are actually being used. For example, OpenAI has changed ChatGPT’s behavior repeatedly, leading to speculation that it is serving cheaper models to save costs. At the same time, OpenAI’s models are trade secrets, so the company has strong incentives to keep them private. How can we verify that these AI models are behaving as claimed without compromising trade secrets?

A few weeks ago, we announced the open-source release of zkml, which allows for the trustless execution of ML models. In this blog post, we’ll recap the capabilities of zkml and how it can be used to verify the outputs of common natural language processing (NLP) models like GPT, BERT, and CLIP, all without revealing proprietary weights or private data.

Currently, we can produce proofs of GPT-2, MobileBert, and CLIP. We’ve released proofs of GPT-2 and CLIP here, with more models coming soon. In the rest of the post, we’ll describe how to do trustless execution of ML models with ZK-SNARKs and how to use zkml to produce these proofs!

Trustless Execution with Zero-Knowledge Proofs

Zero-knowledge proofs (ZKPs) are cryptographic protocols that allow one party to prove to another that they know a specific piece of information without revealing that information. When applied to machine learning (ML) models, ZKPs can allow a model runner to prove that they ran the model correctly without revealing any specifics about the input data or model parameters.
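zkml’s circuits do far more than this, but the “stay bound to fixed weights without revealing them” half of the story can be illustrated with a plain hash commitment. This is a toy sketch, not zkml’s actual scheme; the weight bytes and salt below are made up:

```python
import hashlib

def commit(weights: bytes, nonce: bytes) -> str:
    """A binding, hiding commitment to the model weights (hash-based toy)."""
    return hashlib.sha256(nonce + weights).hexdigest()

# The provider publishes the commitment once; the weights stay private.
weights = b"serialized model parameters (placeholder)"
nonce = b"random-salt"
published = commit(weights, nonce)

# Anyone holding the same weights and nonce can recompute and check it,
# and any change to the weights produces a different commitment.
assert commit(weights, nonce) == published
assert commit(b"different weights", nonce) != published
```

A real ZK-SNARK goes further: it proves statements *about* the committed weights (e.g., “this output came from running the committed model on some input”) without ever opening the commitment.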

zkml produces ZK-SNARKs, which are succinct (i.e., short) zero-knowledge proofs, for NLP models such as GPT and BERT. We’ve previously described how to use ZK-SNARKs to verify the Twitter algorithm and ML models more broadly. Check out those posts for more details!

Applying zkml to GPT, BERT, and CLIP

Let’s take a look at how we can apply zkml to models like GPT, BERT, and CLIP. The GPT series of models has achieved state-of-the-art performance on language tasks; when fine-tuned on human-generated text, GPT can power state-of-the-art chatbots. Similarly, BERT can be used for a range of NLP tasks. CLIP, by contrast, jointly embeds text and images and is typically used for vision-language tasks.

Suppose a user interacts with a service powered by GPT-4, like a chatbot. The chatbot provider can use zkml to prove that the responses are generated by an unaltered version of GPT-4, without disclosing the actual weights of the model or the specifics of the user’s inputs. At a high level, the process looks like this:

  1. Generating a Proof: The service provider runs the model with the user’s input and generates a proof that the model was run correctly using zkml.
  2. Proof Verification: The user or an auditor can then verify the proof. If the proof holds, they can be sure that the model ran correctly without needing to know model weights or the input data.

This process can be easily extended to CLIP, BERT, and any other language model.
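The two steps above can be mocked in a few lines. This is only a simulation of the data flow between prover and verifier: `generate_proof` and `verify_proof` are hypothetical stand-ins (not zkml’s API), and unlike a real ZK-SNARK this toy “proof” is neither succinct nor hiding. What it does show is the key property the user relies on: a proof made for one response will not verify against a different one.

```python
import hashlib
import json

def generate_proof(model_commitment: str, user_input: str, output: str) -> str:
    # Simulated "proof" binding the (commitment, input, output) triple.
    # In zkml this would be a SNARK over the model's arithmetic circuit.
    payload = json.dumps([model_commitment, user_input, output]).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_proof(proof: str, model_commitment: str, user_input: str, output: str) -> bool:
    # The verifier recomputes and compares; a SNARK verifier is analogous
    # but never sees the weights or (optionally) the input.
    return proof == generate_proof(model_commitment, user_input, output)

# 1. Generating a proof: the provider runs the model and proves the run.
commitment = "abc123"  # hypothetical published commitment to the weights
response = "Hello! How can I help?"
proof = generate_proof(commitment, "Hi", response)

# 2. Proof verification: the user or an auditor checks the public values.
assert verify_proof(proof, commitment, "Hi", response)
# A tampered response fails verification.
assert not verify_proof(proof, commitment, "Hi", "a swapped-in response")
```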

Trustless Execution in Action

To illustrate the process, let’s take an example where a user is interacting with a GPT-powered chatbot. The chatbot provider can use zkml to generate a proof of correct execution.

To use zkml for GPT, we can run the following commands:

# You’ll need to install zkml as described in the instructions here:  
# You’ll also need to download the parameters from here:   
# Place them in the directory params_kzg  
# Because the proving is resource-intensive, we’ve provided a proof that you can verify as follows:  
cd examples/nlp/gpt-2  
tar -zxvf vkey.tar.gz  
tar -zxvf public_vals.tar.gz  
cd ../../../  
cargo build --release  
./target/release/verify_circuit examples/nlp/gpt-2/config.msgpack examples/nlp/gpt-2/vkey examples/nlp/gpt-2/proof examples/nlp/gpt-2/public_vals kzg

This process ensures that the chatbot provider is not manipulating the model or the user’s input in any way. Currently, we can produce proofs for GPT-2, MobileBert, and CLIP. Expect more models soon!

Toward a Future of Trustless AI

As AI models become an integral part of our lives, the need for transparency grows more important. Trustless execution of AI models using zkml represents a significant step in this direction. By allowing service providers to prove that they’re running models correctly without revealing any sensitive information, zkml brings us closer to a future where we can fully trust AI systems without compromising trade secrets or privacy.

Stay tuned for more posts on this topic, as we delve deeper into the applications of zkml and other tools for AI transparency and accountability. And if you’d like to discuss your idea or brainstorm with us, fill out this form and join our Telegram group. Follow me on Twitter for the latest updates as well!

Special thanks to Pun Waiwitlikhit, Yi Sun, Tatsunori Hashimoto, and Ion Stoica for their help.