Computational overhead & Inference verifiability

Machine learning requires large amounts of compute to run complex models, which makes running AI models directly inside smart contracts prohibitively expensive.

An Ethereum block takes a few hundred milliseconds for a client to verify directly, but generating a zk-SNARK to prove the correctness of such a block can take hours. The typical overhead of other cryptographic gadgets, like MPC, can be even worse. AI computation is already expensive: the most powerful LLMs can output individual words only a little faster than human beings can read them, not to mention the often multimillion-dollar computational cost of training the models. The difference in quality between top-tier models and models that economize heavily on training cost or parameter count is large.

Currently, only a few teams are building tooling for compute & inference verifiability:

EZKL and Giza are two projects building this tooling by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models whose execution can be proven, so that the results can be trustlessly verified on-chain.

EZKL is open source and produces zk-SNARKs, while Giza is closed source and produces zk-STARKs.
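To make the flow concrete, here is a rough sketch of a zkML proving pipeline using EZKL's Python bindings. The file paths are placeholders, and EZKL's function names and signatures change between releases (some steps are async in recent versions), so treat this as an illustration of the prove-then-verify pattern rather than a drop-in script.

```python
import ezkl

# Placeholder artifact paths -- not from any real project.
model, data = "network.onnx", "input.json"       # exported model + inference input
settings, compiled = "settings.json", "network.compiled"
vk, pk = "vk.key", "pk.key"
witness, proof = "witness.json", "proof.json"

ezkl.gen_settings(model, settings)                  # derive circuit parameters from the ONNX graph
ezkl.compile_circuit(model, compiled, settings)     # lower the model into an arithmetic circuit
ezkl.get_srs(settings)                              # fetch a structured reference string
ezkl.setup(compiled, vk, pk)                        # generate proving / verification keys

ezkl.gen_witness(data, compiled, witness)           # run the inference inside the circuit
ezkl.prove(witness, compiled, pk, proof, "single")  # zk-SNARK attesting to that inference
assert ezkl.verify(proof, settings, vk)             # anyone holding vk can check the proof;
                                                    # EZKL can also emit an EVM verifier contract
                                                    # so the same check runs on-chain
```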

Modulus Labs is also developing a new zk-proof technique custom-tailored for AI models: a specialized zero-knowledge prover designed to reduce the cost and proving time for AI models, with the goal of making it economically feasible for projects to integrate models into their smart contracts at scale.

Like EZKL, Giza, and Modulus, RiscZero, Axiom, and Ritual aim to completely abstract away the zero-knowledge proof generation process, creating what are essentially zero-knowledge virtual machines capable of executing programs off-chain and generating proofs for verification on-chain. RiscZero and Axiom can service simple AI models, as they are meant to be more general-purpose coprocessors, while Ritual is purpose-built for use with AI models.
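From the consumer's side, this coprocessor pattern looks the same regardless of which project generates the proof: the heavy computation happens off-chain, and the contract only checks a succinct proof. The sketch below illustrates that call shape with web3.py; the RPC endpoint, contract address, ABI, and `verifyProof` function are hypothetical placeholders, since each deployment defines its own verifier interface.

```python
from web3 import Web3

# Assumed RPC endpoint -- replace with a real node URL.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

# Hypothetical verifier interface: one view function that checks a proof
# against its public inputs and returns a boolean.
VERIFIER_ABI = [{
    "name": "verifyProof",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "proof", "type": "bytes"},
        {"name": "publicInputs", "type": "uint256[]"},
    ],
    "outputs": [{"name": "ok", "type": "bool"}],
}]

verifier = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=VERIFIER_ABI,
)

def is_inference_valid(proof: bytes, public_inputs: list[int]) -> bool:
    """Ask the on-chain verifier whether the off-chain inference was proven correctly."""
    return verifier.functions.verifyProof(proof, public_inputs).call()
```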

zkML helps reconcile the contradiction between blockchains and AI: the former are inherently resource-constrained, while the latter requires large amounts of compute and data. There are also many other techniques for producing proofs for consumers, such as MPC, TEE, and FHE.

With the Koboto network, we are going to solve this by building in coordination and creating a new economy altogether.

The Koboto network leverages zkML, optimistic, and TEE techniques for its user & consumer base, while providing them the flexibility to choose the type of proof they want.
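As a hypothetical illustration (not the actual Koboto API) of what choosing a proof backend could look like from the consumer side, the sketch below models a request that names its verifiability technique; the class names, fields, and latency characterizations are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum


class ProofType(Enum):
    ZKML = "zkml"              # succinct cryptographic proof, highest proving cost
    OPTIMISTIC = "optimistic"  # assumed valid unless challenged within a dispute window
    TEE = "tee"                # hardware attestation from a trusted enclave


@dataclass
class InferenceRequest:
    model_id: str
    input_data: bytes
    proof_type: ProofType      # the consumer picks the guarantee they want to pay for


def estimated_latency(req: InferenceRequest) -> str:
    """Illustrative trade-off: stronger cryptographic guarantees generally take longer."""
    return {
        ProofType.ZKML: "minutes to hours (proof generation dominates)",
        ProofType.OPTIMISTIC: "seconds, plus a challenge window before finality",
        ProofType.TEE: "near real-time, with trust shifted to the hardware vendor",
    }[req.proof_type]


req = InferenceRequest("example-model", b'{"prompt": "hello"}', ProofType.TEE)
print(estimated_latency(req))
```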

Providing inference verifiability through our modular & dynamic stack gives compute providers the flexibility to run data through their models and then, by running zkML or MPC within their off-chain Docker container, generate a proof that can be verified on-chain to show that the model's output for the given input is correct. In this case, the model provider has the added advantage of being able to offer their models without having to reveal the underlying weights that produce the output. The opposite is also possible: the input data can be kept private while the model itself is public.
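The weight-privacy point can be sketched conceptually, assuming a commitment-based design: the proof's public statement binds the input, the output, and a commitment to the weights, so a verifier learns which committed model produced the output without ever seeing the weights. The helper names and the stubbed prover call below are illustrative, not Koboto's implementation.

```python
import hashlib
import json


def commit(data: bytes) -> str:
    """Hash-based commitment; a real system would use a hiding commitment scheme."""
    return hashlib.sha256(data).hexdigest()


def run_inference(weights: bytes, x: list[float]) -> list[float]:
    # Stand-in for the actual model; the real computation stays off-chain.
    return [sum(x)]


# --- provider side (off-chain, e.g. inside their Docker container) ----------
weights = b"...serialized private model weights..."
weight_commitment = commit(weights)          # published once, e.g. on-chain

x = [1.0, 2.0, 3.0]
y = run_inference(weights, x)

public_statement = {
    "weight_commitment": weight_commitment,               # binds the proof to *this* model
    "input_hash": commit(json.dumps(x).encode()),
    "output": y,
}
# proof = zkml_prove(weights, x, public_statement)        # placeholder for a real prover

# --- verifier side (on-chain) ------------------------------------------------
# The verifier checks the proof against `public_statement`; it learns the output
# and which committed model was used, but never sees the weights themselves.
```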

Being dynamic in nature, the Koboto network will in the future introduce embedded RAG-based models that can be run within a smart contract.

In the future, models on the Koboto network will also be capable of producing proofs themselves; see our roadmap for details.
