# Computational overhead & Inference verifiability

Machine learning requires large amounts of compute to run complex models, which makes it prohibitively expensive to run AI models directly inside smart contracts.

An Ethereum block takes a few hundred milliseconds for a client to verify directly, but generating a ZK-SNARK to prove the correctness of such a block can take hours. The typical overhead of other cryptographic gadgets, like MPC, can be even worse. AI computation is expensive already: the most powerful LLMs can output individual words only a little bit faster than human beings can read them, not to mention the often multimillion-dollar computational costs of *training* the models. The difference in quality between top-tier models and the models that try to economize much more on training cost or parameter count is large.
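A back-of-the-envelope calculation puts this overhead in perspective. The figures below are illustrative assumptions taken from the rough numbers in the text ("a few hundred milliseconds" to verify, "hours" to prove), not measured benchmarks:

```python
# Rough, illustrative overhead estimate (assumed figures, not benchmarks):
# direct verification of a block takes ~0.3 s, while generating a
# ZK-SNARK proof of the same computation might take ~2 hours.
direct_verify_s = 0.3          # a few hundred milliseconds
snark_proving_s = 2 * 3600     # "can take hours"

overhead = snark_proving_s / direct_verify_s
print(f"proving overhead: ~{overhead:,.0f}x")  # ~24,000x
```

Even under these charitable assumptions, proving is four to five orders of magnitude slower than direct verification, which is why specialized provers and cheaper proof systems matter for AI workloads.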

**Currently, few teams are building in the compute & inference verifiability space:**

[<mark style="color:purple;">EZKL</mark>](https://github.com/zkonduit/ezkl) and [<mark style="color:purple;">Giza</mark>](https://www.gizatech.xyz/) are two projects building this tooling by providing verifiable proofs of machine learning model execution. Both help teams build machine learning models and ensure that those models can be executed in a way whose results can be trustlessly verified on-chain.

EZKL is open source and produces zk-SNARKs, while Giza is closed source and produces zk-STARKs.

[<mark style="color:purple;">Modulus Labs</mark>](https://www.moduluslabs.xyz/) is also developing a new zk-proof technique custom-tailored for AI models. They are building a specialized zero-knowledge prover designed to reduce costs and proving time for AI models, with the goal of making it economically feasible for projects to integrate models into their smart contracts at scale.

Like EZKL, Giza, and Modulus, projects such as RiscZero, Axiom, and Ritual aim to completely abstract away the zero-knowledge proof generation process, creating what are essentially zero-knowledge virtual machines capable of executing programs off-chain and generating proofs for verification on-chain. RiscZero and Axiom can [<mark style="color:purple;">service</mark>](https://www.risczero.com/news/rust-but-verify) simple AI models, as they are meant to be more general-purpose coprocessors, while Ritual is purpose-built for use with AI models.

**zkML helps** [**reconcile**](https://www.youtube.com/watch?v=5_juhefoN2Y) **the contradiction between blockchains and AI, where the former are inherently resource-constrained and the latter requires large amounts of compute and data. There are also many other techniques for producing proofs for consumers, such as MPC, TEEs, and FHE.**

> <mark style="color:purple;">With the Koboto network, we aim to solve this by building in coordination and writing a new economy altogether.</mark>
>
> <mark style="color:purple;">The Koboto network leverages zkML, optimistic, and TEE techniques for its user and consumer base, while giving them the flexibility to choose the type of proof they want.</mark>
>
> <mark style="color:purple;">Providing inference verifiability through our modular & dynamic stack gives compute providers the flexibility to run data through their models and then generate a proof, via zkML or MPC running in their off-chain Docker container, that can be verified on-chain to show that the model's output for a given input is correct. In this case, the model provider has the added advantage of being able to offer their models without having to reveal the underlying weights that produce the output. The opposite could also be done.</mark>
>
> <mark style="color:purple;">Being dynamic in nature, the Koboto network will in the future introduce embedded RAG-based models that can run within a smart contract.</mark>
>
> <mark style="color:purple;">In the future, models in the Koboto network will also be capable of producing proofs themselves; see our roadmap.</mark>
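The commit-and-prove flow sketched above can be illustrated with a toy example. This is only a sketch of the *interface*: a plain hash commitment stands in for a real zkML proof and is not cryptographically equivalent (in particular, it does not actually prove correct execution or hide the weights' role), and names like `prove_inference` are hypothetical, not part of any real library:

```python
import hashlib

def commit(weights: list[float]) -> str:
    """Model provider publishes a commitment to its weights without revealing them."""
    return hashlib.sha256(repr(weights).encode()).hexdigest()

def infer(weights: list[float], x: list[float]) -> float:
    """Toy linear model standing in for a real ML model."""
    return sum(w * xi for w, xi in zip(weights, x))

def prove_inference(weights: list[float], x: list[float]):
    """Off-chain prover: run the model and emit (output, "proof").
    A real zkML prover (e.g. EZKL) would emit a succinct SNARK here;
    this hash is only a placeholder for the shape of the flow."""
    y = infer(weights, x)
    proof = hashlib.sha256(f"{commit(weights)}|{x}|{y}".encode()).hexdigest()
    return y, proof

def verify(weights_commitment: str, x: list[float], y: float, proof: str) -> bool:
    """On-chain verifier: check the claimed output against the weight
    commitment, the input, and the proof. (A real verifier would check
    a succinct proof, not recompute a hash.)"""
    expected = hashlib.sha256(f"{weights_commitment}|{x}|{y}".encode()).hexdigest()
    return proof == expected

# usage: provider commits on-chain, proves off-chain, anyone verifies on-chain
w = [0.5, -1.0, 2.0]
c = commit(w)                                 # published on-chain
y, p = prove_inference(w, [1.0, 2.0, 3.0])    # computed off-chain
assert verify(c, [1.0, 2.0, 3.0], y, p)       # on-chain check passes
```

The design point this illustrates is the separation of roles: the heavy computation (inference and proof generation) happens off-chain, while the chain only stores a small commitment and runs a cheap verification step.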


