Features
1. Model Converter
In koboto, we make it simple for model providers to bring a model in any format and convert it into a koboto-chain-compatible standard. The following services handle model serving across the supported formats:
CSS Inference Service: Serves closed-source models, e.g. OpenAI's GPT-4.
ONNX Inference Service: Serves ONNX models.
TGI Inference Service: Uses Hugging Face's TGI (Text Generation Inference) service to generate text completions.
Torch Inference Service: Serves any Torch model, with built-in support for all scikit-learn models.
Hugging Face Inference Client Service: Uses Hugging Face's Inference Client to support various Hugging Face tasks.
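The routing from model format to serving backend can be pictured with a minimal sketch. This is illustrative only: the service names are taken from the list above, but the mapping keys and `select_service` function are assumptions, not the actual koboto API.

```python
# Hypothetical format-to-service routing table; service names come from the
# feature list above, the keys and function are illustrative assumptions.
INFERENCE_SERVICES = {
    "closed_source": "CSSInferenceService",   # e.g. OpenAI's GPT-4
    "onnx": "ONNXInferenceService",
    "tgi": "TGIInferenceService",             # Hugging Face TGI
    "torch": "TorchInferenceService",         # also covers scikit-learn models
    "hf_client": "HuggingFaceInferenceClientService",
}

def select_service(model_format: str) -> str:
    """Return the service that can serve a model of the given format."""
    try:
        return INFERENCE_SERVICES[model_format]
    except KeyError:
        raise ValueError(f"No inference service for format: {model_format}")
```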
2. Modularity
Koboto v1 lets users leverage inference verifiability and choose whom to request a compute proof from, simply by specifying that party's identity.
Identity state helps us manage services across the network: every participant carries an identity with a flag indicating their type of contribution to the network.
Proof Requirements:
In koboto, some inference requests may require additional verification or proof alongside the computation results.
When an inference request is delivered to a node, the node checks whether any of its containers can provide proofs.
If a proof is required, the coordinator locks tokens from the node's wallet and initiates the verification process using a Verifier contract.
Verification can happen either eagerly (immediately) or lazily (at a later time).
No proof (if execution is all the user cares about): the request is simply executed with no proof attached.
Users can specify in their input parameters which service they want to request the inference proof from.
3. Model Storage
Upload and Download Individual Files: Easily manage single files on the Arweave network.
Upload and Download Repositories: Handle entire directories containing multiple files, ideal for managing grouped artifacts.
Version mapping: Enables versioning for files when uploading/downloading repositories via tags.
CLI Support: Use command-line interface for streamlined operations without writing additional code.
Main Components
File Manager (file_manager.py): Handles the uploading and downloading of individual files.
Repository Manager (repo_manager.py): Manages the uploading and downloading of repositories, including handling manifest files and version mappings.
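A minimal in-memory stand-in shows how these two components could fit together: the repository manager's manifest maps a version tag to per-file identifiers issued by the file manager. The class and method names echo `file_manager.py` and `repo_manager.py` but are assumptions for illustration, not the documented koboto interfaces.

```python
# Toy stand-ins for file_manager.py / repo_manager.py; method names are
# illustrative assumptions, and storage is an in-memory dict rather than Arweave.
class FileManager:
    """Tracks uploads by an Arweave-style transaction id."""
    def __init__(self):
        self._store = {}

    def upload(self, path: str, data: bytes) -> str:
        tx_id = f"ar-{len(self._store)}"
        self._store[tx_id] = (path, data)
        return tx_id

    def download(self, tx_id: str) -> bytes:
        return self._store[tx_id][1]

class RepoManager:
    """A manifest maps version tags to {relative_path: tx_id} for a repo."""
    def __init__(self, files: FileManager):
        self.files = files
        self.manifest = {}

    def upload_repo(self, tag: str, repo: dict) -> None:
        self.manifest[tag] = {p: self.files.upload(p, d) for p, d in repo.items()}
```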
4. Payment Type
#. Subscription (pay in one go; parameters managed by the domain coordinator)
#. One-time request (varies based on the compute consumed by the inference)
#. APY-based request (charged based on model consumption)
The payment object is modifiable, allowing consumers to specify how payments are made.
Through these components, consumers can choose to pay for each Subscription and APY-based compute response they receive, optionally restricting payment to nodes that respond with valid proofs of execution.
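A modifiable payment object covering the three types might look like the sketch below. The field and function names are illustrative assumptions; the only behavior taken from the text is that subscriptions are paid in one go while one-time and APY-based requests scale with consumption, optionally gated on valid proofs.

```python
# Hypothetical payment object; names and pricing rule are assumptions.
from dataclasses import dataclass
from enum import Enum

class PaymentType(Enum):
    SUBSCRIPTION = "subscription"   # paid in one go; coordinator sets terms
    ONE_TIME = "one_time"           # varies with compute consumed
    APY = "apy"                     # varies with model consumption

@dataclass
class Payment:
    payment_type: PaymentType
    amount: int
    require_proof: bool = False     # pay only nodes with valid execution proofs

def due(payment: Payment, units_consumed: int = 1) -> int:
    """Subscriptions are flat; the other types scale with consumption."""
    if payment.payment_type is PaymentType.SUBSCRIPTION:
        return payment.amount
    return payment.amount * units_consumed
```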
5. Transaction Simulation
These callback functions perform arbitrary execution (for example, where a succinct proof is applicable, proof validation would occur within the callback). For this reason, it is crucial for koboto nodes to simulate transactions before they are broadcast on-chain, to prevent transaction failures caused by failing developer callback functions.
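The simulate-before-broadcast pattern reduces to a dry run guarded by a try/except. Against a real chain this would typically be an `eth_call` against the callback contract; here the callback is a plain Python function, and both function names are illustrative assumptions.

```python
# Sketch of pre-broadcast simulation; a failing callback never reaches the chain.
def simulate_then_broadcast(callback, result, broadcast) -> bool:
    """Dry-run the developer's callback; broadcast only if it succeeds."""
    try:
        callback(result)        # simulation: would this revert on-chain?
    except Exception:
        return False            # yes: refuse to broadcast, no gas wasted
    broadcast(result)           # no: safe to submit the transaction
    return True
```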
6. Transaction Type
#. Off-chain Jobs (aka web2 requests): These jobs are processed off-chain, and the result is also delivered off-chain.
#. On-chain Subscriptions (aka web3 requests): Using the koboto SDK (in future), users can request compute from their smart contracts. The compute is processed off-chain and the result is delivered back to their smart contract (on-chain).
#. Delegated Subscriptions (aka web2-to-web3 requests): These are useful when a user wants to request compute via an HTTP request and have the result delivered to their smart contract.
*Delegated subscriptions are part of koboto's future roadmap.
* For signing messages off-chain for consumption by on-chain contracts, koboto developers can leverage EIP-712 (Typed structured data hashing and signing): https://eips.ethereum.org/EIPS/eip-712
See also this deployed example: https://etherscan.io/address/0xa612fca4652ef94ae3d0e0aefedb1932c5f1b61d#code
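An EIP-712 payload for a delegated subscription could take the shape below. The `types`/`domain`/`primaryType`/`message` envelope follows the EIP-712 specification, but the `DelegateSubscription` type and its fields are hypothetical, invented for illustration rather than taken from koboto.

```python
# EIP-712 typed-data envelope; the DelegateSubscription type and its
# fields are hypothetical, only the envelope shape follows the spec.
typed_data = {
    "types": {
        "EIP712Domain": [
            {"name": "name", "type": "string"},
            {"name": "version", "type": "string"},
            {"name": "chainId", "type": "uint256"},
            {"name": "verifyingContract", "type": "address"},
        ],
        "DelegateSubscription": [   # hypothetical message type
            {"name": "owner", "type": "address"},
            {"name": "nonce", "type": "uint32"},
        ],
    },
    "primaryType": "DelegateSubscription",
    "domain": {
        "name": "koboto",
        "version": "1",
        "chainId": 1,
        "verifyingContract": "0x0000000000000000000000000000000000000000",
    },
    "message": {
        "owner": "0x0000000000000000000000000000000000000000",
        "nonce": 0,
    },
}
```

A wallet or signing library hashes this structure per EIP-712 and signs it off-chain; the on-chain contract recovers the signer from the same typed hash.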
7. Dynamic Messaging Protocol
We use the Noise protocol framework for dynamic connections across multiple agents in order to achieve the desired inference. Agents initially form groups using a handshake protocol. Through the various handshake patterns, encryption options, and key-exchange methods, they exchange cryptographic keys, establish trust, and define their roles within the group. Once grouped, agents collaborate to achieve a specific inference task. The Noise protocol provides security, privacy, and flexibility: when the current goal is achieved or requirements change, agents can break their existing connections, then reconfigure by forming new groups with different agents or reusing existing ones.
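The group lifecycle described above (form, collaborate, dissolve, regroup) can be sketched without the cryptography. In a real deployment the `handshake` step would run an actual Noise handshake pattern for key exchange; here it only records membership, and all names are illustrative assumptions.

```python
# Toy group lifecycle; a real system would run a Noise handshake here.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.group = None   # current group membership, if any

def handshake(agents: list) -> set:
    """Form a group: every agent records the same membership set.
    (Stands in for a Noise handshake that would exchange keys here.)"""
    group = set(agents)
    for a in agents:
        a.group = group
    return group

def dissolve(group: set) -> None:
    """Break connections once the inference goal is met or changes,
    freeing agents to regroup with different peers."""
    for a in group:
        a.group = None
```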
8. Datasets
Nodes can train on data on edge or cloud devices and store their Crypto-AI-specific models in the model storage.
(In future we plan to implement cold inference for faster responses in a shorter time. *Subject to future roadmap.)