NEW MODELS AVAILABLE: In a recent governance vote, miners decided to add two new models. OpenAI GPT-OSS-120b and Qwen3-235B-A22B-Thinking-2507 are now available via the API.


22 AUG 2025

Following a governance vote, the Gonka chain now supports new models.

The following models are now supported on the Gonka chain:

Qwen/Qwen3-32B-FP8 (80GB VRAM)

Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 (320GB VRAM)

Community consensus expands the network's model set while preserving verifiability and stability.

22 AUG 2025

Announcing the Launch of Gonka Decentralized AI Network!

As of Friday, August 22, 2025, the protocol is fully operational. This marks a significant step forward in creating an AI infrastructure built on the principles of open access, verifiable performance, and user ownership. This post provides a comprehensive overview of the live protocol.

Foundational Principles:

Self-Sovereign Interaction: You maintain full control over your digital assets. All on-chain actions are authorized by you, using your private key to sign transactions. This method provides cryptographic proof of ownership without ever exposing the key itself.
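The flow above can be sketched as: serialize the transaction deterministically, hash it, and sign the digest with your private key. As a minimal illustration, HMAC-SHA256 stands in below for the chain's real asymmetric signature scheme (which the source does not specify); the point is that only the signature, never the key, leaves your machine.

```python
import hashlib
import hmac
import json

def sign_transaction(tx: dict, private_key: bytes) -> str:
    """Serialize deterministically, hash, and sign the digest.
    HMAC-SHA256 is a stand-in for the chain's actual signature scheme."""
    payload = json.dumps(tx, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(payload).digest()
    return hmac.new(private_key, digest, hashlib.sha256).hexdigest()

key = b"example-private-key"  # illustrative only; never hard-code a real key
tx = {"action": "inference_request", "nonce": 1}
signature = sign_transaction(tx, key)
```

Deterministic serialization (sorted keys, fixed separators) matters: every node must derive the same digest from the same transaction for the signature to verify.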

Autonomous Governance: The protocol is designed to be operator-free and self-governing. Protocol upgrades, economic parameters, and system configurations are managed by on-chain consensus, ensuring a stable and community-guided system.

Unified Security Model: The protocol's security and execution are integrated. The same network of nodes that secures the ledger via our non-wasteful Proof-of-Work also validates all AI inference outputs, creating a single, trust-based framework.

For Hosts:

The network is ready to utilize your hardware's computational power.

Permissionless Onboarding: You can connect any compatible GPU and begin serving inference without requiring approval.

Rewards for Useful Work: Earn our new native coin in direct proportion to the AI inference you successfully deliver.

Verifiable Performance: Our Sprint-based benchmarking system measures your GPU's actual computational capacity on AI tasks. This proven performance, cross-validated by the network, determines your consensus weight and ensures fair rewards.
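"Fair rewards in proportion to proven performance" reduces to a simple normalization: each node's consensus weight is its cross-validated benchmark score as a share of the network total. A minimal sketch (node names and scores are hypothetical):

```python
def consensus_weights(benchmarks: dict) -> dict:
    """Weight each node by its proven benchmark score's share of the total."""
    total = sum(benchmarks.values())
    return {node: score / total for node, score in benchmarks.items()}

weights = consensus_weights({"node-a": 300.0, "node-b": 100.0})
# node-a proves 3x the compute of node-b, so it carries 3x the weight
```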

Efficient & Respectful Operation: Our low-energy Proof-of-Work uses brief competitions to secure the network, preserving your hardware for profitable inference. The protocol allocates your compute only when required.

Guaranteed Service Uptime: An intelligent, timeslot-based scheduling system ensures a significant portion of the network is always reserved for serving paid inference tasks.
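Conceptually, timeslot-based scheduling means each epoch is divided into slots, with a fixed share reserved for paid inference and the rest left for PoW sprints and validation. The 80% fraction below is an illustrative assumption, not a protocol parameter from the source:

```python
def schedule_epoch(num_slots: int, inference_fraction: float = 0.8) -> list:
    """Reserve a fixed share of an epoch's timeslots for paid inference;
    the remainder handles PoW sprints and validation work.
    (inference_fraction = 0.8 is an assumed, illustrative value.)"""
    reserved = round(num_slots * inference_fraction)
    return ["inference"] * reserved + ["pow/validation"] * (num_slots - reserved)

slots = schedule_epoch(10)
```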

For Developers:

Build the next generation of AI on a decentralized, high-performance backend.

OpenAI-Compatible API: Integrate with the network using a familiar API surface. The end-to-end developer experience is supported by a complete portal with documentation, tutorials, and SDKs.
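"OpenAI-compatible" means requests use the same JSON shape as the OpenAI Chat Completions API, pointed at a Gonka node instead. A minimal sketch of building such a request body (the model name is one of the supported models above; the node URL you POST to comes from your provider and is not specified here):

```python
import json

def build_chat_request(model: str, messages: list) -> bytes:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return json.dumps({"model": model, "messages": messages}).encode()

body = build_chat_request(
    "Qwen/Qwen3-32B-FP8",
    [{"role": "user", "content": "Hello, Gonka!"}],
)
# POST this body to your node's /v1/chat/completions endpoint;
# existing OpenAI SDKs work too, by overriding their base URL.
```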

Live Decentralized Inference: Run jobs on open models, starting with QwQ 32B and Qwen 7B, executed on a distributed network of real GPUs.

Optimized Performance: The protocol features intelligent load-balancing based on proven node capacity and achieves reliable result verification with minimal overhead (a ~5–10% check rate).
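A ~5–10% check rate means only a random sample of inference results is re-executed for cross-validation, keeping overhead low while making sustained cheating statistically detectable. A minimal sketch of that sampling step (7.5% below is simply the midpoint of the stated range, not a protocol constant):

```python
import random

def select_for_verification(results, check_rate=0.075, seed=None):
    """Sample roughly `check_rate` of results for cross-validation.
    check_rate=0.075 is the midpoint of the ~5-10% range, chosen for illustration."""
    rng = random.Random(seed)
    return [r for r in results if rng.random() < check_rate]

sampled = select_for_verification(list(range(10_000)), seed=0)
# roughly 750 of 10,000 results get re-checked
```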

Stable & Governable Economics: The network launches with stable, empirically-tuned economic parameters. These are fully governable by on-chain community consensus.

Protocol Architecture and Economics:

The network is a self-sustaining ecosystem with a complete economic design. This includes payments for inference, workload-based rewards, validator incentives, a staking-backed collateral system, penalties for dishonest behavior, and a dynamic reputation score.

Furthermore, the protocol is designed for continued improvement: 20% of all inference revenue is dedicated to funding future AI model training, with the on-chain primitives for a decentralized training MVP already in place.
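The 20% training allocation is a straightforward revenue split, sketched here (the function name and return shape are illustrative, not protocol API):

```python
def split_revenue(inference_revenue: float, training_share: float = 0.20):
    """Divide inference revenue between the training fund (20% per the
    protocol design) and the rest of the protocol's economics."""
    training_fund = inference_revenue * training_share
    return training_fund, inference_revenue - training_fund

fund, remainder = split_revenue(1000.0)
# → (200.0, 800.0): 20% is earmarked for future AI model training
```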