
Zero-knowledge proofs (ZK) — as a next-generation cryptographic and scalability infrastructure — are demonstrating immense potential across blockchain scaling, privacy-preserving computation, zkML, and cross-chain verification. However, proof generation is extremely compute-intensive and latency-heavy, forming the biggest bottleneck to industrial adoption. ZK hardware acceleration has therefore emerged as a core enabler. Within this landscape, GPUs excel in versatility and iteration speed, ASICs pursue ultimate efficiency and large-scale performance, while FPGAs serve as a flexible middle ground combining programmability with energy efficiency. Together, they form the hardware foundation powering ZK's real-world adoption.
GPU, FPGA, and ASIC represent the three mainstream paths of hardware acceleration:
GPU (Graphics Processing Unit): A general-purpose parallel processor, originally designed for graphics rendering but now widely used in AI, ZK, and scientific computing.
FPGA (Field Programmable Gate Array): A reconfigurable hardware circuit that can be repeatedly configured at the logic-gate level “like LEGO blocks,” bridging between general-purpose processors and specialized circuits.
ASIC (Application-Specific Integrated Circuit): A dedicated chip customized for a specific task. Once fabricated, its function is fixed — offering the highest performance and efficiency but the least flexibility.
GPUs have become the backbone of both AI and ZK computation.
In AI, GPUs’ parallel architecture and mature software ecosystem (CUDA, PyTorch, TensorFlow) make them nearly irreplaceable — the long-term mainstream choice for both training and inference.
In ZK, GPUs currently offer the best trade-off between cost and availability, but their performance on big-integer modular arithmetic, MSM, and FFT/NTT operations is limited by memory and bandwidth constraints. Their energy efficiency and economics at scale remain insufficient, suggesting the eventual need for more specialized hardware.
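To make that bottleneck concrete, the sketch below shows a naive multi-scalar multiplication (MSM), typically the single largest cost in proof generation. It is a toy Python illustration in which a small additive group modulo a prime stands in for elliptic-curve points; real provers work over ~256-bit curve points with millions of terms and use bucket methods such as Pippenger's algorithm on GPUs, which is exactly where memory and bandwidth limits bite.

```python
# Toy illustration of multi-scalar multiplication (MSM): result = sum_i k_i * P_i.
# A tiny additive group mod p stands in for elliptic-curve points so the code runs
# anywhere; this is an educational sketch, not any project's actual kernel.

P = 0xFFFFFFFB  # toy modulus standing in for a curve group's order

def group_add(a: int, b: int) -> int:
    """Placeholder for elliptic-curve point addition."""
    return (a + b) % P

def scalar_mul(k: int, point: int) -> int:
    """Double-and-add: O(bit-length of k) group operations per term."""
    acc, base = 0, point
    while k:
        if k & 1:
            acc = group_add(acc, base)
        base = group_add(base, base)   # "doubling"
        k >>= 1
    return acc

def naive_msm(scalars, points):
    """sum_i k_i * P_i: terms are independent (why GPUs help), but every term
    streams its point from memory (why bandwidth becomes the limit)."""
    acc = 0
    for k, pt in zip(scalars, points):
        acc = group_add(acc, scalar_mul(k, pt))
    return acc

if __name__ == "__main__":
    print(naive_msm([3, 5, 7], [11, 22, 33]))  # 3*11 + 5*22 + 7*33 mod P
```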
Paradigm’s 2022 investment thesis highlighted FPGA as the “sweet spot” balancing flexibility, efficiency, and cost. Indeed, FPGAs are programmable, reusable, and quick to prototype, suitable for rapid algorithm iteration, low-latency environments (e.g., high-frequency trading, 5G base stations), edge computing under power constraints, and secure cryptographic tasks.
However, FPGAs lag behind GPUs and ASICs in raw performance and scale economics. Strategically, they are best suited as development and iteration platforms before algorithm standardization, or for niche verticals requiring long-term customization.
ASICs are already dominant in crypto mining (e.g., Bitcoin’s SHA-256, Litecoin/Dogecoin’s Scrypt). By hardwiring algorithms directly into silicon, ASICs achieve orders of magnitude better performance and energy efficiency — becoming the exclusive infrastructure for mining.
In ZK proving (e.g., Cysic) and AI inference (e.g., Google TPU, Cambricon), ASICs show similar potential. Yet, in ZK, algorithmic diversity and operator variability have delayed standardization and large-scale demand. Once standards solidify, ASICs could redefine ZK compute infrastructure — delivering 10–100× improvements in performance and efficiency with minimal marginal cost post-production.
In AI, where training workloads evolve rapidly and rely on dynamic matrix operations, GPUs will remain the mainstream for training. Still, ASICs will hold irreplaceable value in fixed-task, large-scale inference scenarios.
Dimension Comparison: GPU vs FPGA vs ASIC
| Dimension | GPU | FPGA | ASIC |
|---|---|---|---|
| Performance / Cost (Perf/$) | Strong: boosted by AI and gaming economies of scale; consumer and enterprise GPUs (RTX / A / H series) offer a high cost-performance ratio. | Average: typically lower throughput than GPUs at the same price level. | Best: lowest amortized cost after mass production; dominates long-term cost efficiency. |
| Performance / Power (Perf/W) | Moderate: relatively high power consumption under ZK workloads. | Moderate–Good: certain designs outperform GPUs. | Best: custom-designed for MSM/FFT/hash operations with leading energy efficiency. |
| Flexibility | Highest: rapidly adaptable to Plonky2, Halo2, HyperPlonk, etc. | High: reconfigurable, but requires RTL/HDL expertise. | Lowest: logic is hardcoded; needs abstract ISA layers to support multiple proving systems. |
| Deployment Cycle | Fastest: available off the shelf with a mature CUDA ecosystem. | Medium: weeks to months from board design to stable deployment. | Slowest: 12–18-month fabrication cycle. |
| Scalability | Limited: constrained by PCIe interface and chassis form factor. | Strong: supports custom interconnects and pipelining. | Excellent: can be fully customized for workload and topology. |
| Ecosystem & Tools | Most mature: rich CUDA, cuFFT, and MSM libraries, and a strong developer community. | Niche: limited toolchain maturity and talent availability. | Early-stage: requires an in-house software stack; highly stable once mature. |
| Best Use Cases | Production-grade ZK provers, rapid iteration, decentralized GPU networks. | Algorithm validation, prototyping, ultra-low-latency or custom interconnect scenarios. | Large-scale zkML, recursive proving, and long-term infrastructure. |
| Key Risks | Rising energy and rack-space costs. | Talent scarcity, high per-board cost, weak economies of scale. | Algorithm changes, high upfront capital, and long payback cycles. |
In the evolution of ZK hardware acceleration, GPUs are currently the optimal solution — balancing cost, accessibility, and development efficiency, making them ideal for rapid deployment and iteration. FPGAs serve more as specialized tools, valuable in ultra-low-latency, small-scale interconnect, and prototyping scenarios, but unable to compete with GPUs in economic efficiency.
In the long term, as ZK standards stabilize, ASICs will emerge as the industry’s core infrastructure, leveraging unmatched performance-per-cost and energy efficiency.
Overall trajectory:
Short term – rely on GPUs to capture market share and generate revenue;
Mid term – use FPGAs for verification and interconnect optimization;
Long term – bet on ASICs to build a sustainable compute moat.
Cysic’s core strength lies in hardware acceleration for zero-knowledge proofs (ZK).
In the representative paper “ZK Hardware Acceleration: The Past, the Present and the Future,” the team highlights that GPUs offer flexibility and cost efficiency, while ASICs outperform in energy efficiency and peak performance—but require trade-offs between development cost and programmability.
Cysic adopts a dual-track strategy — combining ASIC innovation with GPU acceleration — driving ZK from “verifiable” to “real-time usable” through a full-stack approach from custom chips to general SDKs.
Cysic’s self-developed C1 chip is built on a zkVM-based architecture, featuring high bandwidth and flexible programmability.
Based on this, Cysic plans to launch two hardware products:
ZK Air: a portable accelerator roughly the size of an iPad charger, plug-and-play, designed for lightweight verification and developer use;
ZK Pro: a high-performance system integrating the C1 chip with front-end acceleration modules, targeting large-scale zkRollup and zkML workloads.
Cysic’s research directly supports its ASIC roadmap.
The team introduced Hypercube IR, a ZK-specific intermediate representation that abstracts proof circuits into standardized parallel patterns—reducing the difficulty of cross-hardware migration. It explicitly preserves modular arithmetic and memory access patterns in circuit logic, enabling better hardware recognition and optimization.
In Million Keccak/s experiments, a single C1 chip achieved ~1.31M Keccak proofs per second (~13× acceleration), demonstrating the throughput and energy-efficiency potential of specialized hardware.
In HyperPlonk hardware analysis, the team showed that MSM/MLE operations parallelize well, while Sumcheck remains a bottleneck.
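For context on why Sumcheck resists acceleration more than MSM/MLE, the toy Python below sketches the prover rounds of the standard sumcheck protocol (a textbook illustration, not Cysic's implementation): within a round, every folded table entry can be computed independently, which is GPU-friendly, but the rounds themselves are strictly sequential because each depends on the verifier's previous challenge.

```python
# Toy sumcheck over a multilinear polynomial given by its evaluations on the
# boolean hypercube {0,1}^n, with arithmetic mod a small prime. Educational
# sketch of the standard protocol only: per-round folds are data-parallel,
# but rounds are sequential because each uses the previous challenge.
import random

P = 2**31 - 1  # toy prime field

def sumcheck(evals):
    """evals: 2^n values of f on {0,1}^n; the variable bound each round is the
    most significant index bit."""
    claimed = sum(evals) % P
    table = list(evals)
    while len(table) > 1:
        half = len(table) // 2
        g0 = sum(table[:half]) % P        # round polynomial g(0): variable fixed to 0
        g1 = sum(table[half:]) % P        # g(1): variable fixed to 1
        assert (g0 + g1) % P == claimed   # verifier's per-round consistency check
        r = random.randrange(P)           # verifier challenge (Fiat-Shamir in practice)
        # Fold the table at r: each entry is independent -> parallelizes well.
        table = [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]
        claimed = ((1 - r) * g0 + r * g1) % P
    # A real protocol ends by checking f at the random point via an oracle/commitment.
    return table[0], claimed

if __name__ == "__main__":
    f = [random.randrange(P) for _ in range(2**4)]  # n = 4 variables
    final_eval, final_claim = sumcheck(f)
    assert final_eval == final_claim
```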
Overall, Cysic is developing a holistic methodology across compiler abstraction, hardware verification, and protocol adaptation, laying a strong foundation for productization.
On the GPU side, Cysic is advancing both a general-purpose acceleration SDK and a full ZKPoG (Zero-Knowledge Proof on GPU) stack:
General GPU SDK: built on Cysic’s custom CUDA framework, compatible with Plonky2, Halo2, Gnark, Rapidsnark, and other backends. It surpasses existing open-source frameworks in performance, supports multiple GPU models, and emphasizes compatibility and ease of use.
ZKPoG: developed in collaboration with Tsinghua University, it is the first end-to-end GPU stack covering the entire proof flow—from witness generation to polynomial computation. On consumer-grade GPUs, it achieves up to 52× speedup (average 22.8×) and expands circuit scale by 1.6×, verified across SHA256, ECDSA, and MVM applications.
| Dimension | ASIC Path (Cysic C1 / ZK Air / ZK Pro) | GPU Path (General SDK + ZKPoG) |
|---|---|---|
| Positioning | Customized extreme performance for large-scale ZKP workloads | General-purpose acceleration compatible with mainstream proving systems |
| Features | C1 chip built on a zkVM architecture; Hypercube IR optimizes circuit logic; ~13× acceleration per chip, supporting real-time proofs | Custom CUDA SDK supporting Plonky2 / Halo2 backends; ZKPoG enables a full GPU pipeline (witness → polynomial computation); 22.8× average speedup over CPU (up to 52×) |
| Product Form | ZK Air (portable accelerator); ZK Pro (high-performance system) | General GPU SDK; ZKPoG end-to-end stack |
| Advantages | Ultimate efficiency, hardware-friendly, specialized optimization | High flexibility, rapid iteration, low development barrier |
| Limitations | Long development cycles and high upfront tape-out costs; function fixed once fabricated; dependent on proving-system standardization | Lower energy efficiency than ASICs; VRAM bottlenecks limit scalability; performance varies by GPU; more competitive landscape |
| Use Cases | Long-term stable, high-throughput workloads: zkRollup mainnets, large-scale zkML, recursive proofs | R&D and flexibility-driven use: testing new ZK systems, cross-chain verification, small-scale zkML inference, identity authentication |
Cysic’s key differentiator lies in its hardware–software co-design philosophy.
Its in-house ZK ASICs, GPU clusters, and portable mining devices together form a full-stack compute infrastructure, enabling deep integration from the chip layer to the protocol layer. By leveraging the complementarity between ASICs’ extreme energy efficiency and scalability and GPUs’ flexibility and rapid iteration, Cysic has positioned itself as a leading ZKP hardware provider for high-intensity proof workloads — and is now extending this foundation toward the financialization of ZK hardware (ComputeFi) as its next industrial phase.
On September 24, 2025, the Cysic team released the Cysic Network Whitepaper.
The project centers on ComputeFi, financializing GPU, ASIC, and mining hardware into programmable, verifiable, and tradable computational assets. Built with the Cosmos SDK, Proof-of-Compute (PoC) consensus, and an EVM execution layer, Cysic Network establishes a decentralized "task-matching + multi-verification" marketplace supporting ZK proving, AI inference, mining, and HPC workloads.
By vertically integrating self-developed ZK ASICs, GPU clusters, and portable miners, and powered by a dual-token model ($CYS / $CGT), Cysic aims to unlock real-world compute liquidity — filling a key gap in Web3 infrastructure: verifiable compute power.
Cysic Network adopts a bottom-up four-layer modular architecture, enabling cross-domain expansion and verifiable collaboration:
Hardware Layer:
Comprising CPUs, GPUs, FPGAs, ASIC miners, and portable devices — forming the network’s computational foundation.
Consensus Layer:
Built on the Cosmos SDK, using a modified CometBFT + Proof-of-Compute (PoC) mechanism that integrates token staking and compute staking into validator weighting, ensuring both computational and economic security (a hypothetical sketch of such weighting follows this list).
Execution Layer:
Handles task scheduling, workload routing, bridging, and voting, with EVM-compatible smart contracts enabling programmable, multi-domain computation.
Product Layer:
Serves as the application interface — integrating ZK proof markets, AI inference frameworks, crypto mining, and HPC modules, while supporting new task types and verification methods.
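The whitepaper summary above says validator weight blends token staking with compute staking but does not give the formula, so the snippet below is a purely hypothetical weighted-sum form to make the idea concrete; the coefficients, normalization, and function name are assumptions, not the actual PoC specification.

```python
# Hypothetical illustration of Proof-of-Compute validator weighting.
# The real formula is not published in this article; alpha/beta and the
# normalization below are assumptions used only to explain the concept.

def validator_weight(token_stake: float, compute_score: float,
                     total_stake: float, total_compute: float,
                     alpha: float = 0.5, beta: float = 0.5) -> float:
    """Blend economic stake and attested compute into a single voting weight."""
    stake_share = token_stake / total_stake if total_stake else 0.0
    compute_share = compute_score / total_compute if total_compute else 0.0
    return alpha * stake_share + beta * compute_share

# Example: 2% of staked tokens and 5% of attested compute give ~3.5% weight
# under this (assumed) 50/50 blend.
print(validator_weight(2_000, 5_000, 100_000, 100_000))  # ~0.035
```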

Zero-knowledge proofs allow computation to be verified without revealing underlying data — but generating these proofs is time- and cost-intensive. Cysic Network enhances efficiency through decentralized Provers + GPU/ASIC acceleration, while off-chain verification and on-chain aggregation reduce latency and verification costs on Ethereum.
Workflow: ZK projects publish proof tasks via smart contracts → decentralized Provers compete to generate proofs → Verifiers perform multi-party validation → results are settled via on-chain contracts.
By combining hardware acceleration with decentralized orchestration, Cysic builds a scalable Proof Layer that underpins ZK Rollups, zkML, and cross-chain applications.
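As a rough mental model of that workflow (publish → prove → verify → settle), the Python sketch below tracks the task lifecycle with plain data classes; the states, field names, and quorum value are illustrative assumptions, not Cysic's actual contract interfaces.

```python
# Illustrative model of the proof-task lifecycle described above.
# States and fields are assumptions for explanation only; they do not mirror
# Cysic's on-chain contracts.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class TaskState(Enum):
    PUBLISHED = auto()   # ZK project posts the task with a bounty
    PROVING = auto()     # a prover has claimed it and is generating the proof
    VERIFYING = auto()   # verifiers run multi-party validation of the proof
    SETTLED = auto()     # reward released on-chain to the prover

@dataclass
class ProofTask:
    circuit_id: str
    bounty_cys: float
    state: TaskState = TaskState.PUBLISHED
    prover: Optional[str] = None
    approvals: int = 0

    def claim(self, prover_id: str):
        self.prover, self.state = prover_id, TaskState.PROVING

    def submit_proof(self):
        self.state = TaskState.VERIFYING

    def record_approval(self, quorum: int = 3):
        self.approvals += 1
        if self.approvals >= quorum:      # settle once enough verifiers agree
            self.state = TaskState.SETTLED

task = ProofTask(circuit_id="eth-block-example", bounty_cys=12.5)
task.claim("prover-gpu-cluster-01")
task.submit_proof()
for _ in range(3):
    task.record_approval()
assert task.state is TaskState.SETTLED
```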

Within the network, Prover nodes are responsible for heavy-duty computation.
Users can contribute their own compute resources or purchase Digital Harvester devices to perform proof tasks and earn $CYS / $CGT rewards. A Multiplier factor boosts task acquisition speed. Each node must stake 10 CYS as collateral, which may be slashed for misconduct.
Currently, the main task is ETHProof Prover — generating ZK proofs for Ethereum mainnet blocks, advancing the base layer’s ZK scalability.
Provers thus form the computational and security backbone of the Cysic Network, also providing trusted compute power for future AI inference and AgentFi applications.
Complementing Provers, Verifier nodes handle lightweight proof verification to enhance network security and scalability.
Users can run Verifiers on a PC, server, or official Android app, with the Multiplier also boosting task efficiency and rewards.
The participation barrier is much lower — requiring only 0.5 CYS as collateral. Verifiers can join or exit freely, making participation accessible and flexible.
This low-cost, light-participation model expands Cysic’s reach to mobile and general users, strengthening decentralization and trustworthy verification across the network.
| Dimension | Prover Node | Verifier Node |
|---|---|---|
| Role | High-intensity computation; generates Ethereum block proofs; forms the network's execution and security layer | Lightweight validation of Prover outputs; enhances network scalability and reliability |
| Hardware Requirements | High-performance GPU / ASIC servers | PC, server, or Android device |
| Staking Requirement | 10 CYS | 0.5 CYS |
| Incentive Model | CYS/CGT rewards + Multiplier speed boost; higher-capacity nodes earn more | CYS/CGT rewards + Multiplier; lower returns, designed for mass participation |
| Network Scale (as of Oct 2025) | ~42,000 nodes | 100,000+ nodes |
| Participation Traits | High barrier; limited to stable, long-term compute contributors | Low barrier; broad participation; lightweight verification tasks |
Network Status and Outlook
As of October 15, 2025, the Cysic Network has reached a significant early milestone:
≈42,000 Prover nodes and 100,000+ Verifier nodes
≈91,000 total tasks completed
≈700,000 $CYS/$CGT distributed as rewards
However, despite the impressive node count, activity and compute contribution remain uneven due to entry and hardware differences. Currently, the network is integrated with three external projects, marking the beginning of its ecosystem. Whether Cysic can evolve into a stable compute marketplace and core ComputeFi infrastructure will depend on further real-world integrations and partnerships in the coming phases.
Cysic AI’s business framework follows a three-tier structure — Product, Application, and Strategy. At the base, Serverless Inference offers standardized APIs to lower the barrier to AI model access; in the middle, the Agent Marketplace explores on-chain applications of AI Agents and autonomous collaboration; at the top, Verifiable AI integrates ZKP + GPU acceleration to enable trusted inference, representing the long-term vision of ComputeFi.
Cysic AI provides instant-access, pay-as-you-go inference services, allowing users to call large language models via APIs without managing or maintaining compute clusters.
This serverless design achieves low-cost and flexible intelligent integration for both developers and enterprises.
Currently supported models include:
Meta-Llama-3-8B-Instruct (task & dialogue optimization)
QwQ-32B (reasoning-enhanced)
Phi-4 (lightweight instruction model)
Llama-Guard-3-8B (content safety review)
These cover diverse needs — from general conversation and logical reasoning to compliance auditing and edge deployment.
The service balances cost and efficiency, supporting both rapid prototyping for developers and large-scale inference for enterprises, forming a foundational layer in Cysic’s trusted AI infrastructure.

The Cysic Agent Marketplace functions as a decentralized platform for AI Agent applications. Users can simply connect their Phantom wallet, complete verification, and interact with various Agents — payments are handled automatically through Solana USDC.
Currently, the platform integrates three core agents:
X Trends Agent — analyzes real-time X (Twitter) trends and generates creative MEME coin concepts.
Logo Generator Agent — instantly creates custom project logos from user descriptions.
Publisher Agent — deploys MEME coins on the Solana network (e.g., via Pump.fun) with one click.

Technically, the marketplace leverages the Agent Swarm Framework to coordinate multiple autonomous agents into collaborative task groups (Swarms), enabling division of labor, parallelism, and fault tolerance.
Economically, it employs the Agent-to-Agent Protocol, achieving on-chain payments and automated incentives where users pay only for successful actions.
Together, these features form a complete on-chain loop — trend analysis → content generation → deployment, demonstrating how AI Agents can be financialized and integrated within the ComputeFi ecosystem.
A core challenge in AI inference is trust — how to mathematically guarantee that an inference result is correct without exposing inputs or model weights.
Verifiable AI addresses this through zero-knowledge proofs (ZKPs), ensuring cryptographic assurance over model outputs.
However, traditional ZKML proof generation is too slow for real-time use.
Cysic solves this via GPU hardware acceleration, introducing three key technical innovations:
Parallelized Sumcheck Protocol:
Breaks large polynomial computations into tens of thousands of CUDA threads running in parallel, achieving near-linear speedup relative to GPU core count.
Custom Finite Field Arithmetic Kernels:
Deeply optimized across register allocation, shared memory, and warp-level parallelism to overcome modular-arithmetic memory bottlenecks — keeping GPUs consistently saturated and efficient (a textbook sketch of this kind of arithmetic follows this list).
End-to-End ZKPoG Acceleration Stack:
Covers the full chain — from witness generation to proof creation and verification, compatible with Plonky2 and Halo2 backends.
Benchmarking shows up to 52× speedup over CPUs and ~10× acceleration on CNN-4M models.
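To give a flavor of what such finite-field kernels compute, the snippet below implements textbook Montgomery multiplication in Python over the BN254 scalar field. It mirrors the style of big-integer arithmetic that CUDA kernels perform in registers (replacing division by the modulus with shifts and masks), but it is an educational sketch of the standard algorithm, not Cysic's kernel code.

```python
# Educational sketch of Montgomery modular multiplication over the BN254 scalar
# field -- the kind of big-integer arithmetic GPU finite-field kernels implement
# with register and shared-memory tuning. Textbook algorithm only.

P = 21888242871839275222246405745257275088548364400416034343698204186575808495617
R_BITS = 256
R = 1 << R_BITS
N_PRIME = (-pow(P, -1, R)) % R          # -P^{-1} mod R, precomputed once

def to_mont(x: int) -> int:
    return (x * R) % P                   # enter Montgomery form

def mont_mul(a: int, b: int) -> int:
    """Montgomery product: a*b*R^{-1} mod P, with no division by P."""
    t = a * b
    m = (t * N_PRIME) & (R - 1)          # mod R is just a mask -> cheap in hardware
    u = (t + m * P) >> R_BITS            # exact shift replaces the costly division
    return u - P if u >= P else u

def from_mont(x: int) -> int:
    return mont_mul(x, 1)                # leave Montgomery form

if __name__ == "__main__":
    a, b = 123456789, 987654321
    prod = from_mont(mont_mul(to_mont(a), to_mont(b)))
    assert prod == (a * b) % P           # matches plain modular multiplication
```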
Through this optimization suite, Cysic advances verifiable inference from being “theoretically possible but impractically slow” to “real-time deployable.”
This dramatically reduces latency and cost, making Verifiable AI viable for the first time in real-world, latency-sensitive applications.
The platform supports PyTorch and TensorFlow — developers can simply wrap their model in a VerifiableModule to receive both inference results and corresponding cryptographic proofs without changing existing code.
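Since the exact SDK surface is not documented in this article, the snippet below only sketches what such wrapping could look like in PyTorch; the `VerifiableModule` name comes from the description above, but its constructor arguments, the `(output, proof)` return shape, and the `verify` call are assumptions for illustration, not a published API.

```python
# Hypothetical usage sketch of wrapping a PyTorch model for verifiable inference.
# The VerifiableModule interface shown in the comments is assumed, not official.
import torch
import torch.nn as nn
# from cysic_verifiable_ai import VerifiableModule   # hypothetical import

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()   # existing model code stays unchanged

# Assumed wrapping step: the wrapper adds proof generation around inference.
# verifiable = VerifiableModule(model, backend="plonky2")   # hypothetical
# logits, proof = verifiable(torch.randn(1, 16))            # inference + ZK proof
# assert verifiable.verify(proof, logits)                   # anyone can check it
```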
On its roadmap, Cysic plans to extend support to CNN, Transformer, Llama, and DeepSeek models, release real-time demos for facial recognition and object detection, and open-source code, documentation, and case studies to foster community collaboration.
| Layer | Module | Core Function | Engineering Difficulty | Business Value |
|---|---|---|---|---|
| Standard Product | Serverless Inference | Standardized cloud inference APIs integrating mainstream open models; lowers developer entry barriers | ⭐⭐ (Moderate — compute scheduling cost) | Foundational access point; meets rapid scaling needs; limited differentiation |
| Experimental Application | Agent Marketplace | Decentralized AI agent market; connects trend analysis → logo generation → on-chain publishing | ⭐⭐ (Low–moderate — model & payment integration) | Application experiment; showcases AgentFi and on-chain payment fusion |
| Strategic Capability | Verifiable AI | ZKP + GPU acceleration enabling real-time verifiable inference | ⭐⭐⭐ (High — real-time ZK proof generation) | Strategic pillar; provides trusted compute power and long-term moat |
Cysic AI’s three-layer roadmap forms a bottom-up evolution logic:
Serverless Inference solves “can it be used”,
Agent Marketplace answers “can it be applied”,
Verifiable AI ensures “can it be trusted.”
The first two serve as transitional and experimental stages, while the true strategic differentiation lies in Verifiable AI — where Cysic integrates ZK hardware acceleration and decentralized compute networks to establish its long-term competitive edge within the ComputeFi ecosystem.
Cysic Network introduces the “Digital Compute Cube” Node NFT, which tokenizes high-performance compute assets such as GPUs and ASICs, creating a ComputeFi gateway accessible to mainstream users. Each NFT functions as a verifiable node license, simultaneously representing yield rights, governance rights, and participation rights.
Users can delegate or proxy participation in ZK proving, AI inference, and mining tasks — without owning physical hardware — and earn $CYS rewards directly.
| Tier | Name | Price (USDC) | Supply (Units) | $CYS Allocation per NFT |
|---|---|---|---|---|
| Tier 1 | Tesseract | 69 | 5,000 | 350 CYS / NFT |
| Tier 2 | Monolith | 99 | 7,000 | 450 CYS / NFT |
| Tier 3 | Allspark | 139 | 8,000 | 600 CYS / NFT |
| Tier 4 | MotherBox | 189 | 9,000 | 750 CYS / NFT |
The total NFT supply is 29,000 units, with approximately 16.45 million CYS distributed (1.65% of total supply, within the community allocation cap of 9%).
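The 16.45 million figure follows directly from the tier table: 5,000 × 350 + 7,000 × 450 + 8,000 × 600 + 9,000 × 750 = 1.75M + 3.15M + 4.8M + 6.75M = 16.45M CYS, which at 1.65% of supply implies a total supply on the order of 1 billion CYS.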
Vesting: 50% unlocked at TGE + 50% linearly over six months.
Beyond fixed token allocations, holders enjoy Multiplier boosts (up to 1.2×), priority access to compute tasks, and governance weight.
Public sales have ended, and the NFTs are now tradable on OKX NFT Marketplace.
Unlike traditional cloud-compute rentals, the Compute Cube model represents on-chain ownership of physical compute infrastructure, combining:
Fixed token yield: Each NFT secures a guaranteed allocation of $CYS.
Real-time compute rewards: Node-connected workloads (ZK proving, AI inference, crypto mining) distribute earnings directly to holders’ wallets.
Governance and priority rights: Holders gain voting power in compute scheduling and protocol upgrades, along with early access privileges.
Positive feedback loop: More workloads → more rewards → greater staking → stronger governance influence.
In essence, Node NFTs convert fragmented GPU/ASIC resources into liquid on-chain assets, opening a new investment market for compute power in the era of surging AI and ZK demand. This ComputeFi flywheel — more tasks → more rewards → stronger governance — serves as a key bridge for expanding Cysic’s compute network to retail participants.
Dogecoin, launched in 2013, uses Scrypt PoW and has been merge-mined with Litecoin (AuxPoW) since 2014, sharing hashpower for stronger network security. Its tokenomics feature infinite supply with a fixed annual issuance of 5 billion DOGE, emphasizing community and payment utility. Among all ASIC-based PoW coins, Dogecoin remains the most popular after Bitcoin — its meme culture and loyal community sustain long-term ecosystem stickiness.
On the hardware side, Scrypt ASICs have fully replaced GPU/CPU mining, with industrial miners like Bitmain Antminer L7/L9 dominating. However, unlike Bitcoin’s industrial-scale mining, Dogecoin still supports home mining, with devices such as Goldshell MiniDoge, Fluminer L1, and ElphaPex DG Home 1 catering to retail miners, combining cash flow and community engagement.
For Cysic, entering the Dogecoin ASIC sector holds three strategic advantages:
Lower technical threshold: Scrypt ASICs are simpler than ZK ASICs, allowing faster validation of mass production and delivery capabilities.
Mature cash flow: Mining generates immediate and stable revenue streams.
Supply chain & brand building: Dogecoin ASIC production strengthens Cysic’s manufacturing and market expertise, paving the way for future ZK/AI ASICs.
Thus, home ASIC miners represent a pragmatic revenue base and a strategic stepping stone for Cysic’s long-term ZK/AI hardware roadmap.
During Token2049, Cysic unveiled the DogeBox 1, a portable Scrypt ASIC miner for home and community users — designed as a verifiable consumer-grade compute terminal:
Portable & energy-efficient: pocket-sized, 55 W power, suitable for households and small setups.
Plug-and-play: managed via mobile app, built for global retail users.
Dual functionality: mines DOGE and verifies DogeOS ZK proofs, achieving L1 + L2 security.
Circular incentive: integrates DOGE mining + CYS rewards, forming a DOGE → CYS → DogeOS economic loop.
This product synergizes with DogeOS (a ZK-based Layer-2 Rollup developed by the MyDoge team, backed by Polychain Capital) and MyDoge Wallet, enabling DogeBox users to mine DOGE and participate in ZK validation — combining DOGE rewards + CYS subsidies to reinforce engagement and integrate directly into the DogeOS ecosystem.
The Cysic Dogecoin home miner thus serves as both a practical cashflow device and a strategic bridge to ZK/AI ASIC deployment.
By merging mining + ZK verification, Cysic gains hands-on experience in market distribution and hardware scaling — while bringing a scalable, verifiable, community-driven L1 + L2 narrative to the Dogecoin ecosystem.
Collaboration with Succinct & Boundless Prover Networks: Cysic operates as a multi-node Prover within Succinct Network, leveraging its GPU clusters to handle SP1 zkVM real-time proofs and co-develop GPU optimization layers. It has also joined the Boundless Mainnet Beta, providing hardware acceleration for its Proof Marketplace.
Early Partnership with Scroll: In its early stages, Cysic provided high-performance ZK computation for Scroll, executing large-scale proving tasks on GPU clusters with low latency and cost, generating over 10 million proofs. This validated Cysic’s engineering capability and laid the foundation for its subsequent compute-network development.
Home Miner Debut at Token2049: Cysic’s DogeBox 1 portable ASIC miner officially entered the Dogecoin/Scrypt compute market. Specs: 55 W power, 125 MH/s hashrate, 100 × 100 × 35 mm, Wi-Fi + Bluetooth support, noise < 35 dB — ideal for home or community use. Beyond DOGE/LTC mining, it supports DogeOS ZK verification, achieving dual-layer (L1 + L2) security and forming a DOGE → CYS → DogeOS incentive loop.
Testnet Completion & Mainnet Readiness: On September 18, 2025, Cysic completed Phase III (Ignition) of its testnet. The testnet onboarded Succinct, Aleo, Scroll, and Boundless, attracting 55,000+ wallets, 8 million transactions, and 100,000+ reserved high-end GPU devices. Overall it recorded 1.36 million registered users, 13 million transactions, and roughly 223k Verifiers plus 41.8k Provers (260k+ nodes in total). About 1.46 million tokens were distributed as rewards (733k $CYS + 733k $CGT, plus 4.6 million FIRE), and 48,000+ users staked, validating both incentive sustainability and network scalability.
Ecosystem Integration Overview: According to Cysic’s official ecosystem map, the network is now interconnected with leading ZK and AI projects, underscoring its hardware-compatibility and openness across the decentralized compute stack.
These integrations strengthen Cysic’s position as a foundational compute and hardware acceleration provider, supporting future expansion across ZK, AI, and ComputeFi ecosystems. Partner Categories:
zkEVM / L2: zkSync, Scroll, Manta, Nil, Kakarot
zkVM / Prover Networks: Succinct, Risc0, Nexus, Axiom
zk Coprocessors: Herodotus, Axiom
Infra / Cross-chain: zkCloud, ZKM, Polyhedra, Brevis
Identity & Privacy: zkPass, Human.tech
Oracles: Chainlink, Blocksense
AI Ecosystem: Talus, Modulus Labs, Gensyn, Aspecta, Inference Labs

Cysic Network adopts a dual-token system: the network token $CYS and the governance token $CGT.

$CYS (Network Token):
A native, transferable asset used for paying transaction fees, node staking, block rewards, and network incentives—ensuring network activity and economic security. $CYS is also the primary incentive for compute providers and verifiers. Users can stake $CYS to obtain governance weight and participate in resource allocation and governance decisions of the Computing Pool.
$CGT (Governance Token):
A non-transferable asset minted 1:1 by locking $CYS, with a longer unbonding period to participate in Computing Governance (CG). $CGT reflects compute contribution and long-term participation. Compute providers must maintain a reserve of $CGT as an admission bond to deter malicious behavior.
During network operation, compute providers connect their resources to Cysic Network to serve ZK, AI, and crypto-mining workloads. Revenue sources include block rewards, external project incentives, and compute governance distributions. Scheduling and reward allocation are dynamically adjusted by multiple factors, with external project incentives (e.g., ZK, AI, Mining rewards) as a key weight.
Co-founder & CEO: Xiong (Leo) Fan.
Previously an Assistant Professor of Computer Science at Rutgers University (USA); former researcher at Algorand and Postdoctoral Researcher at the University of Maryland; Ph.D. from Cornell University. Leo’s research focuses on cryptography and its intersections with formal verification and hardware acceleration, with publications at top venues such as IEEE S&P, ACM CCS, POPL, Eurocrypt, and Asiacrypt, spanning homomorphic encryption, lattice cryptography, functional encryption, and protocol verification. He has contributed to multiple academic and industry projects, combining theoretical depth with systems implementation, and has served on program committees of international cryptography conferences.
According to public information on LinkedIn, the Cysic team blends backgrounds in hardware acceleration, cryptographic research, and blockchain applications. Core members have industry experience in chip design and systems optimization and academic training from leading institutions across the US, Europe, and Asia. The team’s strengths are complementary across hardware R&D, ZK optimization, and business operations.

Fundraising:
In May 2024, Cysic announced a $12M Pre-A round co-led by HashKey Capital and OKX Ventures, with participation from Polychain, IDG, Matrix Partners, SNZ, ABCDE, Bit Digital, Coinswitch, Web3.com Ventures, as well as notable angels including George Lambeth (early investor in Celestia/Arbitrum/Avax) and Ken Li (Co-founder of Eternis).
In the hardware-accelerated prover and ComputeFi track, Cysic’s core peers include Ingonyama, Irreducible (formerly Ulvetanna), Fabric Cryptography, and Supranational — all focusing on “hardware + networks that accelerate ZK proving.”
Cysic: Full-stack (GPU + ASIC + network) with a ComputeFi narrative. Strengths lie in the tokenization/financialization of compute; challenges include market education and hardware mass-production.
Irreducible: Strong theory + engineering; exploring new algebraic structures (Binius) and zkASIC. High theoretical innovation; commercialization pace may be constrained by FPGA economics.
Ingonyama: Open-source friendly; its ICICLE SDK is a de facto standard for GPU ZK acceleration with high ecosystem adoption, but it has no in-house hardware.
Fabric: “Hardware–software co-design” path; building a VPU (Verifiable Processing Unit) general crypto-compute chip—business model akin to “CUDA + NVIDIA,” targeting a broader cryptographic compute market.
| Project | Technical Path | Hardware Direction | Positioning / Model |
|---|---|---|---|
| Cysic | GPU → ASIC; tokenizes compute via ComputeFi | In-house ASIC (C1 + ZK Air + ZK Pro) plus large-scale GPU clusters | ComputeFi: assetized compute + real-time ZK proving network |
| Irreducible (ex-Ulvetanna) | Math-driven: Binius (binary-field polynomial commitments) → hardware-aware design | Early FPGA; now Binius + HW/SW co-design | Algorithm-first; hardware as an “experimental validation platform”; research-infra flavor |
| Ingonyama | Software-first: ICICLE CUDA libraries for MSM/FFT on GPUs | No in-house hardware (leverages existing GPUs) | Open GPU acceleration toolchain for developers; not building chips |
| Fabric Cryptography | HW/SW co-design: VPU between GPU flexibility and ASIC performance | In-house VPU + boards (FC1000 / VPU8060 / Byte Smasher) | Platform play: chips + compiler + libraries + cloud services |
In ZK Marketplaces, Prover Networks, and zk Coprocessors, Cysic currently acts more as an upstream compute supplier, while Succinct, Boundless, Risc0, Axiom target the same end customers (L2s, zkRollups, zkML) via zkVMs, task routing, and open markets.
Short term: Cooperation dominates. Succinct routes tasks; Cysic supplies high-performance provers. zk Coprocessors may offload tasks to Cysic.
Long term: If Boundless and Succinct scale their marketplace models (auction vs. routing) while Cysic also builds a marketplace, direct competition at the customer access layer is likely. Similarly, a mature zk Coprocessor loop could disintermediate direct hardware access, risking Cysic’s marginalization as an “upstream contractor.”
| Project | Core Positioning | Business Model / Product | Relationship to Cysic |
|---|---|---|---|
| Cysic | ZK hardware acceleration + Prover/Verifier network | High-performance ZK proof generation on GPU/ASIC; operates a prover/verifier node network | With Succinct: upstream prover; with Boundless: potential partner/competitor |
| Succinct | General zkVM (SP1) + Prover Network | Open zkVM + decentralized Prover Marketplace; auto-routes tasks to optimal provers | Cysic is one prover among many, supplying high-performance compute |
| Boundless | Open Proof Marketplace | Reverse Dutch auction matching provers with tasks | Cysic’s provers can connect; competition emerges if Cysic builds its own market |
| zk Coprocessors (Axiom, etc.) | Outsourced ZK compute module | Off-chain compute + on-chain verification APIs; developers avoid hardware complexity | Short term: task source; long term: possible disintermediation |
Business Logic
Cysic centers on the ComputeFi narrative—connecting compute from hardware production and network scheduling to financialized assets.
Short term: Leverage GPU clusters to meet current ZK prover demand and generate revenue.
Mid term: Enter a mature cash-flow market with Dogecoin home ASIC miners to validate mass production and tap community-driven retail hardware.
Long term: Develop dedicated ZK/AI ASICs, combined with Node NFTs / Compute Cubes to assetize and marketize compute—building an infrastructure-level moat.
Engineering Execution
Hardware: Completed GPU-accelerated prover/verifier optimizations (MSM/FFT parallelization); disclosed ASIC R&D (1.3M Keccak/s prototype).
Network: Built a Cosmos SDK-based validation chain for prover accounting and task distribution; tokenized compute via Compute Cube / Node NFTs.
AI: Released the Verifiable AI framework; accelerated Sumcheck and finite-field arithmetic via GPU parallelism for trusted inference—though differentiation from peers remains limited.
Potential Risks
Market education & demand uncertainty: ComputeFi is new; it’s unclear whether customers will invest in compute via NFTs/tokens.
Insufficient ZK demand: The prover market is early; current GPU capacity may satisfy most needs, limiting ASIC shipment scale and revenue.
ASIC engineering & mass-production risk: Proving systems aren’t fully standardized; ASIC R&D takes 12–18 months with high tape-out costs and uncertain yields—impacting commercialization timelines.
Home-miner capacity constraints: The household market is limited; electricity costs and community-driven behavior skew toward “enthusiast consumption,” hindering stable scale revenue.
Limited AI differentiation: Despite GPU parallel optimizations, cloud inference services are commoditized and the Agent Marketplace has low barriers—overall defensibility remains modest.
Competitive dynamics: Long-term clashes at the customer access layer with Succinct/Boundless (marketplaces) or mature zk Coprocessors could push Cysic into an upstream “contract manufacturer” role.
Disclaimer:
This article was produced with assistance from ChatGPT-5 as an AI tool. The author has endeavored to proofread and ensure the accuracy of all information, yet errors may remain. Note that in crypto markets, a project’s fundamentals often diverge from secondary-market price performance. The content herein is for information aggregation and academic/research exchange only; it does not constitute investment advice nor a recommendation to buy or sell any token.