Cortex’s Half-Year Progress and Roadmap

How we’re unifying AI frameworks and laying the groundwork for privacy-preserving machine learning

In the rapidly evolving landscape of artificial intelligence and Web3, one challenge remains paramount: how do we create AI systems that are both powerful and trustworthy? At Cortex, we’ve spent the past six months building the foundational infrastructure to solve this exact problem. Our work bridges the gap between cutting-edge AI frameworks and privacy-preserving technologies, creating a platform where machine learning models can be both high-performing and verifiable.

Half-Year Achievements: Laying the Foundation

Expanding Our Privacy Technology Horizon

The privacy technology space moves at lightning speed, and staying current requires constant research and evaluation. We’ve been deeply immersed in exploring the latest Zero-Knowledge Proof (ZKP), ZK Machine Learning (ZKML), and Trusted Execution Environment (TEE) solutions, including hands-on work with EZKL, Taiko, and ZKSync. This isn’t just theoretical research — we’ve been actively integrating and optimizing ZKRollup code to ensure our stack leverages the most efficient and advanced privacy-preserving technologies available.

Unifying AI Frameworks: TVM and PyTorch Integration

A major breakthrough this half-year has been our expansion of Model Representation Toolkit (MRT) frontend support. AI developers work across multiple frameworks, and we believe they shouldn’t be limited by interoperability constraints.

TVM Frontend Support

  • Achieved compatibility between TVM's Relay/Relax APIs and the MRT Symbol representation

  • Successfully tested with CNN models including ResNet and YOLO

  • Created a seamless pathway from TVM to our verifiable inference pipeline

PyTorch Frontend Support

  • Implemented compatibility between PyTorch's ExportedProgram API and the MRT Symbol representation

  • Validated against the same CNN test suite (ResNet and YOLO)

  • Now supporting one of the most popular ML frameworks in the ecosystem
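The point of both frontends is that they converge on one shared intermediate form. As a rough illustration of the idea only (the names `Symbol`, `lower`, and the op tables below are hypothetical, not MRT's actual API), graphs arriving from TVM and from PyTorch can be mapped into identical symbol sequences:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    """A single node in the unified graph: a framework-neutral op name."""
    op: str

# Hypothetical per-frontend op-name tables; the real MRT mappings differ.
RELAY_TO_SYMBOL = {"nn.conv2d": "conv2d", "nn.relu": "relu"}
TORCH_TO_SYMBOL = {"aten.convolution": "conv2d", "aten.relu": "relu"}

def lower(ops, table):
    """Map framework-specific op names into the shared Symbol form."""
    return [Symbol(op=table[name]) for name in ops]

# The same conv+relu model, as produced by each frontend:
relay_graph = lower(["nn.conv2d", "nn.relu"], RELAY_TO_SYMBOL)
torch_graph = lower(["aten.convolution", "aten.relu"], TORCH_TO_SYMBOL)

# Both frontends reach one and the same representation.
assert relay_graph == torch_graph
```

Once the two paths meet in one representation, everything downstream (quantization, verification, runtime dispatch) only has to be written once.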

Runtime Revolution: Flexibility and Determinism

We completely refactored the MRT runtime code to support dynamic switching between PyTorch and TVM frontends. This isn’t just a quality-of-life improvement — it enables deterministic model inference, a critical requirement for verifiable AI systems where computation must be reproducible and consistent across different environments.
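Determinism is subtler than it sounds: floating-point addition is not associative, so the same model can produce different outputs when two backends merely accumulate in a different order. A small self-contained illustration (plain Python, not MRT code) of why exact integer arithmetic, as used in quantized inference, sidesteps the problem:

```python
# Floating-point addition is not associative, so summation order matters:
left_first   = (0.1 + 1e16) + (-1e16)   # 0.1 is lost when absorbed into 1e16
cancel_first = 0.1 + (1e16 + (-1e16))   # the large terms cancel first

assert left_first == 0.0
assert cancel_first == 0.1

# Integer (e.g. quantized) arithmetic is exact, so every order agrees:
q = [1, 10**17, -(10**17)]
assert sum(q) == sum(reversed(q)) == 1
```

This is why reproducibility across environments has to be designed in at the runtime level rather than assumed.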

Beyond the major features, we’ve dedicated significant effort to polishing the developer experience:

  • Enhanced logging for better debugging and monitoring

  • Code refactoring for maintainability and performance

  • API adjustments based on real-world usage feedback

  • Model-specific issue resolution

Looking Ahead: Our Strategic Roadmap

Near-Term Objectives

Our immediate focus areas build directly on the foundations we’ve established:

Continuous Technology Evaluation
We maintain our commitment to staying at the forefront of privacy technology, continuously assessing new ZKP, ZKML, and TEE solutions as they emerge.

Large Language Model Support
The AI landscape is shifting toward LLMs, and so are we. We’re adding frontend components for transformers and llama.cpp, bringing verifiable AI to the large language model domain.

Core Infrastructure Completion
We’re finalizing the MRT Symbol structure with comprehensive Symbol, Function, and Graph constructs, creating a robust foundation for complex model representations.
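As a rough sketch of how such a three-level structure can nest (the class names and fields here are illustrative assumptions, not the final MRT design): a Graph holds named Functions, and each Function is an ordered body of Symbols.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Symbol:
    op: str                                    # operator name, e.g. "conv2d"
    inputs: List[str] = field(default_factory=list)

@dataclass
class Function:
    name: str
    body: List[Symbol] = field(default_factory=list)

@dataclass
class Graph:
    functions: Dict[str, Function] = field(default_factory=dict)

    def add(self, fn: Function) -> None:
        self.functions[fn.name] = fn

# A tiny model: one function with a conv followed by a relu.
g = Graph()
g.add(Function("main", [Symbol("conv2d", ["x", "w"]),
                        Symbol("relu", ["conv_out"])]))

assert list(g.functions) == ["main"]
assert [s.op for s in g.functions["main"].body] == ["conv2d", "relu"]
```

Separating Function from Graph is what lets a complex model reuse subgraphs instead of flattening everything into one operator list.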

Hardware Performance Optimization
We are updating CVMRuntime to support the latest CUDA architectures, ensuring our solutions deliver maximum performance on modern GPU hardware.

2026: The Year of Scale and Performance

Q1-Q2 2026: Architectural Revolution

  • Comprehensive refactor of the MRT lowering architecture

  • Support for at least one LLM frontend framework

  • Foundation for enterprise-scale model deployment

Q3-Q4 2026: Hardware Evolution

  • CVMRuntime library overhaul

  • Support for the latest CPU/GPU architectures

  • Performance optimization for next-generation hardware

Why This Matters

The convergence of AI and blockchain isn’t just a technological curiosity — it’s becoming a necessity. As AI systems make increasingly important decisions, the ability to verify their computations becomes critical. Whether it’s ensuring a model hasn’t been tampered with, proving that inference was performed correctly, or maintaining privacy while leveraging collective intelligence, verifiable AI addresses fundamental trust challenges in our increasingly automated world.

About Cortex

Cortex’s main mission is to provide state-of-the-art machine-learning models on the blockchain, which users can run inference against through smart contracts on the Cortex blockchain. Cortex also aims to build a machine-learning platform where users can post tasks and submit AI DApps (Artificial Intelligence Decentralized Applications).

Cortex is the only public blockchain that allows the execution of nontrivial AI algorithms on the blockchain. MainNet has launched. Go build!

TestNet

| Block Explorer — Cerebro | Mining Pool | Remix Editor | Software | ZkMatrix

Social Media

| Website | GitHub | Twitter | Facebook | Reddit | Kakao | Mail | Discord

Telegram Groups

| English | Korean | Chinese | Russian | Turkish