
Fortytwo Secures $2.3M in Pre-Seed Funding Led by Big Brain Holdings
MOUNTAIN VIEW, California, March 6, 2025 Fortytwo, a decentralized AI network designed to scale intelligence beyond centralized infrastructure, has raised $2.3 million in a pre-seed funding round, led by Big Brain Holdings, with participation from CMT Digital, Escape Velocity Ventures (EV3), Chorus One, Mentat Group, and angel investors including Santiago R Santos, Keone Hon, Paul Taylor, and Comfy Capital, among others. Fortytwo is creating a new type of inference with small language models ...

Fortytwo Devnet Launch: Hitchhiking to AGI
Fortytwo’s devnet is live on the Monad testnet. We invite node operators, AI enthusiasts, and early adopters to help build the foundation for decentralized, ever-evolving AI reasoning – unlocking next-generation AI capabilities that advance with every new node. What is the Fortytwo Devnet? Fortytwo is a decentralized AI network where small, specialized models run on everyday devices, collaborating to achieve scalable and increasingly sophisticated reasoning. The devnet runs on Monad as its bl...

Fortytwo × Acurast: Swarm Inference Meets TEEs
At Fortytwo we are beginning a collaboration with Acurast to explore how swarm inference can run on their global network of 100,000+ smartphones, each equipped with Trusted Execution Environment (TEE) support. A TEE is a secure area of a device’s processor that isolates code and data during execution, preventing access even from the operating system or other applications. This setup allows us to research private AI operations that may, in some cases, offer stronger isolation than enterprise s...
Today, almost every computer is connected to every other—at least potentially—through an internet connection. Blockchain technology has used this connectivity to decentralize trust, computation, and finance.
But, as decentralized finance and services gain momentum, artificial intelligence—the most transformative technology of our time—remains firmly in the hands of a few corporations.
This centralization now faces an unavoidable constraint: compute scarcity. AI's expansion demands ever-larger data centers, which are running into economic and physical limits.
Fortytwo presents an alternative—one that decentralizes intelligence itself.
By uniting the idle computing power of home devices, Fortytwo forms a global network of AI models, an emergent planetary-scale intelligence.
This method, called Swarm Inference, makes AI scalable, cost-efficient, and permissionless, evolving and improving with every new participant.
Right now, a handful of massive corporations control AI development—Big Tech.
Flush with capital, these companies are funneling billions of dollars into the development of ever-larger, monolithic language models. These investments extend beyond model training to the construction of new data centers, energy infrastructure, and semiconductor manufacturing.
We don’t see this as a sustainable path—especially as traditional model training approaches appear to have reached their limits. Further training of large language models (LLMs) now delivers diminishing returns, where even marginal performance improvements demand exponentially greater compute power and energy consumption.
At the same time, a new AI paradigm is emerging—reasoning models. Unlike conventional LLMs, these models produce more intelligent responses by allocating additional processing time per query. This shift introduces a tradeoff: because meaningful improvements require giving models more time to think, fewer requests can be handled with the same computational resources, adding further to compute scarcity.
And compute is already scarce. OpenAI caps API calls at 10,000 per minute, effectively limiting AI applications to serving only around 3,000 concurrent users. Even ambitious projects like Stargate, a $500 billion AI infrastructure initiative announced recently by President Trump, may provide only temporary relief. According to Jevons’ Paradox, which observes that efficiency improvements often lead to increased resource consumption due to rising demand, as AI models become more capable and efficient, compute demands will likely surge through new use cases and broader adoption.
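As a rough illustration of how a per-minute rate cap becomes a concurrency ceiling: the 10,000 requests-per-minute figure comes from the text above, while the per-user request rate is an assumption chosen for the sketch.

```python
# Back-of-envelope: a shared per-minute API rate cap bounds how many
# users can be active at once. The cap is from the text; the assumed
# requests-per-user rate is illustrative, not a measured figure.

def max_concurrent_users(rate_cap_per_min: int, requests_per_user_per_min: float) -> int:
    """Upper bound on simultaneously active users under a shared rate cap."""
    return int(rate_cap_per_min / requests_per_user_per_min)

# If each active user of a chat-style app averages ~3.3 requests per minute,
# a 10,000 req/min cap supports roughly 3,000 concurrent users.
print(max_concurrent_users(10_000, 3.3))  # → 3030
```

The point is that the ceiling scales linearly with the cap: serving millions of concurrent users under this model would require orders of magnitude more centralized capacity.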
The result? Big Tech will continue to own and control artificial intelligence—often called humanity's most powerful invention—and shape its evolution toward Artificial General Intelligence (AGI) and beyond.
But we believe in a different future—one where AI is a universally accessible resource, owned, operated, and maintained by a global community of AI node operators.
Instead of building ever-larger models, Fortytwo scales AI horizontally by networking small language models (SLMs) on consumer hardware. Recent advancements in SLMs have shown that they can already outperform state-of-the-art LLMs in specialized domains—coding, mathematics, creativity, vision, and many more.
At a time when centralized compute is scarce, idle compute on everyday devices is abundant. A MacBook Air with an M2 chip, for instance, typically uses only about a quarter of its processing power for routine tasks like video calls, spreadsheets, and web browsing. Collectively, consumer devices—Macs with Apple Silicon, PCs with dedicated GPUs, and others—hold an estimated 2,000 exaflops of underutilized compute, just sitting idle.
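A toy aggregation shows how individually small idle capacities compound across a large fleet. The per-device throughput, idle fractions, and device counts below are assumptions made purely for the sketch; only the "~2,000 exaflops" order of magnitude comes from the text.

```python
# Illustrative aggregation of idle consumer compute.
# Per-device TFLOPS, idle fractions, and counts are assumed figures for
# the sketch, not measured data.

devices = {
    # name: (assumed peak TFLOPS, assumed idle fraction, assumed device count)
    "apple_silicon_mac": (15.0, 0.75, 100_000_000),
    "gpu_desktop_pc":    (40.0, 0.60, 25_000_000),
}

total_exaflops = 0.0
for name, (tflops, idle_frac, count) in devices.items():
    total_exaflops += tflops * idle_frac * count / 1e6  # TFLOPS -> exaFLOPS

print(f"{total_exaflops:,.0f} exaflops of idle compute")  # → 1,725 exaflops
```

Even with conservative idle fractions, two device classes alone land in the same ballpark as the estimate cited above.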
Fortytwo transforms this idle compute into a coordinated swarm. Nodes running small AI models collaborate, amplifying each other’s capabilities through peer evaluation and joint response preparation based on the network’s most valuable contributions. With Swarm Inference, Fortytwo is not constrained by the capabilities of even the most advanced models—each participant strengthens the collective intelligence of the entire system.
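The peer-evaluation idea behind Swarm Inference can be sketched in miniature: each node proposes an answer, every peer rates the others' proposals, and the highest-rated answer is returned. The functions below are placeholders standing in for real SLM inference and judgment; this is an illustrative sketch, not Fortytwo's actual protocol.

```python
from collections import defaultdict

# Toy Swarm Inference round: nodes propose answers, peers score every
# other node's proposal, and the best-rated answer wins.
# generate() and score() are stand-ins for real model calls.

def generate(node_id: int, query: str) -> str:
    # Placeholder: a real node would run its local small language model.
    return f"answer from node {node_id} to {query!r}"

def score(evaluator_id: int, answer: str) -> float:
    # Placeholder: a real peer would judge answer quality with its own model.
    # Here, a deterministic toy formula keeps the sketch reproducible.
    return ((evaluator_id * 31 + len(answer) * 7) % 100) / 100

def swarm_inference(query: str, node_ids: list[int]) -> str:
    proposals = {n: generate(n, query) for n in node_ids}
    ratings: defaultdict[int, float] = defaultdict(float)
    for evaluator in node_ids:
        for proposer, answer in proposals.items():
            if evaluator != proposer:  # nodes do not score their own answers
                ratings[proposer] += score(evaluator, answer)
    best = max(ratings, key=ratings.get)
    return proposals[best]

print(swarm_inference("2+2?", [1, 2, 3, 4]))
```

The design point the sketch captures: the final answer is selected by collective judgment rather than trust in any single node, so adding nodes adds both candidate answers and evaluation capacity.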
Joining the network is simple. No special skills or hardware are required. A node application runs seamlessly in the background on a standard MacBook Air, using only idle compute that is immediately available. The network is fully permissionless, allowing technical users to operate custom models—including personal fine-tunes—without whitelisting, centralized approval, or verification mechanics. Privacy remains intact, as all node customizations stay private and under the operator’s control.
The swarm thrives on diversity—the more variation within the network, the stronger it becomes.
William Gibson famously said: "The future is already here — it’s just not very evenly distributed."
Fortytwo is changing that. Instead of concentrating AI power in a few data centers, it distributes intelligence across a decentralized network, making AI accessible to everyone and enabling broader participation in the AI economy.
We invite those who share our vision to contribute—whether by running a node, spreading the word, or building with us. Web3 and AI developers are especially welcome to explore opportunities to join our team and help accelerate the advent of decentralized intelligence.
The network is launching soon. Run a node and join the AI swarm.