Thank you Stephen Grugett, Ali Habbabeh, Spenser Huang, Vaughn McKenzie-Landell, Michael Li, Eric Li, and Nick Preszler for the valuable feedback and review.
Prediction markets are fast emerging as the basis of a new information finance (InfoFi) paradigm. In this model, financially aligned incentives are harnessed to produce high-signal data, transforming raw speculation into a public good. Platforms like Polymarket and Kalshi have started demonstrating the appetite for “markets for everything,” spanning politics, sports, tech, science, and more.
For a continuously updated map of the prediction market ecosystem, see the Prediction Index.
Already, InfoFi concepts and products are being extended to cover narratives, reputation, opinions, and even parallel universes. There are now over a hundred projects developing prediction markets, ranging from general platforms to niche and experimental ones.
What makes InfoFi markets unique is that they are designed to be correct by construction in producing a specific signal (i.e., the probability that X will happen). Because event contracts settle strictly on whether an event happens or not, the price maps directly to an implied probability – a quantity that is hard to infer cleanly from other markets.
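To make the mapping concrete, here is a minimal sketch (with illustrative numbers) of how a binary contract’s price reads as an implied probability, and how a trader’s edge translates into expected value:

```python
# A binary contract pays $1.00 if YES resolves true, $0.00 otherwise.
# Ignoring fees and time discounting, the price of the YES share is
# the crowd's implied probability of the event.

yes_price = 0.63          # market price of the YES share (illustrative)
implied_prob = yes_price  # reads directly as a 63% implied probability

# A trader who believes the true probability is higher has positive
# expected value per share: E[profit] = p_true * $1.00 - price.
p_true = 0.70
expected_profit = p_true * 1.00 - yes_price
print(f"Implied probability: {implied_prob:.0%}")      # 63%
print(f"EV per YES share:    ${expected_profit:.2f}")  # $0.07
```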
As such, while betting on prediction markets may be simple fun for many, the information they output makes them uniquely useful. The live outputs from these markets have the potential to be used as composable inputs across a wide range of other applications. Think AI models that query a prediction market for real-time odds, governance systems that consult markets before making decisions, or social apps that combat misinformation by using market probabilities to assess a post’s truth value.
In essence, InfoFi blurs the line between “financial exchange” and “information exchange,” turning predictions into an API for the world. By aligning financial incentives with open market creation, prediction markets let bettors generate precise, real-time forecasts on any question, turning “markets for everything” into “data for anything.”
Modern prediction markets date back to the Iowa Electronic Markets (launched in 1988) and 2000s-era Intrade, where contract prices reflected crowd-implied probabilities that routinely matched or beat polls and pundits. Yet they didn’t take off, due to issues like legal risk, thin liquidity, and clunky settlement. Similarly, early crypto iterations in the last decade (e.g., Augur, Gnosis) suffered from high gas fees and poor UX on Ethereum L1.
Momentum shifted in 2020-2021 around the 2020 US presidential election, when Polygon-based Polymarket delivered low-fee, simplified UX for event trading. Other projects followed: Kalshi launched as a US CFTC-regulated exchange in 2021, aiming to legitimize event trading in the US under strict KYC. Meanwhile, continued progress in scaling prediction market infrastructure has further lowered barriers to consumer adoption.
Adoption took off during the 2024 US elections: Polymarket processed about $2.6B in November alone and >$9B for the year, and reached 460K+ active traders by January – showing the scale possible when infrastructure, UX, and topical events align.
The popular belief at the time was that prediction markets were election-bound and that volumes would soon fade. Instead, engagement broadened as first-time users rotated into other macro topics, sports, and crypto markets. Since mid-2025, Polymarket has been doing $1B+ in monthly volume, and the share of bettors active in non-election categories keeps climbing – indicating structural growth that runs deeper than event cycles.
The beginning, not the end.
Around this time, the sector started expanding. In his seminal post, Vitalik argued that prediction markets are only the first application of a broader category of “information finance” (InfoFi) – three-sided markets comprising bettors on either side of a question, plus external readers who consume the resulting prices:
The key to InfoFi is leveraging market structures in a way that aligns incentives with the production of honest and disciplined signals for any quantifiable question. Bettors trade based on their idiosyncratic views, the aggregate output reflects the “wisdom of the crowd,” and others (humans or machines) can read the prices to infer knowledge about the world in real-time.
By now, prediction markets have proven their accuracy and value in isolated cases. The present challenges around liquidity, resolution mechanics, and UX are being addressed by clever market structure design and economics. The concept of markets-as-information utilities is starting to take shape.
In the remainder of this report, I’ll look to the future, exploring what makes prediction markets a source of good data, the implementation challenges and their solutions, and the potential opportunities arising as the space expands.
A prediction market, at its core, is a mechanism that pays people for being right and penalizes them for being wrong. It’s a simple but powerful mechanism that improves upon conventional information crowdsourcing, which now happens mostly on social media and social knowledge platforms (X, Reddit, Quora, etc.).
On these sites, participants are rewarded (e.g., with likes, upvotes, comments) for being early, entertaining, or aligned with popular sentiment. But optimizing for such social incentives (vs. being right) is known to encourage noise and misinformation. The effect is most pronounced among power users, who get the most views – which is clearly a problem. The authors of this study note, “once users form these sharing habits, they respond automatically to recurring cues within the site and are relatively insensitive to the informational consequences of the news shared, whether the news is false or conflicts with their own political beliefs.”
There’s no direct consequence for posting wrong information.
But in prediction markets, every opinion “posted” comes with a risk-reward profile. If you believe a specific outcome will happen, you can’t just say it; you have to put money on the line by actually buying shares on that belief. Having “skin in the game” has many benefits:
“Survival of the smartest”: Over time, those who are more wrong either improve or exit the market at a loss, while those who are more right accumulate capital and influence across the platform. Performance-based selection is more meritocratic and objective than systems based on upvotes and the like, which can be gamed or reflect irrelevant qualities like the poster’s status, how sensationally they write, or sheer posting frequency.
Wisdom of the crowds 2.0: Like other social knowledge platforms, prediction markets bring together diverse participants, each with their own private knowledge, perspectives, and models. But in prediction markets, financial incentives encourage those with particularly relevant knowledge to participate. Heterogeneity is preserved, but the volume of informed signals contributed to the market is boosted, and the market then aggregates all of these inputs into a single probabilistic output. It’s wisdom of the crowds, with less noise. It’s no wonder that prediction markets have repeatedly been shown to beat professional forecasters, polls, and pundits in forecast accuracy. This is in part because, unlike polling, prediction markets weight inputs by confidence (a trader will bet more when they have strong belief and evidence) and track record (successful traders accumulate more capital to deploy).
Self-correction: One might think money would encourage bad actors to manipulate prices in their favor, but in practice the opposite tends to be true, as long as the market is less budget-constrained than the manipulator. Prediction markets have a built-in self-correction mechanism: if someone tries to push a market away from the true odds (e.g., to mislead observers), they create a profit opportunity for others to bet against the movement. Robin Hanson (who conceived futarchy) has explored this dynamic further in his research.
While all of the above sounds nice in theory, there are obviously practical challenges to implementation. In particular, solving the following three operational challenges will largely determine the scale at which prediction markets produce useful information for the world:
Thin liquidity
Dubious resolution
Lack of personalization
While AIs offer powerful solutions across all three areas, sustainable growth demands a combination of AIs and structural market design improvements.
Long-tail markets – including questions about niche subjects, regional politics, emerging technologies, and user-generated markets – struggle to attract sufficient trading volume. Most users find these markets either irrelevant or simply not worth the effort. AIs can address this problem at machine scale: they are willing to work 24/7 for less than $1 an hour and have more knowledge than an encyclopedia. Just a small liquidity subsidy would be enough to get thousands of AIs to swarm over a question and make the best guess they can. At scale, this simple fact makes long-tail prediction markets a suitable game for AIs.
However, sustainable liquidity requires more than AI participation, especially in the near term. The ecosystem needs structural upgrades that attract professional capital and make the unit economics attractive for market creators and promoters alike. While there is no one-size-fits-all solution, here are several targeted strategies worth considering for specific event types:
Specialized market makers: It’s well known that prediction markets suffer from liquidity issues, largely due to jump risk (YES/NO contracts jump to either $0 or $1 at settlement). One option here is to actively onboard derivatives market makers comfortable taking inventory risk and modeling tail volatility. Expertise with binary options translates well to binary prediction markets – both have high price sensitivity near the event threshold, as the first sketch after this list illustrates – and helps keep market prices fair and orderly. E.g., see Kalshi’s Market Maker Program, which has already onboarded professional market-making services from Webull and Robinhood.
Liquidity aggregation: Create a shared liquidity layer that standardizes markets across platforms, where any frontend can leverage the shared pool for depth (à la Hyperliquid’s builder codes). The system could be mutually beneficial – apps gain instant access to deep liquidity, and the shared liquidity layer benefits from activity routed through these apps. Fees can optionally be collected at each layer in the order flow.
Advanced AMMs: Order-book prediction markets implement categorical questions as bundles of separate binary markets, which splits liquidity across options; by contrast, LS-LMSR AMMs bootstrap categorical markets without fragmenting liquidity by dynamically pricing all outcomes from a single pool (see the second sketch below).
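To illustrate the jump-risk intuition, here is a minimal sketch (assuming a textbook digital-option model, not any platform’s actual pricing) showing how a binary contract’s sensitivity concentrates near the event threshold as settlement approaches:

```python
import math

def digital_price(spot, strike, vol, t):
    """Fair value of a cash-or-nothing binary paying $1 if spot > strike
    at expiry, under a lognormal model with zero rates (i.e., N(d2))."""
    if t <= 0:
        return 1.0 if spot > strike else 0.0
    d2 = (math.log(spot / strike) - 0.5 * vol**2 * t) / (vol * math.sqrt(t))
    return 0.5 * (1.0 + math.erf(d2 / math.sqrt(2)))

strike, vol = 100.0, 0.6
for t in (0.25, 0.02):  # ~3 months vs. ~1 week to settlement
    lo, hi = digital_price(99, strike, vol, t), digital_price(101, strike, vol, t)
    print(f"t={t}: price(99)={lo:.2f}, price(101)={hi:.2f}, "
          f"repricing on a 2% spot move: {hi - lo:.2f}")
# Near expiry, small moves through the threshold swing the contract toward
# $0 or $1 -- the inventory risk specialized market makers must manage.
```

And for the AMM point, a minimal sketch of the LMSR cost function that LS-LMSR builds on (LS-LMSR additionally scales the liquidity parameter b with trading volume; the fixed b and the numbers here are illustrative simplifications):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b); sum to 1."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def buy_cost(q, i, shares, b):
    """Cost to buy `shares` of outcome i is C(q') - C(q)."""
    q2 = list(q)
    q2[i] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)

# A fresh 4-outcome categorical market, priced from ONE shared pool:
q, b = [0.0, 0.0, 0.0, 0.0], 100.0
print(lmsr_prices(q, b))      # [0.25, 0.25, 0.25, 0.25]
print(buy_cost(q, 0, 50, b))  # ~$15.02 to buy 50 shares of outcome 0
q[0] += 50
print(lmsr_prices(q, b))      # outcome 0 reprices up, the rest down
# In LS-LMSR, b = alpha * sum(q) grows with volume, deepening liquidity
# as the market attracts more trading -- no per-option pools needed.
```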
Dubious resolutions – for example, Polymarket’s “Zelensky suit” decision, among many others – show how whale voting and vague wording can undermine platform trust. A practical fix could be an optimistic, multi-layer AI oracle (e.g., see UMA’s Optimistic Truth Bot). In the ideal case, AIs sit at the center and humans are increasingly moved to the edges. First, specialized verifiable AIs work together in a self-correcting feedback loop to read and interpret information from pre-declared sources, compile an evidence file with citations and provenance proofs, and publish a signed, tamper-evident report with a confidence score. Only in edge cases – when the AI flags ambiguity or receives a formal challenge after X iterations – does the case escalate to a human panel (e.g., Kalshi-style centralized adjudication or a decentralized, bonded jury of subject-matter experts). This design is faster and more scalable, and it removes much of the human bias and incentive misalignment from the equation.
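A minimal sketch of such an escalation pipeline (the components, thresholds, and interfaces here are hypothetical illustrations, not any live oracle’s design):

```python
from dataclasses import dataclass, field

@dataclass
class AIReport:
    outcome: str                                  # "YES" / "NO"
    confidence: float                             # agreement across the AI committee
    evidence: list = field(default_factory=list)  # citations with provenance

CONFIDENCE_FLOOR = 0.95  # hypothetical auto-settlement threshold
MAX_ROUNDS = 3           # the "X iterations" before human escalation

def resolve(market_id, deliberate, human_adjudicate, challenged=False):
    """Optimistic resolution: AIs settle by default; humans are the
    exception path for persistent ambiguity or formal challenges."""
    report = None
    for _ in range(MAX_ROUNDS):
        report = deliberate(market_id)  # one self-correcting committee round
        if report.confidence >= CONFIDENCE_FLOOR and not challenged:
            return report.outcome       # publish signed, tamper-evident report
    return human_adjudicate(market_id, report.evidence if report else [])

# Usage with stub components:
print(resolve(
    market_id="will-event-x-happen",
    deliberate=lambda m: AIReport("YES", 0.98, ["source-a", "source-b"]),
    human_adjudicate=lambda m, ev: "ESCALATED",
))  # -> "YES": settled at the AI layer, no human needed
```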
Complementary upgrades:
Locked specs: Lock the question text, market mechanics, and resolution criteria the moment a market lists. Define data sources, interpretation rules, timestamps, and edge-case handling in advance (ideally in a public rule sheet). Forbid retroactive “clarifications” and vague resolution criteria like “consensus of credible reporting.” The settlement process should be as close to a deterministic checklist as possible, with little room for interpretation – see the rule-sheet sketch after this list.
Bonded and reputation-staked juries: Prediction market outcomes shouldn’t be purchasable by well-funded actors. To the extent humans must be involved, the final arbitration at least shouldn’t depend on a crude “one token, one vote” system. A more sophisticated system could use vetted subject-matter experts who post slashable bonds and disclose conflicts in advance. Repeated accuracy earns reputation points and work opportunities, while bad calls or inactivity are penalized by the loss of both. Juror rotation can be used for anti-capture and diversity of views. Optional public identification (e.g., linking social accounts) would add a supporting layer of social accountability.
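To make “locked specs” concrete, here is a sketch of what a machine-readable rule sheet might encode (the field names and the example market are hypothetical):

```python
import hashlib
import json

# A hypothetical locked rule sheet, hashed and published at listing time.
RULE_SHEET = {
    "question": "Will CPI YoY for March 2026 print at or above 3.0%?",
    "resolution_source": "bls.gov CPI-U, first official release only",
    "observation_deadline": "2026-04-15T00:00:00Z",
    "rounding": "one decimal place, as published; no re-derivation",
    "edge_cases": {
        "release_delayed": "resolve on first release, whenever published",
        "methodology_change": "use headline CPI-U as labeled by BLS",
        "source_unavailable": "escalate to bonded jury",
    },
    "amendments": "none permitted after listing",
}

# Commit to the exact spec so retroactive "clarifications" are detectable.
spec_hash = hashlib.sha256(
    json.dumps(RULE_SHEET, sort_keys=True).encode()
).hexdigest()
print(f"Committed spec hash: {spec_hash[:16]}...")
# Traders, oracles, and juries all settle against this exact, frozen text.
```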
The current UX for market discovery is poor. When users open Polymarket or Kalshi, the first thing they see is an undifferentiated catalog of questions that they must manually sift through to find something interesting. It’s obviously not ideal – a barrier to engagement and retention that needs to be solved.
A dynamic “prediction feed” that ranks markets by predicted personal relevance – using interactions, content metadata, and peer signals – would be better than static lists. This is nothing new: similar systems have been used for years on platforms like TikTok, Netflix, and Amazon, where only a small subset of the content catalog is relevant to each individual. On the backend, these recommendation engines rank items by predicted interest, update continuously as behavior changes, and deliver notifications on high-signal items (think personalized push notifications on your phone).
Here are a few ways this could play out:
AI recommender core: Learns from your watch/read time on market pages, page opens vs. trades, outcome preferences, and social graph (where available) to rank markets and topics by predicted personal relevance.
Exploration/exploitation: Contextual bandits actively explore and exploit across themes (e.g., politics vs. tech), risk profiles (e.g., long-shot vs. high-confidence markets), and time horizons (e.g., near-term vs. long-dated) to lift overall engagement without trapping you in a narrow bubble. This ML technique improves on static A/B testing by adaptively balancing “trying new things” against “showing what works” (see the sketch after this list).
Multi-signal notifications: Personalized notifications across multiple signals (probability swings, liquidity spikes, market-relevant news, etc.), not just event recommendations. AIs define the delivery rules on your behalf – no need to manually set alerts yourself.
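As a toy illustration of the bandit idea (the themes, reward definition, and user model are hypothetical; a production recommender would be contextual, conditioning on user and market features):

```python
import random

# Thompson sampling over content themes: each theme keeps a Beta posterior
# over its engagement rate, updated from observed clicks/trades.
themes = {"politics": [1, 1], "sports": [1, 1], "tech": [1, 1]}  # [alpha, beta]

def pick_theme():
    """Sample an engagement rate from each posterior; show the argmax theme."""
    return max(themes, key=lambda t: random.betavariate(*themes[t]))

def record_feedback(theme, engaged):
    """Bayesian update: engagement -> alpha += 1, ignored -> beta += 1."""
    themes[theme][0 if engaged else 1] += 1

# Simulated user who engages with tech markets 60% of the time, others 10%:
true_rates = {"politics": 0.1, "sports": 0.1, "tech": 0.6}
for _ in range(500):
    t = pick_theme()
    record_feedback(t, random.random() < true_rates[t])

print({t: round(a / (a + b), 2) for t, (a, b) in themes.items()})
# The feed concentrates on "tech" while still occasionally exploring
# the other themes -- the explore/exploit balance in action.
```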
AIs’ contributions don’t need to end there. They can even support platform content creation (proposing and even creating the most timely and interesting events based on trending data) and information aggregation (providing feeds of event-relevant data, blurring the line between prediction markets and the news; e.g., Kalshi and X have already integrated Grok for market summaries).
Altogether, AI + proper market design = scalable InfoFi. AI supercharges scale – AI players fill out long-tail markets, AI-centric oracles facilitate fast and fair settlement, and AI recommenders deliver the right markets and signals at the right time. Market structure upgrades (liquidity aggregation and bootstrapping, expert juries, clear specs, etc.) make platforms more trustworthy and attractive to traders.
How can we use the data from prediction markets?
Prediction market data is a programmable public good: once the crowd prices an event, it maps to a specific probability that can inform smarter decisions everywhere else. Below are some ideas:
LLMs can call Polymarket or Kalshi APIs during inference, anchoring answers to time-sensitive questions in up-to-the-second market data instead of stale training data. For example, an AI financial advisor can fetch the market’s live odds of a 25 vs. 0 bps rate hike at the next FOMC meeting to generate better advice about a potential portfolio move.
Already, Claude’s Financial Analysis Solution can bridge Polymarket data with its AI applications using the Model Context Protocol (MCP). Beyond simple data retrieval, the Polymarket API also provides AI-powered analysis on prediction market data for tasks like market analysis, arbitrage detection, and trading recommendations.
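A minimal sketch of the tool-calling pattern (the endpoint URL and response fields are assumptions for illustration – consult the platforms’ actual API docs):

```python
import json
import urllib.request

def get_market_odds(slug: str) -> float:
    """Fetch a market's live YES price (= implied probability).
    The URL and response shape are hypothetical, not a documented contract."""
    url = f"https://example-prediction-api.com/markets/{slug}"  # hypothetical
    with urllib.request.urlopen(url) as resp:
        market = json.load(resp)
    return float(market["yes_price"])

# Exposed to the LLM as a tool, which the model calls mid-inference, e.g.:
#   User:  "Should I extend duration before the next FOMC meeting?"
#   Model -> get_market_odds("fed-25bps-hike-next-fomc") -> 0.08
#   Model: "Markets imply only ~8% odds of a 25 bps hike, so ..."
tool_spec = {
    "name": "get_market_odds",
    "description": "Live implied probability for a prediction market",
    "parameters": {"slug": "market identifier string"},
}
```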
Using prediction markets to guide decisions à la futarchy could make (DAO) governance and other forms of decision making more data-driven. Instead of simple token voting based on forum discussions, a community can create conditional prediction markets on the outcomes of proposals and use those to inform and even make choices – extending prediction markets into “decision markets” (see Butter and MetaDAO for experiments in this direction). The flow: first, conditional markets assess each decision’s expected impact on a target metric; then the decision the markets favor is executed. What makes this approach to governance attractive is that it orients decisions around outcomes rather than popularity, which is arguably more meritocratic.
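A minimal sketch of the futarchy-style decision rule (the metric, prices, and threshold are illustrative):

```python
# Two conditional markets per proposal, settled on a target metric
# (e.g., protocol revenue next quarter), with trades reverted in the
# branch that doesn't happen:
#   market A: metric value IF the proposal passes
#   market B: metric value IF the proposal fails

def decide(price_if_pass: float, price_if_fail: float, margin: float = 0.02):
    """Adopt the proposal only if markets price a materially better
    outcome conditional on passing (the margin guards against noise)."""
    if price_if_pass > price_if_fail * (1 + margin):
        return "PASS"
    if price_if_fail > price_if_pass * (1 + margin):
        return "REJECT"
    return "NO CLEAR SIGNAL"  # fall back to default governance

# Markets expect revenue of $12.4M if passed vs. $11.1M if rejected:
print(decide(12.4, 11.1))  # PASS -- traders expect the proposal to help
```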
DeFi protocols can subscribe to prediction markets to adjust parameters in real-time based on perceived risk. For example, a lending platform like Aave could subscribe to a market forecasting the likelihood of a certain asset crashing or depegging (“ETH < $2,000 this quarter”), then proactively tighten collateral requirements (lower ETH LTV; increase borrowing rates) if that probability spikes.
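A sketch of how such a subscription might drive parameters (the thresholds and values are illustrative, not Aave’s actual risk logic):

```python
def adjust_collateral_params(p_crash: float) -> dict:
    """Map the market-implied probability of 'ETH < $2,000 this quarter'
    to lending parameters. Baselines and thresholds are made up."""
    base_ltv, base_rate = 0.80, 0.03
    if p_crash > 0.25:   # elevated risk: tighten aggressively
        return {"eth_ltv": 0.65, "borrow_rate": base_rate + 0.04}
    if p_crash > 0.10:   # moderate risk: tighten slightly
        return {"eth_ltv": 0.75, "borrow_rate": base_rate + 0.01}
    return {"eth_ltv": base_ltv, "borrow_rate": base_rate}

print(adjust_collateral_params(0.30))
# {'eth_ltv': 0.65, 'borrow_rate': 0.07} -- the protocol de-risks before
# the crash scenario plays out, instead of reacting after.
```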
Rather than static news driven by pundits or other singular, potentially biased sources, prediction markets can give rise to market-driven news that updates in real time based on impartial market projections. Adjacent News is one project pioneering this approach – they embed live odds beside headlines and are building toward dynamic content that adapts to market movements. Other news sources like Bloomberg, CoinDesk, and The Block are also increasingly citing prediction markets in their stories. In the coming years, I wouldn’t be surprised to see the news become a lot more dynamic by combining live prediction market data with LLMs for dynamic content.
Prediction markets can augment traditional knowledge crowdsourcing forums (X, Reddit, Quora, Stack Exchange) by attaching financial incentives to answering quantifiable questions. Instead of asking “What will happen in the event of X?” and getting a thin, self-selected thread of opinions, one could pose the question in quantifiable form in a prediction market and let bettors arrive at an answer. What you get is the community’s collective intelligence distilled into the market price, which is more direct and useful than guessing what is signal vs. noise in a thread of opinions. Platforms like Manifold are developing this approach – users create any question and others (humans and bots) bet on it, producing a single probability judgment on the market creator’s question, updated in real time. X’s Polymarket integration could take a similar form (e.g., enabling X users to both bet on and read the odds that a promise Elon makes in a new post will come true).
Traditional financial markets have various indices and data feeds (e.g., the S&P 500, Nasdaq, etc.) that other financial apps consume – prediction markets can extend this concept by enabling indices for any quantifiable uncertainty. There’s a big B2B opportunity here to create composite indices of non-price signals that aggregate the outputs of multiple prediction markets into a single, more robust probability with confidence bands. Imagine something like how Google Finance plugs into financial market indices, but for all sorts of non-financial things. Early versions are already in production, like Metaforecast, which aggregates forecasts across platforms (including Metaculus).
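A minimal sketch of composite-index construction (the weighting scheme and numbers are illustrative):

```python
# Aggregate the same question priced on several venues into one index
# probability, weighting by liquidity, with a simple dispersion band.
markets = [
    {"venue": "A", "prob": 0.62, "liquidity": 500_000},
    {"venue": "B", "prob": 0.58, "liquidity": 200_000},
    {"venue": "C", "prob": 0.66, "liquidity": 50_000},
]

total_liq = sum(m["liquidity"] for m in markets)
index_prob = sum(m["prob"] * m["liquidity"] for m in markets) / total_liq

# Crude confidence band: half the spread of venue prices.
spread = max(m["prob"] for m in markets) - min(m["prob"] for m in markets)
print(f"index: {index_prob:.3f}  band: +/-{spread / 2:.3f}")
# index: 0.612  band: +/-0.040 -- one robust feed other apps can consume.
```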
Each of the above examples shows prediction prices “transcending” their platforms and spreading into the world. While the list above is selective, it illustrates a diversity of ways InfoFi can turn collective speculation into a living data layer for other software. The result is better software that is more reactive to what the world is currently thinking.
With these developments, a huge opportunity landscape is emerging around InfoFi. Much like past blockchain primitives gave rise to entire ecosystems, prediction markets and InfoFi will spawn their own stack of infrastructure, services, and applications.
Next-gen platforms: General platforms that challenge Polymarket and Kalshi by improving on liquidity, resolution fairness, and UX. There’s still a lot of room to grow along these dimensions, and intriguing ideas are being explored, such as user-generated markets, unified liquidity, liquidation safeguards, multiple response types, and perps for leverage.
Vertical platforms: Domain-specific prediction markets. In this category, traction is likeliest in verticals with high-frequency events and high-value outcomes, since these are more likely to drive user retention and liquidity. Sports is a good example (e.g., Football.Fun). Speculating on asset prices within specific time frames is another (e.g., Limitless).
Content as markets: Embedding prediction markets into news, social media, streaming, and other everyday online experiences. See Myriad for an example.
Bonding curves: Bonding curves link token supply to price. In prediction markets, the combined market cap of YES + NO tokens needs to equal the total collateral pool. As more YES tokens are bought, their supply increases relative to NO tokens, along with their price. This approach can bootstrap instant liquidity for user-generated markets that start from zero with no counterparty.
Institutional hedging: Funds can hedge event risk directly on prediction markets, bringing massive institutional flow. As a toy example, imagine a renewables fund fearing a Republican win in the next US elections – it can buy Republican-win contracts as a precise hedge against its expected loss if those fears materialize, and calculate the exact value of its net hedge. This level of precision makes event contracts a potentially unique and attractive alternative to blunt or loosely correlated instruments (e.g., shorting a clean energy ETF).
L2 tooling: Terminals, dashboards, portfolio trackers, discovery engines, analytics, and the like. Some examples include Betmoar, Polydata, Hashdive, Aerospace, Polysights, Flipr, and Metaforecast.
Fairer oracles: As discussed above.
Liquidity provision: As discussed above.
Prediction data APIs: Creating the “Bloomberg Terminal” of prediction markets (see “APIs for non-price signals” above). Services could include historical datasets, developer SDKs, and custom queries.
AI players: Automated trading agents that exploit mispricings, provide liquidity, and generally improve accuracy. Unlike pure liquidity provision, these AIs would be actual players seeking alpha: using data to predict outcomes better than other players (human or AI). More sophisticated AIs can execute complex hedging and arbitrage strategies across related markets (automated quant trading for event contracts).
Market researchers: With more and more prediction markets launching every day, there’s an opportunity to build AI researchers that scour these markets to surface the most valuable information. This may involve deploying swarms of specialized agents that collect and interpret data individually, then “discuss” each other’s claims to return the best answer. See HashDive, PolyScope, Caddy.
Subjective markets: While this might seem at odds with prediction markets’ focus on objective truth, there has been development around “subjective markets” for reputation (Clout; Friend.tech), beliefs (opinions.fun), and narratives (Noise). These can be described as “proto-InfoFi,” as they create market prices for specific variables – the future prominence of a person, belief, or narrative. However, reflexivity and bubble dynamics keep the prices from revealing clear information, opening up opportunities to improve their economics. Well-designed versions could be useful for talent discovery or for directing capital to high-potential ideas and narratives.
Opportunity markets: Opportunity markets are “private prediction markets where those who find opportunities get paid by those who act on them.” The basic idea is that sponsors (e.g., labels, VCs) seed liquidity in private prediction markets to source leads from outsiders on opportunities (e.g., a rising artist or startup). Privacy prevents sponsors from inadvertently helping competitors.
“The next information age won’t be driven by the 20th century’s media monoliths – it’ll be driven by markets.” - Polymarket
We are moving toward a future where collective foresight is accessible on demand, and our capacity for truth seeking is stronger than ever before.
Open markets have the potential to become our most powerful forecasting tool. InfoFi reframes the age-old practice of wagering into something profoundly useful: a decentralized mechanism for gathering and disseminating knowledge in real time. “Markets for everything, data for anything” captures a vision of the world where any question important to society can be turned into a market, and the output of that market can feed into any application that needs it.
In effect, we’re creating a new internet-native intelligence layer that distills crowd wisdom, expertise, and even machine insights into actionable probabilities. This decentralized intelligence arises from the interactions of many participants, each selfishly pursuing their own interests yet collectively producing a public good.
I see prediction markets as a meaningful development in our evolution as an informed society, much like how the early web revolutionized information access. Progress is made through decisions, and prediction markets help us make better ones. The more prediction market data is used, the more familiar the public will become with thinking about the future in probabilities, which would be a major contribution on its own.
Of course, the journey is just beginning. InfoFi today is where the web was in the early 1990s – full of promise but needing refinement, infrastructure, and broader acceptance. There will be challenges and bad actors, but over time the system will learn and improve. As more people discover the accuracy and utility of well-designed prediction markets, trust in them will grow and so will their use in applications.
Disclaimer
This article is prepared for general information purposes only. This post reflects the current views of its authors only and is not made on behalf of Lemniscap or its affiliates and does not necessarily reflect the opinions of Lemniscap, its affiliates or individuals. The opinions herein may be subject to change without this article being updated.
This article does not constitute investment advice, legal or regulatory advice, investment recommendation, or any solicitation to buy, sell or make any investment. This post should not be used to evaluate the making of any investment decision, and should not be relied upon for legal, compliance, regulatory or other advice, investment recommendations, tax advice or any other similar matters.
All liability in connection with this article, its content, and any related services and products and your use thereof, including, without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement is disclaimed. No warranty, endorsement, guarantee, assumption of responsibility or similar is made in respect of any product, service, protocol described herein.
Hiroki Kotabe