Hello, it's @b_cryptojake.
I’ve been developing DApps for about 3 years now. As of this writing (2025.10.01), a service called Legion and its YieldBasis deposit feature have been quite popular in Korean TG chats. The pattern of depositing stablecoins and farming yield fits the low-risk DeFi trend Vitalik often talks about, so I think it’s an interesting field to explore.
At launch, most TGE sites or deposit services that attract a lot of attention are unstable, and Legion is no exception. One user said they clicked for 1 hour and 40 minutes but still couldn’t even trigger the `approve()` function.
It’s easy to assume that a DApp team’s core work is smart contract development, so problems like this must be chain-related. But in most cases it’s actually a frontend RPC request optimization issue.
So I’ll share some tips on how to keep your service reliable during traffic spikes (temporary surges in transactions or view requests). There’s also a Wagmi-based code snippet included!
Let’s talk about the broken-RPC situation, which most builders and users have probably experienced:

- 100K+ people flooding in within 10 minutes
- Everyone pressing the same button (“Mint”, “Buy”, “Claim”, “Deposit”)
- Multiple functions on the page trying to read variables from the contract
In these situations, the root cause is often frontend and RPC management, not the blockchain itself. Here are some common workarounds:

- Quickly spin up a private node (Infura, QuickNode, Alchemy) and swap its RPC into your config. (But even a 100M-request quota can be burned through in just 3 hours.) A sketch of prioritizing a private endpoint over the public ones follows this list.
- After the announcement (ANN), ship optimizations as fast as possible. Even then, if you rely only on frontend cache invalidation, users won’t see the fixes unless they refresh, so the damage may already be done.
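Here’s that sketch: the paid endpoint goes first, and the public RPCs sit behind it as a safety net. This is only a rough sketch under my own assumptions; the `NEXT_PUBLIC_PRIVATE_RPC_URL` env variable and the Next.js-style `process.env` access are placeholders I’m introducing, and the full fallback config appears later in this post.

```ts
import { fallback, http } from 'wagmi';

// Hypothetical env var holding your Infura/QuickNode/Alchemy endpoint
const PRIVATE_RPC_URL = process.env.NEXT_PUBLIC_PRIVATE_RPC_URL;

// Paid endpoint first; public endpoints only take over when it errors out
// or starts rate-limiting.
export const prioritizedTransport = fallback([
  ...(PRIVATE_RPC_URL ? [http(PRIVATE_RPC_URL, { timeout: 7_000 })] : []),
  http('https://bsc-dataseed3.binance.org', { timeout: 7_000 }),
  http('https://bsc-dataseed1.defibit.io', { timeout: 7_000 }),
]);
```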
Here’s a checklist you can follow on the frontend side:

- RPC Fallback: Do you have a backup RPC to switch to when the main one fails? If an RPC breaks and keeps throwing errors without responding, your code should automatically fall back to another one.
- View-call caching: Most Wagmi hooks use `@tanstack/react-query` for caching by default. By caching view function requests, you reduce the risk of hitting RPC rate limits (see the sketch right after this checklist).
- Multicall batching: If you’re calling multiple view functions from the same contract, bundle them into a single `multicall()` request. This reduces traffic and avoids rate limits (the `useBalance` hook later in this post does exactly that).
- SSR caching: If you regenerate pages on the server for every request without caching, things slow down significantly. Regional caching of SSR responses improves load times.
- Load testing: Use load testing tools (e.g. k6, Artillery) before launch to simulate traffic spikes.
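Here’s a minimal sketch of the caching point from the checklist. The hook name and the `staleTime`/`refetchInterval` numbers are illustrative assumptions, not recommendations; the idea is simply that identical view calls inside the stale window are served from the react-query cache instead of hitting the RPC again.

```ts
import { useReadContract } from 'wagmi';
import { erc20Abi, type Address } from 'viem';

// Illustrative example: cache a totalSupply() read so repeated renders
// and remounts don't each fire a new RPC request.
export function useTotalSupply(tokenAddress: Address) {
  return useReadContract({
    address: tokenAddress,
    abi: erc20Abi,
    functionName: 'totalSupply',
    query: {
      staleTime: 15_000,           // serve cached data for 15s before re-reading
      refetchInterval: 30_000,     // poll at a fixed, predictable rate
      refetchOnWindowFocus: false, // avoid read bursts when users tab back in
    },
  });
}
```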
(Off the record: I used to monitor service communities like Discord during zero-day launches. The result? Every post was spammed with the 🚨 emoji.)
Here’s a simple way to shuffle RPCs and create a fallback setup with Wagmi:
```ts
// The target chain: BNB Smart Chain (BSC)
import { createConfig, fallback, http } from 'wagmi';
import { bsc } from 'wagmi/chains';

export const SUPPORTED_CHAIN = bsc;

// Public BSC RPC endpoints to spread read traffic across
const PUBLIC_RPC = [
  'https://bsc-dataseed3.binance.org',
  'https://bsc-dataseed4.binance.org',
  'https://bsc-dataseed1.defibit.io',
  'https://bsc-dataseed2.defibit.io',
  'https://bsc-dataseed3.defibit.io',
  'https://bsc-dataseed4.defibit.io',
  'https://bsc-dataseed1.ninicoin.io',
  'https://bsc-dataseed2.ninicoin.io',
  'https://bsc-dataseed3.ninicoin.io',
  'https://bsc-dataseed4.ninicoin.io',
];

// Fisher–Yates shuffle so each client starts from a different endpoint
function shuffleRPCList() {
  const shuffledList = [...PUBLIC_RPC];
  for (let i = shuffledList.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffledList[i], shuffledList[j]] = [shuffledList[j], shuffledList[i]];
  }
  return shuffledList;
}

// Wrap every endpoint in an http() transport and chain them with fallback()
const createFallbackTransport = (timeout = 7000) => {
  const transports = shuffleRPCList().map((url) => http(url, { timeout }));
  return fallback(transports, {
    retryCount: PUBLIC_RPC.length,
    rank: false, // keep the shuffled order instead of latency-ranking
  });
};

export const config = createConfig({
  chains: [SUPPORTED_CHAIN],
  transports: { [bsc.id]: createFallbackTransport() },
});
```
👉 `fallback()` (provided by Viem and re-exported by Wagmi) retries failed RPCs according to `retryCount`, and you can decide whether to rank endpoints by latency or just call them in order.
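If you’d rather let Viem rank the endpoints by observed latency and stability instead of sticking to the shuffled order, you can pass a `rank` config. A small sketch below assumes it lives in the same config file as above (so `PUBLIC_RPC` is in scope); the interval and sample numbers are just illustrative.

```ts
import { fallback, http } from 'wagmi';

// Periodically re-rank transports so the healthiest endpoint is tried first.
// Assumes the PUBLIC_RPC list from the config above is in scope.
const rankedTransport = fallback(
  PUBLIC_RPC.map((url) => http(url, { timeout: 7_000 })),
  {
    rank: {
      interval: 60_000, // how often to re-rank the endpoints
      sampleCount: 5,   // how many samples to take per endpoint
    },
  },
);
```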
As some of you may know, there’s a site with a great utility called CoinTool. You can use it to get a feel for whether your service’s target network tends to see sudden gas-price spikes or delivers relatively stable traffic. For example, the BSC chain from my snippet uses Proof of Staked Authority (PoSA), so it offers high TPS and a gas price that stays pinned around 1 Gwei, which means you’re unlikely to be caught off guard.
If a Binance Alpha listing is confirmed at TGE, the code above works as-is, since tokens on the BSC chain follow the BEP-20 standard.
On the other hand, look at the Linea chain, which I’ve built on in the past. Linea has been relatively stable recently, but gas prices tend to spike at certain times. If the recent history looks choppy like that, you might want to invest in more aggressive RPC optimization.
Virtually every major chain has a multicall contract deployed, and Wagmi makes it easy to batch view functions through it. Here’s my `useBalance` hook:
```ts
// /src/service/useBalance.ts
import { useReadContracts } from 'wagmi';
import { erc20Abi, formatUnits, type Address } from 'viem';

export function useBalance(tokenAddress: Address, walletAddress: Address) {
  // One multicall batching balanceOf / symbol / decimals instead of 3 separate RPC requests
  const { data, isLoading, isError } = useReadContracts({
    contracts: [
      {
        address: tokenAddress,
        abi: erc20Abi,
        functionName: 'balanceOf',
        args: [walletAddress],
      },
      {
        address: tokenAddress,
        abi: erc20Abi,
        functionName: 'symbol',
      },
      {
        address: tokenAddress,
        abi: erc20Abi,
        functionName: 'decimals',
      },
    ],
    query: {
      // Skip the request until both addresses are available
      enabled: !!tokenAddress && !!walletAddress,
    },
  });

  if (!data) return { data: null, isLoading, isError };

  const [balanceResult, symbolResult, decimalsResult] = data;
  const balanceBigInt = balanceResult?.result as bigint | undefined;
  const symbol = symbolResult?.result as string;
  const decimals = Number(decimalsResult?.result ?? 18);

  // Format for display only; keep the raw bigint around for on-chain math
  const formatted =
    balanceBigInt !== undefined
      ? Number(formatUnits(balanceBigInt, decimals)).toLocaleString('en-US', {
          minimumFractionDigits: 2,
          maximumFractionDigits: 4,
        })
      : '0.00';

  return {
    data: {
      value: balanceBigInt,
      formatted,
      symbol,
      decimals,
    },
    isLoading,
    isError,
  };
}
```
Instead of making 3 separate calls for `balanceOf`, `symbol`, and `decimals`, this approach batches them into one request. This is especially useful in DeFi apps where polling is frequent (e.g., price quotes every second).
So you can see why most TGE sites (or service launches) feel like bullshit.
In Web2, load balancers can reliably handle massive traffic. In Web3, there’s an extra layer of blockchain RPC bottlenecks, so you need distributed RPC strategies.
As frontend developers, we don’t design tokenomics, but we do need to ensure that when a user presses “Claim,” it works reliably—even during a traffic surge.
I learned these lessons through painful trial and error. Hopefully, with this checklist, you won’t end up launching another “broken” DApp.
Thanks for reading. If you enjoyed this, stay tuned for the next post:
“Why Do We Keep Seeing 429 Errors? (A Crash Course on RPC Communication)” — where we’ll dive deeper into RPC.
CryptoJake.eth