We've reached a point (or we soon will) with AI coding tools where the economics of building a native app on both Android and iOS will be comparable to, or better than, building a React Native app.
You won't need additional resources to build and maintain both versions of the app because the AI will be both your iOS and Android team. You'll also spend less time dealing with React Native differences between the platforms.
This will once again make the ability to release OTA updates the best selling point for React Native with Expo.
But, guess what, you can architect your native code in a way that allows you to ship many OTA updates without a new native build. It won't be as flexible as RN, but it can be good enough.
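To make that concrete, here's a minimal Kotlin sketch of one way to do it on Android (using the built-in org.json parser): the screen is driven by a remote JSON config, so copy, feature flags, and simple layout tweaks can change over the air without a new build. The config shape, field names, and fetch mechanism are made up for illustration.

```kotlin
import org.json.JSONObject

// Hypothetical server-driven config for one screen. The app fetches this JSON at
// launch, so copy, feature flags, and simple layout tweaks ship over the air
// without a new native build.
data class HomeScreenConfig(
    val title: String,
    val showPromoBanner: Boolean,
    val sections: List<String>,
)

// Parse the remote payload, falling back to a bundled default if the fetch failed
// or the payload is malformed.
fun parseHomeScreenConfig(json: String?): HomeScreenConfig {
    val default = HomeScreenConfig("Home", showPromoBanner = false, sections = listOf("featured", "recent"))
    if (json == null) return default
    return runCatching {
        val obj = JSONObject(json)
        val sectionsArray = obj.getJSONArray("sections")
        HomeScreenConfig(
            title = obj.optString("title", default.title),
            showPromoBanner = obj.optBoolean("showPromoBanner", default.showPromoBanner),
            sections = List(sectionsArray.length()) { sectionsArray.getString(it) },
        )
    }.getOrDefault(default)
}
```

The trade-off is exactly the one above: only the behaviors you anticipated and exposed in the config can change this way, which is why it's less flexible than RN's OTA updates.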
The Gemini and Apple deal is very interesting, and of course it's causing people to say that Google will now control AI on both Android and iOS.
My understanding is that Gemini will be the foundation model from which Apple's models will be distilled/fine-tuned.
The end result will be Apple's, not Google's.
It's a completely different way to think about the software stack.
Google supplies the engine and Apple builds the car around it.
Similar to how Mazda used Ford engines for decades before switching to their own.
Apple could eventually switch from the Gemini engine to a homegrown one.
The term web3 was aspirational, that's why I liked it.
However, it didn't sit well with anti-crypto web2 folks, for obvious reasons, and with crypto/defi purists because they don't care about the consumer web.
Did Beeple change his process? Is he using AI now?
I think a while back I casted about how gen AI would eventually let anyone make Beeple's dailies.
https://x.com/i/status/2005126892152828077
The biggest challenge I've encountered while coding with LLMs is tooling and dev environment.
Having multiple versions of the same codebase with different states on the same dev environment is a shit show.
To fully unlock the power of iterating with LLMs, each agent should run on its own isolated environment and the human driver should be able to swap between agent environments quickly and be able to pair-program/review code for each agent.
I can see that Cursor is trying to move us towards this world but using git worktrees is just a hacky and messy solution. Each agent needs its own environment (FS, DB, etc.) and the human driver needs to be able to tap in from their IDE or even their phone to run, review, revise, approve, etc.
Another reason why coding with LLMs feels so natural to me is because I'm used to writing a design doc before coding. Now it serves as a spec and prompt for the LLM.
I'm also used to starting a piece of code by writing a comment with a high-level description of what I want to implement. The difference is that now either the LLM generates that comment in planning mode, or I write it and the LLM implements it.
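Here's a toy Kotlin example of the shape of that workflow; the function and its spec are invented, the point is just the comment-first structure that either the LLM or I can then fill in.

```kotlin
// Spec-style comment I'd write first; these days it doubles as the prompt.
//
// mergeOverlappingRanges:
//  - takes a list of closed integer ranges
//  - sorts them by start
//  - merges ranges that overlap or touch
//  - returns the merged ranges in ascending order
fun mergeOverlappingRanges(ranges: List<IntRange>): List<IntRange> {
    if (ranges.isEmpty()) return emptyList()
    val sorted = ranges.sortedBy { it.first }
    val merged = mutableListOf(sorted.first())
    for (range in sorted.drop(1)) {
        val last = merged.last()
        if (range.first <= last.last + 1) {
            // Overlapping or adjacent: extend the previous range.
            merged[merged.lastIndex] = last.first..maxOf(last.last, range.last)
        } else {
            merged.add(range)
        }
    }
    return merged
}
```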
The funny thing for me with LLMs is that during my time at Google, I reviewed more code than I committed.
Since I left Google I started writing more code, but since GPT-5 and Opus 4.5 I've been back to reviewing much more code than I write myself.
It's just that this time it's written by LLMs instead of other SWEs.
Setting the higher level design, directing work and reviewing the outcome is a natural progression.
I enjoy writing code but I enjoy quick feedback loops and working at higher level design significantly more.
Reviewing code is more about the design than the small details. Especially when the small details can be automated away more and more by test suites, static analyzers and linters.
https://x.com/i/status/2006364898218082304
Just minted my Waifu Warplet! A cinematic battle scene where my Waifu and Warplet are allies fighting together in an epic arena. Waifu Warplets by @0xhohenheim
Check out my casting heat map generated via 2025 Farcaster Wrapped! 🎉
📊 2470 casts
❤️ 23949 likes
💬 6329 replies
🔥 67 day streak
My top Bestie for 2025 was @crystalseasons ♥️