Remember when every new AI release felt like AGI was just around the corner? That feeling is fading. AI progress has slowed. GPT-5 landed with little excitement, and bigger models are no longer delivering bigger breakthroughs. The reason is simple: the internet has been scraped dry.
For years, progress came from scale. Feed models more text, images, and video, and they got smarter. But now, most of the best digital content has already been mined, labeled, and compressed into today’s models. What remains is repetitive, low-quality, or synthetic. That is why recent updates feel underwhelming. The web is no longer a reliable source of fresh intelligence.
If AI is going to keep improving, it needs a new kind of data. That data will come from the physical world. This is where the richness of human experience actually happens, and where the next wave of meaningful progress will be found.
Physical AI refers to systems built on data captured directly from the real world through sensors, robotics, vehicles, wearables, and mixed-reality devices. Unlike digital text and images, this data carries motion, depth, touch, and environmental context that the internet can never replicate.
These devices generate evolving streams of information every day. The companies that control the hardware will own the most valuable datasets of the next decade.
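To make that concrete, here is a minimal sketch of what a single sample from such a device might look like, assuming a hypothetical wearable that bundles an IMU, a depth camera, a tactile sensor, and a location fix. The schema and field names are illustrative only, not any vendor’s actual format.

```python
from dataclasses import dataclass

@dataclass
class PhysicalFrame:
    """One hypothetical sample from a sensor-equipped device.

    Unlike a scraped web document, each frame carries motion,
    depth, touch, and environmental context, and devices emit
    new frames continuously instead of drawing on a fixed corpus.
    """
    timestamp: float                       # seconds since epoch
    accel_xyz: tuple[float, float, float]  # IMU acceleration, m/s^2
    gyro_xyz: tuple[float, float, float]   # angular velocity, rad/s
    depth_m: list[list[float]]             # per-pixel distance, meters
    contact_force_n: float                 # tactile reading, newtons
    ambient_temp_c: float                  # environmental context
    lat_lon: tuple[float, float]           # coarse location
```

Even this toy schema hints at why such streams are hard to replicate from the web: every field is tied to a moment, a place, and a body in motion.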
AI advances have always depended on better data. Digital sources are now exhausted. The next leap forward will not come from more web scraping. It will come from real-world data that reflects how people actually live, act, and interact.
This is not just about volume. Physical data is richer, higher-fidelity, and more contextually grounded than millions of noisy, recycled web samples. It is real data rooted in lived experience.
We can already see Physical AI emerging across different industries and devices:
Robotics: Boston Dynamics, Tesla (with Optimus), and Unitree are building machines that can move, lift, and navigate in human environments. Apple is preparing to enter personal robotics, reportedly integrating a new Siri that could connect robots into its hardware ecosystem. While Apple has lagged in the AI race, its strength in devices and design could make it a major player in this next phase. In medicine, Intuitive Surgical’s da Vinci system has already performed more than 11 million procedures, generating precision data about motion and outcomes that no simulation can match.
Although household robots are still early and costly, momentum is building. Meta has launched a new humanoid robotics division within Reality Labs, aiming to merge its Llama models with physical machines. Figure AI, a fast-growing startup that has raised more than $850M, is racing in the same direction. The competition is accelerating, and humanoid robots are no longer science fiction.
Mixed Reality Glasses: Ray-Ban Meta smart glasses have sold over 2 million units since October 2023, with sales tripling in early 2025. Google has invested $100M in Gentle Monster to make smart glasses fashionable, while Apple is betting big on the Vision Pro. Adoption is still limited by price and bulk, but each generation is getting lighter, cheaper, and more usable. Partnerships like Google × Gentle Monster and Meta × Ray-Ban signal that mixed reality glasses are finally edging toward mass adoption after years of skepticism.
On the startup front, companies are pushing the category further. Halo X, backed by $1M in seed funding, is developing always-on glasses powered by Google Gemini and Perplexity, designed to listen and respond in real time. Other early players like Madvision and Mentra are also building AI-first glasses. After several hype cycles and disappointments like Magic Leap, the sector feels like it has reached an inflection point. This time, smart glasses may truly go mainstream, and when they do, they will unlock vast streams of physical-world data.
Wearables and Smart Devices: Health devices like Whoop, Oura Ring, and Apple Watch are amassing large-scale biometric datasets. Eight Sleep recently raised a $100M funding round to expand its smart mattress technology. At the same time, social-companion devices are emerging. Friend.com is shipping its first batches, and OpenAI brought on Jony Ive to design a “friend device.” Most remain niche today, but the potential is massive once the right product-market fit is found.
And of course, the smartphone remains the ultimate smart device. Last week, Google announced new Pixel phones with Gemini deeply embedded into the OS, making AI assistance always-on and context-aware. Apple, while quieter, is preparing its own shift by integrating third-party AI models into the iPhone and iOS, signaling that it wants to be the hub for whatever AI you choose. Both moves reinforce the same idea: the phone is still the central node of our digital and physical lives, and now it is becoming the primary entry point for Physical AI.
Self-Driving Vehicles: Waymo has surpassed 100 million autonomous miles, operating ride services in five U.S. cities with more launches on the way. Tesla began its robotaxi service in Austin this summer, though scaling nationwide depends on regulators. Each new city unlocks more diverse and valuable datasets. Waymo and Tesla are expected to expand aggressively, with Waymo targeting New York City and Tokyo and Tesla close behind. Uber is also preparing to enter the space and recently backed AV startup Nuro, which raised $200M at a $6B valuation.
Specialized Robotics and Sensors: Beyond consumer devices, Physical AI is advancing through autonomous inspection systems. Companies like Skydio and Percepto deploy drones to monitor power lines, pipelines, and industrial sites with greater accuracy and safety than human crews. In heavy industry, robots from ANYbotics, Gecko Robotics, and Boston Dynamics’ Spot are taking on inspection and maintenance roles, generating high-value operational data. At the same time, breakthroughs in tactile sensing are giving machines near-human dexterity, with robotic fingertips capable of detecting force, vibration, and temperature. These applications may seem specialized, but they are producing some of the most valuable industrial datasets in the world.
NVIDIA is doubling down on robotics and embodied AI with its Jetson Thor compute platform and Cosmos Reason, a new vision-language model designed for physical reasoning. Jetson Thor launched this week and is positioned to power the next generation of robots. Unlike traditional LLMs, these systems are tuned to help robots and vision devices understand cause and effect, physics, and environmental context.
The foundation of AI is shifting from words on a screen to interactions with the physical world.
MIT’s work on “liquid networks” shows how AI trained in one environment can adapt to a completely different one, like drones shifting from forests to urban streets. This adaptability comes only from interaction with the physical world.
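For readers who want to see the idea in code, below is a minimal sketch of the liquid time-constant dynamics behind liquid networks, using a sigmoid gate and simple Euler integration. It illustrates the core equation from the MIT group’s published work rather than reproducing their implementation; all weights, dimensions, and step sizes here are made up.

```python
import numpy as np

def ltc_step(x, I, dt, tau, W, U, b, A):
    """One Euler step of a liquid time-constant (LTC) layer.

    Roughly follows dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A,
    where f is a bounded, input-dependent gate. Because f changes
    with the input, each neuron's effective time constant shifts
    on the fly -- the property that lets the network keep adapting
    as its environment changes, e.g. from forests to city streets.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ I + b)))  # sigmoid gate in (0, 1)
    dx = -(1.0 / tau + f) * x + f * A               # input-dependent decay
    return x + dt * dx

# Toy rollout: 8 neurons driven by a 3-dimensional sensor stream.
rng = np.random.default_rng(0)
x = np.zeros(8)
W = rng.normal(scale=0.1, size=(8, 8))  # recurrent weights (made up)
U = rng.normal(scale=0.1, size=(8, 3))  # input weights (made up)
b, A, tau = np.zeros(8), np.ones(8), 1.0
for _ in range(100):
    x = ltc_step(x, rng.normal(size=3), dt=0.05, tau=tau, W=W, U=U, b=b, A=A)
```

The design point worth noticing is that the adaptability lives in the dynamics themselves, not in retraining: the same weights respond differently as the input stream changes.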
Embodied cognition research echoes the same truth. Intelligence is not abstract. It develops through sensing, moving, and responding to real environments. Machines will need to do the same, adapting to new situations in real time the way humans do.
Not all Physical AI data will be owned by Big Tech. Decentralized Physical Infrastructure Networks (DePIN) are showing how communities can build and own networks of devices that generate real-world data. Projects like Helium (wireless hotspots), Hivemapper (mapping from dashcams), DIMO (connected car data), and Reborn AI (AI agents powered by decentralized data and compute) illustrate how sensors, vehicles, and distributed systems can be deployed at scale without a single corporate owner.
While still early, DePIN points to a parallel path for Physical AI: one where the most valuable datasets are not siloed within Google, Tesla, or Meta, but distributed across decentralized networks that anyone can contribute to and benefit from.
Physical data captures motion, depth, touch, and real-world context that text and pixels alone cannot provide. Unlike static web content, sensors and devices generate fresh streams of information every day, ensuring that models stay current rather than recycling the past. Rich, high-fidelity data from real environments is far more valuable than millions of synthetic or low-quality samples. Most importantly, true intelligence emerges from lived, physical experience, not from abstract text or digital images.
AI has hit a wall. The digital well has run dry. The next wave of breakthroughs will not come from scraping the past. It will come from capturing the present.
Robots that move, glasses that see, wearables that sense, and cars that learn. This is how intelligence will evolve. Physical AI is not just an upgrade. It is the foundation of the next decade of progress, bringing AI closer to human experience and pushing AGI within reach.
The future of AI is not just digital. It is physical.
TJ