You ask me who to follow in AI, but don’t even follow me—after I’ve spent two years sharing insights and building a presence on Farcaster. Then, there’s another person who literally copied my following list, recommended others follow me, but when I tried to return the favor? Realized I’m blocked. I guess they just wanted to give the impression they are in the know. I see why so many AI folks have churned ☹︎
I’ll still check in and post occasionally—this isn’t me quitting Farcaster or anything like that, just sharing some 💭
Attached below is one example, but I’ve received plenty of similar messages—some harsher toward Farcaster, all eye-opening, all from AI folks. I also made sure they knew I’m still here, just not posting as often.
My experience hasn’t been all bad—many from Farcaster have reached out to say they only know what they do about AI because of me and have shared their gratitude. That support is why I still return.
FAST is a robot action tokenizer that simplifies and speeds up robot training. It enables:
> 5x faster training compared to diffusion models.
> Compatibility with all tested robot datasets.
> Zero-shot performance in new environments, including the DROID dataset, controlling robots across a variety of unseen settings.
> Simple autoregressive VLAs that match diffusion VLA performance.
> Mixed-data VLA training, allowing integration of non-robot data like web data, subgoals, and video prediction.
FAST compresses actions using discrete cosine transform, reducing redundancy and enabling efficient VLA training on high-frequency tasks. It scales to complex robot tasks with simple next-token prediction, converging in days instead of weeks.
A pre-trained FAST tokenizer based on 1M robot action sequences is available on Hugging Face, working across various robots and supporting mixed-data VLA training.
https://huggingface.co/physical-intelligence/fast
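To make the DCT idea concrete, here’s a minimal, illustrative sketch of the general recipe—compress a smooth action chunk with a discrete cosine transform, keep only low-frequency coefficients, and quantize them to discrete tokens. This is not the actual FAST implementation (which also applies byte-pair encoding on top, and whose scale and chunk sizes differ); the array shapes and quantization scale below are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

# Hypothetical 50-step chunk of 7-DoF actions (a smooth random walk
# stands in for a real high-frequency control signal).
actions = np.cumsum(rng.normal(scale=0.05, size=(50, 7)), axis=0)

# DCT along the time axis concentrates energy in low frequencies.
coeffs = dct(actions, axis=0, norm="ortho")

# Keep only the first k coefficients per dimension; the discarded
# high-frequency terms carry little information for smooth signals.
k = 10
compressed = coeffs[:k]

# Quantize to integer tokens (the scale of 50 is an arbitrary choice).
tokens = np.round(compressed * 50).astype(int)

# Decode: dequantize, zero-pad back to full length, inverse DCT.
recovered = np.zeros_like(coeffs)
recovered[:k] = tokens / 50.0
reconstructed = idct(recovered, axis=0, norm="ortho")

print("tokens per chunk:", tokens.size)  # 70 tokens vs. 350 raw values
print("mean abs reconstruction error:",
      float(np.mean(np.abs(actions - reconstructed))))
```

The payoff is that a VLA can then predict these few discrete tokens autoregressively instead of hundreds of continuous values per chunk, which is what makes simple next-token prediction competitive with diffusion heads.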
Open research and open source accelerate progress by fostering collaboration, transparency, and accessibility. They enable people to build on existing work, reduce redundancy, and solve problems more efficiently. Openness has always been a key driver of innovation and growth—open is the path forward!
Looks like they’re finally paying attention.
I still remember when I posted this. It wasn’t a direct response, but I shared it after someone referenced a LinkedIn post claiming Chinese models were far behind.
Zero engagement—people were too busy copying and pasting everything they saw on X or elsewhere to look like they were in the know, rather than learning from those actually in the trenches.
AI evolution at its finest.
Never said, “Don’t sleep on Meta,” but I’ve been shouting “Don’t sleep on DeepSeek” for well over a year!
It’s incredibly rewarding to see my research and work validated with their latest release.
AgiBot World is an open-source dataset for robotic learning with over 1M trajectories from 100+ real-world scenarios, covering tasks like manipulation, tool use, and multi-robot collaboration.
https://agibot-world.com
rStar-Math shows SLMs can rival or surpass OpenAI o1 in math reasoning w/out distillation from larger models, using MCTS and three key factors:
1. Code-Augmented CoT Synthesis: MCTS generates verified reasoning data to train policy SLMs.
2. Enhanced PRM: A novel training approach avoids naïve annotations, yielding a stronger process preference model (PPM).
3. Self-Evolution Framework: Four rounds of self-evolution refine reasoning with millions of synthesized solutions for 747k problems.
Performance Highlights:
> Achieves 90.0% on MATH, improving Qwen2.5-Math-7B by +31.2% and surpassing OpenAI o1-preview by +4.5%.
> Boosts Phi3-mini-3.8B from 41.4% to 86.4%.
> Solves 53.3% of AIME problems, ranking in the top 20% of high school competitors.
don’t sleep on small models.
https://arxiv.org/abs/2501.04519
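The core of the code-augmented CoT idea above can be shown with a toy sketch: candidate reasoning steps are written as executable Python, run, and only steps that execute successfully and pass a check survive as verified training data. This is a deliberately simplified illustration, not the rStar-Math pipeline—the candidate list and the `verify` helper are assumptions standing in for model-generated steps and MCTS-guided verification.

```python
# Hypothetical candidate steps a policy model might emit for "3*4+2":
candidates = [
    "x = 3 * 4 + 2",    # correct computation
    "x = 3 * (4 + 2)",  # wrong grouping -> wrong value
    "x = 3 * 4 +",      # malformed code -> filtered out automatically
]

def verify(step: str, expected: int) -> bool:
    """Execute a candidate step; keep it only if it runs and checks out."""
    env = {}
    try:
        exec(step, {}, env)  # syntax errors and runtime errors both fail
    except Exception:
        return False
    return env.get("x") == expected

verified = [s for s in candidates if verify(s, 14)]
print(verified)  # only the correct, executable step survives
```

Execution-based filtering like this is what lets the synthesized data avoid the noisy, unverifiable intermediate steps that plague plain text CoT distillation.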
This model didn’t quite pass my vibe check back in December, so I held off on sharing. That said, there’s still something to learn from this release, even if it’s not my top pick among SLMs right now.
A while back, while many of my peers were at NeurIPS, I attended the Humanoid Summit. Being involved in cutting-edge robotics was exactly the reset I needed to stay focused on the ultimate goal. It’s always inspiring—and a privilege—to support friends pushing the field forward. Perfect motivation heading into the new year.