

The highly anticipated releases of GLM-5 and MiniMax M2.5 were the big highlights this week. ByteDance also released Seedance 2.0, though indications are that it is currently accessible only within China. A member also shared the productivity gains he made after switching to Wispr Flow, a dictation tool I had suggested to him.
There was also Matt Shumer's long post, which went viral.
In the non-AI world, two major events are happening in the coming week based on the lunisolar cycle, hence the relevant imagery used in the banner (the prompt and model used are at the end of this newsletter).
To those who celebrate the Chinese/Korean/Vietnamese New Year, I wish you the appropriate salutations for each of your regions; and to those who observe the month of Ramadan, Ramadan Mubarak, and I humbly request remembrance in your prayers.
Readers who have family, friends or acquaintances who are curious to know more about rapidly evolving AI tools, services and use cases, and who would benefit from subscribing to this weekly newsletter, are encouraged to share this publication link with them and invite them to subscribe.
The following messages were posted on the 'All Things AI' Telegram group from Sunday 8th Feb 2026 to Saturday 14th Feb 2026.
1 | From serial entrepreneur Furqan Rydhan, founder of Thirdweb and Founders Inc, comes Nebula. Think of it as a form of hosted OpenClaw (though I don't believe it actually uses OpenClaw underneath). Currently free in early access. |
2 | via Furqan Rydhan Thanks for posting! Think of it like a Slack filled with highly autonomous agents that get things done. It’s in early access but already very powerful. It can connect to any service and implement any API. It has its own file system, trigger system, sub-agents and code execution. |
3 | via Anthropic — Our teams have been building with a 2.5x-faster version of Claude Opus 4.6. We’re now making it available as an early experiment via Claude Code and our API. We granted all current Claude Pro and Max users $50 in free extra usage. This credit can be used on fast mode for Opus 4.6 in Claude Code. https://code.claude.com/docs/en/fast-mode |
4 | via Robby Yung Nice map of Google’s AI ecosystem. |
1 | Steve Yegge writes about Anthropic’s culture. Nothing concrete, mostly vibes. He says it has that early Google/Amazon lightning in a bottle energy. https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b |
2 | via Kevin Dent in response to Robby Yung message on 8th Feb === I don’t think there will be any absolutes in the agent race. I don’t even think it’s going to be a race. The way I look at agents right now is that they started off as children, and in doing so essentially started life on a learned experience. Now those children are young adults; ChatGPT is the fun one, Claude is the nerd, Gemini is the high potential student, but sometimes they burn down the lab. I also don’t believe in tech absolutes, browsers are a great example. I started off using Netscape! It was amazing until it wasn’t. Then I moved to IE, once again MSFT followed the pack and it was amazing, until it wasn’t. On and on it went with browsers such as Firefox, Chrome, dark web browsers etc. Much like iPhone iterations now, they all feel the same. My theory is that agents will become something similar to our social circle. There will be agents much like friends that we ask relationship advice and those that we ask for advice on fixing the dishwasher. |
3 | A 33-page guide from Anthropic on how to build Skills for Claude https://resources.anthropic.com/hubfs/The-Complete-Guide-to-Building-Skill-for-Claude.pdf |
1 | The various X tweets/threads have jaw-dropping visuals. ByteDance's Seedance 2.0 Generates Hyper-Real AI Videos in China. ByteDance released Seedance 2.0 in beta on its Jimeng AI platform, allowing users to create 1080p video clips from prompts, images, or audio, with consistent characters, multi-shot stories, and even edits to swap actors or backgrounds. Demos range from Dragon Ball Z live-action fights and Arcane-style chases to realistic war scenes and dance videos that have drawn millions of views. While creators celebrate the tool's physics and rhythm mastery, filmmakers worry it threatens VFX jobs and raises deepfake and IP concerns. Access remains China-only for now. |
2 | via Fares Hey, I just wrote an article on OpenClaw. Let me know what you think. |
1 | From Thomas Dohmke, former CEO of GitHub — The purpose of our new company Entire is to build the world's next developer platform where agents and humans can collaborate, learn, and ship together. A platform that will be open, scalable, and independent for every developer, no matter which agent or model you use. Our vision is centered on three core components: 1) A Git-compatible database that unifies code, intent, constraints, and reasoning in a single version-controlled system. 2) A universal semantic reasoning layer that enables multi-agent coordination through the context graph. 3) An AI-native user interface that reinvents the software development lifecycle for agent–human collaboration. |
1 | Zhipu AI Launches GLM-5, Top Open-Source Model for Coding and Agents. The 744-billion-parameter mixture-of-experts model activates only 40 billion parameters per use, and was trained on 28.5 trillion tokens with a 200,000-token context window. It tops open-source benchmarks, e.g. 77.8% on SWE-bench Verified for coding, and leads agent leaderboards, matching closed models like Claude Opus. Available now on chat.z.ai for subscribers, with open weights on Hugging Face under an MIT license and low pricing on OpenRouter at $0.80 per million input tokens. |
1 | It's raining models, particularly from Chinese labs, in the run-up to Chinese New Year. MiniMax releases M2.5, claiming the model delivers on the “intelligence too cheap to meter” promise. M2.5 tops benchmarks like 80.2% on SWE-Bench Verified for GitHub issues and 76.3% on web navigation, edging out rivals like Claude Opus in their own tests, though independent leaderboards await updates. Its mixture-of-experts design activates just 10 billion of 230 billion parameters per run, slashing costs to $0.30 per million input tokens and enabling cheap, always-on agents. |
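For readers who like numbers, here is a quick back-of-the-envelope sketch of what the quoted input-token rates imply for a heavy agent workload. The rates are the ones quoted in the messages above; the 50-million-tokens-per-day workload is purely illustrative, and output-token rates are omitted because they were not quoted.

```python
# Back-of-the-envelope input-token cost comparison, using the
# per-million-token input rates quoted above. The daily token
# volume is an illustrative assumption, not a measured figure.
RATES_PER_MILLION_INPUT_USD = {
    "GLM-5 (OpenRouter)": 0.80,
    "MiniMax M2.5": 0.30,
}

def input_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in USD for `tokens` input tokens at `rate_per_million` USD per 1M."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical always-on agent reading 50M input tokens per day.
daily_tokens = 50_000_000
for model, rate in RATES_PER_MILLION_INPUT_USD.items():
    print(f"{model}: ${input_cost(daily_tokens, rate):.2f}/day input cost")
```

At these rates even a fairly token-hungry agent stays in the tens of dollars per day on input, which is the arithmetic behind the "cheap, always-on agents" framing.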
Below is my personal website, which aggregates links to many of my socials as well as the various content and communities that I curate. Feel free to share this link with others who you think may find this content/community useful.
The cover image of this newsletter was generated via the Google Nano Banana Pro model within the Freepik tool, using the following prompt:
Hyper-detailed A surreal, mesmerizing depiction of a massive purple full moon dominating the night sky, its surface glowing with intricate, textured patterns that seem almost alive. The moon radiates a warm, ethereal light, surrounded by dramatic dark clouds with luminous silver linings, creating an otherworldly and mystical atmosphere.
3 | via Prashish Hey everyone, wrote an article breaking down the architecture of a framework for multi-agent collaboration: routing, memory, context sharing, and delegation. I hope this is helpful: |
2 | via Robby Yung Great article about Amanda Askell, Anthropic’s philosopher in residence. |
3 | via Robby Yung Nothing has launched a vibe coding app creation interface natively within their OS - |
4 | via Robby Yung Instaclaw from Virtuals: |
2 | xAI has just publicly posted the full 45-minute all-hands meeting that Elon Musk recently held with employees. An X poster asked Grok to provide highlights of this meeting in bullet points, which I'm sharing below ==== - xAI's rapid progress: #1 in voice/image/video gen, forecasting; 1M GPU equivalents; Grokkopedia to surpass Wikipedia. - Reorg into Grok (main/voice), Coding (self-improving), Imagine (multimodal), Macro Hard (company emulation). - Voice: From zero to leader in 1 year; integrated in 2M+ Teslas. - Coding: Aiming for SOTA in 2-3 months; direct binary creation. - Imagine: #1 in 6 months; 6B images/month; real-time video soon. - Infra: Massive Memphis cluster; orbital/moon data centers planned. - X app: 1B+ users; payments, chat open-sourcing; space exploration for universe understanding. ==== |
2 | via Tom Ho === I am skeptical of the benchmark cuz I am also a user of Kimi , Qwen and GLM along side with Claude, openAI and Gemini models When I tried glm 4.7 with incredible benchmarks it didn’t seem to perform as well as I had hoped in cursor. But I didn’t try it inside Claude code, it may be the toolset optimization. Claude opus and sonnet just seem to perform much better for programming |
3 | via Mau Codex 5.3 is pretty on par with opus 4.6 for coding from my experience. It’s faster too, can read bigger codebases rather quickly |
4 | Google Chrome Ships WebMCP - Major Breakthrough for AI Agents Google and Microsoft jointly launched WebMCP (Web Model Context Protocol) in Chrome 146 Canary. This proposed web standard lets any website expose structured, callable tools directly to AI agents through a new browser API (navigator.modelContext). This could be the "USB-C of AI agent interactions" - replacing expensive screen-scraping and DOM parsing with single structured tool calls. Dan Petrovic called this the biggest shift in technical SEO since structured data. Glenn Gabe called this a big deal. |
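WebMCP itself is a browser JavaScript API (the message above names `navigator.modelContext`), so the sketch below is only a conceptual Python mock of the underlying pattern: a site registers a named, schema-described tool that an agent calls directly, instead of the agent scraping the page. All names and shapes here are illustrative assumptions, not the real API surface.

```python
# Conceptual mock of the WebMCP idea: register a tool once, then let
# an agent invoke it with one structured call rather than parsing the
# DOM. NOT the real browser API -- purely an illustration.
from typing import Any, Callable

class ModelContextMock:
    """Hypothetical stand-in for a browser-provided tool registry."""

    def __init__(self) -> None:
        self._tools: dict[str, dict[str, Any]] = {}

    def register_tool(self, name: str, description: str,
                      handler: Callable[..., Any]) -> None:
        # A real implementation would also carry an input schema the
        # agent can inspect before calling.
        self._tools[name] = {"description": description, "handler": handler}

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        # One structured call replaces screen-scraping and DOM parsing.
        return self._tools[name]["handler"](**kwargs)

ctx = ModelContextMock()
ctx.register_tool(
    "search_products",
    "Search the product catalogue by keyword.",
    lambda query: [f'placeholder result for "{query}"'],
)
print(ctx.call_tool("search_products", query="usb-c cable"))
```

The "USB-C of AI agent interactions" framing comes from exactly this shape: the agent needs to know only the tool name and its schema, not the page layout.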
5 | via Fazri Zubair I would concur that Codex 5.3 is another level better. I've been using it for the last few days here and it is a marked improvement over 5.2 which was my go-to choice for most of my coding tasks with Opus for Planning. I might just stick with 5.3 for both Planning and Execution of my tasks. |
6 | via Fazri Zubair On another note, Yusuf introduced me to dictation-based prompting and I’ve been using Wispr Flow for emails, messages, and prompting my AIs for coding tasks this week. I was honestly skeptical at first because I’m pretty old-school and prefer typing, but I’ve seen a noticeable improvement in my workflow. The dictation tech is way better than I expected. What I really like is how it uses AI to auto-correct intelligently. If I change my mind mid-sentence or say “actually replace that with this,” it understands the intent and cleans it up in the final output. That’s been surprisingly powerful. It’s significantly increased my productivity this week. I’m planning to roll this out to my engineers as well. It’s a great way to improve both productivity and accuracy when working with AI models. You speak faster than you can type. 150+ words per minute is easily hit using this method. |
7 | For those regularly switching 'harnesses', or working on a repo where different teammates use different 'harnesses', do watch out for subtleties in where your harness picks up skills with respect to directory locations. Whilst there is a movement to standardize on .agent/skills as the canonical location, Anthropic as of now still uses .claude/skills. Read the docs and talk to the developers of your agent harness to learn the current canonical location and whether there are fallbacks. Sometimes what you think of as an IDE may have an implicit harness; Cursor, Kilo Code and Cline are examples of such products that come to mind. https://github.com/agentskills/agentskills/issues/15#issuecomment-3813596535 One can also use a tool such as rulesync to keep things aligned. |
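A quick way to spot the mismatch described above is simply to check which candidate skills directories exist in your repo. The sketch below does that; the candidate list covers only the two locations mentioned in the message, so extend it per your own harness's documentation.

```python
# Report which candidate skills directories exist in a repo, so you
# can see which location(s) a harness might pick up. The candidate
# list is illustrative -- check your harness's docs for its actual
# canonical location and fallbacks.
from pathlib import Path

CANDIDATE_SKILL_DIRS = [".agent/skills", ".claude/skills"]

def find_skill_dirs(repo_root: str) -> list[str]:
    """Return the candidate skills directories present under repo_root."""
    root = Path(repo_root)
    return [d for d in CANDIDATE_SKILL_DIRS if (root / d).is_dir()]

if __name__ == "__main__":
    import sys
    repo = sys.argv[1] if len(sys.argv) > 1 else "."
    found = find_skill_dirs(repo)
    if len(found) > 1:
        print("Warning: multiple skills directories; harnesses may disagree:", found)
    else:
        print("Skills directories found:", found or "none")
```

If the script warns about multiple directories, that is exactly the situation where two teammates' harnesses can silently load different skills from the same repo.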