Current diffusion models for AI image generation create images by gradually reversing noise: they start from a noisy image and progressively clean it up until a high-quality visual emerges. But what if, instead of generating pixel-based images, the model could generate shader code, that is, code that itself renders the image procedurally? This could open up a whole new way of thinking about AI-generated visuals. We don't know if it would work, but it's a direction worth exploring.
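For context, the "gradually reversing noise" process mentioned above can be sketched as a DDPM-style reverse loop. This is a minimal illustration, not a real generator: the noise predictor here is a stand-in that returns zeros, where a trained network would go, and the schedule values are arbitrary choices for the sketch.

```python
import numpy as np

def predict_noise(x, t):
    # Placeholder for a learned noise-prediction network.
    # A real diffusion model would run a neural net here.
    return np.zeros_like(x)

def reverse_diffusion(shape=(8, 8), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)  # toy noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)  # start from pure noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)
        # DDPM posterior mean: remove the predicted noise component
        x = (x - (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # re-inject a little noise on every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

img = reverse_diffusion()
print(img.shape)  # (8, 8)
```

A shader-generating variant would keep the same iterative refinement idea but operate over code tokens rather than pixel arrays, which is exactly the open question here.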