I’m building a deterministic “model runner + visualization layer” so researchers don’t have to build their own front-end every time.
You load a model (code + parameters + an optional CSV input), run it in a constrained, deterministic runtime, and it produces standard outputs (charts, tables, summary stats).
Each execution is saved as a verifiable run bundle (model version, inputs, and their hashes), so anyone can replay the run and get identical results. That makes sharing and peer review much easier.
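To make the run-bundle idea concrete, here's a minimal sketch of how the manifest side could work. Everything here is an assumption about the design, not a spec: the function names (`sha256_file`, `make_bundle_manifest`) and the manifest fields (`model_version`, `inputs`, `bundle_id`) are hypothetical, chosen just to show that hashing the inputs plus a canonical serialization of the manifest gives a single stable ID for the whole run.

```python
import hashlib
import json

def sha256_file(path):
    """Stream a file through SHA-256 so large CSVs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_bundle_manifest(model_version, input_paths):
    """Build a manifest pinning the model version and a hash of every input.

    Hypothetical sketch: the real bundle would also record the runtime
    version, RNG seed, and output hashes.
    """
    manifest = {
        "model_version": model_version,
        # Sort paths so the manifest is independent of argument order.
        "inputs": {p: sha256_file(p) for p in sorted(input_paths)},
    }
    # Hash a canonical JSON encoding of the manifest so the whole run
    # can be referenced (and compared) by one ID.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    manifest["bundle_id"] = hashlib.sha256(canonical.encode()).hexdigest()
    return manifest
```

Replaying a run would then just mean re-hashing the inputs and outputs and checking that the recomputed `bundle_id` matches the stored one; any drift in code, parameters, or data shows up as a hash mismatch.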
What would make this most useful in V1?