Two things Claude really loves that human programmers don't really do (in my experience):
1. add tons of logging, console logs, etc.
2. catch every exception, but only log and return or re-throw
At least it makes debugging by console.log easier, because the logs are -everywhere-.
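Point 2 usually looks something like this (a made-up helper for illustration, not real SDK code):

```typescript
// Hypothetical example of the Claude-style pattern: wrap everything,
// log loudly, then catch just to log and re-throw.
function loadUser(id: string): string {
  console.log(`loadUser: starting for id=${id}`); // logging everywhere
  try {
    if (!id) throw new Error("missing id");
    const name = `user-${id}`; // stand-in for a real lookup
    console.log(`loadUser: resolved ${name}`);
    return name;
  } catch (err) {
    // catch every exception, but only log and re-throw
    console.error("loadUser failed:", err);
    throw err;
  }
}
```

The catch block adds nothing the caller couldn't do, but it does guarantee a console trail at every layer.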
Playing with a new game concept: https://zora.co/@iiii/live. The idea is that trades move the playing field up and down, and the last trader wins the rewards from mystery boxes.
Zora SDK Profile Query now returns `creatorCoin` information and can be queried with any Base wallet address associated with the profile, or with the user's username: https://docs.zora.co/coins/sdk/queries/profile
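Rough sketch of consuming that response. The types and field names below are assumptions from the docs link above, not verified SDK types, and the actual network call is only shown as a comment:

```typescript
// Hedged sketch: assumed response shape for the profile query.
// The real call would be something along the lines of
//   const res = await getProfile({ identifier: "0x..." }); // address or username
// via the Zora coins SDK; everything below is illustrative only.
type CreatorCoin = { address: string; marketCap?: string }; // assumed fields
type ProfileResponse = {
  data?: { profile?: { handle?: string; creatorCoin?: CreatorCoin } };
};

// Pull out the creatorCoin, if the profile has one.
function creatorCoinOf(res: ProfileResponse): CreatorCoin | undefined {
  return res.data?.profile?.creatorCoin;
}
```

Optional chaining keeps this safe for profiles that don't have a creator coin yet.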
LLM-driven QA: if an LLM thinks naming, fields, or API structure should be different on first pass, maybe there's something there. Granted, it may be based on outdated or incorrect training data, but it may be an interesting way to surface naming that feels normal for internal systems and slightly strange for external ones.
Another example of this is missing fields: I've had a few LLMs hallucinate fields that customers had asked for, and they were possible to add, so we added them. It's essentially machine QA for developers, rather than waiting for feedback from developers on how to use an SDK.
Noticing these things while developing and building examples for the Zora SDK.
I've found LLMs to be a hidden SDK design partner. Instead of being annoyed when the LLM messes up a field or API structure, take it as feedback that maybe your users also want that field or find that name confusing. That's not just random noise – it could be a hidden insight.
While building the Zora SDK, LLMs have hallucinated features that customers actually wanted. It's like having a fast developer QA partner. It also gets confused looking up docs, which helps determine which parts of the documentation to improve.
The key is staying open to those unexpected suggestions and confirming these findings with others on your team because, sometimes, hallucinations are just that – hallucinations.