Auditing With A Wizard

Bippity boppity boo, I want to audit with you, Audit Wizard. For those unfamiliar with Audit Wizard, I recently wrote a post that explains the platform in detail. Read it here for some background, or check it out yourself at app.auditwizard.io.

Today, I want to share how I am currently using AW in my auditing process, especially within the context of the recent Titles Publishing contest on Sherlock.

Some background on Titles from the contest page:

“TITLES Protocol builds creative tools powered by artist-owned AI models. The underlying TITLES protocol enables the publishing of referential NFTs, including managing attribution and splitting payments with the creators of the attributed works.”

Pretty cool; essentially, Titles is infrastructure for publishing original works of AI-generated art. It lets others create derivatives from those AI models while crediting the original producer and splitting payments with them.

But were there any bugs, you ask? Yes, yes there were.

Process

Overall, using Audit Wizard for this contest was almost like having another auditor on my team. To get started, I found the Titles audit on Audit Wizard’s contest page and clicked it. Originally, I aimed to use Audit Wizard from start to finish, but due to some missing imports, I also set the project up locally in VS Code.

After my initial setup, I went through all the contracts in scope line-by-line within Audit Wizard, taking notes anytime something interesting came up. During this contest, I extensively utilized the different tags Audit Wizard provides within their notepad feature: blue for a normal note, yellow for info on how code was implemented or defining variables, red for a potential issue, and my custom tag ‘reported’ for finishing up the audit.

Screenshot of Audit Wizard app and notepad

After reading through the contracts in Audit Wizard, I moved to VS Code and repeated the process. After this second pass through the code, I returned to AW and used the whiteboard feature to sketch out the protocol, which greatly enhanced my understanding of the overall architecture and the flow of creating Editions & Works.

Whiteboard diagram of the Titles Protocol

Following the initial code review, it was time to go back through my notes, turn any issue tags into real findings, and write up any PoCs that were necessary. To do this, I searched through the notes I had tagged as issues within Audit Wizard. Luckily for me, the three issues I found were all within the same contract and all related to how the batch minting functions handled ether transfers. This made the write-ups and PoCs relatively straightforward.
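
The post doesn't spell out the exact bugs, but a classic example of this bug class in payable batch-mint functions is validating `msg.value` once per iteration even though the caller only pays once for the whole call. Here is a minimal Python simulation of that pattern (the class, prices, and function names are hypothetical, purely for illustration, and not taken from the Titles codebase):

```python
class Edition:
    """Toy model of an NFT edition with a per-token mint price (in wei)."""

    def __init__(self, price: int):
        self.price = price
        self.minted = 0
        self.balance = 0  # ether actually received by the contract

    def batch_mint_buggy(self, msg_value: int, count: int) -> None:
        # BUG: the payment check runs once per iteration, but in Solidity
        # msg.value is constant for the whole call -- the caller pays once
        # yet the check passes `count` times.
        for _ in range(count):
            assert msg_value >= self.price, "underpaid"
            self.minted += 1
        self.balance += msg_value  # only one payment ever arrives

    def batch_mint_fixed(self, msg_value: int, count: int) -> None:
        # FIX: require payment for the entire batch up front.
        assert msg_value >= self.price * count, "underpaid"
        self.balance += self.price * count
        self.minted += count


ed = Edition(price=100)
ed.batch_mint_buggy(msg_value=100, count=5)  # pays for 1 token, mints 5
print(ed.minted, ed.balance)                 # 5 100
```

The fixed variant makes the same call revert unless the caller sends `price * count`, which is the standard remediation for this pattern.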

In the end, I submitted 4 findings, 3 of which were validated. Audit Wizard was an indispensable tool in my toolbelt for this audit.

What Went Well

Reading through the code twice and having both VS Code and Audit Wizard open simultaneously was incredibly helpful. It felt like having a second pair of eyes, allowing me to reference multiple sections of the codebase without constantly switching tabs. Additionally, I wasn't overly influenced by previous notes and could still converse with Audit Wizard’s AI if I had any further questions about the codebase.

It’s always beneficial to discuss the code with the AI, especially when I have specific questions about implementation or need reassurance. The whiteboard and notes were particularly helpful during this contest. In past audits, I found the note-taking process within Audit Wizard somewhat cumbersome, but I'm learning to better utilize it. For this contest, sketching out how a user creates with AI models, Editions, and Works on the whiteboard was very helpful and gave me a better understanding of the whole protocol.

What Could Be Better

While the "Create Finding with this note" feature helped me word findings better, the AI’s initial output usually required refinement to fit the format a contest expects.

When discussing issues with the AI, it’s essential to be cautious. One of my findings wasn’t initially understood by the AI and required additional explanation to clarify the mistake.

The missing imports prevented me from writing and running tests within Audit Wizard. While this is a fixable problem, it can be tedious, so I opted to set things up locally, which was a better use of time than fixing all the imports. I had already planned on writing any PoCs or tests for the audit locally because the testing feature in Audit Wizard is a bit slower than doing it yourself in your own Foundry setup.

Wrap Up

I ended with three valid findings, all related to Ether transfer issues. Audit Wizard was incredibly helpful - effectively a co-pilot I could rely on throughout the contest.

The AI is useful but requires careful handling to avoid misunderstandings. The whiteboard helped visualize flows, and the missing imports didn't significantly affect my use of Audit Wizard. For my next audit, I plan to use the checklist feature, refine my note-taking process, and experiment more with testing in Audit Wizard.

If you're a smart contract security researcher or looking to write more secure smart contracts, you should be using this tool. Head to app.auditwizard.io to get started—it's free! The team is constantly updating the app, so it's likely there will be even more features by the time you read this. For example, they recently added the static analysis tool 4naly3er. Check it out and share your thoughts on Twitter; I’d love to hear about others' experiences using the platform.