<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>re:make</title>
        <link>https://paragraph.com/@remake</link>
        <description>Find Problems. Make Solution. Build. Deploy</description>
        <lastBuildDate>Sun, 05 Apr 2026 05:47:02 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[Building with AI: Test First. Test Later. ]]></title>
            <link>https://paragraph.com/@remake/building-with-ai-test-first-test-later</link>
            <guid>zpDrPICWXbB9EAFpFoMx</guid>
            <pubDate>Sat, 07 Jun 2025 12:29:22 GMT</pubDate>
            <description><![CDATA[AI has removed the initial barrier to entry into code development. However, it still has a long way to go before its code quality can be relied on. Unfortunately, no-code platforms and AI builder platforms have made people believe in a fractured approach to code development. Their suggestion to their customers is: just write the prompt, iterate on the fixes, and trust the output. This is not how AI-based development should work. In fact, this approach would only m...]]></description>
            <content:encoded><![CDATA[<p>AI has removed the initial barrier to entry into code development. However, it still has a long way to go before its code quality can be relied on. </p><p>Unfortunately, no-code platforms and AI builder platforms have made people believe in a fractured approach to code development. Their suggestion to their customers is: </p><blockquote><p>Just write the prompt, iterate on the fixes, and trust the output. </p></blockquote><p>This is not how AI-based development should work. In fact, this approach only makes such platforms richer on burned tokens, as users pour ever more effort into prompting and trying to fix their broken apps. </p><p>If you are putting money into a platform where you prompt and generate code while burning tokens, it pays to use the variety of AI models out there to write the project documentation first, and then build the code, and the app, on top of it. </p><h3 id="h-what-documents-should-i-use-to-build-the-project" class="text-2xl font-header"><strong>What documents should I use to build the project?</strong></h3><ul><li><p>Project requirements document</p></li><li><p>Project directory structure</p></li><li><p>Project tech stack</p></li><li><p>Frontend guidelines</p></li><li><p>Backend guidelines</p></li><li><p>UI guidelines</p></li></ul><p>You can use plenty of the LLM models out there to write these documents. However, my approach has always been to use a different LLM than the one that builds my code. </p><p>For example, if Claude 4 builds my code, I won't use the same AI in the same tool to write my documentation. Instead, I use GPT models, DeepSeek, or even Mistral for the docs. </p><p>I take this approach because I have found that each LLM offers different output and different restrictions and boundaries for a project. 
And when I use the same model that builds the code to write the docs, it often tries to justify its actions while cutting corners and not following its own docs and rules. </p><p>I have no idea why this is the behavior. However, when I use Claude or Gemini as code writers, I make sure to feed them either DeepSeek-based or GPT-based docs and rules to build the project code on. </p><p>Once I constrain the code builders with the project documentation, a lot of useless code, unwanted features, and excessive detailing of unimportant features simply vanish.</p><p>So this is the "<em>write documentation with tests in mind</em>" approach. You write the tests first, then build the docs with the "<em>tests in mind</em>" for the features and the project development. </p><p>I even make sure the LLM pays attention to small stats like these.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/eb04d5ce3ace2c8c39f2a76c56cdb75c.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAGCAIAAAAt7QuIAAAACXBIWXMAAA7DAAAOwwHHb6hkAAAAEnRFWHRTb2Z0d2FyZQBHcmVlbnNob3ReVQgFAAABLElEQVR4nK2PsU4CQRRFhwJN9D8otuMLVifW+B10LIVIhZpYjBWC1U4xyiwMCZJH2BFmN1mcUA4JiTRT7KdgWONSmViY4Clu85J77kNpmsZRrLU2xgBMtNbW2u3/gTabj2bzulqtdjqdxlXD8+qU0izb7c/ZwaAsy6y1juOUy+VzfOG6Z6VS6bJScV2XkAdr7Xq93m4//z75F8HKGJSDMUYIFQqFk9Pj4lERY+x5dWNWB31grW3dtO5zbr+5q9Vq7cc2IYRSSghhjHHOe72X/qDPOZ+GYcCDoRCMMSGGeYqnblcIEXBOKX3X+ocgTVPf95MkiaOYc66UEkLEUaTUDABGo9EzY1LK8esYAMJwmiQLAFBKAUxm8zkASPkW8EBKGYZTIQbL5TJv3u0FX2kqx2tz0yf+AAAAAElFTkSuQmCC" nextheight="189" nextwidth="1030" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>I am building multiple WordPress plugins, and for each I keep an audit.html file that holds a variety of tests and suggestions organized by priority level. I ask Cursor to set "resolved" on features that are marked completed. 
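</p><p>The audit file can be as simple as a static checklist page the agent reads and updates. Here is a minimal sketch; the columns, priority labels, and test names are my own hypothetical examples, not a fixed format:</p><pre><code>&lt;!-- audit.html: hypothetical priority-based checklist for a WordPress plugin --&gt;
&lt;table&gt;
  &lt;tr&gt;&lt;th&gt;Test&lt;/th&gt;&lt;th&gt;Priority&lt;/th&gt;&lt;th&gt;Status&lt;/th&gt;&lt;/tr&gt;
  &lt;tr&gt;&lt;td&gt;Settings page saves and reloads options&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;td&gt;resolved&lt;/td&gt;&lt;/tr&gt;
  &lt;tr&gt;&lt;td&gt;Uninstall removes plugin database tables&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;td&gt;pending&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</code></pre><p>The point is that the agent only flips a row to "resolved" once the feature is verified, so the file doubles as a progress report. 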
</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/51de9f071532526f6846e57e8ba6801e.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAKCAIAAABaL8vzAAAACXBIWXMAAA7DAAAOwwHHb6hkAAAAEnRFWHRTb2Z0d2FyZQBHcmVlbnNob3ReVQgFAAACwklEQVR4nK2SwUsbQRTGh17FXooWLY1CW2ygkBQNhUYtq9XY4m6h2dsqgoG6QrJYNJQE0ToqTUiIbElc1OiWGrdgXQN2iFipl9yWehkLEgN1oR5CvcSCyXFLdmjwD+jvMHzMe8Ob994HRkZe1dTUNDc1t9xvsVqtFotlaGjI+H+A5aUliqIQQoVCQdf1XC539ussl8v9ODo6/32ez+ez2aymabqul8vloknZ5LJU+nNxQTQJVfXVTBAOh6/X1jLMC4QQxvj4+FjXdZpm7ty7e+NWneORw2KxtLa1cRyXXE2qqrr3dU8URUmSZBNRFCGEGxsphJAoimETCKGqbu/v72OMQTQSsVqtD+0VIIQ8z8/Ozglen629FQDQ7+6fmpoWBB9NMw6Hg2VZmmZsNpvT6XQ4HBRFcRy3trYqy7KqqrIsSyaqqhaLRcMwKh1IkuRy9XIcFwgGw+HQ67Gx5GoSQvi0vxfUXxvxjg4ODNA04/F4/H5/bCE2OTnp9flY1v3S7WZZdnBgMBAMen0+juN4flQQBJ7nN7e2MMa7mV1d1wFCSDApl8vVzehXUFWVTA9jrGnawbeDdDpNLhH6QoSqbivKJ0VREEKKCWno5CQPIISNDQ19fc80TctkMoFgQBAEWf4A4QzP8+MTE9lsliy/iq7rP09P8ybVEPkNEVczgd8/DgCw2+0cx0EIPcPDwKTuZj0AgIybphnXP1iWdblcTqeTNelo7+jp7aFphqKo7q4ummZomunu6vL7/RjjSoFoJGKx3I4n4hjjy1Ip9THV+KAJANDR2Tk99ZZ4Q5IkchKTVPXK8ko0EiWOEkVRluXE4mI8ESdT2tnZqRTQNA1CKIrvDcMoFAqH3w+9b3yPnz9ZT62/C4UgnEnE4wihdDqtKEo4HIotxIgdzVciQmjz86Y5/QxCSJIWSWh+br5YvDAM4y/qAUa+R0bfgAAAAABJRU5ErkJggg==" nextheight="574" nextwidth="1896" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Take my recent Expo Android app as an example. I wrote the PRD and the docs, set the AI to work on developing the app, and then made sure Cursor followed the project documentation and guidelines throughout development. </p><div data-type="twitter" tweetid="1929215039170638259"> 
  <div class="twitter-embed embed">
    <div class="twitter-header">
        <div style="display:flex">
          <a target="_blank" href="https://twitter.com/devnamipress">
              <img alt="User Avatar" class="twitter-avatar" src="https://storage.googleapis.com/papyrus_images/9a75c48cf56f584646751bab3c4bde16.jpg">
            </a>
            <div style="margin-left:4px;margin-right:auto;line-height:1.2;">
              <a target="_blank" href="https://twitter.com/devnamipress" class="twitter-displayname">Mahesh</a>
              <p><a target="_blank" href="https://twitter.com/devnamipress" class="twitter-username">@devnamipress</a></p>
    
            </div>
            <a href="https://twitter.com/devnamipress/status/1929215039170638259" target="_blank">
              <img alt="Twitter Logo" class="twitter-logo" src="https://paragraph.com/editor/twitter/logo.png">
            </a>
          </div>
        </div>
      
    <div class="twitter-body">
      I used approach similar to taskmaster to reduce bugs and strictly force to get a better expo app output. Make sure you do the following: <br><br>- Write PRD<br>- Setup code-review after every stage<br>- Setup pending tasks page after update<br>- Always question AI if feature is dropped<br>- Each 
      <div class="twitter-media">
      <img class="twitter-image" src="https://pbs.twimg.com/amplify_video_thumb/1929214388986327040/img/KKDeyxC-g3xGrrBP.jpg"> 
    </div>
      
       
    </div>
    
     <div class="twitter-footer">
          <a target="_blank" href="https://twitter.com/devnamipress/status/1929215039170638259" style="margin-right:16px; display:flex;">
            <img alt="Like Icon" class="twitter-heart" src="https://paragraph.com/editor/twitter/heart.png">
            2
          </a>
          <a target="_blank" href="https://twitter.com/devnamipress/status/1929215039170638259"><p>10:04 PM • Jun 1, 2025</p></a>
        </div>
    
  </div> 
  </div><p>I always make sure to ask the AI whether it is still on track with the project and not straying from its path. This way I can tell whether it is feeding garbage into the development. Claude, for example, is known to write garbage documents and reference things that don't even exist, simply to satisfy the prompt requirements. </p><h3 id="h-what-about-test-later-approach" class="text-2xl font-header">What about the test-later approach? </h3><p>I start every project with the PRD-and-prompt approach, and I let every project be built against the documentation guidelines. After the project is completed, I ask the AI to write two documents: </p><ul><li><p>code-review.html</p></li><li><p>pending-tasks.html</p></li></ul><p>The code review document should take in the whole project, read the entire codebase, and hunt for faults, reviewing and auditing as if it were finding faults on behalf of the developer. </p><p>For the pending tasks document, I make sure the AI thinks through pending features, the pending to-do list, and the tasks that still need fixing. </p><p>In short, as a developer you should write tests first, then build documentation around those tests for the project. Then build the project, and at the end test again to see whether the AI has developed the project as asked. </p><p>The more you ask LLM models to refine the quality, the better the product you can bring out into the world. </p><h3 id="h-test-first-test-later" class="text-2xl font-header">Test first, Test Later. </h3><p>Happy vibe coding!</p>]]></content:encoded>
            <author>remake@newsletter.paragraph.com (re:make)</author>
            <category>cursor</category>
            <category>claude</category>
            <category>ai</category>
            <category>gemini</category>
            <category>chatgpt</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/3a20f01bbd11f7b61ae78b5f16394691.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>