<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Optimized by Otto</title>
        <link>https://paragraph.com/@otto</link>
        <description>Writings about open source software, technology, business and constant improvement.</description>
        <lastBuildDate>Mon, 06 Apr 2026 03:01:59 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Optimized by Otto</title>
            <url>https://storage.googleapis.com/papyrus_images/a3ac084771b04b2384208f07bc2905d7.jpg</url>
            <link>https://paragraph.com/@otto</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[The Simple Art of Effective Decision-Making for Managers]]></title>
            <link>https://paragraph.com/@otto/the-simple-art-of-effective-decision-making-for-managers</link>
            <guid>kU9milAjaUYmXnfAIf5D</guid>
            <pubDate>Sun, 23 Jun 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[A large part of a manager’s role is to make decisions and be responsible for their outcomes. While there is ample advice on how to be successful in m...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/a8d7bd58405aac1e598997367dbdb83c.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFnklEQVR4nF2Ua1CSWRzG/267DA7WtE1ru6l5SSVFMVTwRgmagVfUFPOCpKB5Ce95SVBLS7FMUPESeEXFSJNYJElSMxHWWnXdRNeayVm7fNtv+2Uvszuv1Uy775z3w/lyfud5nv95AGw8kGWLhxNk8IuCs2mQeAmTVXu4tJXaNFDYp2q+/7hfb5pYXNOYNuZevPreuDZl2pgw/DSkXxap54v71GeFQ0fKxZjsOmAWAI0FATHgRgY7/MeTkf8oDpyI4EMDChNiczGc6oOlt0OFQ1VyzR2tQW1aN2ztKHXzC6vm3ruq/Iq6jZ23xvWt2dXNMb1BpjMKRrT0m8PfXG7dz62Bc7lAPQ++NHAmfQawxSPMAAaEs/exyqwKm4gNvWUD6h6t4eFz8/zqpnHdfEPUebGEHxgaFRRCz8gtmdTNykbGJfLxhc2drqmFysGpgMb+A4VCFKscIjIgKBbcKR9FwFF3sPcGrzCgMC1ic1BZtTaVHekdSrFqTm16oTWuVtYLz7GzAql0CoUSfoZ6lkp2dvP08g+mMZK7BkZ1xpWZtVft6nlO14RTtQQxChGRBIQwcPD5pMCZBMQICEv78nyRJe+G/43+6mGtfHbZsLWjUOuo4fEniQHM+MgybmLFxcTLmdGsuDMxdOpJb9/ohLQyQb30nuaeYb1mRHuqsd+qsAmVUoIkQYoEF/9PgBNB4B8N4RdQrPJDJS2xIsWt+8j1F9e3rzTcdDzhxYyPFPBS5M15E625iib2UEPGVV4i7UywM847LYsnn9TOrG2LVHNM8djh0lY0uxIiMxHD3chg6wmIU+7BQI6DGC6GU21T0X6hc7xT88S4/YYvFFnbuvj4+BZciOsSsBcGyl8uSM2z0oWBMik/NS2O7ngcG5OQsvH6rX5tS6ZbyumZtK/qQHMEwLgI5HjA7cWAJIyjIHtGNporcKqW5Ejvy6YNpu03XQMjjsexEdTAxiLm5C3O7vLo37/v/vPXe/PjO/Kr54vSInHuOFJQ8NLKz083Xg/qTbw+FZbficqqtYjNgVMJ4EGFY17/Aezj1jhVS3KlKtn0kml7t106ZG/vmM8MVTZzjINFf75bMq/NXy7l/bo8ornNFWTHnA4kuuF926TDz16+6Z8xFvapsfzur7LqIO5/AMSiWIjJRnMEdpUdWV0TPdpFw+Zr+YQG6+rKjQmckxa9X2z7Y0c9NdGP9/YxP2ozyCva+KywYPIxFy+xbNi0vSubNuRKVU7VEjS3BhjZCABHBTsE4AnYU0gmkZloduXh0lameEysmptZ25568gOB4B1H8RmuT/9RP6hUSI84e1p8bRdACTNqpN3XeR5ePskZOb+8++3h841OzQKrQ3m0vA2deQWiuUiobqf3QrbxQOaJFAk0Fiq1xLJQSG0cvKbQKRfX9M/Wi0rLySSvgtSwNmEdIykDkM8CLL9lsdkRtBA8wS+3oHh2xTxpWL+ufEQTDh0obLZklQKdjbQONvDDmOLAwRcINKAmWSTko3OuuvC7c+5Mdk0t6Fa2qhpa3NzxWKw7iehr4+gKXxwCQAPG2sbJzcHVk0AKKuPX6Z6bpdNLPNkD99oeTF49JPIgNBl86Ej92OD2FNjt5RzEgCgOJr3iYPHNkKahGsXDQf1yt1LDSEo/ZOuCxROtHVwArABQYGntRw6xc8GV8Oufbu6Mzq9cU0zTW4YPlNxG/IniIiPzIeGPDw1xyQ+IdISceAmTXfddZTtTPNY4ph2bNV4TdTt5EKnhseeS0
90Jfs4eJ5NY7KT0TFe8n7Czv3fyUaNCw+pQ2ldJ9udcQwo1LBUx3DXgs7Kz8UBoiIhYCGejUoqt8uudqiVM0WiF9F6T/EHnuLZ3YoovFDE5eYkZefkVdS29ColS2yRXlUvvpohHsfxuy0vXUamle03HQK5vT/hw8r+jWU2m0HZPUAAAAABJRU5ErkJggg==" nextheight="1536" nextwidth="3000" class="image-node embed"><p>A large part of a manager’s role is to make decisions and be responsible for their outcomes. While there is ample advice on how to be successful in many other managerial core areas, such as growing your people, the domain of high-quality decision-making seems less crowded. In this post, I summarize what I have found during my 20+ years as a manager to be a simple and effective way to approach decision-making.</p><div class="relative header-and-anchor"><h2 id="h-identify-options">Identify options</h2></div><p>The first step in making a decision is to identify that there is a decision to be made to begin with. Ask yourself: Does something have to be done in a particular way, or are there options? If there is only one option, is the decision about timing when to execute it? To be able to drive influence as a manager, one must be able to recognize the opportunities to make decisions.</p><p>A manager might also be explicitly asked to make a decision. People might be at crossroads and waiting for a manager to take the responsibility of choosing which way to proceed. In those cases, a manager should also start by finding out what the possible options are beyond what was presented initially.</p><p>The ability to grasp a process, break it down into smaller steps, and exhaustively find the options is a skill that can be honed. 
You can accelerate acquiring this skill by constantly asking yourself, “Are there more options?”</p><div class="relative header-and-anchor"><h2 id="h-explore-and-rank-the-options-to-discover-what-is-actually-the-optimal-outcome">Explore and rank the options to discover what is actually the optimal outcome</h2></div><p>Once you have at least two options, you can embark on the exploratory phase of collecting data on the various options. During this phase you can still discover more options, and then explore them as well.</p><p>The reason you need at least two options to explore is that you need to be able to rank them on some metric. Hence, the exploratory phase should also lead you to uncover which metrics you actually care about. In addition to discovering options, you thus also discover the optimal outcome of the decision. The need to make a decision often arises from some sort of problem that was faced, but quite often it is unclear what would actually be the best possible outcome. <strong>Effective decision-making hinges both on discovering the options and on discovering which outcome is most valued.</strong> The alternative options are external factors dictated by the situation, while the discovery of the desired outcome comes from within and from one’s values.</p><p>Once you understand the options and the optimal outcome, it is fairly easy to rank them or use techniques such as a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/SWOT_analysis">SWOT matrix</a> to compare the options. With complete information, most decisions become clear, and most managers would end up making the same decision. The differentiating factor is thus not the actual decision given the facts, but rather the point at which different people are content with the data points collected. 
As a general rule, if you have the time and resources, continue to push for slightly more data points even after you reach the level where you initially thought you had enough information.</p><p>When prioritizing data collection, think ahead about what kind of information would definitively confirm something, or what discovery would disprove an assumption. If you start to strongly lean towards some decision already during the exploratory phase, put effort into specifically seeking out views and information that would disprove it.</p><div class="relative header-and-anchor"><h2 id="h-understand-severity-urgency-and-finality">Understand severity, urgency and finality</h2></div><p>Obviously, when the stakes are high, a manager needs to spend a lot more energy on making the decision. To ensure the correct amount of effort is spent on each decision, be explicit about assessing its importance and urgency. Small decisions should be made frequently and without delay. If a decision has significant impact but is not urgent, use that to your advantage and postpone it so that more information about the options can be collected.</p><p>Additionally, consider the finality of a decision. If the decision is easy to revert, there is less reason to delay it. The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=DcWqzZ3I2cY&amp;t=3599s">famous concept of “one-way and two-way doors”</a> helps to understand this well. No decision is ever totally free from consequences if reverted, but understanding where the decision sits on a scale of “picking a hat, haircut or tattoo” significantly helps in making a better decision.</p><div class="relative header-and-anchor"><h2 id="h-break-your-bias">Break your bias</h2></div><p>This is probably the biggest challenge, even for very experienced leaders. 
There is no source of absolute truth, and everything we hold in our heads is the result of ingesting decades’ worth of information of varying trustworthiness. Even if there were some imaginary filter ensuring we only learn things that are true, many snippets of information eventually become outdated and false. Our brain is in constant flux across multiple levels of thought processes, emotions and moods. We can make good decisions only by using our own brain, yet at the same time we need to be aware that we can’t fully trust it.</p><p>Setting aside the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Epistemology">epistemological</a> view that there is no absolute knowledge, even when we do have access to the truth we might fail to recognize it if our brain is trapped by unconscious bias. To have a better chance at breaking free, everyone should familiarize themselves with the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/List_of_cognitive_biases">common cognitive biases</a> in order to recognize when they are at risk of falling into one of them. Listening attentively to people with opposing views is also a good way to break bias.</p><p>You should be particularly careful in your decision <strong>if you feel you knew the decision before collecting enough data points</strong> to support it. Don’t let your quick but crude limbic system take the driver’s seat; instead, make a continuous effort to let your cerebral cortex process things, and only then make the decision.</p><div class="relative header-and-anchor"><h2 id="h-sleep-on-it">Sleep on it</h2></div><p>The previous paragraph leads neatly to the last, but not least, piece of advice on how to make good decisions: sleep on it. 
Ask anyone who has worked with me and they will remember at least one case where I said this and postponed the final decision by one extra day.</p><p>You should continue working on a decision until you think you have everything you need to make it, no further research or consultations are needed, and you could just announce it. However, if the decision is not urgent to the exact day, write your decision down as a draft, keep it to yourself, and sleep on it. The next morning, look at the draft, ask yourself if you still agree with it, and only then commit. That extra night of sleep not only ensures our energy levels are recharged and we are more likely to think clearly, but also allows our brain and unconscious to process the information and thoughts from the previous day. Sometimes you might wake up in the morning having realized that you missed something, or that you actually value a certain outcome more than another.</p><p>If the decision is urgent and can’t wait until the next day, you can still drastically increase the quality, or at least the confidence, of a decision by taking a break, going for a walk, or at least taking a couple of deep breaths before committing.</p><div class="relative header-and-anchor"><h2 id="h-some-decision-is-better-than-no-decision">Some decision is better than no decision</h2></div><p>The saying “sleep on it” specifically means just one night of sleep, or perhaps a weekend, but not postponing a decision for too long. If a decision is postponed for weeks or months, the circumstances are likely to change, and it is no longer the same decision. A decision can of course be postponed, but in those cases it should be made explicit that the decision is to postpone. Avoiding making explicit decisions is just bad management.</p><p>It is said that the Finnish army taught its leaders during WWII that if they didn’t know what decision to make, they should always just “hook from the right”. 
It was considered both detrimental to troop morale and tactically inferior to stay put in the same location on the battlefield. Keeping the troops moving and executing a maneuver, even with incomplete information about the enemy positions on their right flank and taking a huge risk, was considered a superior option compared to not making any decision at all.</p><p>Luckily, very few of us are forced to make decisions about matters of life and death, but all managers need to remember that making decisions is a core duty of theirs, and that taking some action that leads somewhere is better than not making a decision at all. If the decision was wrong, one should own it and learn from it, so that the next decision will be better. Indecisiveness is worse, and will eventually lead to a figurative slow death.</p><p>If you have principles you follow or anecdotes about making decisions, please share them in the comments!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
<enclosure url="https://storage.googleapis.com/papyrus_images/a8d7bd58405aac1e598997367dbdb83c.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Should developers always just write code and never design documents?]]></title>
            <link>https://paragraph.com/@otto/should-developers-always-just-write-code-and-never-design-documents</link>
            <guid>LpMDgjvgIddbI8tqmFwY</guid>
            <pubDate>Thu, 30 May 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[In software engineering, most ideas can be implemented without writing any design document at all. This is particularly prominent in open source comm...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/7611ade22df0640904e87b6c90af4d90.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGPElEQVR4nBXQa0wbhwHA8WsDJIQADsSvs33n9+Pw2b7z2Xe+p+/8OhvbGPCrxBjCo+NNQhgho4RAUSBNSdaUpd2SkKZElKZZ1ims6octqbpKnZZK0zRFijJNldJuWqZ92aaqHzsh/b//pD8AAIC6qaHlcB0AvHwAOBDy+zoyyeZm5TGlVqnWabQGGIINZqc9XBCLY1goEpGz1aHRuw8++f6HH27s/jpfGVy+cOnZt//66tnzp9+8OHlmiY8m4/FUOOizqpXaxkbABIG+NrtA4XYTbDObWRIv5FKQHtRowH1DqQFBSGEmAC2q4UqZwTNyMjU9O7e189HTr/9x7dYH+Vf6fnJu+a///PeL/37/9/98NzN/nuUjsWg0SmI2jVJ39AgAqVvtRhBHrAzh4UJ4KiZ0p6NWI6RWqTUqNWSyoXZnGHHYYbPd6eueXMnli2NT02/fuPWnp1//bGsnly/NL7z27PmLJ3/79ovHfx6dnGE4XhL5OIU5dWqzWglomhtaD72kqAOOHjpU/1JtlAnmUlFYZzDa29RtrAWlOJfVZ9IPRgMr1Uyms1ysnJiZPX19+4MvHj9Zu7KZyXaMj4/v3n9we/f+9t1fDQwN06GQQFNxCmuD1GatBkBtpgDqcFkMrUeO1ACAROPZRNjjI4R4xurCNDojaoaa62rOdInlRMyKYPFEcmh4eGHlwtbOR3OLy7GINDkx8envv3r4x788evxkdHyK9GNcEA8HEBTW7gOwRiXSBONHbZBeq2hM0H45JhIhjmGYcqeMB0gccSIWQ4QlYasb8QakSLRUKqxffefD3zxaXL0Y4UKjrw59+OC3793be+/e3uDQMIV7hQDKe8xeI7i/yALrJCYQpgnc7fS7HREKSyTiRIiVE5HR/mKlWh3sLU7PzGRLvRq9FcUCHMf1Hi+uvrm5cX17anY+yod6e4qb17c3t/arVqsBD8J4XQGb1gNrYaUCcNtNpM8VYQMJkcrEuHSEDfESSVGpmDA32ddRKIzMnj25sNyeK+oMFlebl2bY/p7O9dXzGzfuzC0sSoy/Uur++fYvb+7cv3Vvb2RkFEOspBcJey2ETQcdUwDqpoPKIwcNymaRxp0WA034vP4Qz3O9pexYX6eQO04Nna/8eLWn2q/Xmx0OhAwxr8jMhany+tV3RyZPSYy/IxU/u7K28Pr6wsp6uVzGEBvmdnGYk3ZBhtYmoAbYz6xRUjjaUvMy5na7vDiKOOcn+sN0wM4ku+Yuz69u9JW7DbDJh6IEjk0XqY1T+YScFAWBoYIMw/C8IAgix/MhivQiDrsZJhAzi8A6RT1gg0Dt0UaXBcLdDk3DQYPd02a1kgZVB4mcOVEolso/WtooVPqHCikc92t1kER6dle6r52roj4iiPl8qIeiKFGUGI53OBC71eK0wHoQxJxGtg0GG+sBlkBVTfUeh8Vm1OuVrYc1NpXKQFugjNc4lcRW+6T53sRkf2cqLp6deTWfiVVTwW/25i6eLjE0k44JEs8SBBEgCMJPoKjX48UYMmg2me0QyCGwtqkWYEkMczsECk9IXFc6bjKZgFp1XQsEwxYMccbYYKVYiIdDDGa5vTY43iNvnSv879Fr1RRJE95MKipHuAgbzMpiPiXJPJ2VpUI2KbEUhth4Nww2HgSYgFek/RJLtMeF7mxCDcLAASVQr2poAY0ODGWi2cpIPJ3Nx4jFkfR0b/r5wyvfPX73/UunryxNvH9t7cZPlzeWZycHyjLtY9wmiXSHKYwjc
S7o4RBIp6gDYhFBEjmaYaRoLCbLQIMWOKAGFIYWvaMvF7+6NL59ef53O298+fHmlx9fe/Lwzh/2bn22c/nz28u7F8ffPNWzONw5lpdO5MSzJ4cnRgZSEVZk/OEQzlM+3mvWNdYCDB9OdHRny9XSwJjUXgDqta0aqKtdmhosp3l/V4SeqGSGC7FeOVSJB4oCmg5YBESLGRQo2NjO+KrdchvYxGOOmzffuXv3ztzs1PDA8amJ0WJX2m2F1YdrAUZKicmuRDqbyWbScjQT54sZqZKN4oi5uQbQNdYSZiVtVcUQrceg0DbVQseacYchJxJzo9XZsf5cgl1bOvX25ddv/uJqOS2aNM3FfMdbb13qK3calE3g0cP/BzlA0nz7N6llAAAAAElFTkSuQmCC" nextheight="628" nextwidth="1200" class="image-node embed"><p><strong>In software engineering, most ideas can be implemented without writing any design document at all.</strong> This is particularly prominent in open source communities. For example, the Linux kernel has 35 million lines of code that have been written and rewritten many times over alongside 30 years of mailing list discussions. Linux wasn’t created as a result of a grandiose design document by Linus Torvalds, but it evolved organically in small increments of actual running code.</p><p>In open source and in software engineering in general, ideas are presented most of the time directly as patches that add, delete, or change specific lines of code. Those patches are not only read but also directly built, run, tested – and contested. Design and implementation are intertwined, and decisions are relatively small, quick, and done in writing.</p><p>However, this is not always the best way to evolve software. I would argue that <strong>even in open source, design documents are needed</strong>, and it would be good to see more of them being written than what is the practice today. Let’s dive into why, when, and how to write design documents.</p><div class="relative header-and-anchor"><h2 id="h-top-3-benefits-reducing-risk-communicating-intent-and-growing-the-authors-ability-to-think">Top-3 benefits: reducing risk, communicating intent, and growing the author’s ability to think</h2></div><p>Design documents have three clear benefits. 
First of all, a design document helps to <strong>manage technical risk and organizational cost risk</strong>. If it takes several months or years to develop something, starting the process with a high-quality design document helps to map out unknown dimensions and decreases the technical risk of the idea being impossible to implement. If there is no design and something is developed right away, there is also a risk that it might be rejected after implementation by downstream users or collaborators, and thus all the work put in would be wasted.</p><p>Secondly, design documents are an excellent medium to <strong>communicate the intent</strong> of the change to others. An engineering team might have stakeholders, executives, customers, maintainers of other dependent software packages, legal requirements, etc. While it’s perfectly possible to develop something without writing the design first, designing forces a clear articulation of what the idea is about and why it is needed.</p><p>Thirdly, <strong>writing documents grows the author’s ability to think</strong>. When writing, it becomes clear very quickly which parts of the idea are still vague, ambiguous, incomplete, or even misunderstood. It helps in revealing blind spots, giving ideas shape and detail, and thus increasing their quality. Dumping a part of your brain into a document, and then revisiting and restructuring those thoughts many times over, will always lead to a better thought process and a higher-quality outcome.</p><p>In open source, you rarely see people writing design documents or conducting formal review and approval processes. They do, however, exist, and open source developers should also practice their skills in writing design docs. 
A well-written design document not only conveys the merits of a great idea clearly but also shows that the author is a great thinker and fully understands what they are doing.</p><div class="relative header-and-anchor"><h2 id="h-why-are-design-docs-common-in-enterprise-but-rare-in-open-source">Why are design docs common in enterprise but rare in open source?</h2></div><p>The above benefits are universal and apply to any type of software development. In the enterprise software setting, these additional aspects are often true, making design documents more common:</p><ul><li><p>Making the change happen requires a significant investment, potentially multiple people working full-time for an extended time. A written description of how that time will be used and what it will produce is needed to get buy-in from the authority that funds the development.</p></li><li><p>Rolling out the change affects many other people and the software they develop and maintain. The whole change might be moot if there isn’t prior approval and commitment from the stakeholders to adapt to the change.</p></li><li><p>The change might involve technical risks or have security considerations, and a technical plan needs to be vetted and approved by risk bearers before implementation starts to avoid major technical catastrophes.</p></li></ul><p>Ad-hoc implementation takes place both in the enterprise setting and in open source, but it is much more common in open source, as the person doing the implementation work is often the very same person who carries its costs and consequences. 
In contrast, in an enterprise setting, the costs and risks are carried by a larger group that needs to agree before any work starts.</p><div class="relative header-and-anchor"><h2 id="h-famous-series-of-design-documents-in-open-source-development">Famous series of design documents in open source development</h2></div><p>In open source communities, ideas that are too large to be put into code directly typically surface first as a mailing list discussion or in an issue tracker thread. Full-fledged design documents are rare, but they do exist, particularly in large projects. Examples include:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.rfc-editor.org/about/">Request for Comments (RFC)</a> documents from The Internet Engineering Task Force</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://peps.python.org/">Python Enhancement Proposals (PEP)</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dep-team.pages.debian.net/">Debian Enhancement Proposals (DEP)</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eips.ethereum.org/">Ethereum Improvement Proposals (EIP)</a></p></li></ul><div class="relative header-and-anchor"><h2 id="h-when-to-write-a-design-document">When to write a design document</h2></div><p>If in doubt, just start writing a design document. It is cheap, and you can always stop in the middle and never publish it. 
The mere fact that you are contemplating it is a sign that you probably have some unstructured thoughts floating around in your brain, and starting the writing process will benefit you a lot.</p><p>To decide if you should go all the way and actually finalize and publish a design document, as opposed to just implementing the idea directly, consider the three benefits listed above: managing risk, communicating intent, and growing the author’s ability to think. Are these benefits relevant for your idea? If so, publish a design document.</p><div class="relative header-and-anchor"><h2 id="h-how-to-start-a-design-doc-use-a-blank-sheet-of-paper">How to start a design doc: use a blank sheet of paper</h2></div><p>If you decide to start writing a design document, don’t use a template; simply start with a blank sheet. Writing down any idea — large or small — should start with a blank page where the author notes down the core idea first. In particular, avoid the pitfall of grandiose thoughts that lead to convoluted designs and bloated documents.</p><p>Over the past 15 years, I have seen the pattern of design document templates repeat many times, and it has never resulted in high-quality outcomes. Various design document templates, and fancy-sounding software development methodologies in general, are surely a good business for large consulting companies, but I have never seen them actually increase the quality of software development. In the best case, templates result in good-looking documents that are thin on content; in the worst case, they massively dumb down the authors and put them in a mode where they are exonerated from all responsibility for the contents.</p><p>Just start from scratch and focus on the core idea. If you can’t express it briefly and clearly in a one-pager, you need to spend more time thinking about the core idea. 
Polish the idea before you even consider polishing the document.</p><div class="relative header-and-anchor"><h2 id="h-how-to-finalize-a-design-doc-expand-and-iterate">How to finalize a design doc: expand and iterate</h2></div><p>Only once the core idea is crisp and clear, and there is agreement to invest in it, should the work to polish the design document start. If there is a template, this is the point at which it makes sense to start applying it.</p><p>Having a crisp one-pager first also helps write the design document in a way that presents the solution first, and only then the motivations for it. If people jump directly to writing the final design document, it often starts with a lengthy analysis of the problem and the motivation for the solution, and only then presents the solution. Such a structure reflects the thought process of the author but is not a good way to structure a design document. The design doc should always start with the solution; the rest of the document exists to motivate and support it.</p><p><strong>Writing out the full design document is all about constantly expanding and iterating it.</strong> To flesh out all aspects, it is good to have a list of questions and ensure that they are all addressed while working through the document:</p><ul><li><p>What is the title? If there are similar competing or earlier designs, what is the unique identifier of this specific design doc?</p></li><li><p>Who is the author? Who is responsible for the design being good and correct?</p></li><li><p>Who is going to implement it? Who is sponsoring or funding the design or implementation work?</p></li><li><p>What is the status of the document? Is it, for example, just a draft, or is it pending comments or review, or has it already been approved?</p></li><li><p>If the document is approved, who approved it and when? How is the approval tracked? 
Is it easy to prove what specific version of the design each approver read when giving their approval to it?</p></li><li><p>What is the proposed solution exactly? Are there diagrams, pictures, prototypes, or others that cover the key parts of the proposed solution?</p></li><li><p>What is the scope of the solution? Was something intentionally left out and why?</p></li><li><p>Why should this solution be done now? What happens if it is not done at all, or if it is postponed? Are there workarounds?</p></li><li><p>What assumptions is the design based on? What happens to those assumptions over time? Will they still hold?</p></li><li><p>What alternative designs were considered? Why is the proposed solution the best?</p></li><li><p>What are the known trade-offs and downsides of the proposed solution? How are those being mitigated?</p></li><li><p>What is the work estimate and cost of the solution? What is the long-term cost and total cost of ownership?</p></li><li><p>What is the development and testing plan? How will it be rolled out? Can the rollout be phased? Can it be rolled back if needed?</p></li><li><p>What is the quality assurance process? How is security being reviewed and assured?</p></li><li><p>What is the impact on performance? How does the system scale? Where are the limits of scalability? What is the maximum load to be used in load testing and benchmarking?</p></li><li><p>How will the solution be operated, monitored, and measured?</p></li><li><p>How is the success of the solution measured and validated? How does one know if the solution actually worked?</p></li><li><p>When the solution is ready, how will it be documented and communicated? 
Are there different audiences that need different communications (e.g., internal vs external, developers vs users)?</p></li></ul><div class="relative header-and-anchor"><h2 id="h-take-your-time">Take your time</h2></div><p><strong>Designing is not fast.</strong> Authors should not expect to be able to sit down one day and write a design document. A good idea takes time to mature, several rounds of rewriting to become clear, and a lot of time spent waiting for and incorporating feedback. For this reason, designing can’t be anybody’s main task; it should be done alongside other work. The design of the next idea should typically already be in progress while implementing the previous one.</p><p>Jeff Bezos famously wrote in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.aboutamazon.com/news/company-news/2017-letter-to-shareholders">2017 letter to shareholders</a>:</p><blockquote><p>The great memos are written and re-written, shared with colleagues who are asked to improve the work, set aside for a couple of days, and then edited again with a fresh mind. They simply can’t be done in a day or two.</p><p>― Jeff Bezos, founder of Amazon</p></blockquote><p>A complete and high-quality design document takes a lot of calendar time. A good design matures like a bottle of wine. It can’t be forced to take shape quickly. Designing is like practicing wisdom – give it time.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
<enclosure url="https://storage.googleapis.com/papyrus_images/7611ade22df0640904e87b6c90af4d90.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Heartbleed and XZ Backdoor Learnings: Open Source Infrastructure Can Be Improved Efficiently With Moderate Funding]]></title>
            <link>https://paragraph.com/@otto/heartbleed-and-xz-backdoor-learnings-open-source-infrastructure-can-be-improved-efficiently-with-moderate-funding</link>
            <guid>vXyWSFzfmuJGtDkPKaGx</guid>
            <pubDate>Sun, 07 Apr 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[The XZ Utils backdoor, discovered last week, and the Heartbleed security vulnerability ten years ago, share the same ultimate root cause. Both of the...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/cbb9001795a35901693a336b67fc0b7a.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFgUlEQVR4nDWUy28b9xHHxxJJWVJEvSjz/RC5XL72vfv77fO3+9slRUl2LLuCZYsxadlybIsS6VJBZLtpWqQIUPSSor00TYEmPfSeU/6WHoog5+aUUx6wLRaUk8FgMKf54IvvzAAUa8A0oCJBTYG6Os1pUFehoQKPQNBBMkA2Q5oVQk4IkxAmEcOdM+mMTkElK7K1I6zfkIu7amFfSnelVE9JHIjJe2K8K8Unlb8CUGpAmZ/i9DVMl5C7pJEF2Q7xRkQyZxV7TrUjmh1CJKKTy4Yb0UlYpzOGv6D7caedsJpFK0gjUkX6bTHXk1I9Md4Tk3f5xF0x0ZWSPTF+AWBFqGlQQ3McCjVwuKHPCGZYtCKiOStOMG9hJ4rJnOHNY29W92cxnUN0QSVLyI3rbkz3iti5pRT2hOyukO3yyS6f7CmprpTsiolfAKwMjBhh5fmKAqp7SSWAfdBckB3A9JLTgmBrCdMl3X+TUcWZaeAFQV9TzAR2Szq5o647ClcU0H0x1RVT+9IkO3wSoMRNABVlmlUuVxRQ7BLMtAECgPVovBDLBACbANXZ6Ft0M6Z7MZPGTBrSWiAHUzK5LJOIRjOGdweXa6IcreOelO2JqX0h2ZGSHT4BwHA/O1xVQbJXksU+wNf/+Oy/n/6tD/AE4D9/+fM3//piCFAqlFfpZtKg+4f167dZc0symggRzXG0bbU6qKxW6ny4hjty4UBK39fSh1r6QEz+DLjE4RXZBnebBfgUCuOL+Oarr77+8ss3/T/Xqg2A2PbOmtYeDosfjOD6dvXmTnTXT7/fXOtriY/KM/tqsasVe2q2oxUPtExPTfekFEBZgJoENTTPGyHsZ+L5pwA/fPu/8fn565cvX7969frVq5++++4MoJQqZlrXGL15/aby+1PYu1o2iLpj1W6a/KmVf8EvHmDmEc4/UDOHuPCunj/UMgdKGi5WSIYanm6geU4Hs6UD/B1S5+Px6x9/fPn9D+Px+HMoWADLW7us5cseFc3gi4/hve4sNd1dk39O85+Q8EOjdEzYvsU8wtkHKPsAZd6IAGAlqCOoYaihmQYGRCvzq3cBPsty4/Pz8/H437E6BVje2GH8tuS3NJt29sR7u+wnQ4jpGiMqf70aum+UjpzKUM8NHOYI5x7h3EOcPcSZQy0DUJYmR9AwwoI5L1hgt/mphTbArQszPociBYhv3ij4WxzdQEFg2fRPv4keddY/ujf97B4cbyX+2Iz81kn/gVsZKKkBKQ+swjHO9nHuGGePUBagIkMdTYvWgmyvaGQR0yuCIcOUD6ABdAA4CRfb1xokkGhAPOftpv5Bf+10b3lwrfBOq9YPSs9J+pm8+qK2ckKYgV0aOsW+XXpirg+MwomRA6gq0NCmJtdrTBiyBd42t5amAHDjHRdAzpXKtI1cz/edskE4w7zt6fuu3qUYWDvMmQOv/tBgR4Q5dZkhKY1I8Y7D9ezq+3a+j/O/KOCMGdGcV5wlRBYtv1wRSgJaI210aY5XDeQHvu/caBppzclp9smGcBQ0RgGn6zrRxVOXHbnVU5cduuWTNwos5onFPDZKxxNAVYE6CgvWtGjNKk5Uc1aQs2q4ccfP275Yl3h/w6HO5ga5FeCOp3SJ+JjyJ35j6POrqqOq6hmtjLzqyK0eE/bxxdwzK1+XETSsxwYDUFOBwyHBuCyZ8+pEwbLuxgwvadGCt5Em7ajesqmzHdi3W0aPcAdO44krPPVqT93qgmAxgvrMYX5N2BEp/65Zv+poA
tI/9BiC1LRo9e0yTJ4Ej8KStag4K5guYbqoe8sGvWL7ObdZcFplt+1v+FcDZ2/DfLcpDVqT6adB9cyvjgJhSLn3CHvmsc899kOv/JBwv7KVF17pBa2ceeyIFP8PvVpvao0ZyvsAAAAASUVORK5CYII=" nextheight="628" nextwidth="1200" class="image-node embed"><p>The XZ Utils backdoor, discovered last week, and the Heartbleed security vulnerability ten years ago, share the same ultimate root cause. Both of them, and in fact all critical infrastructure open source projects, should be fixed with the same solution: ensure baseline funding for proper open source maintenance.</p><p>Open source software is the foundation of much of our digital infrastructure. From web servers to encryption libraries to operating systems, open source code powers systems that millions rely on daily. The open source model has proven tremendously successful at producing innovative, reliable, and widely used software.</p><p>However, the Heartbleed vulnerability in OpenSSL and the recent backdoor discovered in the XZ Utils compression library have highlighted potential weaknesses in how open source software is funded, developed, and maintained. <strong>These incidents showed that even very widely used open source projects can have serious, undiscovered bugs due to lack of resources.</strong></p><div class="relative header-and-anchor"><h2 id="h-learnings-from-heartbleed">Learnings from Heartbleed</h2></div><p>Today, April 7th, 2024, marks the 10-year anniversary since <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cve.org/CVERecord?id=CVE-2014-0160">CVE-2014-0160</a> was published. 
This security vulnerability known as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Heartbleed">“Heartbleed”</a> was a flaw in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.openssl.org/">OpenSSL</a> cryptography software, the most popular option to implement <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Transport_Layer_Security">Transport Layer Security (TLS)</a>. In layman’s terms, if you type <code>https://</code> in your browser address bar, chances are high that you are interacting with OpenSSL.</p><p>The fallout from Heartbleed was immense, prompting widespread panic among developers, businesses, and users alike. About one-fifth of all web servers in the world at the time were believed to be vulnerable to the attack, allowing theft of the servers’ private keys and users’ session cookies and passwords.</p><p>The software bug existed in OpenSSL’s codebase for over two years before being discovered. While code reviews were in place, the bug wasn’t spotted and slipped into OpenSSL’s source code repository on New Year’s Eve, December 31st, 2011. 
At the time, the OpenSSL project was maintained by a small four-person team with limited funding, working essentially as volunteers driven by the importance of their mission.</p><p><strong>This was the ultimate root cause – a piece of software that had started as a hobby project (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/History_of_Linux#The_creation_of_Linux"><strong>just like Linux</strong></a><strong>) grew over time and became part of the Internet infrastructure, but there was no mechanism to ensure that resources would grow with it so it could be well maintained long-term.</strong></p><p>In April 2014, the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.linuxfoundation.org/">Linux Foundation</a> Executive Director Jim Zemlin seized the moment and persuaded Amazon Web Services, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Qualcomm, Rackspace, and VMware to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arstechnica.com/information-technology/2014/04/tech-giants-chastened-by-heartbleed-finally-agree-to-fund-openssl/">each pledge at least $100,000 a year for at least three years</a> to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Core_Infrastructure_Initiative">Core Infrastructure Initiative</a>. The initiative continued for many years and eventually transformed into the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openssf.org/">Open Source Security Foundation</a>. 
Also due to Heartbleed, the European Commission launched the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://joinup.ec.europa.eu/collection/eu-fossa-2">EU Free and Open Source Software Auditing (EU-FOSSA) project</a> and spent at least a million euros on auditing OpenSSL, the Apache HTTP Server, KeePass, and other security-critical open source software.</p><p>This relatively modest funding, along with code audits and process improvements, allowed OpenSSL to become more secure and sustainable. Today the OpenSSL project is thriving: it is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.openssl.org/blog/blog/2024/01/23/fips-309/">FIPS 140-2 certified</a> and has a healthy base of both financial and code contributors.</p><div class="relative header-and-anchor"><h2 id="h-learnings-from-the-xz-liblzma-library-backdoor">Learnings from the XZ / liblzma library backdoor</h2></div><p>While there are surely still more details to uncover in the coming weeks, when the news broke about the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/XZ_Utils_backdoor">XZ compression software backdoor</a> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cve.org/CVERecord?id=CVE-2024-3094">CVE-2024-3094</a>), it was immediately clear that <strong>it happened because XZ had become hugely popular and widely used but was still maintained by a single overworked person as a spare-time project</strong>. A well-resourced malicious actor was able to manipulate and pressure the maintainer into giving them commit access, and thus the software supply chain was compromised. We should not blame the original maintainer, but rather everyone else for not realizing how widely used XZ was while it got by on very little support and resources.</p><p>A huge number of applications depend on XZ. 
Right now the priority should be to offer help to maintain it properly, both upstream and at the various downstreams, such as Linux distributions, so that the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Software_supply_chain">whole software supply chain is secured</a>. It does not require a massive effort – just a couple more maintainers to share the maintenance and review work would go a long way.</p><div class="relative header-and-anchor"><h2 id="h-would-we-be-better-off-with-closed-source-software">Would we be better off with closed-source software?</h2></div><p>In both cases, the vulnerabilities were fixed quickly because the world had access to the source code of the affected software. This is a major advantage of open source software: it allows anyone to inspect the code, find potential vulnerabilities, <strong>and submit fixes to them</strong>.</p><p>In the case of Heartbleed, Google’s security team reported it to OpenSSL first, but the Finnish national <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.kyberturvallisuuskeskus.fi/en/">NCSC-FI</a> has records of local cybersecurity company Codenomicon reporting it independently. In the case of XZ, Microsoft employee and PostgreSQL developer Andres Freund found the backdoor while doing performance regression testing on a Debian Linux development version. It was a huge stroke of luck that the XZ backdoor didn’t make it into any stable Linux distribution releases. Next time we might not be as lucky, so more reviews, testing, and validation are needed. That will require resources, but at least public review is possible – thanks to this infrastructure-level software being open source.</p><p><strong>Public scrutiny, testing, and validation are not possible for closed-source software.</strong> In fact, if closed-source code gets backdoored, it will go unnoticed for a much longer time. 
For example, the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach">2020 U.S. government data breach</a> was possible due to multiple backdoors and flaws that went undetected for a long time in closed-source software from SolarWinds, Microsoft, and VMware, including the Zerologon vulnerability in Windows. In theory, companies always have money (unless they are bankrupt), but in practice, the pressure to channel that money into software review and testing varies wildly, and working without exposure to public scrutiny often incentivizes companies to skimp on security to maximize profits.</p><p>Thus, I firmly believe that open source software has a better overall security posture, as long as there are reasonable resources. And if the source code is public, anybody can audit how active the maintenance is – thus <strong>whether maintenance is funded is itself a public and auditable property of open source</strong>.</p><div class="relative header-and-anchor"><h2 id="h-pledge-for-funding-and-participation">Pledge for funding and participation</h2></div><p>Both Heartbleed and the XZ backdoor incident underscore the critical role that open source software plays in powering the digital infrastructure of today’s world. Such important and widely used projects shouldn’t be struggling to get by. It’s time for companies to step up and provide reasonable funding to the projects they depend on.</p><p>You don’t need billions to meaningfully improve open source security – the OpenSSL example shows that even modest funding increases can have an outsized impact. A tiny slice of the corporate IT budget pie could go a long way. 
Additionally, a share of <strong>government defense spending should be funneled into key open source software projects</strong> that our society relies on.</p><p>The incidents of Heartbleed and the XZ backdoor serve as sobering reminders of the vulnerabilities that may exist within our open source infrastructure today. However, they also present an opportunity for positive change. By investing in the security and maintenance of open source projects through moderate funding and support, we can enhance the resilience of our digital infrastructure and ensure a safer internet for all.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/cbb9001795a35901693a336b67fc0b7a.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Communication Is the Key to Efficiency in a Software Engineering Organization]]></title>
            <link>https://paragraph.com/@otto/communication-is-the-key-to-efficiency-in-a-software-engineering-organization</link>
            <guid>l9Ji9aWl33kO9VDZtevs</guid>
            <pubDate>Sun, 31 Mar 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[For a software engineering organization to be efficient, it is key that everyone is an efficient communicator. Everybody needs to be calibrated in wh...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/bdcd4599bd65fbae43b8215389c98644.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFYklEQVR4nF2UW0zTVxzHfypoYFrbcmmB/mn7771/Cq3l0itrSy2FlnJrpVRuWnCiUqBoEeQmRe4UhQqiZp0iEFHEqFtG0D3MaLaXLdkethcTs8TF6cvmHnzYIsv5I5tZ8s3J+V/O93N+l3MAdqVAbBLS7mSkWHJESgEqBtRUoHMgnouUgP9f8VygstE/lFSgsNCS2GTSLXlLLEBvN0XZFAstoLKBzt4y4gFDAEnSLRHAECIxRcCWo09x+HsMFQMKhhzQiJFuGMBu1gcAjLTm/OfLkgFbAex9INYDYYA0IxBGNE8zQsZ+EOkAkyMkKx3oPBLDRiaxGLLdJKEHNPvXnY2smSK0Es8EsX5bZiFFWxqtKdmpLWOY3YmmykRTJc1UGa0tB8IMolxIy4uSGYAlB4YYKVEI8XySQeaDRH24d/57d4Fmm8oRpS2LMbop5iqGvT6hqAEva+SVNXJKjwrLj2H2hqSCugSTh6ItA7EO+Grga1C4qenAJICKowJsAVioSlQOMESAKZCEOlAU7NCW0yzVybbD8fYjmMvHdbdk1LbL6gK8Sn9aXbusOiB2+XBHA8fupWhKUKzSXFQVjACmFAE+iABDmYnHgSkDngrlV2mN0Top5iqm3ct1nuAfPCU41EkcH1AEJpSByczA5L6W8SzfiOKTs0KPX+hqwh1HEvSu6HRLjMwYJdUDnoUAe8h9k8lJRQ2XKEQAlHQbTefcazwQZz0k8rTxa0/LfcPqrrD4zKw8GFEEI7qheWto2TC8mNV1WeUPyQ91SSqaxaWNTIMLpAYQkQA6j8wKtgWgcxAgNXO7Ip+md2L5NSk2r8Djx2o60ltGMjrDku5ZSc8VzciiMbSsGrxhDi2bpu/Yw3c1fVc1/glVQ3d6RZPIVkfNsYPUCHgOioBC9u778sYLUOoJw44se4zWmVRQR1T4BDWBOG8Pq3mY3xHWDH2mHrxmGF/af+G2cXzFNnvfcOGuanQpMxjJ7Qxrj/Urq9ukJfU8c+UOwggsBWqkzfOEAqFzISkdRLrtinyqtmzvxwdEriahx0+r7eSdGJK0T+X0XVYPRAqnVwqmV6zhVVdkzTb3uW32vunCnbzQzazuOaVvWO3tlDgO42Z3XLYNdRRTQu4bATCg4wgg1kUpC3apHKmFdWKPn1HdQW8cEPgn0k6H1f1XrGPz+r4519RN19RN6/CN6siaM/KlNbyqHVlUBiPKzhnVsaCioombf5BrdkcRRkhOQ22JUoQKwEfHRGr4SOWg6Z1sxxGWu4Xh7U1rHec1j6l75vb3X3GNX7/304vnbzd+ebvx5MWblvn18pkV28VV/eB1w+A1ZfcledOw0uMTFNbiZjdDXQTJGUAhbyoUAZNAQcksMTlFKRYPz3mcUdXObgxK/aOywGRe74y9f+6H3//69uWfwbtfhx99//Tl21cbG20L666ZVePYoiYYye67rDgZ2ld7Slri5Vk8SbpidJIYEjKC3Ri6cEQ6kFsY2iKWyc12HqdVtdO8PbhvVNszm+07d+PJj4+fvbYPXPVeuhV88OTo/FePnv/x85t3ZZNLpTMr2X1XpR3hjMB5eX0XUdqAGZxxmuKdMhNgGZtdlIoOMK4BuSVJX8zK93DdzbzDXZKmoYyTobzu6fzTE988f929tFYTWvBfX2tbWJ96+F3v6uNXf280Rr4onl4xjy2azl3TdM5knhhM97SyLQcxQ3m0JJe8lDibADHgapAY9mQXYtZa3gEfUd+d1TqadWrS0HG+8
MzU02e/tn16rzo0X3fxlnvqdnPkQfviw9/ebbTOr9tCS8Xnl8snFky9c9qWYcLt4+dXJ2sc28R6ZEtl/wNx7Jna2jJwPgAAAABJRU5ErkJggg==" nextheight="628" nextwidth="1200" class="image-node embed"><p>For a software engineering organization to be efficient, <strong>it is key that everyone is an efficient communicator</strong>. Everybody needs to be calibrated in <em>what</em> to communicate, to <em>whom</em> and <em>how</em> to ensure information spreads properly in the organization. Having smart people with a lot of knowledge results in progress only if information flows well in the veins of the organization.</p><p>This does not mean that everyone needs to communicate everything – on the contrary, it is also the responsibility of every individual to make sure there is the right amount of communication, not too much and not too little. From an individual point of view, it is also important to be a good communicator, both verbally and in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/writing-tips-for-software-professionals/">writing</a>, as that defines to a large degree how professionally others will perceive you.</p><p>Reflecting on the principles below may help <strong>both your organization and you personally</strong> to become a more efficient communicator.</p><div class="relative header-and-anchor"><h2 id="h-communicate-early-and-exactly">Communicate Early and Exactly</h2></div><p>Foster a culture where people share small updates early. <strong>When you introduce a change, describe it immediately.</strong> Don’t accept “I will document it later” from yourself or others. People are interested in the change when they learn about it, and should be able to immediately read up on git commit messages, ticket communications, etc. When you make a change that affects the workflow of others, announce it immediately. 
Don’t wait until other people run into problems and start asking questions.</p><p>When you announce a change, be exact. If the change has a commit ID or a URL, or if there is a document that describes it in detail, reference it. Avoid abbreviations and spell out the names of things to avoid misunderstandings. Use the same name consistently when referencing the same thing. Don’t be vague if being exact requires just a couple of seconds more effort. If you know something, spell it out and don’t force other people to guess. In a larger organization, it might even make sense to have a written vocabulary to ensure that people understand the daily jargon and assign the same meaning to the words used.</p><p>Keep in mind that you, the announcer, are <strong>one</strong> person, but your audience consists of <strong>many</strong> people: if you take additional time and effort to be precise, you may save a great deal of repeated effort by many others trying to determine precisely what you were referring to. When you see other people putting effort into making their communication easy to understand, brief, and crisp, thank them for it.</p><div class="relative header-and-anchor"><h2 id="h-use-the-right-channels">Use the Right Channels</h2></div><p><strong>Always keep communication in context.</strong> If a piece of code does something that needs an explanation, explain it in the inline comments rather than in an external document. For example, if the user of a software application needs guidance, don’t offer it on a completely separate system that is hard to discover, but instead offer it via the user interface of the software itself, or in a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Man_page">man page</a> or a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/README">README</a> that a user is likely to come across when trying to use the software. 
If there is a bug that needs to be debugged and fixed, discuss it in the issue tracker entry for the bug itself, not in a separate communication channel elsewhere. If a code review is open on GitHub or GitLab, don’t discuss it on Slack or an equivalent, but in the code review comments – which is exactly what they were designed for. Always communicate about something as closely as possible to the subject of the message.</p><p><strong>Prefer asynchronous channels over synchronous channels.</strong> Chat systems like Slack or Zulip are better than a phone call. A well-written email is better than scheduling a 30-minute meeting. In an async channel, people can process incoming communication and respond in their own time. Sometimes a response requires some research and is not possible in real time anyway. Having to juggle schedules can also be a wasteful use of time compared to working through a queue of tasks. Interrupting software engineers is very costly, as it can take tens of minutes before one gets back into “flow” and back to cracking a hard engineering problem. Also, as many teams today work across many time zones, you might need to wait until the next day for the reply anyway.</p><p>When using Slack and similar chat software, try to pick the most appropriate channel. Avoid private one-on-one discussions unless the matter is personal or confidential. Discussion in a public channel is more inclusive and allows others to join spontaneously or to be pinged to join in. In Slack and similar chat systems that have threads, use them correctly to make it easier for participants to follow up on the topic while at the same time keeping the main channel cleaner. Avoid using <em>@here</em> on channels with more than 20 participants. 
Get into the habit of using <code>Shift+Enter</code> to write multiple paragraphs in one message instead of sending multiple short messages in succession, which might cause unnecessary notifications.</p><p>In chat systems, do <strong>not</strong> send people messages that only say “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nohello.net/">Hello</a>”. Get straight to the point.</p><p><strong>Have a chat when it is appropriate.</strong> If you feel there is miscommunication and you can’t resolve it asynchronously with well-written and thoughtful messages, a short chat, 1:1 or with a small group of people, can bring a lot of clarity. An in-person meeting or video chat usually works best, as both parties can read each other’s cues to see that they understand and can follow the topic.</p><div class="relative header-and-anchor"><h2 id="h-teams-exist-to-channel-the-flow-of-information-in-the-veins-of-an-organization">Teams Exist to Channel the Flow of Information in the Veins of an Organization</h2></div><p>Team members interact mostly with others inside their team. It is not the responsibility of individual team members to know what people in other teams are doing. If something noteworthy is happening or is planned to happen, it is the responsibility of the team lead to communicate that upwards and laterally along the organizational lines. The team lead is also responsible for the inward flow of information, making team members aware of things that are relevant to the team.</p><p><strong>The reason teams exist is to limit the flow of information.</strong> Most organizations are divided into 5–15 person teams simply because if teams were very large, with 20 or more people, the overhead of everybody communicating with everybody would eat up too much time.</p><p>With this in mind, please be considerate and try to avoid approaching individual engineers in other teams too often. 
<strong>Channel communication through managers and architects, who are responsible for gatekeeping and prioritizing things.</strong> In a large organization, if you notice that people are reaching out to you personally all the time, just politely refer those requests to your manager.</p><p>In particular, when doing cross-org communication for large groups of people, think about the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/github/how-engineering-communicates">signal vs noise</a> ratio. Free flow of information may sound like a noble principle, but a lot of information does not necessarily convert into real knowledge sharing. Write and share summaries instead of raw information. Be deliberate in selecting who should know what and when.</p><div class="relative header-and-anchor"><h2 id="h-make-meetings-intentionally-efficient">Make Meetings Intentionally Efficient</h2></div><p>Principles for <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/tips-for-efficient-meetings/">good meetings</a>:</p><ul><li><p>If you organize a meeting, make sure it has an agenda in the meeting invite so attendees know what the meeting is about and have a chance to prepare in advance. The agenda also allows people to make a better-informed decision about whether they can skip the meeting.</p></li><li><p>If the meeting makes decisions, those should be written down somewhere (e.g. design doc, issue, ticket, meeting minutes). People tend to forget, so there must be some way to recall what the meeting decided.</p></li><li><p>Don’t invite too many people. If there are more than 5 attendees in a 30-minute meeting, there will be no genuine discussion, as there would be less than 5 minutes of speaking time per participant.</p></li><li><p>Don’t attend all possible meetings. 
If dozens of people are invited to the meeting and it seems like an announcement event rather than a discussion, maybe just skip it and read the announcement documents instead.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-practice-efficient-statusprogress-communication">Practice Efficient Status/Progress Communication</h2></div><p>The purpose of progress information is to allow others to learn the state of an issue and to adapt their own work in relation to that issue. Issue status information also helps authors themselves remember where and in what state they left something, essentially being communication to their future self.</p><p>Good principles to follow:</p><ul><li><p><strong>Avoid duplication.</strong> If an issue tracking system is in use in an organization, and an issue tracker entry has been filed, maintain it and do not disperse the information across multiple places. Focus your energy on making sure the issue tracker is up-to-date so followers of that issue don’t need to ask about status or search for separate updates in email or old chat messages.</p></li><li><p><strong>Keep status information current.</strong> There is no point in a status-tracking system if the statuses are out-of-date. On the other hand, there is no need to update the status daily. A good rule of thumb is to update important status information immediately when something happens and less important statuses perhaps bi-weekly or monthly, depending on what the normal cadence of reviewing and prioritizing work is.</p></li><li><p><strong>Annotate status changes.</strong> If an issue was closed but there is no comment whatsoever on why it was closed, it will raise more questions than it answers. When updating the status of issues, and in particular when closing them, add a comment on what changed and why. Take advantage of the feature in most issue tracking software (e.g. 
GitLab and GitHub) that automatically closes issues when a commit with a closing note such as <code>Closes #123</code> lands on the mainline git branch; those status updates are then automatically annotated with a link to the change, including date and author.</p></li><li><p><strong>No news is bad news.</strong> In the context of status information and progress communication, people tend to view a lack of communication as a sign of a lack of progress. If something is blocked and there is no progress, a quick message noting “no progress” is better than silence and letting people stare at issues with no updates. Eventually people will start to worry and reach out for updates, so skipping status updates to save effort might not actually save any effort.</p></li><li><p><strong>Remember the purpose.</strong> At the end of the day, progress is more important than communication. If a task is <em>small</em> and you work on it <em>alone</em>, issue/status reporting may be omitted completely. If you find yourself spending more effort on communicating about an issue than working on the issue itself, something is wrong with the overall process and you should review it.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-give-and-get-feedback">Give and Get Feedback</h2></div><p><strong>Be honest.</strong> Engineering is about building stuff that works, and a lot of effort goes into making sure stuff actually works. That requires a constant loop of testing and feedback. If you spot something that is off, report it. Don’t waste time on sugar-coating when communicating as one professional engineer to another – just report the problem you’ve spotted, and do it precisely.</p><p><strong>Be grateful for all the feedback you get.</strong> Thank the person for taking the time to give feedback. The more you get, the better. Never scold another engineer for giving you feedback you don’t like. 
Stay professional and just read/listen and try to understand it.</p><p>Not all feedback is valid, and <strong>as a professional you choose what feedback you act on</strong>. If you don’t understand the feedback, ask for clarification. <strong>Engineering is, and should be, full of debate</strong> about what is the right or wrong solution, so that the chances of landing on the actually right solution are maximized. Be professional and don’t take those discussions emotionally. Engage in them, and make sure all data points are analyzed. Intentionally challenge your own bias and preconceptions, and try to reach the best conclusion you can.</p><div class="relative header-and-anchor"><h2 id="h-less-is-more">Less Is More</h2></div><p>Delete stuff that is outdated or obsolete. Remove weasel words and duplicate text. If you come across some documentation that is clearly outdated but might be needed for archival purposes, then add a banner to it stating that it is no longer up-to-date and kept only for archival purposes (in particular in a wiki where anybody can contribute to maintaining the contents). False information may cause more harm than no information.</p><p>Avoid <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Link_rot">link rot</a>. If a document is moved from one place to another, delete the old version and replace it with a link to the new one.</p><p>Always shorten and simplify code when you can do so without sacrificing other desirable qualities: correctness, readability, maintainability, and consistency with the rest of the codebase. For example, if you are writing or maintaining 10 unit tests that are 20 lines of code each, but differ only in a couple of inputs and outputs, then combine them into a single parameterized test.</p><p>Layering and abstractions are valuable techniques for writing reusable and correct code. 
However, <strong>too much</strong> <strong>abstraction</strong> can make code difficult to understand and reason about. An integrated development environment (IDE) can be a useful tool for quickly navigating a code base. However, if you <em>have to</em> use an IDE to follow the code’s logic, it is a telltale sign that the code is way too abstracted.</p><div class="relative header-and-anchor"><h2 id="h-a-good-coder-is-also-good-at-writing-in-human-languages">A Good Coder Is Also Good at Writing in Human Languages</h2></div><p>The primary goal for code is to be easy to understand and follow. Optimize for readability and maintainability. Do not optimize for speed or build layers of abstractions upfront. Instead, do such things only later on if and when you really need to.</p><p>Principles to make code easy to understand:</p><ul><li><p><strong>Start with good naming:</strong> Files, functions, variables should all be named in such a way that one can guess from the name what they do or contain. Don’t be afraid of changing a name if you realize that something else would describe it much better. Functionality and contents evolve – so should the naming.</p></li><li><p><strong>Use the project’s coding conventions.</strong> A suboptimal but consistent convention is better than mixing multiple conventions in one code base. Use correct indentation, line length, white space, etc. to enhance the readability of your code. Keep the flow easy to follow.</p></li><li><p><strong>Add inline comments</strong> in places that require some additional explanation of what the code does or why the code was written in a particular way. 
Good inline comments prevent the code from being deleted or refactored by somebody else, or by yourself a year later when you can no longer recall the details.</p></li><li><p><strong>Longer or higher-level documentation should go into README files.</strong> The convention of using README files in code repositories is a great application of coupling code and documentation. A README is easy to discover in a code repository, and very likely to get updated by the same commits that update the code itself. If a code repository completely lacks a README file, the code is most likely not intended to be long-lived and should be avoided.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-software-engineers-need-to-excel-at-making-git-commits">Software Engineers Need to Excel at Making Git Commits</h2></div><p>Last but not least - remember that good software engineers write both <strong>great code</strong> as well as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/good-git-commit/"><strong>brilliant git commit messages</strong></a>. The more senior the engineer, the more they value both the code <em>and</em> the description of it, and the whole <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/how-to-code-review/">feedback cycle</a> that eventually leads to making the world a better place – or at least one piece of software better. Practice this skill even if it requires a bit of extra effort initially.</p><div class="relative header-and-anchor"><h2 id="h-if-you-are-not-a-native-speaker-invest-in-improving-your-english-skills">If You Are Not a Native Speaker, Invest in Improving Your English Skills</h2></div><p>Most of the world’s population are not native English speakers – including myself. 
Yet, as English is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Lingua_franca">lingua franca</a> of the software profession, we all need to put effort into becoming more fluent in English. The best way to become fluent is to simply force yourself to read, write, listen, and speak English, and to do it in a way where you intentionally try to improve. I, for example, watch YouTube videos in well-articulated English, such as the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/@TheQIElves">British comedy panel show QI</a>, or listen to podcasts attentively, trying to pick up new expressions that help articulate ideas and opinions more accurately, and in general to expand my vocabulary.</p><div class="relative header-and-anchor"><h2 id="h-high-quality-communication-facilitates-high-quality-engineering">High-Quality Communication Facilitates High-Quality Engineering</h2></div><p>From an organizational point of view, <strong>it doesn’t matter how many amazingly smart engineers you hire if there are no proper mechanisms in place to ensure that the right amount of relevant information flows</strong> between the experts. Efficient communication is also vital for growing junior engineers quickly. You don’t want any engineers wasting time trying to solve problems that have already been solved. The whole organization will be <strong>vastly more productive <em>if</em> everyone is able to find information easily and quickly at the time they need it</strong>.</p><p>Achieving this is neither hard nor expensive – it just requires setting a couple of ground rules, reflecting on their meaning, and executing them consistently. 
Managers play a vital role in achieving a strong communication culture by leading by example and showing that good communication is valued across the organization.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/bdcd4599bd65fbae43b8215389c98644.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[8 Writing Tips for Software Professionals]]></title>
            <link>https://paragraph.com/@otto/8-writing-tips-for-software-professionals</link>
            <guid>0xeLiemWEXfAwnQUDShi</guid>
            <pubDate>Sun, 24 Mar 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[People usually associate advanced software engineering with gray-bearded experts with vast knowledge of how computers and things like compiler intern...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/c5280cb6310d3fce9bfa81976f7bd6ca.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGa0lEQVR4nB3RjVOThx3A8d+xqz121zors8oYIu+HAWp4kYKuEJWXAEEw0Fq0WFCPMNAEBpFeUbuh1AplvFYKbSYUosEYCOGBEJ68PzwhDy9P3khIwssIAgJyeHTWm9ux6/6B733uvrC15tpenX+5bN9xWV8vGhw6hUhn1c6tGV0bLUIpk3PraPoFag7LM+q0u0+Yu1ewm8eRvf7Hypq7kaeP8CZ2Vixl7563wv7gUZQcdZOTX1tzh0ZLPuzt5+/nTwsLyor2g83l+Q2nefe5c3fTsbtqWNaj3/Ur2HXtTM7tk5f+klDAjb1QGng6xy8u9fchsfuCovcFRYcmZrNq2utESsEPrUIOnUOnBkTGQCQd6EWQxQqjUDz3uAHA226/CTj4HqzPW39esPb2iQtqGis6+FOY/MEjcf7txuSi25nltQz2nYT8L45nF4XRPzuZW/LpF3WljZ33+EONffLmAVVtH1Za10Jh/xUuVwOTCwwWZHEg48/R0VFxPvs933V3d3OD7QXrjW+awes4HAqFkOSor3mvN1eNq69KG7oL7nacr2zI4NwtqOmoF2s6lYZ2KVYvlNULZd/2ym7WtsTls4HdAkXNgaxKxpUrkMOFs9fh7DVgXIs4nfxxpE/MkQOw5TDSL5cBHNx/wHvPQf93zl8fGEW3N1+ML+2UN/Or+bIejZl4tqO2PxeMGR+i+ieqifHZJfGwrKvq8wZWeggzDwrvQ3FDxMX8i3k5wCwDxnVIK4QU1r74DOaHwUBo5cF55RD8JwinQcAJ6vWaNuvOiES4tfNvx8s3Cz/vul7vWjb+pZvfGLOvmJfW51Y2rAsuvXmWUI2MtVY0sejB5/KAVQuFdQeyL32emwZZLEi9BsmX4cwVSMiBbzt7fvu37nP3uyD0NByivFv1IJU3XJR/YV6v3NrdXXuzu/jyzdLL/yxv/7K+/Wpx/cWU3akkSESFaUjb1DimbeM2FSYFJDMh7xZcuguZBTmZiUD7GJIK4VQenMkDTCnN/Xt3cKcGzhaDfyyExcP73hcZdNtwj5PUb77ZXf/lv9aldevSc7trTTVpxEirdtLcj6o1BGldXCP0Om3bjYZ8WlB8MjDL4dObkFQAAZFw8jyksODDTFg1Yt9xL3sFh4DvMXjHEwCCDnk0VRSSg/y1tbW51RdGx7LB7sINVoy06C1OjLTghhnLgsvsXBzV6PRGG4bjaGvl1+fjgmM+gpRCyOT8uoH2GQTGArwNNgw1DXW3l2TkRR0+4fNezOH95TlnUIloCnnsnDHPb7zSmxx6swM3zepMDoy0yHWTlgWXhjCIUfWwEhsjSHzKjCq1/fXcO+eigkLCfrWfYMJeT4+34Ljv+zCjkJjRpzi/ubcyt/FqWm0lRyCSTNif2exz06IHDrvD6tpUTRpw0oaR1lF8So5PjGjGpWodosS0kwYdaeILJUJEMSjHuu5xy5NCPLx9AeDI79yZxwMvnAiESWkv/rgNE3QoJCKpApPiRt20DSNtxqUXdotxQtBkcywa5lbk+LScMKLY1IiGGMEIqVonQdWIXMsXS4UyuVqGor1isaCv46uyG2nUT2ICWYnhJSnHrtKOgkLAm9ZhZvvi+OyKmnQoCUvzMDFunjfYF2dWtg2YDHv4jcnhwkjbiJYY1hCIUt8v0yIKXe+gXIgo5Pi0elStFIgU/KdDvO7O3kF2/kV2IqUqO4abEVWSEgbi9jq9fpJwrCCYESWsFoeLzZNQq/l6y/y0dXHm2RYu4il59
wnrP6Vq4gmCilH8CSLvEUmlakKpM4xoJhCZdliEoN2CllbeqRQG1e+PxYmhZWnU8vQIbmYESNruIT/UY6QVwYzDGKkhbaYZJ+Muz+ern/SmuQmz07ywKv++WsZrUJN2RDHGF8tEMnVPn4wvliFqvRjFxahOMIAWs0rDgoJSw724aREV6RFl9PCSREpFOhW6738parg99KhLPmnvl40NqPTyyRnTjJNe/aNXVZeGnNWbnXqTTXy3GOn5UYob+WJZc5eAJxz8fxofxafrvu+KiUsI9z2U/1EIJzXqKo1SSKOwEo6yE0PTPvABxsloceud3tovpUMyiWpKgKgHVASCkRNT5lO32o5U/QMlzFrDnFarfViR+/Tx4y4xKhxWIQpd/6hWKFNfK78Z7HuYQfXhpEbmxYcnfRCYHRPEOkW5Gh9C9fYAgP8BneR8ySJgdB8AAAAASUVORK5CYII=" nextheight="1608" nextwidth="3072" class="image-node embed"><p>People usually associate advanced software engineering with gray-bearded experts with vast knowledge of how computers and things like compiler internals work. However, having technical knowledge is just the base requirement to work in the field. In my experience, the <strong>greatest minds</strong> in the field are not just experts in knowledge, but also <strong>extremely efficient communicators, particularly in writing</strong>.</p><p>Following these 8 principles can help you maximize your efficiency in written communication:</p><div class="relative header-and-anchor"><h2 id="h-1-less-is-more">1. Less is more</h2></div><p>In a workplace setting, the ability to summarize something in three sentences is far more valuable than the ability to write fancy-looking research papers. Forget school assignments with minimum lengths – in reality, you need to put in effort to specifically keep it short.</p><div class="relative header-and-anchor"><h2 id="h-2-start-with-the-solution-or-the-ask">2. Start with the solution or the ask</h2></div><p>Unless you are a professional novel writer building up an arc of drama, your readers are most likely not captivated enough to read all of your text fully. Therefore, you need to put forward your main <strong>suggestion</strong> or <strong>request</strong> as early in the text as possible. 
In ideal cases, the main message you want to convey is already in the title.</p><div class="relative header-and-anchor"><h2 id="h-3-show-the-facts-with-examples">3. Show the facts, with examples</h2></div><p>If you are an expert, people will value your opinions. But it is always much more compelling if they are delivered with supporting facts, numbers, timelines, and references. Ideally, there is a reliable source to refer to or an indicator or statistic to look at, but a couple of anecdotal case examples also work well as both evidence and as a concrete story to showcase cause and effect.</p><div class="relative header-and-anchor"><h2 id="h-4-always-quantify">4. Always quantify</h2></div><p>A number is always more expressive than an adjective. Instead of a vague “<em>expensive</em>”, just write “<em>500 USD/h</em>” if the price is known. Don’t state that something is “<em>significantly faster</em>”, as it does not actually mean anything. Saying, for example, “<em>travel time decreased to 5 hours (down 30% from 7 hours)</em>” paints a much more accurate picture.</p><div class="relative header-and-anchor"><h2 id="h-5-include-links-and-references">5. Include links and references</h2></div><p>Instead of a verbal reference like “<em>read the report for more,</em>” do readers a service and include a direct URL they can simply click. When describing a system or a problem, include the documentation link or issue tracker identifier.</p><div class="relative header-and-anchor"><h2 id="h-6-explain-why-it-matters">6. Explain why it matters</h2></div><p>After stating facts, ask yourself “<em>so what?</em>”. Cater to readers who are not fully familiar with the domain by being explicit about <strong>why</strong> something matters and <strong>what it means</strong>, in as concrete terms as possible.</p><div class="relative header-and-anchor"><h2 id="h-7-ask-feedback-from-one-person">7. 
Ask for feedback from one person</h2></div><p>Before sending out a text to a large group of recipients, ask one person to read it first. If your main message does not get across, iterate on your text until at least one person understands it in the way you intended. If the text has great significance, you might continue to ask for feedback from two or three more people, but remember that everyone has an opinion, and there is no guarantee that getting more opinions will converge on one opinion. Asking multiple people for opinions is not bad as such, but it can be wasteful, as it quickly leads to diminishing returns.</p><div class="relative header-and-anchor"><h2 id="h-8-sleep-on-it">8. Sleep on it</h2></div><p>When it comes to your own text, <em>the most important opinion is your own</em>. A good way to figure out what <strong>you really want</strong> and value is to write a text, put it away, and then return to it one or more days later and ask yourself <strong>if you still</strong> really agree with it.</p><div class="relative header-and-anchor"><h2 id="h-sender-is-responsible-for-delivery">Sender is responsible for delivery</h2></div><p>Last but not least, remember <strong>it is the responsibility of the sender to make sure the message was received.</strong> Don’t assume people received and saw your message, or that they read it, or that they understood what they read. You need to put effort into preparing your message and following up on how it was received.</p><p>Writing well is also a way to show respect for the reader’s intellect and time. Think about it this way: If you send a message to a hundred people and expect them to spend 6 minutes each reading it, you are spending 600 minutes (10 hours) of organizational time. If you spend 15 minutes extra to polish your message so it can be read and understood in just 2 minutes, you save the organization almost a full workday (600 minutes vs. 
215 minutes, i.e. 15 + 200, saving 385 minutes or roughly 6½ hours).</p><p>It does not matter how good your idea is if the text describing it is bad. If you practice writing well, people near and far will become more receptive to your ideas.</p><div class="relative header-and-anchor"><h2 id="h-what-are-your-tips">What are your tips?</h2></div><p>Are you a seasoned professional who masters written communication? What are your tips? Please comment below!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/c5280cb6310d3fce9bfa81976f7bd6ca.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Tab-tastic tips for streamlined web browser use]]></title>
            <link>https://paragraph.com/@otto/tab-tastic-tips-for-streamlined-web-browser-use</link>
            <guid>Jpqk55QPpvVrwtA3C2Ur</guid>
            <pubDate>Fri, 08 Mar 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[What is the single most common action you repeat over and over when using your computer? Let me guess – opening a new tab in the browser. Here are my...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/de6f4555eec123b64044e17189b48607.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGfElEQVR4nAFxBo75APHx9Kuzt6+5vvT19unr7+rt8Onr7ujq7djc4Nfb3urr7Ozs7Onq6eLi4+fm5ubm5ebm5eHh4N/f3t7d2uPh4ZSYmY+Xn660u6Oprp2kqZWdo4GJjoGPlkZTUwAEABEaEgDy8vKvtbuyu8D19vju7vH39/f19fX29vbe3+DKzNDu7u7s7Ozr6urm5ubs6uvp6Obo6OTi4eDe3d3d3Nnj4uCOkZObn6LV1tfGyMfAwMC7u7yKkJNwfoV5hYo2Qj8qMy0A8vLzsrm9tL3D7e3w8/P1/fz89/f3/Pz85ubnyMzP8vLz7+7u7ezr6Ofm8Ozs7Ovo7Oro5OTj4eHh4N/d6efkjpCTm56g2trbx8jLxcXIk5WYKTItIiomP0pKXWtvZWxsAPX19bG7vr7Hy6uvtJKXm+7u8d/h493e39DS1M7S1uPl6Obo6ePm59rd3trd3tbY2dTW187R0snN0MXIyb3AwomQk5CXma6ytqKnq5edoYaOkVdgYCApIwAAAAELCTtBPAD39/exu763wMe8wMR4e3ze3+Dg4ubN0tXV293k6OvX3N+lrbKbpa2fqq6bpaqWoKiRnaeNmZ6FkJiJlJ2ptbmtuLyttbmgrbCcqqyUoKSIk5dveoBZZWk3QkIeJCASGRQA7Ozsr7W5qLW64eLh397a5evt2t/g09jb0tfY5Onqsbu/FzdFSmBoXHaBYHeJVniFXHl3VXh+XIWRPVplcnuAlKCioq+yucLFq7Ozh5GVeYeNGiYhAAYADxYWKzI1GSAZANDRz6qwtJmnq9/k5tDR0ePn6Ofo6OLj5N3g4uzx9K60uSxecHaGiVWBkWB8oUyFj16FaUyGloG2wUZjbEBGSnB5fYmVmra8vI2RkGVtcVljaDE7OxYeHBkdHQACAAAAAACqrKyMlJRxfoS9xcjPzc2lucVsjZyiu8ubo5Z/iHucpKkVUGNidnpBdYVQb5Y6eYhOeV45fItOlKgkSlc7PEBia295hYiLmJuJlJh+h4pia29WXmNNVlpHT1UuMjMaGxYAc32CZG1yUmBlkp2fwMC9Y4ibGTk/TmZzZHFfKC8gc3x/QWBtW2puSGhzT2N8QGJtSmVaPGBrLElWIy85KSoqKDAtXWVoX2tqbXh7doOHcHuCZW91XmltMz8+Jy8qNjYuAGVyeVdkbEtWXXeEhZ+mpoicp6emnkhNTqOqrYqFfnBxcsfIy7m9wbm5u62qp5OPjpuZm4mIiH96emppaisnJCIaD0JHSjxAOkVIRWVoZ1FcYlhkalxpbjI+PBQeEBsfFgBfbHJHVVo1QUhWXF5VV1WIgnvRwrR+cGK5raSyoZN4ZleMgHCAcWJrXlNkamxUYWU4OTkzKSE5LSQzKSBDMyZKNCc/My5+dW1/c2psXlNSS0U8QUU6RE49R08aIxoUHhAAT1xgY2dpl5CHu6uar56Lzbqo4NC/7NjH1szBxK+c5tLAzb2t0sKyxLeqtraxrq6qpqKcq56PnId2k3too49+kYuFb1dEo4lyj35ya15ScmVXVEU5QjgxNTg7JSksFxwXAEZMUm5mX4d/dox6bZqIepqHeaOOfJmFdYx4aop3aYdyZX5tX3pqXm9eUmhYS2NTSF9OQlZIP1dIO2BNQVtJPFREN049MldENVI+MUk2KTosJDQsKDQrJScmJhobHh8eHAA6PkMoMTY3PUJUSEBdXl91bGZ8c2lJSUk5O0AxMTMkIyYfHSAfGxsiGxkaE
hIZEQ4XDwsVBgEVCgYQDRASDxISDxIREBQRCAkRDQ0cFhU8MCcXGR4WGB4YHSMYGBoWEg0AMjU5KC4zRkNBW1ROZmhoiYaCeXdzVV9nZG91VV1iQkdLOzxCMjM1JCEhIRQNHhEIHA8GGQ8IGRYVHh0eHhscIiAfHhseHBkcGRYVGxgZTT41Ix8fGBkdEhMWAAAAAAAAACssMSMoLkxEPVJPTnBta313cmxnZGlyeYOKkXN9gmNqb1VYXktOUzE2PyMiIx8bGR0aGRwcHiIiJCUkJiMiJCgkJiMiJCIhIx0cHhcTEz01LzMrJxgbIVdXV1BPUCgnKAAmJywtLzJPPzZGRUdzaWBiWU9kYWCAg4uSl5uMkZOCh4tydnttbnNaW2A+Q0k/Q0ctLTA3Nzo2NjcyMDMvLTAuLS8uLTAtLS8pKCoaFxU1KyVANSwaGyBhYWGTk5NsbGkpdgYR/dIF6AAAAABJRU5ErkJggg==" nextheight="402" nextwidth="768" class="image-node embed"><p>What is the single most common action you repeat over and over when using your computer? Let me guess – opening a new tab in the browser. Here are my tips for opening, switching and closing tabs everyone should know.</p><div class="relative header-and-anchor"><h2 id="h-opening-a-tab">Opening a tab</h2></div><p>This one most people know: press <code>Ctrl+T</code> to open a new tab. But did you know that you don’t always need to type an URL or start a web search? <strong>You can also jump directly to the content you wanted to view by using custom address bar shortcuts.</strong></p><p>All popular browsers support defining customer keywords so that what you type in the address bar can take you where you are going even faster. In Chrome (and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Chromium_%28web_browser%29">Chromium</a>) you can customize what shortcuts can be used in the address bar by opening <em>Settings &gt; Search engine &gt; Manage search engines and site search</em>. Below are my favorite custom searches.</p><div class="relative header-and-anchor"><h3 id="h-ask-with-perplexity-ai">Ask with Perplexity AI</h3></div><p>Want to quickly ask an AI for something? 
Just configure <code>@p</code> in your shortcuts to query <code>https://www.perplexity.ai/search?q=%s&amp;focus=internet</code> and you are never further than a couple of keystrokes away from asking <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.perplexity.ai/">Perplexity AI</a>.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/3e3465bf-fa45-42af-84d6-c361d4ad2043_942x704.gif" alt="Ask Perplexity AI a question directly from the browser address bar" title="Ask Perplexity AI a question directly from the browser address bar" class="image-node embed"><p>I used to always <em>google</em> everything I wanted to know, but nowadays I find myself doing it less and less. Instead, I type <code>@p &lt;question&gt;</code> in the address bar, press Enter and immediately get the answer from Perplexity along with links to the information sources. No more wasting time on skimming through irrelevant search result pages!</p><div class="relative header-and-anchor"><h2 id="h-open-a-man-page-instantly">Open a man page instantly</h2></div><p>Yes, any <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Man_page">man page</a> can be accessed easily by running <code>man</code> followed by the command name on the command line. But reading man pages in a browser window with nice fonts and in a separate window next to the command-line window is much more ergonomic and an easier way to craft commands. 
For this use case, I have configured the shortcut <code>@man</code>, which jumps to the latest version of the man page in Debian using the URL <code>https://dyn.manpages.debian.org/jump?suite=unstable&amp;language=en&amp;q=%s</code>.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/97304294-cdf4-418b-81eb-be772dd102a0_1200x627.png" alt="Custom search engine configuration view in Chrome" title="Custom search engine configuration view in Chrome" class="image-node embed"><div class="relative header-and-anchor"><h3 id="h-jump-to-any-google-drive-file-or-folder-quickly">Jump to any Google Drive file or folder quickly</h3></div><p>Oddly enough, Chrome does not have any built-in shortcut to Google Drive. Adding a shortcut with this URL will achieve it: <code>https://drive.google.com/drive/u/0/search?q=%s</code>.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/69c9ab53-bd00-4da6-bbe9-56bc2514c507_700x277.gif" alt="Search Google Drive directly from the browser address bar" title="Search Google Drive directly from the browser address bar" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-jumping-between-tabs">Jumping <em>between</em> tabs</h2></div><p>If you are like me and have dozens of tabs open simultaneously, learn to use the keyboard shortcut <code>Ctrl+Tab</code>. This will jump to the next tab. Pressing <code>Ctrl+Shift+Tab</code> will do the same in the reverse direction. By pressing <code>Ctrl+1</code> you can instantly jump to the first tab, and with <code>Ctrl+2</code> to the second tab and so forth. This is particularly handy if your first tabs are pinned to always show your e-mail or calendar and you need to open them frequently.</p><p>Too many tabs to cycle through? 
No worries, you can always press <code>Ctrl+Shift+A</code> to open a dialog where you can search tabs by website title.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/f312281e-d9ac-4cfd-9b79-08ba0c4af5d1_333x376.png" alt="Searching open tabs after pressing Ctrl+Shift+A" title="Searching open tabs after pressing Ctrl+Shift+A" class="image-node embed"><p>In Chrome you can also type <code>@tabs</code> in the address bar to search your open tabs, or <code>@history</code> to search tabs and pages you recently closed.</p><div class="relative header-and-anchor"><h2 id="h-close-a-tab-or-reopen-a-closed-tab">Close a tab, or reopen a closed tab</h2></div><p>To close a tab, press <code>Ctrl+W</code>. Oops – if you accidentally close a tab, re-open it quickly with <code>Ctrl+Shift+T</code>. You can even press it multiple times to re-open several old tabs in the reverse order of closing them, basically <em>undo</em> for tab closing.</p><div class="relative header-and-anchor"><h2 id="h-bookmark-all-tabs">Bookmark all tabs</h2></div><p>What if you have too many tabs open and you need to close the browser window? In Chrome, there is a handy shortcut <code>Ctrl+Shift+D</code> that will bookmark all open tabs under a folder name you choose. Then you can safely close the window knowing that you will always find them in that specific folder in your bookmarks.</p><div class="relative header-and-anchor"><h2 id="h-keyboard-shortcut-summary">Keyboard shortcut summary</h2></div><table><thead><tr><th>Action</th><th>Shortcut</th></tr></thead><tbody><tr><td>Open a new tab</td><td>Ctrl+T</td></tr><tr><td>Close a tab</td><td>Ctrl+W</td></tr><tr><td>Undo closing a tab</td><td>Ctrl+Shift+T</td></tr><tr><td>Jump one tab to the right</td><td>Ctrl+Tab</td></tr><tr><td>Jump one tab to the left</td><td>Ctrl+Shift+Tab</td></tr><tr><td>Open first tab, open nth tab</td><td>Ctrl+1, Ctrl+2, …</td></tr><tr><td>Search tab by website title</td><td>Ctrl+Shift+A</td></tr><tr><td>Bookmark all open tabs (e.g. before closing window)</td><td>Ctrl+Shift+D</td></tr><tr><td>Open link in a new tab without leaving current web page</td><td>Ctrl+click</td></tr></tbody></table><div class="relative header-and-anchor"><h2 id="h-what-is-your-tip">What is your tip?</h2></div><p>Knowing how to use a web browser efficiently should be considered a basic life skill in modern society. The <strong>above keyboard shortcuts work in all browsers</strong> and are as universal as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://linuxnatives.net/2021/copy-paste-like-a-pro">Ctrl+C and Ctrl+V</a>.</p><p>What is your additional browser productivity tip? Share it in a comment below.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/de6f4555eec123b64044e17189b48607.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Advanced git commands every senior software developer needs to know]]></title>
            <link>https://paragraph.com/@otto/advanced-git-commands-every-senior-software-developer-needs-to-know</link>
            <guid>Zfqb8Fu24rntSiheoX1k</guid>
            <pubDate>Thu, 29 Feb 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Git is by far the most popular software version control system today, and every software developer surely knows the basics of how to make a git commi...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/4f19857c3298d8235f121389815ac506.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGL0lEQVR4nCWUiVMTBhbGX2frWrBlXW3rqlMtoCCHUAaIChGQcCVAgISEEAI5kFwkHIkkyiUCFVoRtBukSG1AqsFgwEKkJhmoRVAHph6dMlq3HmPdsm7XOttWtuOu++2k+w+835vf971HcEgxrsSkDk41OsX3D0jSdsQrUuNxWICrRszXwXPIsvATb/TbTNt12cUlSpTSTtmTU62wcBYbc0s4KfTy8pVvB/NSk2CToZ21L5jaonyem+LQnYt+MeGsDG7197UZypTYqPDIbMY7D+uz8Hk5Flpxsdty2lG/iJanKHAu/l7YSMGJm5PzKSRFfrAPDu2WN1enl2gjN/n7E1UG+cKhhV2HczJYC3BChCEZ7FLCiAJuzfPmdEGU/01DOlxazJlxqae18wizvGlzWTOjfYKyKihgOwXF+e4UBnKVa2LSKKvytuPEcJwvEfEiAjCm+el4Da9tMObg6MSfWzBlgEsDpxZjKsKoAudV8Gjg1mJaB5exdshFCfk+GdJYfVu4UOsXkbwyNvtVZv7v4vgUlk5RaUHsoj+pDmXseR829QtDDPbF9/V9XORcYp+4JXc/4vQv5DX3YfQA3JVwaggjci/AqYFLhQEjs2mQ2LqN+u4V4uY/JAqX+UdSGGsFk78sOo02xRMj79XChmXCOorirShsE1gvLc2MYN58lBPhK24SjT/JO/Mw2/YNz7a4Ud+N3hJ4dISzcjiVGFPBrb65J4sihESvrVEc9RPUvJ5vWMtWUBiLwlh+rOLX8k3EM1GUgHbIQs121vGZvDYrv6n3yqQbM3VLunBGfBKza5pjvf0WV2mMWYcTYrj/Dxgp9SZxKOexITHQ5yUiopBdQTX9a3n6VVk6v1iOL1e7vKDeOzqcv17Zk3RsOrflI3Xde+V720t15lz9/s5TE/jaipM8U9wG1ub1VyThXj9T1ZjQEuxSjJd9VclK8n89dhV1bF8pXU3bfIhya3Ydu7qjw/0KV0cx+bQlc7ngXUabK6GhL1tjllfWqmvblQ0dMnNbsak1vaQiobb3vsuOz/VoSlUKsi2aAjiUuKD/raaD4tj1f0wNWsd9288jDEYd84Owl6Iio0NMdtouoXVxlFETsX84em9vsrw6W6oXVjSWmFpLa99X1nYoG9ol9UeSDJ3NV/8+8B8MjrlwWLDBK4HQxccFHWFYigHRV8roAU7gonEn+govl8U/UASrw1fTFjaFpq/d3e2vPxIj0iTlydile7iaunxto2RPa5n5oNp8UNJoUTgfNCyi5QcMP4XyCvif3nFV5YxmvIlOLj5VEk5LvAF4tJgqh1vXmLmNERZ8Wx3zQBG8ipkXtm94YxI/fFdOkkidIjOxZQauprakprXU2FTa0lNqm1dMP9N+CfnFX9Vz0F+HaPwfxFHfNqThiyo4dXBqCYNiOOQYkJzbKz0szZ6RMrA/DVYVLLzTCX5ruOrISssuvjxRpMqQVeepzUKtubBiv/TDC2VTT3XXoLn8TOr+m3TqZ+UsQmtPFcWGojnRe1ITepwv/w3QXwCn8nJZ3Pq3NjhK4jFt+v78yeJz9+rt07BKvsh4I5pXxjQfZ4lVSeLyzCI1v/aoyvPIsADDNehnf6ma+1U592Jbu4vBZH8pDoFLjZPVHRIOeorxmR6jSoJVBFsJBgpwiDPXfaDccUPq/rd08l+Sz35p9dzBWPV3hQHxmeK4un6mfG/eR7PGW2h9jIa7MF1/YVoA3/GIwVV0JW6EVYDZ+kFd4erAUCK6V7HT2
1THbsLHAtjEP/eZdCenC0afisf/mWP/C9f2jchxTzb5vHH2MdzN/9VsTWBlF7uWqhZQfdO7e/1d6G9ii6ZTtDUA7yZj3ozpFmNOGhG98wrdL4/BkBR2BRylhA/5OC16dKwqpeda3pmH+SPfpfXdyPxkgee4L7TfLXQ+qbj0Iy52wRCdlsoucv2onlmquoHkLg9ja4wnJwDucsw3//CBPiwgMGQZufmB6BPAXQF7Gc7IYZMRLFyvpbMFdy3G1N4buWfu5Y/+lW39OuuTWzlD3/IcD0o8z1SXn2GyC9qIRMYO3tCdTalF2k0+sORgvh7D5hqJgIgk68j7G6aMGNVPVPMemFIxvBsDRf8D4soxG/BExIwAAAAASUVORK5CYII=" nextheight="402" nextwidth="768" class="image-node embed"><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/">Git</a> is by far the most popular software version control system today, and every software developer surely knows the basics of how to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/good-git-commit/">make a git commit</a>. Given the popularity, it is surprising how many people don’t actually know the advanced commands. Mastering them might help you unlock a new level of productivity. Let’s dive in!</p><div class="relative header-and-anchor"><h2 id="h-avoid-excess-downloads-with-selective-and-shallow-git-clone">Avoid excess downloads with selective and shallow git clone</h2></div><p>When working with large git repositories, it is not always desirable to clone the full repository as it would take too long to download. Instead you can execute the clone for example like this:</p><p>Copy</p><p><code>git clone --branch 11.5 --shallow-since=3m https://github.com/MariaDB/server.git mariadb-server</code></p><p><code>git clone --branch 11.5 --shallow-since=3m https://github.com/MariaDB/server.git mariadb-server</code></p><p>This will make a clone that <strong>only tracks branch <em>11.5</em> and no other branches</strong>. Additionally this uses the shallow clone feature to <strong>fetch commit history only for the past 3 months</strong> instead of the entire history (which in this example otherwise would be 20+ years). 
You could also specify <code>3w</code> or <code>1y</code> to fetch three weeks or one year. After the initial clone, you can use <code>git remote set-branches --add origin 10.11</code> to start tracking an additional branch, which will be downloaded on <code>git fetch</code>.</p><p>If you already have a git repository, and all you want to do is <strong>fetch a single branch from a remote repository as a one-off</strong>, without adding it as a new remote, you can run:</p><pre><code>$ git fetch https://github.com/robinnewhouse/mariadb-server.git ninja-build-cracklib
From https://github.com/robinnewhouse/mariadb-server
 * branch            ninja-build-cracklib -&gt; FETCH_HEAD
$ git merge FETCH_HEAD
Updating 112eb14f..c649d78a
Fast-forward
 plugin/cracklib_password_check/CMakeLists.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
$ git show
commit c649d78a8163413598b83f5717d3ef3ad9938960 (HEAD -&gt; 11.5)
Author: Robin</code></pre><p>This is a very fast and small download, which will not persist as a remote. It creates a temporary git reference called <code>FETCH_HEAD</code>, which you can then use to inspect the branch history by running <code>git show FETCH_HEAD</code>, or you can merge it, cherry-pick from it, and so on.</p><p>If you want to download the bare minimum, you can even operate on individual commits as raw patch files. 
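The one-off fetch and FETCH_HEAD flow described above can be tried safely in a sandbox. Below is a minimal sketch using two throwaway local repositories; the repository paths, the branch name and the commit messages are all invented for the demo:

```shell
# Sandbox demo of a one-off fetch via FETCH_HEAD; uses only throwaway
# repositories under a temp dir (all names here are invented for the demo).
set -e
tmp=$(mktemp -d)

# "Upstream" repository with a feature branch we do not normally track.
git init -q -b main "$tmp/upstream"
cd "$tmp/upstream"
git config user.name Demo
git config user.email demo@example.com
git commit -q --allow-empty -m "base commit"
git switch -q -c feature-branch
git commit -q --allow-empty -m "feature work"
git switch -q main

# Local clone that tracks only the default branch.
git clone -q "$tmp/upstream" "$tmp/local"
cd "$tmp/local"

# One-off fetch: the branch tip lands in FETCH_HEAD, no new remote is added.
git fetch -q "$tmp/upstream" feature-branch
git log -1 --format=%s FETCH_HEAD   # prints: feature work

# FETCH_HEAD can be inspected, merged or cherry-picked like any other commit.
git merge -q FETCH_HEAD
git log -1 --format=%s              # prints: feature work
```

Because FETCH_HEAD is only a temporary reference, nothing here adds a remote or changes any configuration outside the temp directory.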
A typical example would be to <strong>download a GitHub Pull Request as a patch file and apply it locally</strong>:</p><pre><code>$ curl -LO https://patch-diff.githubusercontent.com/raw/MariaDB/server/pull/3026.patch
$ git am 3026.patch
Applying: Fix ninja build for cracklib_password_check
$ git show
commit a9c44bc204735574f2724020842373b53864e131 (HEAD -&gt; 11.5)
Author: Robin</code></pre><p>The same works for GitLab Merge Requests as well – just add <code>.patch</code> at the end of the MR URL. This applies the code change inside the patch, honors the author field, and uses the patch description as the commit subject line and message body. However, when running <code>git am</code>, the committer name, email and date will be those of the user applying the patch, and thus the commit SHA will not be identical.</p><p>The latest git has a new experimental command <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/git-sparse-checkout">sparse-checkout</a> that allows one to check out only a subset of files, but I won’t recommend it here, as this post is purely about best practices and tips I myself find frequently useful.</p><div class="relative header-and-anchor"><h2 id="h-inspecting-git-history-and-comparing-revisions">Inspecting git history and comparing revisions</h2></div><p>The best command to view the history of a single file is:</p><pre><code>git log --oneline --follow path/to/filename.ext</code></pre><p>The extra <code>--follow</code> makes git traverse the history further to find whether the same contents existed previously with 
a different file name, thus <strong>showing file contents across file renames</strong>. Using <code>--oneline</code> provides a nice short list of just the git subject lines. To view the full git commit messages as well as the actual changes, use this:</p><pre><code>git log --patch --follow path/to/filename.ext</code></pre><p>If there is a specific change you are looking for, search for it with <code>git log --patch -S &lt;keyword&gt;</code>.</p><p>To view the project history in general, having this alias is handy:</p><pre><code>alias g-log=&quot;git log --graph --format=&apos;format:%C(yellow)%h%C(reset) %s %C(magenta)%cr%C(reset)%C(auto)%d%C(reset)&apos;&quot;</code></pre><img src="https://substack-post-media.s3.amazonaws.com/public/images/e4dd99e0-62cc-4a41-8e4b-71f0235fb3ac_1253x769.png" alt="Custom git log format with all branches" title="Custom git log format with all branches" class="image-node embed"><p>The output shows all references, multiple branches in parallel, and it is nicely colorized. If the project has a lot of messy merges, sticking to one branch may be more readable:</p><pre><code>git log --oneline --no-merges --first-parent</code></pre><img src="https://substack-post-media.s3.amazonaws.com/public/images/a01ac772-d771-4358-8111-92430c968f64_1252x774.png" alt="Custom git log format with parent branches only" title="Custom git log format with parent branches only" class="image-node embed"><p>However, an even better option is to use <code>gitk --all &amp;</code>. 
This standard git graphical user interface allows you to browse the history, search for changes with a specific string, jump to a specific commit to quickly inspect it and what preceded it, open a graphical git blame in a new window, etc. The <code>--all</code> flag instructs <code>gitk</code> to show all branches and references, and the ampersand backgrounds the process so that your command-line prompt is freed to run other commands. If your workflow is based on working over SSH on a remote server, simply connect with <code>ssh -X remote.server.example</code> to have X11 forwarding enabled (only works on Linux). Then on the SSH command-line just run <code>gitk --all &amp;</code> and a window should pop up.</p><pre><code>laptop$ ssh -X remote-server.example.com
server$ echo $DISPLAY
:0 (X11 forwarding is enabled and xauth running)
server$ cd path/to/git/repo
server$ gitk --all &amp;</code></pre><p>Another typical need is to compare files and changes across multiple commits or branches using git diff. A nicer, graphical option is to run <code>git difftool --dir-diff branch1..branch2</code>, which will open the diff program of your choice. 
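Since <code>git difftool</code> needs a graphical diff program, a scriptable way to ask the same “what differs between branch1 and branch2” question is plain <code>git diff</code>. Here is a minimal sketch in a throwaway repository; the branch names, file name and identity are invented for the demo:

```shell
# Compare two branches non-graphically with git diff, in a throwaway repo.
set -e
demo=$(mktemp -d)
git init -q -b branch1 "$demo" && cd "$demo"
git config user.name Demo
git config user.email demo@example.com

echo "version 1" > VERSION
git add VERSION
git commit -q -m "initial version"
git branch branch2            # branch2 stays at the initial commit

echo "version 2" > VERSION
git commit -q -am "bump version on branch1"

# Which files differ between the branches, and a summary of how much:
git diff --name-status branch2..branch1   # lists VERSION as modified (M)
git diff --stat branch2..branch1
```

The same `branch2..branch1` range syntax works for `git difftool --dir-diff` once a graphical tool is configured.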
Personally I have opted to always use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://meldmerge.org/">Meld</a> with <code>git config diff.tool meld</code>.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/c8b54e2b-898b-46a0-9545-70d873f42ae2_1254x778.png" alt="Demo of ‘git difftool’ with Meld" title="Demo of ‘git difftool’ with Meld" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-committing-rebasing-cherry-picking-and-merging">Committing, rebasing, cherry-picking and merging</h2></div><p>When making a git commit, doing it graphically with <code>git citool</code> helps to clearly see what changes have been made, and to select the files and even the exact lines to be committed with the click of a mouse. The tool also offers built-in spell-checking, and the text box is sized just right to visually enforce keeping line lengths within limits. Since development involves committing and amending commits all the time, I recommend having these aliases:</p><p><code>alias g-commit=&apos;git citool &amp;&apos; alias g-amend=&apos;git citool --amend &amp;&apos;</code></p><p>Personally, I practically never commit by simply running <code>git commit</code>. If I commit from the command line at all, it is usually due to the need to do something special, such as change the author with:</p><p><code>git commit --amend --no-edit --author &quot;Otto Kekäläinen &lt;otto@debian.org&gt;&quot;</code></p><p>Another case where a command-line commit fits my workflow well is during final testing before a code submission when I find a flaw on the branch I am working on. 
In these cases, I fix the code, and quickly issue:</p><p><code>git commit -a --fixup a1b2c3 git rebase -i --autosquash main</code></p><p>This will commit the change, mark it as a fix for commit <code>a1b2c3</code>, and then open the interactive rebase view <strong>with the fixup commit automatically placed at the right location</strong>, resulting in a quick turnaround to make the branch flawless and ready for submission.</p><p>Occasionally a git commit needs to be applied to multiple branches. For example, after making a bugfix with the id <code>a1b2c3</code> on the main branch, you might want to backport it to release branches 11.4 and 11.3 with:</p><p><code>git cherry-pick -x a1b2c3</code></p><p>The extra <code>-x</code> will make git amend the commit message with a reference to the commit id it originated from. In this case, it would state: <code>(cherry picked from commit a1b2c3)</code>. This helps people reading the commit messages later to track down when and where the commit was first made.</p><p>When doing merges, the most effective way to handle conflicts is by <strong>using Meld to graphically compare and resolve merges</strong>:</p><p><code>$ git merge branch2 Auto-merging VERSION CONFLICT (content): Merge conflict in SOMEFILE Automatic merge failed; fix conflicts and then commit the result. $ git mergetool Merging: SOMEFILE Normal merge conflict for &apos;SOMEFILE&apos;: {local}: modified file {remote}: modified file $ git commit -a [branch1 e4952e06] Merge branch &apos;branch2&apos; into branch1</code></p><img src="https://substack-post-media.s3.amazonaws.com/public/images/5d673a74-7211-4b4c-b359-9d537519c93e_1246x765.gif" alt="Demo of ‘git mergetool’ with Meld" title="Demo of ‘git mergetool’ with Meld" class="image-node embed"><p>One more thing: if a merge or rebase fails, remember to run <code>git merge --abort</code> or <code>git rebase --abort</code> to stop it and get back to the normal state. Another typical need is to discard all temporary changes and get back to a clean state ready to do new commits. For that I recommend this alias:</p><p><code>alias g-clean=&apos;git clean -fdx &amp;&amp; git reset --hard &amp;&amp; git submodule foreach --recursive git clean -fdx &amp;&amp; git submodule foreach --recursive git reset --hard&apos;</code></p><p>This will reset all modified files to their pristine state from the last commit, as well as delete all files that are not in version control but may be present in the project directory.</p><div class="relative header-and-anchor"><h2 id="h-managing-multiple-remotes-and-branches">Managing multiple remotes and branches</h2></div><p>The most important tip for working with git repositories is to remember at the start of every coding session to always run <code>git remote update</code>. This will fetch all remotes and make sure you have all the latest git commits made since the last time you worked with the repository.</p><p><code>$ git remote update Fetching origin Fetching upstream remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0 Unpacking objects: 100% (3/3), 445 bytes | 55.00 KiB/s, done. From https://github.com/eradman/entr e2a6ab7..6fa963e master -&gt; upstream/master</code></p><p>In the example above, you can see that there isn’t just the <code>origin</code>, but also a second remote called <code>upstream</code>. Most people use git in a centralized model, meaning that there is one central main repository on e.g. GitHub or GitLab, and each developer in the project pushes to and pulls from that central repository. However, git was designed from the start to be a distributed system that can sync with multiple remotes. 
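</p><p>As a minimal sketch of such a multi-remote setup (the URLs below are placeholders, not the real repositories discussed here):</p>

```shell
set -e
# Throwaway repository to demonstrate remotes and tracking branches
dir=$(mktemp -d); cd "$dir"
git init -q -b main
# A fork you push to, plus the original project you fetch from
git remote add origin https://example.com/you/project.git
git remote add upstream https://example.com/original/project.git
# Make the local 'main' branch track 'main' on the upstream remote,
# so a plain 'git pull' on it fetches from upstream
git config branch.main.remote upstream
git config branch.main.merge refs/heads/main
git remote -v
```

<p>With this in place, <code>git remote update</code> fetches from both remotes in one go.</p><p>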
To understand how to control this, one needs to learn the concept of tracking branches and the options of the <code>git remote</code> command.</p><p>Consider this example that has two remotes, <em>origin</em> and <em>upstream</em>, and the <em>origin</em> remote has 3 push urls:</p><p><code>$ git remote -v origin git@salsa.debian.org:debian/entr.git (fetch) origin git@salsa.debian.org:debian/entr.git (push) origin git@gitlab.com:ottok/entr.git (push) origin git@github.com:ottok/entr.git (push) upstream https://github.com/eradman/entr (fetch) upstream https://github.com/eradman/entr (push) $ cat .git/config [remote &quot;origin&quot;] url = git@salsa.debian.org:debian/entr.git fetch = +refs/heads/*:refs/remotes/origin/* pushurl = git@salsa.debian.org:debian/entr.git pushurl = git@gitlab.com:ottok/entr.git pushurl = git@github.com:ottok/entr.git [remote &quot;upstream&quot;] url = https://github.com/eradman/entr fetch = +refs/heads/*:refs/remotes/upstream/* [branch &quot;debian/latest&quot;] remote = origin merge = refs/heads/debian/latest [branch &quot;master&quot;] remote = upstream merge = refs/heads/master</code></p><p>In this repository, the branch <code>master</code> is configured to track the remote <code>upstream</code>. Thus, if I am on the branch <code>master</code> and run <code>git pull</code>, it will fetch <code>master</code> from the upstream repository. I can then check out the <code>debian/latest</code> branch, merge in <code>upstream</code> and do other changes. Eventually, when I am done and issue <code>git push</code>, the changes on branch <code>debian/latest</code> will go to remote <code>origin</code> automatically. The <code>origin</code> has 3 <code>pushurl</code> entries, which means that the updated <code>debian/latest</code> will end up on both the Debian server as well as GitHub and GitLab.</p><p>The commands to set this up were:</p><p><code>git clone git@salsa.debian.org:debian/entr.git cd entr git remote set-url --add --push origin git@salsa.debian.org:otto/entr.git git remote set-url --add --push origin git@gitlab.com:ottok/entr.git git remote set-url --add --push origin git@github.com:ottok/entr.git git remote add upstream https://github.com/eradman/entr</code></p><div class="relative header-and-anchor"><h2 id="h-keeping-repositories-nice-and-tidy">Keeping repositories nice and tidy</h2></div><p>As most developers use feature and bug branches to make changes and submit them for review, a lot of old and unnecessary branches will start to pollute the git history over time. 
Therefore it is good to check from time to time what branches have already been merged with <code>git branch --merged</code> and delete them.</p><p>If a branch is deleted remotely as a result of somebody else doing cleanup, you can make git automatically delete those branches for you locally as well with <code>git config --local fetch.prune true</code>. You can also run this as a one-off with <code>git fetch --prune --verbose --dry-run</code>.</p><p>When working with multiple remotes, it might at times be hard to reason about what will happen on a <code>git pull</code> or <code>git push</code> command. To see which tags and branches would be updated and how, without actually updating them, run:</p><p><code>git fetch --verbose --dry-run git push --verbose --dry-run git push --tags --verbose --dry-run</code></p><p>Using the <code>--dry-run</code> option is particularly important when running <code>push</code> or <code>pull</code> with <code>--prune</code> or <code>--prune-tags</code> to see which branches or tags would be deleted locally or on the remote.</p><p>Another maintenance task to occasionally spend time on is to run this command to make git delete all unreachable objects and to pack the ones that should be kept forever:</p><p><code>git prune --verbose --progress; git repack -ad; git gc --aggressive; git prune-packed</code></p><p>To do this for every git repository on your computer, you can run:</p><p><code>find ~ -name .git -type d | while read D do echo &quot;===&gt; $D: &quot; (cd &quot;$D&quot;; git prune --verbose --progress; nice -n 15 git repack -ad; nice -n 15 git gc --aggressive; git prune-packed) done</code></p><div class="relative header-and-anchor"><h2 id="h-better-git-experience-with-liquip-prompt-and-fzf">Better git experience with Liquid Prompt and fzf</h2></div><p>It is not practical to constantly run <code>git status</code> (or <code>git status --ignored</code>) or to press <code>F5</code> in a <code>gitk</code> window to stay aware of the git repository status. A much handier solution is to have the git status integrated in the command-line prompt. My favorite is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://linuxnatives.net/2020/liquid-prompt">Liquid Prompt</a>, which shows the branch name in a nice green if everything is committed and clean, red if there are uncommitted changes, and yellow if changes are not pushed.</p><p>Another additional tool I recommend is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://linuxnatives.net/2021/save-time-command-line-fuzzy-finder">Fuzzy Finder fzf</a>. 
It has many uses in the command-line environment, and for git this alias is handy for changing branches:</p><p><code>alias g-checkout=&apos;git checkout $(git branch --sort=-committerdate --no-merged | fzf)&apos;</code></p><p>This will list all local branches with the most recent ones topmost, and present the list in an interactive form using fzf so you can select a branch either with the arrow keys or by typing part of the branch name.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/1d456daf-449b-431c-9889-ba6602cc2e5c_1246x765.gif" alt="Demo of Liquid Prompt and git branch selection with Fuzzy Finder (fzf)" title="Demo of Liquid Prompt and git branch selection with Fuzzy Finder (fzf)" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-bash-aliases">Bash aliases</h2></div><p>While git has its own alias system, I prefer to have everything in plain <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29">Bash aliases</a> defined in my <code>.bashrc</code>. Many of these are explained in this post, but there are a couple of extras as well. 
I leave it up to the reader to study the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/git-man/git-push.1.en.html">git man page</a> to learn for example what <code>git push --force-with-lease</code> does.</p><p><code>alias g-log=&quot;git log --graph --format=&apos;format:%C(yellow)%h%C(reset) %s %C(magenta)%cr%C(reset)%C(auto)%d%C(reset)&apos;&quot; alias g-history=&apos;gitk --all &amp;&apos; alias g-checkout=&apos;git checkout $(git branch --sort=-committerdate --no-merged | fzf)&apos; alias g-commit=&apos;git citool &amp;&apos; alias g-amend=&apos;git citool --amend &amp;&apos; alias g-rebase=&apos;git rebase --interactive --autosquash&apos; alias g-pull=&apos;git pull --verbose --rebase&apos; alias g-pushf=&apos;git push --verbose --force-with-lease&apos; alias g-status=&apos;git status --ignored&apos; alias g-clean=&apos;git clean -fdx &amp;&amp; git reset --hard &amp;&amp; git submodule foreach --recursive git clean -fdx &amp;&amp; git submodule foreach --recursive git reset --hard&apos;</code></p><div class="relative header-and-anchor"><h2 id="h-keep-on-learning">Keep on 
learning</h2></div><p>As a programmer, it is not enough to know programming languages and how to write code well. You also need to understand the software lifecycle and change management. Understanding git deeply helps you better prepare for situations where potentially hundreds of people collaborate on the same code base for years and years.</p><p>To learn more about git concepts, I recommend reading the entire <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/book/en/v2">Pro Git book</a>. The original version is over a decade old, but the online version keeps getting regular updates by people contributing to it in the open source spirit. As an example, I <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/progit/progit2/pull/1850">wrote</a> a new section last year about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work#_everyone_must_sign">automatically signing git commits</a>. Skimming through the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs">git reference documentation</a> (online version of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Man_page">man pages</a>) is also a great way to become aware of what capabilities git offers.</p><p>What is your favorite command-line git trick or favorite tool? Comment below.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/4f19857c3298d8235f121389815ac506.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Learn to write better git commit messages by example]]></title>
            <link>https://paragraph.com/@otto/learn-to-write-better-git-commit-messages-by-example</link>
            <guid>fBxB8WtaSswPbKdOAA47</guid>
            <pubDate>Sun, 18 Feb 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[When people learn programming they – for completely obvious and natural reasons – initially focus on learning the syntax of programming languages and...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/eab2d4850f2aa1ffd6802542a4feedaa.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGd0lEQVR4nBXUfVCa9wHA8V81TRNTUQrIu0F4BORdBBF4QEAfCCCCL6DGR2hQk/ga0SAhjRob62uiNVEbziTExgSL0ihRq03iRRsvdzFtWrslfXG9bdl2111v2+22/rH9sV3uvv9//vuCZDwRJGLM9jKpAsbhCXrEqDWaxbI8uUqTp9FrC46YS53FVajTfczT0Nx6+nTf0MCV0GRzS7O/s2P0Yn9sbmY5NntlqHsmdDEavry5Hl1fnP31b68WwuP/++8/4tEwSMISwJuYctSj0BZQqbRKt7uyvkGhyS+0FBdabfYK9HVVNU3+YOB874fT0x9/MieTyykUKpFMqa4oWVuOLURv9Z5tuxsN34tF7q9EP49H/vOvn8MTg79/uRO+Oga4Ujmekl7h9uaodBAE1XhrvW1+S6nL6CizV6BqA5KwLzkVS2g8HRibvj49e0cizUlITORwsshkshnRT16+OD423H3Wv/7Z0vzczc2NtdjczKvf7Y70nfvp5ZeLCxGQlaNOS2e6arwiuRqCoJOn2s70Dxe7qktRd0kVmox5BwCw/0CKGtYOh6ZHQ9dgjU7AYcNSvlaRrc5T9Ph9l0YHz3UF19fvzc7OLC3NX7828fOffxwZfP/733x5ZzYMDmKJb2EIVZ5ariRXwOfVNTW1dvWY7KVN/mCl2wPAfgIWb5AJfF5X79Dwwv2H9zYe1TqLywyKYJ1zyN8UmfpwYGTIH+icj30Smg4tLsWuhib2vt8NnvE/ergyF5kBpHQWjkIvO+pm8STZYlGrv7N7ZMxSXlHfekqRl8tjMU0qaSdqCw8GQx/fXtrY3N59sRCL2dRyP+q4Pdp1e7L/ViQ2PDW9eC/ePzwSiUZGRga+fr4TCHRubT5YXo4DljCbyGDZK9AMjjBXltPd1993+aMS1ONyOVOSDnKZ6VVHdHX2whujfbfiy+uPn2zv/va7n/7QcsxdYYRNWjlabpuMrKAN/vEbc9ORhfmF+Uujo3t7LwLBMyur8eXVFcCW5JGZbHtFNSOTB+fJ2oPngv3DNlc1xOEB8CaZRDrhMr9XXzM8ebNjYu7+068ePt356z//fTU8k5ycKuKwS2uOF3taVZoCvaVsKrKy/fzbs91dXzx+1NDc4j8TONXeDtI5IhIDMjqclHSWAVacHxjs6P2AxpeDRDxIoZBoh4+XGuwmg+7Ye2k6T8/1+MbTna+/20PrGkUymM/lCxSFuRqjHjHDiNXb3rX+ZBf1nqh0e3SIiSeSCEQSQGZxKIxMjbkET6abder6xhZWNgywh/elsUAKHVLba60wlSU4kFe6T2g0NlxY39r+dO2+3lyCFDnFKoNMa9IbLfkmhxKx64uP9n00421sE0rlErlSJJXxBGKAI9FIjEyt0Uyl0cusBq5YBlLSE0jsRCoPkLg4CQKrNKkyaypSmySzQbamiUisu38kR2PKN5WokWK4wFJocSiNDn6uXmUpP9k16Ov+gEhj0DPYXFE2VyAChzBYCoOj0hdALJa5UJdE5SQw5QdoPIAhvyFE3qLxU3haIHOQSzqoptrUfDQwftOBeiGxGi6wynVm2FgMm50ciTpTIJPrLUdbui9cCROpDAKFQaYzqQwIvH4FlSlWaZgs9ttEBqCIAFfHV+npEB+8k5kEOw+wVQDSAMNxwFQAprKo5QKVxSdDfK5ExZPni1UFkERNZHApjCyuVKm2VrX3T2WwhTgijUjLSMETgd7qzFbA5ZXV5EzhvrfxIqGYJciRyGQOi4nDy
95P4wO6VKJCnDUnTtR6OtoaVeUnIYjrKjoS8J3iSZRUSEBj8iisLLHGKjc5ZYWlNndLllSJwaZh8ZRDGDwQSuVyJazRIxgC9VACoOMw7MMkpYBTadRc6jkn01nZWZLT7rKt6LW50Hhnw7t6o8WGIDU281Rve1GhBhIqcoRcJjlNiZRoHe9qHW64qJLG4gKw/2AK8TWQiktLxRJwOAIOh8ckJxOJJKVaj6I1XYHO1c8fvfjT3+/eXQ1NhTY2Hj948s30nbvB3gFntae81OlFUcRgECny+Vx+JoMukir4aoTE4st1Fl9XL1xwBE+iAQAAPSNTKJXpjcayqsqGlpZWn6/R5/cFz4cXVr/a+8vzH159+8df9n75dePZy8s3IjOL67EH28tbO8tbz9ae7q5sPYuubd1Z2wzHPrsZf9Dc2SPNVTvddbcWV2cX42Oh6976xv8DYIZBKMXKobEAAAAASUVORK5CYII=" nextheight="402" nextwidth="768" class="image-node embed"><p>When people learn programming they – for completely obvious and natural reasons – initially focus on learning the syntax of programming languages and libraries. However, these are just tools. The essence of software engineering is about automating thought, applying algorithmic thinking and anticipation of the known and unknown. The code might be succinct, but the reasoning behind it can be extensive, and it needs to show in the communication around the code. <strong>The more senior a programmer is, the more their success depends on their communication skills.</strong></p><div class="relative header-and-anchor"><h2 id="h-communication-is-important-even-in-programming">Communication is important – even in programming</h2></div><p>One could even claim that software development teams <strong>thrive or fall based on how quick and efficient the feedback cycle</strong> about the code is and how well the team shares information while researching and solving problems.</p><p>At the core of code-related communication is <strong>git commit messages</strong>. When a team member shares a new code change for others to review, the <strong>speed and accuracy of the reviewers</strong> depends heavily on how well the <strong>intent of the change</strong> was described and motivated.</p><p>In addition to reviews, a great <strong>git commit also has permanent utility</strong> as part of the code base. 
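</p><p>When that day comes, the message behind any line of code is only a couple of commands away. A minimal sketch (throwaway repository; the file and message contents are invented for illustration):</p>

```shell
set -e
# Build a tiny repository standing in for a real project
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
printf 'line1\nline2\n' > app.c
git add app.c
git commit -q -m "Add app.c with initial logic" \
  -m "The body records why the change was made, for future readers."
# Find the commit that last touched line 2 of app.c ...
commit=$(git blame -L 2,2 --porcelain app.c | head -1 | cut -d' ' -f1)
# ... and print that commit's full message
git show --no-patch --format=%B "$commit"
```

<p>This is exactly the moment a well-written message pays off: the <em>why</em> is right there next to the code.</p><p>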
If it later turns out the commit had a bug, whoever is trying to fix it will have a much easier time reading in the commit what the change was supposed to do, and consequently understanding where it fell short, and will thus be able to rewrite the same change in the correct way. This leads to <strong>bugs being fixed much more quickly and with less effort</strong> – and most often the person doing the fix is a <em>future you</em> who no longer remembers what the <em>present you</em> was thinking while making that commit, and the future you just has to stare at the commit message and contents until it makes sense.</p><div class="relative header-and-anchor"><h2 id="h-common-mistakes">Common mistakes</h2></div><p>If you haven’t already, first read <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/good-git-commit/">How to make a good git commit</a>. In addition to knowing what a good end result looks like, it might be useful to learn the <strong>typical mistakes</strong> developers make and to know explicitly <strong>what not to do</strong>.</p><p>Repeating and extending the recommendations from the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#_commit_guidelines">Git Pro book</a> (authored by, among others, GitHub co-founder Scott Chacon):</p><ul><li><p>Never exceed 70 characters in the git title, and preferably keep it under 50</p></li><li><p>Use imperative format, not past tense: instead of “Fixed” or “Added”, write “Fix” or “Add”</p></li><li><p>Write at least one sentence in the git message body</p></li><li><p>Separate the message body by one empty line from the subject line</p></li><li><p>A title is like an e-mail subject line – no dot at the end</p></li><li><p>The body should use full sentences that end with a dot</p></li><li><p>Wrap the message body around 72 characters, lines 
should not overly long</p></li><li><p>Don’t use diary-like language to explain what you did, but rather what the change does: if the description starts with “In this commit I..” or “I checked..”, there surely is a simpler way to express it clearly and universally</p></li><li><p>If you have multiple commit messages that have exactly the same title, you are surely also doing something wrong, as the change itself for sure isn’t identical</p></li><li><p>Writing “Update ” is never a good description of the change, as it just states the obvious – instead, describe the <strong>intent</strong> of what the change tries to achieve</p></li><li><p>Don’t use AI to write your commit messages – AI can only see what changed in the files, it cannot possibly know <strong>why you made the change</strong>, which is exactly the essence of the git commit message</p></li></ul><div class="relative header-and-anchor"><h2 id="h-example-1">Example 1</h2></div><p>In this example the improved version has a more descriptive title that captures both what the change was, as well why it was made. The git commit message is restructured to explain the same thing it less repetition.</p><p>Initial Copy</p><p><code>Fix security vulnerabilities found by FlawFinder Fixing security issues found by FlawFinder. Project code base contains a number of old-style unsafe C function usage. In this commit we are replacing string functions: `strcpy()` `strcat()` and `sprint()` with the safe new and/or custom functions such as `snprintf()` `safe_strcpy()` and `safe_strcat()` The FlawFinder log before changes: $ cat flawfinder-all-vulnerabilities.html | grep &quot;Hits =&quot; Hits = 14955 After the change: $ cat flawfinder-all-vulnerabilities.html | grep &quot;Hits =&quot; Hits = 14668 The number of fixes - 287</code></p><p><code>Fix security vulnerabilities found by FlawFinder Fixing security issues found by FlawFinder. Project code base contains a number of old-style unsafe C function usage. 
In this commit we are replacing string functions: `strcpy()` `strcat()` and `sprint()` with the safe new and/or custom functions such as `snprintf()` `safe_strcpy()` and `safe_strcat()` The FlawFinder log before changes: $ cat flawfinder-all-vulnerabilities.html | grep &quot;Hits =&quot; Hits = 14955 After the change: $ cat flawfinder-all-vulnerabilities.html | grep &quot;Hits =&quot; Hits = 14668 The number of fixes - 287</code></p><p>Improved Copy</p><p><code>Fix insecure use of strcpy, strcat and sprintf in Connect Old style C functions `strcpy()`, `strcat()` and `sprintf()` are vulnerable to security issues due to lacking memory boundary checks. Replace these in the Connect storage engine with safe new and/or custom functions such as `snprintf()` `safe_strcpy()` and `safe_strcat()`. With this change, FlawFinder static security analyzer reports 287 fewer findings.</code></p><p><code>Fix insecure use of strcpy, strcat and sprintf in Connect Old style C functions `strcpy()`, `strcat()` and `sprintf()` are vulnerable to security issues due to lacking memory boundary checks. Replace these in the Connect storage engine with safe new and/or custom functions such as `snprintf()` `safe_strcpy()` and `safe_strcat()`. With this change, FlawFinder static security analyzer reports 287 fewer findings.</code></p><div class="relative header-and-anchor"><h2 id="h-example-2">Example 2</h2></div><p>In this example the title was changed to used imperative format, and to more precisely tell what was changed in order to distinguish the commit from other similar ones that fix <code>cppcheck</code> failures. The message body explains what the change does instead of what “we” did, and shows the error message verbatim so anybody seraching for the error message will find this text.</p><p>Initial Copy</p><p><code>Fixing cppcheck failure We have an error while running CI in gitlab &quot;There is an unknown macro here somewhere. Configuration is required. 
If DBUG_EXECUTE_IF is a macro then please configure it.&quot; Add a workaround - change problematic string with false alarm before cppcheck run then revert it back.</code></p><p><code>Fixing cppcheck failure We have an error while running CI in gitlab &quot;There is an unknown macro here somewhere. Configuration is required. If DBUG_EXECUTE_IF is a macro then please configure it.&quot; Add a workaround - change problematic string with false alarm before cppcheck run then revert it back.</code></p><p>Improved Copy</p><p><code>Add certain DBUG_EXECUTE_IF cases to cppcheck allowlist Cppcheck failed on error: There is an unknown macro here somewhere. Configuration is required. If DBUG_EXECUTE_IF is a macro then please configure it. This is a false positive and safe to ignore. Extend filtering to exclude it from cppcheck results.</code></p><p><code>Add certain DBUG_EXECUTE_IF cases to cppcheck allowlist Cppcheck failed on error: There is an unknown macro here somewhere. Configuration is required. If DBUG_EXECUTE_IF is a macro then please configure it. This is a false positive and safe to ignore. Extend filtering to exclude it from cppcheck results.</code></p><div class="relative header-and-anchor"><h2 id="h-example-3">Example 3</h2></div><p>Here again the title was made more specific about which fix this is about exactly to distinguish it from other similar fixes, and since both the error message was known and the previous commit that caused it was identified, they are included in the git message to clearly justify the change, as well as make debugging similar things much easier in the future.</p><p>Initial Copy</p><p><code>Releaser fix releaser failed due to missing manifest path. Adding return statement to the function</code></p><p><code>Releaser fix releaser failed due to missing manifest path. 
Adding return statement to the function</code></p><p>Improved Copy</p><p><code>Add missing return to find_manifest_file() The refactor in f9f6d299 split load_manifest() into two functions by simply copy-pasting the lines. This omitted that the new function needs to have a `return` added, otherwise it might return `None`. This fixes the releaser failure about: line 214, in get_engine_name_from_manifest_file with open(manifest_filename, &quot;r&quot;) as manifest: TypeError: expected str, bytes or os.PathLike object, not NoneType</code></p><p><code>Add missing return to find_manifest_file() The refactor in f9f6d299 split load_manifest() into two functions by simply copy-pasting the lines. This omitted that the new function needs to have a `return` added, otherwise it might return `None`. This fixes the releaser failure about: line 214, in get_engine_name_from_manifest_file with open(manifest_filename, &quot;r&quot;) as manifest: TypeError: expected str, bytes or os.PathLike object, not NoneType</code></p><div class="relative header-and-anchor"><h2 id="h-example-4">Example 4</h2></div><p>This example shows make the title more specific by spelling out what component exactly is extended and with which variables. In the descriptiong use imperative ‘Add’ instead of ‘Adding’, and restructure the text to clearly say what is being done, followed by <em>why</em> it is useful, and include explanation about backwards compatibility to further champion that the change is safe to do. Also fix line breaks and add space between paragraphs.</p><p>Initial Copy</p><p><code>Add TLS version to auth plugin available variables The authentication audit plugins currently do not have access to the TLS version used. Adding this variable to list of available variables for audit plugin. Logging the TLS version can be useful for traceability and to help identify suspicious or malformed connections attempting to use unsupported TLS versions. 
This can be used to detect and block malicious connection attempts.</code></p><p>Improved Copy</p><p><code>Extend audit plugin to include tls_version and tls_version_length variables Add tls_version and tls_version_length variables to the audit plugin so they can be logged. This is useful to help identify suspicious or malformed connections attempting to use unsupported TLS versions. A log with this information will allow to detect and block more malicious connection attempts. Users with &apos;server_audit_events&apos; empty will have these two new variables automatically visible in their logs, but if users don&apos;t want them, they can always configure what fields to include by listing the fields in &apos;server_audit_events&apos;.</code></p><div class="relative header-and-anchor"><h2 id="h-example-5">Example 5</h2></div><p>In this example the title can be simplified to summarize the change. The change was initially a test, but later became permanent. The contents of the change did not change, however, so in this case it was better not to describe the lifecycle of the commit in the git commit message title, but instead to keep that information elsewhere among the developers, or perhaps let it be inferred from the fact that the commit was initially on a development branch and only later applied on mainline.</p><p>Also avoid writing in “I have ..”, and instead use the imperative format that describes what the change is, and most importantly extend it to explain <em>why</em> the change was made. Who renamed what file or changed what line of code is always visible in the git commit anyway. The focus of the git commit message should be on communicating the intent of the change, as <em>why</em> something was changed isn’t always obvious, yet incredibly important in order to assess if the change is correct or not.</p><p>Initial Copy</p><p><code>Switch to `new-archives` branch to test the new archives layout I have renamed `archives.html` to `.archives.html` to disable overriding. I have modified archives.md to output JSON.</code></p><p>Improved Copy</p><p><code>Use new layout for archives page Update theme to version with new archives page and disable the old archives page. 
Ensure new archive page is also published as JSON so that the interactive search can use the JSON file as backend.</code></p><div class="relative header-and-anchor"><h2 id="h-easiest-way-to-update-a-commit-message-git-citool-amend">Easiest way to update a commit message: <code>git citool --amend</code></h2></div><img src="https://substack-post-media.s3.amazonaws.com/public/images/ae3a88a1-8153-4e46-90fc-9edadf63249b_1317x617.png" alt="screenshot of ‘git citool’" title="screenshot of ‘git citool’" class="image-node embed"><p>Writing a good git commit message while preparing a code submission is much easier if you follow this process:</p><ol><li><p>Start writing code changes</p></li><li><p>Save intermediate changes with <code>git commit -am WIP</code></p></li><li><p>Test, polish, iterate</p></li><li><p>Run <code>git citool --amend</code> to polish the git commit message when the code change is final and you can see the whole change and thus are able to easily explain what you did and why</p></li><li><p>Rebase on latest main branch, e.g. <code>git fetch origin main; git rebase -i origin/main</code></p></li><li><p>Push to code review</p></li></ol><p>If you find yourself doing frequent rebases and amends, congratulations! It means that you have mastered the craft of preparing great code submissions.</p><div class="relative header-and-anchor"><h2 id="h-good-git-commit-messages-help-you-avoid-duplicate-effort">Good git commit messages help you avoid duplicate effort</h2></div><p>One extra benefit of having a great commit message is that you don’t have to rewrite anything when submitting the code for review. 
Every single code review system I have ever used will <strong>automatically use the git commit title and message as the review title and message</strong> (at least if the review is a single commit review).</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/04d30928-c462-4948-9435-06963d6956b7_1254x732.gif" alt="Screencast of git commit message automatically being reused as the merge request description on GitLab" title="Screencast of git commit message automatically being reused as the merge request description on GitLab" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-why-you-should-always-polish-the-commit-message-even-if-the-commit-does-not-feel-important">Why you should always polish the commit message, even if the commit does not feel important</h2></div><p>If you are proud of your work and like doing things well, you will follow these guidelines by nature. However, some lazy ass might say that while they agree with the principles, they <em>don’t have time to follow them</em>. To that I always respond that <strong>humans tend to get really good at the things we practice</strong>. If you do the wrong thing over and over, you become an expert at doing things incorrectly. Is that really what you want?</p><p>Doing things correctly from the outset will steer you away from situations where you are knowingly producing lousy quality just to “save time”. Practicing doing something well will ultimately lead you to become <strong>a person who does that thing well, <em>effortlessly</em></strong>.</p><p>Have you run into situations where you find it challenging to write a good git commit title and message? Share your example in the comments below and I will try to help you formulate the text and capture the essence of the change in a concise title and description.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/eab2d4850f2aa1ffd6802542a4feedaa.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[When everyone else is wrong]]></title>
            <link>https://paragraph.com/@otto/when-everyone-else-is-wrong</link>
            <guid>V5U2EP6Uoh0MBPulxKoL</guid>
            <pubDate>Fri, 19 Jan 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[The stock market is a powerful globally distributed forecasting system. Last Friday it was forecasting a rosy future as the MSCI World index got to a...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/cc0a7fced4ba57c7b0c9871571f2c65a.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAD9ElEQVR4nGNggAEJeWVZdX0Im09M2tEj0MrR08zGxd7dz8bZ28rRMzQuNTQunRgUnZKTmFUUnZITnZLDYGbjAjE0IindzMULxOLgZ+QSEJVWEJdV5hIUV1TTFpGU5xGSSMkuzCqqIAal5pTomNq4+4UmZhUxFFQ0hMalZhVV/Pjx88ePH/+RwN9//5AZ34gA7z98/P////lLV6wcPX1DYkE+SMwq0je38wmJgljw/fu3H5SBjx8/TZu9wNjayTsoEmqBlaO7X2jMjx8/////T4aJ/0HgH4T97du3Neu3bNu9D2IBKIgyCysdPfydfYJnLV596dbdb79+/fr5k3ijX7x8tWb9llu37378+On///879x68cvXardt3rRw97d19XX1DGbyDIs1sXEJik9cfPNo6e0HvohXXrl2HuwgP+P//38ePnxavWPvw0eMjx05t2rbr0NHjG7fu+v///7mLl2ycPFV1jDUMzBh0TK11TG1CohIgMXnwwpVpC1f8/vUbHFw/v337hhnt8MDcsHXnvQePIFIQr0D8AYlkLSNLLSNLUBzYu/tFJaR++waK3t9//kxYvBLsif8PHz2ePW8xkpNB4N6DRw8fPT5/6cqyVevPX7oCsQmShP7////23fsX79+fu3jJ0cNfz8zWyNoRZIGZvXtQZDxE3a+fP49evtY+edb7t+82bt21cPlqiGUvXr569uLlzr2HNm3btXzV+g2btoNc8+v3r58/4ejJqze7z1589u7dlavXEJGMbAEEfP76de+5i83T5i7ftvvIjVtzl6y8dePG8lXrDxw4cu3a9V8/f166++Dgxaunbtw+cunawYtXrz94fOL6rRPXb+2/cOXZ67f///8/ffaCvrmdd1BkfGYhOJk6eYXFJKGF9e8/f/7////7z5995y9NXb721I3bd9+8+fj9+8ev307duw9R8O3Xr4/fvz/58OHbr1/ffv2Ca79x6469ux9yPvD0CgrbuHXXhq07Z8ye39bZu2HrziUrVjc0ty9esXbh8tWFtY2TZi9YtGpDRkV9UlnNqi07p82Y09k3acOm7bPnLmpq6VizfsuSFavLqurnL16+fc/Bkso6OTUdqAWRyVkeQdFufmHmDp7Wrn4icqr8kgrWrn7KWsZi8mq+4fF27n48wpI6pjZWjp5CEnJK6nqOHoH84rKSihoBEUnG1s7yarq+4fHOPsGSihpm9u5m9u7sAhJ6ZrbQwi40Lp1PSoVDVF5YXgOMNA2snIXlNQTl1ENi0xKziryDIhXUdBTUdOVUtTUMzORUtRXUdBXUdHRMrUPjUhOzijILKyFm5ZfX2bv7ickoyalqi8ko27v7JWYVAQBCB5UUKhxcJgAAAABJRU5ErkJggg==" nextheight="450" nextwidth="958" class="image-node embed"><p>The stock market is a powerful globally distributed forecasting system. Last Friday it was forecasting a rosy future as the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.investing.com/indices/msci-world">MSCI World index</a> got to a new all-time high. 
But it does not make sense.</p><p>There is no single entity controlling stock prices. They represent the collective best guess of the whole economic world on what the value of each company should be. <strong>Historically the market has been pretty accurate.</strong> For example, when the news broke about COVID-19 spreading, stocks plummeted in anticipation of a global crisis. And indeed, a pandemic followed that forced a large part of the economy to a standstill, and major government intervention was necessary to avoid total mayhem. As another example, in the first month of the Russian invasion of Ukraine in February-March 2022, the stock price of Cheniere Energy went up almost 50%, as the market predicted that this American liquefied natural gas producer would hugely benefit from Russian competitors being blockaded. Again, the market was right – by the end of 2022, Cheniere’s profits had doubled.</p><p>The MSCI World index has grown about 30% since the low point in the fall of 2022. The U.S.-focused S&amp;P index is even more extreme, having grown almost 40% since the low point in the fall of 2022, ending last Friday at 4840, slightly above the previous closing high record of 4797 set on Jan 3, 2022. Investors participating in the market are collectively forecasting that the U.S. economy has recovered and things will be fine. <strong>There is just one problem.</strong> Everyone is wrong.</p><div class="relative header-and-anchor"><h2 id="h-the-us-market-is-looking-way-too-good">The U.S. market is looking way <em>too</em> good</h2></div><p>The S&amp;P will soon probably surpass 5000 points, but this party can’t last for long. 
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.reuters.com/markets/us/sp-500s-wild-ride-an-all-time-high-2024-01-19/">Reuters has a great summary</a> of economic events overlaid on the S&amp;P 500 in the past two years:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/6080515a-6d9f-4173-b8e2-e0ee616d58ee_915x699.png" alt="Reuters graph on S&amp;P 500 development in 2022–2024" title="Reuters graph on S&amp;P 500 development in 2022–2024" class="image-node embed"><p><strong>The situation seems very contradictory.</strong> Increased inflation should affect the purchasing power of consumers and companies negatively. While salary increases help consumers survive, and higher sales prices help companies offset increased costs, undeniably the overall effect is still negative. The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/USACPALTT01CTGYM">U.S. Consumer Price Index</a> growth has slowed down a bit since the peak in 2022, but prices are still growing faster than in all of the 2010s. Due to inflation, the U.S. Federal Reserve increased its <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/DFF">interest rate to 5.5%</a> in July 2023, and still keeps it there. High interest rates discourage companies from taking loans and making investments, so for the time being the economy should slow down, not accelerate.</p><div class="relative header-and-anchor"><h2 id="h-is-the-money-printer-is-on-again">Is the money printer on again?</h2></div><p>Following the insolvency of the Silicon Valley Bank on March 10th, 2023, the U.S. Federal Reserve announced it would guarantee the 200+ billion dollars the bank had lost. Soon it did the same for a few more banks. Essentially the Fed printed 300+ billion dollars in March 2023. 
When you overlay the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/SP500">S&amp;P 500</a> and the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/TOTRESNS">U.S. Federal Reserve balance sheet</a> the correlation seems pretty strong:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/ae017b44-42fd-4d97-af02-bda2da1b19eb_958x450.png" alt="The S&amp;P 500 index and the U.S. Federal reserve balance in 2022–2024" title="The S&amp;P 500 index and the U.S. Federal reserve balance in 2022–2024" class="image-node embed"><p>The upward trend in November and December might be related to U.S. lawmakers passing so-called “appropriation bills” to allow the U.S. government to continue overspending. Similar bills and resolutions are <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.pgpf.org/blog/2024/01/continuing-resolutions-were-designed-to-be-stopgap-measures-but-now-we-average-five-a-year">likely to repeat on March 1st and 8th, 2024</a>. All of this makes me think the stock prices are out of touch with the underlying companies’ growth, and mostly just a function of how much money is being pumped into the U.S. economy by the government and the Fed.</p><p>This is not exactly anything new. Taking on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/GFDEBTN">more national debt</a> seems to be the U.S. policy, no matter which president or party is in power:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/8b041b6f-93c1-4a78-b106-812c7eaf5669_958x450.png" alt="The U.S. government public debt growth in 2003–2023" title="The U.S. government public debt growth in 2003–2023" class="image-node embed"><p>What is new is the scale it has reached now. 
The sum is rapidly approaching <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fiscaldata.treasury.gov/americas-finance-guide/national-debt/">34 trillion</a> U.S. dollars. That is 34 000 000 000 000 (12 zeros!). <strong>The sum is insanely large.</strong> Think about the most expensive single item one could buy: a USS Gerald R. Ford class nuclear powered aircraft carrier has a price tag of 12 000 000 000 (12 billion) dollars. The U.S. has 11 aircraft carriers and all other countries combined have 6–9, so in total under 20 in the world. With 34 trillion one could buy 2833 aircraft carriers at 12 billion apiece.</p><p>The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cbo.gov/publication/59946">U.S. budget projections for 2024-2034</a> are a grim read. The deficit in 2024 is 1.6 trillion USD, and is forecast to fluctuate between 1.6 and 2.6 trillion for the next 10 years. The interest payment for the national debt is 870 billion in 2024, and 951 billion in 2025 – the first year when U.S. debt interest payments exceed the U.S. national defence budget!</p><div class="relative header-and-anchor"><h2 id="h-is-this-sustainable">Is this sustainable?</h2></div><p>In 2024 the U.S. government is going to issue 1.6 trillion USD worth of completely new bonds. In addition, the U.S. government also needs to issue several trillion worth of new bonds in order to pay old lenders for bonds that mature in 2024. Somebody needs to buy this 5–10 trillion of U.S. bonds, and it is very unlikely the normal money markets would be able to absorb this supply. Thus, the Federal Reserve bank system will need to step in and “print money” to buy up the bonds that don’t sell on the free market.</p><p>The USA is not alone in having a lot of debt. The national debt in Greece is around 150% of their GDP, and Japan has been at over 200% for a decade. However, the U.S. 
has a huge GDP, so having a 120% debt-to-GDP ratio is exceptional in absolute numbers. Consider that <strong>when comparing national debt per capita, Japan and the U.S. both hover around 100k USD</strong> (depending on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://countryeconomy.com/national-debt?anio=2022">source</a>), so actually the U.S. situation is already very worrying.</p><p>What is extra concerning is that for the past decades the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/GDP">GDP growth in the U.S.</a> seems to correlate with the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/GFDEBTN">government overspending and taking on debt</a>:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/d2bd2e95-96cb-4959-ace0-1f4f3f6944dc_958x450.png" alt="The U.S. government public debt and gross national product growth correlation in 1980–2023" title="The U.S. government public debt and gross national product growth correlation in 1980–2023" class="image-node embed"><p>This begs the question: how much of the economic growth in the U.S. has actually been based on printing money to begin with? How much has there been improvement in real productivity? Due to how GDP is accounted for, public debt automatically increases it, so some correlation is expected, but a very strong correlation as seen in the graph above could indicate that the real economy is not growing, only debt is. 
The stock market is supposed to grow over time as a function of the GDP and overall productivity growing, but is that really happening now?</p><p>One interesting observation when inspecting the S&amp;P 500 and the MSCI World index <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.msci.com/constituents">constituents</a> is that they are both dominated by just seven large U.S. tech firms. However, their effect on U.S. GDP is limited, as their corporate profits are booked in Ireland due to tax planning. This has led the GDP of Ireland to skyrocket in the past 10 years <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fred.stlouisfed.org/series/PCAGDPIEA646NWDB#0">from 50k to 100k USD per capita</a>. If the U.S. government runs out of money, it will be much more likely to crack down on tax evasion practices and force the large tech companies to pay more corporate tax in the U.S. Seems like a major disruption factor to me. Yet their stock prices keep growing.</p><div class="relative header-and-anchor"><h2 id="h-expectations-on-the-magnificent-seven">Expectations on the “magnificent seven”</h2></div><p>The extremely high valuation is justified only if these seven companies are about to make a huge productivity leap that lifts the entire U.S. economy, including creating 1.7 trillion in additional tax revenue to close the deficit gap. <strong>Seems the stock market is predicting exactly that</strong>. Sure, artificial intelligence will help increase productivity everywhere, but I find it very hard to believe that AI productivity gains would accumulate on these tech companies to such a degree that their astronomical valuations would be justified. 
To me it seems that <strong>the market is now just plain wrong.</strong></p><p>According to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.world-exchanges.org/">World Federation of Exchanges (WFE)</a> there are over 58 000 stock-listed companies in the world. There should be plenty of options for the stock market to bet on. Yet the market predicts that U.S. stocks in particular will be future winners.</p><p><strong>My prediction for 2024 is a large correction across the U.S. stock market.</strong> If it does not happen already during the spring, it will surely come at the latest after the U.S. presidential elections in November, when <em>keeping up appearances</em> ends and policy makers are ready to face reality.</p><p>What is your read of the situation? Post in the comments below.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/cc0a7fced4ba57c7b0c9871571f2c65a.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Make habits, not goals]]></title>
            <link>https://paragraph.com/@otto/make-habits,-not-goals</link>
            <guid>qmM7WLuCty6yBPbYLH7A</guid>
            <pubDate>Fri, 29 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[First we make our habits, and then they make us― John Dryden, poet and literary criticAre you perhaps planning to make a new year’s resolution to run...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/8e6a0faba5eebf86b78311131fb27b45.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGfElEQVR4nAFxBo75ABYUDS4rGzMvITAtHlxYRq61srnCu4eHfoiGes/Qx9bVztHSy9va09zc1ejk3ezo4enm3quuqBYeHwsTFjE4OVVbWlxjYjI5ORQaHBAXGQ0WFwsRFwsRFhARFAkOEg0OEwAbGxIxLx8yLyMbHBY5OCl2d26xuLa5v7q5vbbLy8TV0cfZ1Mrk39Xr5Nny6uDz7OLy7N+SlI8AChA9RESBhYI7QUEFDhMEDBEVGRoaHyAQFhcKEBMPEBIQERQMERIKEBUAMS8eIyMZICAXIiMeIyMbb3FjvcS/wMW9ys3Dz9DH1tXK3NjO39zQ4N3S49/V6OLV+vHlzsjAVlhVhIZ/KC8uBAwRGBwdJSgnIyMgIiMhLC8oHh8cEBMUDRAVCg4TCA4TADYyIC4rHBcXD2ViVqelncXEu9PQx8HCvL2/u7W7tru9ucDBvb/CvcXFvs3KxNbRydzVzurj27u5sSwvLgADBxwhIEpMSF1fWDw8OC8vKSYpJRcgHgsXFwoTFwcUFgoTFwAdHBUYGRMwLyFva1zCwLbIxrzOyL3DwrrCwLnFxLvIxr/KyMDTzsTb1Mrf1s3k2s7l2M767uCJh4AABQcwMjB4eXJgYFkpKiY/PzlISUISGxkIFBYHDxQGEhUHERUGDhIALiobJCMWfndm3dTJ3NLF4tfH6NzN7+LU9efY++vd//Dh//Pj//Df/vDf/e7f/e7f/O7e//ntxL2yPDw2uq+hvrmsLS8ob2xhd3dsGSAgCBQWCBASBQoMBggMBAYIBgYJAB4dFDU0JWttZW1xcG1wbnFycW5ycYaEfpiUipGQibKup83Fu/Dl2P3x5f/26v/67f/67f/67v/y5dfNwvvu3mxpX3dzaNTOv0NFPwoQDhUXGAsMEAYGDQQGCQIDBwQGCAAlIhZCQjQkNzkfMTAjNzkeNjwjOjwrP0AvQ0QvSEorR0s+TkyGgHG7rJu/t6m7uLHKyMLi29Tt5tz/+u/57uO4saTw6N2zrqY2ODM/QDsQExMLDA4ICgwEBgkAAQUFCg8ALSgbMy8hISEaICAaNjcnamNNiHponYx4rJuFoJN/iYJyhIFsa2xZZmNWin5rn456mIt8s6SQtKiXsqiZqKSclJmVlJuXiI2FmqOcQ0dDDxEPFhgVCAgKBAQJAQoPAw4UAB0bFRsZEwQEBQABAxERDDs3ImZeSXRoW4FzZZKCcc6wkuC+nNi5mrSfioF9cYp/b7mghrGdh8Otk8KrlbyqlLamkqebi4uLg1ZiXywvKCYmHREUFAQOEgkRFhQdHiAlJQBoU0BWSTc8MyckIBscGBQZGA80MR04OSobKywPJStMTkiUgm2Ad2Vxal1eYFUfPkRFXGGRhnmxnYerm4W2pY6yoovBsJbCspqvppB/fmwmKigPGBobJiceJygbISNPST4AsI5mwp52wJt2sZBvpopqf2lQbl1FgW1SZVlEQDwuFhsYJzAwOD08VFNPd29pHDY+ACk3dnFln5B+sZ2Fjn9op5J3wqqIoo50dmtWMjYyHiorKS8tMzMvNzUuLSooMy8nAG5YPI9xTqeEX6aGYaeFYayKZbCOZcWieNi0j8Skg6+SdZaBam5jTlVPQ1laVEZLSiMjIE9KN09KO1xVRnFjU4x5X1FJNw8RECMlJDw5NFJJO0E6Lk5DNWZWQkY8L0g8LQA8MSM6LyBKOyZSQSs9MSJjTjJcSC9LPSeafVnKp32yk2rBon27n3mLel9ta
FxnYVV3aFyNfGKZhGmskXGLeF0wLSJIPSx4ZEeVeFWeflWvimGNb06CaUiRck97YUJNQSoAGRgSHhsWRDojU0MqNCweSj0mUEYpKicZTUMmd2U8aV07d2tLmYRjhXdggHNmopGApZiEjoFmKCohFx0YAAoGOzcrhXNQmoRWVUs1ZlU3tJBhn39Ug2lBe2M/aFU4TD4nABQQDxAQCigmGEpAJUI7IUpDJlBIKl1QNFdKOWJVSX5uX4hyWHdgRy0qJ3NnV6eTfLunk6ybiHFfUTcuJSAeGRgbFxwhGhcbFB8fGGVZPUE8JikmGkY8JicnGxwaEjAoGgAfHRQMCgkICgM0Mh1MQytVSDNYSDpgUkWDcWCNd2KMb1GDaUgnJCAwLi2Dcl11ZVWhjHetl4V3bF9DPjchIB8SExUFCg8eHxo+PCwKEhAABQkWFhUWGhUAAwMKCAYcHBbKsHTnkRqksgAAAABJRU5ErkJggg==" nextheight="402" nextwidth="768" class="image-node embed"><blockquote><p>First we make our habits, and then they make us</p><p>― John Dryden, poet and literary critic</p></blockquote><p>Are you perhaps planning to make a new year’s resolution to run a marathon? Or are you committing to a new sales quota at work for the year 2024? If you haven’t achieved the goal by July 2024, will you be unhappy? Will you give yourself some slack and simply postpone the goal? And if you exceed it, will you immediately make a new goal? Would you feel happy about it?</p><p>Most people are really bad at setting and reaching goals. First of all, people seldom reach their goals. In environments where people consistently achieve their goals, it is usually because the goal was set too low to begin with. In rare cases where goals are set high and then met, or even far exceeded, the fact that the goal existed or the level of the goal doesn’t seem to make any difference.</p><p>As I see it, thinking about goals isn’t healthy. I find it a much better approach to think about habits. <strong>Habits focus on the immediate action instead of a long-term outcome.</strong> People tend to be much more efficient and also happier after adopting habits. For example, if you want to run a marathon, start by adopting a habit of running 10 km twice a week instead of the sporadic running you would do otherwise. 
If you work in sales – or if you manage a team of sellers – don’t think about the annual sales quota but instead focus on adopting a habit of reaching out to, for example, 10 new customers daily.</p><p>Habits are better than goals because:</p><ul><li><p><strong>Habits are adopted immediately.</strong> You know in a matter of days if you are keeping the habit or not, while goals make it too easy to postpone doing the work.</p></li><li><p><strong>Habits are easier to plan.</strong> Selecting a goal that is concrete and measurable is hard. Even if a good metric exists, it is hard to choose a level low enough to be reachable and thus motivating to work towards, yet high enough to be admirable and to fuel people to make extra effort. For a habit, you pick something concrete, and just do it consistently.</p></li><li><p><strong>Habits are easy to track and measure.</strong> It is immediately evident if somebody is not sticking to a habit. There is no denial, just effort to get back into the habit. Goals are vague, and not meeting a goal is evident only when it is too late to do anything about it.</p></li><li><p><strong>Habits can be sustained</strong> for years and years. Goals often compel acts of heroism, which are not sustainable in the long run. As Bruce Lee once said, <em>“long-term consistency trumps short-term intensity.”</em></p></li><li><p><strong>Habits make people happier.</strong> If you forget or are unable to do something, just get back into the habit the following day. If you fail to meet a goal, you just feel miserable and have no immediate way to rectify the failure.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-first-step-adopt-a-new-microhabit">First step: Adopt a new microhabit</h2></div><p>The smaller the step is, the easier it is to take. Microhabits are a concept of improving something, one tiny step at a time.</p><p>An example of a microhabit could be the act of drinking a glass of water every morning. 
This is something I have been doing myself for almost 3 years now: Every morning, I go straight out of bed to the kitchen and drink a glass of water. Only after that do I allow myself to brush my teeth and do other things that are part of the usual morning routine.</p><p>This is a great microhabit, as after a full night’s sleep, one is bound to have a dry mouth and a little bit of dehydration, for which water is the best cure.</p><p>Starting the day with a glass of water gives a nice head start into a larger habit of drinking water frequently. People typically don’t drink enough plain water, even though it is cheap (practically free), and ensuring good water balance takes away all symptoms of dehydration, such as headache and tiredness. For some reason, modern humans tend to prefer other drinks, such as beer or coffee, which can actually make dehydration worse. Water is essential for all life on earth, and we need to stress the importance of it. Humans can go without food and fast for 2–3 weeks, but without water, we perish in only 3–5 days. Water has zero calories, and the sensation of fullness from drinking benefits weight control.</p><div class="relative header-and-anchor"><h2 id="h-second-step-dont-give-up-stick-to-the-habit-until-it-becomes-effortless">Second step: Don’t give up, stick to the habit until it becomes effortless</h2></div><p>If you lapse from a habit one day, don’t worry, just get back into the habit the next day. Think of ways to remind yourself of and strengthen the habit. The above example of drinking a glass of water every morning can be strengthened by keeping a water purifier filled with water on the kitchen countertop. Every time you walk past the kitchen, it will remind you of the habit. This setup also minimizes the effort required to perform the microhabit, as you always have the water easily at hand. 
Personally, I also think water at room temperature feels healthier than drinking ice-cold water directly from the tap.</p><p>The fact that you decided to do this, and that you keep doing it over weeks and months, will help strengthen your willpower. Studies on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Neuroplasticity">neuroplasticity</a> show that it will physically help rewire the circuits in your brain for making a decision and sticking to it. Once you master this one microhabit, you’ll find it easier to adopt other habits that take more effort to get into.</p><p>As explained in a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://doi.org/10.3389/fnins.2022.699817">review article in Frontiers in Neuroscience</a> of 63 meta-analyses, this occurs because of an increased ability to exert effortful control in our brains. Simply put, as with all biological organisms, our brain has evolved to save energy and only think in situations where that extra energy consumption is necessary. Most of the time, our brains run on autopilot, which means not only that the existing pathways keep getting reinforced, but also that the pathways of the pathway control system itself stay weak, as new pathways don’t need to be formed. Hence, when we take on a new habit and exert the mental effort to repeat the routine with conscious intent over and over, it leads to the creation and reinforcement of pathways related to the habit. 
The day a habit has grown strong enough to become part of our brain’s autopilot program, the system that decides what goes into the autopilot and what goes out is also at its strongest.</p><div class="relative header-and-anchor"><h2 id="h-third-step-increase-the-number-of-microhabits">Third step: Increase the number of microhabits</h2></div><p>After successfully adopting your first intentional microhabit, the next step could be to either <strong>expand the first microhabit</strong> to make it more complicated, <strong>or to adopt a second microhabit</strong>.</p><ol><li><p>An example of expanding the first one could be taking a multivitamin pill with the water, or adding salt to the water to follow the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.hubermanlab.com/episode/using-salt-to-optimize-mental-and-physical-performance">Huberman routine</a>.</p></li><li><p>An example of an easy second microhabit would be to drink a cup of warm water every evening before going to bed. Since it is just water, you can drink it even after brushing your teeth. It feels somewhat filling, and helps me avoid drinking or snacking on anything else, which is likely unhealthy. The warmness of the water feels good, and I think it is plausible that it has similar health benefits to drinking tea, just without any flavor or sweeteners.</p></li></ol><div class="relative header-and-anchor"><h2 id="h-get-into-a-habit-now">Get into a habit now</h2></div><p>Get into good habits. Keep them for the rest of your life. Change them only if you find a new, better habit to replace the old. If you slip from your habit one day, forgive yourself, and get back into the habit the next day. By keeping the good habits, you will eventually reach your goal – and eventually also far exceed the goal. <strong>Make habits, not goals.</strong></p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/8e6a0faba5eebf86b78311131fb27b45.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[How to conduct an effective code review]]></title>
            <link>https://paragraph.com/@otto/how-to-conduct-an-effective-code-review</link>
            <guid>ggtRvZgvxl2gOrZx8w3u</guid>
            <pubDate>Sun, 19 Nov 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[In software development, the code review process stands as a crucial checkpoint for ensuring code quality, fostering collaboration, and promoting kno...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/b483f526e10a8fa527b1a4134c712f69.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGT0lEQVR4nDWU2XMT9wGAfzN5C5Omk3acDCSEOhASjhbwhHAEEtJOpgEcKG7oTBJMcHGJgZJiioFJMHbNacBgI4hlOZZtYausJdta1hbWscirY6WVdrVaaQ9pV6u1VrJs4UumzUw6gXZ46PcHfA/fwwckJa1mMl5W7CHFf//rh4mpueyjmUJhXsmMYzR/4h66+R/tuy7o1l02brxyt6hOf2kAk2W5pv7S8pLNq97d0tFjrKq/BlZsA+Wnnqs8C7bueemL48+fvLn2rOZ2t1FvNAFZzRQKc+rkVHW/1ytlp2cKmcnpiYkpQZJ33xlccdm4qdG4rfb2+43GkivGF891rvhGixH09daO11a9s3zt+qM1Zxbt3Ad2fLW2QQde/hUAACxY+OEF3foW85HrOl2nAczP/1CYf/z0xx8dTKLahMnZXCIh4gwvptS2Efw3l41bbpg/uNb31tGLL52680l9q95iG8FDWgje+cWf12368EDVsedLK8Chi6UaCPyf4t0VK89odtfdamhsArn8TCb7aGp67vH8Y43VN4gFA4I8Gh/zxBUpKR/sQN5oMIAtf3xh9XtNeoiIcl5GeICHDIijuuHqjs8rvm24VFJ1Dpxo3tpqeeH3n/28eGXR6o1g6UZQeui31ecPHz8NUulxNTOpZvMpRVWUtE9I3Se5EVqwRhJ+ThrGwxv2/62s/C+IEyMTKUeYsxOMzUvCqOd76H5Di7a5Xf/35nZwVrdGMwh2VD5TLy4BG0oX1dxc+lHZ/sojQE6pLCfGYoI/KrhiYiwhIwRzz03eJzlaVnO5CYaLR8eyLlZyxXgxPU5wUj8W7EN9f9X+s8kId5kHu+6Zd9U2gQY9qKwHRW8D8DIoXg82fPyLV9/Yd+goIKgIHorABKt9SF8Y8rtoIchJTlrITObnZucSmYlRVnSGeT6dLczPPy7MpTITQwF6f7NxVb3u0yajAXH0IiO3dPr3Pj0AtpeDTbvAwrfBL19/7mdFr7y+7LODh4GHZjV2sh72noY9e/UjH9+A/FE+n38kpnO4mA6IaUZW87OF/z59Mjc7hwQ5FyNmstk2xLWutn1lnf6cAYasaA9iv9Wm3/OnfUvXvFNU/NYrS5YtWb7m3a0flVceBaQgNcCe7brhxQ1doKJu78U2byxBikpATKfz0//56aenT55Mz8zm8jOikmXlrJRSU4rKilJ9L1JnfNCPekw2Vy9i7+xH2o1955s0R05+U1Zeuf0Pew9WHTt28iwQJQUmYi8evvTmgVMdJiQpj1FJNabmJmfn6fFpdXp+anZOSKbldC4/NZtKZ7h4ipfGUoqKUTHEG4RH/f02rLN/6Pztjqva7vbePgNk/vzYGfDqalD862eJeEHyhiJ6yBIIUa6YZAnEgoKUzE2N5WccnJLKTI5PzqSyk9F4MpnOiXJGSKapmMByoiDJMEYMoL5uy4NGXc8NQ99lraFR2w0NwDu/Ogne3AYWrdlWWgZceNCDEy4q2jVKaeyh75zkQ4oNJlJqfqYwO6tkJ2Q1JyrZCJdg47KQVMmoEGHjfFwSJTnMxU2o53sz0mmxa+7217a0X2gz3DUj73/5NVj2AVhcsmdfBfD4gz0PietDuGaEaLURkI+2UayfZgkxnXs0k1SyvDQmplRRyZIMR3MiL6Y4MUkyvI9kOC6BETRkd2sh+Koe0pqRmwao497Akk++BMt/B4rXV1R9DUZ9RB/qvwp7WhA3hFFmX1gzgkMuwh8VqKQ6MZEXUmkhqUbjy
QgbJxmel8YImsXJiDdEh8LRhCgNY7gesRsQ53cQfAeCzVbH5h1lYMFrYMHCE9/WAdTr9/qDOqvnfD96A3Ffsbg1VrzFisNY0BMTeCUrK8+ysPEkJyYD4ZifjgUZ3heKUDE+wgk0w0Zi7ADqbjMP6yw2yI4NOT0QjFy5fvPEmdrmdj1wuv2jPsLpJS6a0dM99muDWKNl9LTRXgfZbV4SpZ9dj4snKVYIhKO+UMRHMhSb8BA05ic9BGV3+wkqEmWFAdQzgHq6hpx9VhQZQXsG4C7TgM5gBA7M9wB1Yx5vr91dY7DWGIaru4aP64fr7t63u3EHQWNMIiHKXiKCk0yEE0mGHw1QOMWYkBEb5g2Fo3w8YbOjOqOpFXZA1odmq6PfajfBVp3B2Nrd+z9c5Srz+UeZ+wAAAABJRU5ErkJggg==" nextheight="402" nextwidth="768" class="image-node embed"><p>In software development, the code review process stands as a crucial checkpoint for ensuring code quality, fostering collaboration, and promoting knowledge sharing among team members. Despite its importance, many engineers lack a clear mental map of how effective reviews work. This is my attempt to help code reviews and reviewers improve.</p><div class="relative header-and-anchor"><h2 id="h-first-review-question-what-is-the-intention">First review question: What is the intention?</h2></div><p>Focus first on the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/good-git-commit/">git commit title and message</a>. Do you understand the description? Does the proposed change make sense based on the description alone? Is the idea clear? Ask yourself if you can come up with reasons why the change should not be made, or if there are obvious better alternatives.</p><p>If the description does not make sense to you, immediately share that as your initial feedback. <strong>If the description of the intent is incomprehensible, don’t waste time reviewing code implementation details.</strong> It could be that the submission was still just a draft, and the only (and immediate) feedback should be a request for the submitter to clarify their intent.</p><p>The next thing to check is if the code matches the description. For example, if the change proposes to fix a bug, it should not include extra code that adds a new feature. 
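To make this check concrete, the stated intent and the actual diff can be read side by side with plain git. A minimal sketch in a throwaway repository (the file name and commit message below are made up for illustration; in a real review you would run only the last two commands on the branch under review):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email reviewer@example.com
git config user.name Reviewer
echo 'print("hello")' > app.py
git add app.py
git commit -q -m "Fix: handle empty input in app.py"
# Read the stated intent together with what actually changed:
git show --stat HEAD   # commit message plus the list of files touched
git show HEAD          # full diff, to spot changes beyond the stated intent
```

If `git show` reveals files or hunks that the message does not account for, that is exactly the mismatch worth asking the submitter about.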
Sometimes when you review the code, 90% of the changes make sense and do exactly what the git commit message stated, but there might be some code lines changed that you don’t understand. Ask the submitter about these. Maybe they were included by mistake, or for an unobvious reason. These should be addressed with inline comments next to the code, or by amending the change description.</p><div class="relative header-and-anchor"><h2 id="h-next-is-the-implementation-good">Next: Is the implementation good?</h2></div><p>Assuming the description and changes align and make sense, shift your focus to the implementation details and assess how the problem was solved. As a reviewer, you should ask yourself: Are there alternative ways to achieve the same outcome? Would these alternatives bring clear advantages, or is the current proposal the most sensible one?</p><p>If the code changes look mostly good and you don’t anticipate a complete overhaul in the next revision, proceed to give feedback about the details in the code. Look for potential logical errors, deviations from coding conventions, anti-patterns and spelling mistakes, and hint at any security concerns or performance issues you anticipate. Scrutinize every aspect that catches your eye and suggest improvements wherever possible.</p><p>If you find yourself commenting on nearly every code line changed, <strong>consider aborting the review</strong> and asking the submitter to address the first set of comments before completing a comprehensive review.</p><p><strong>Beware that some people might take every comment literally.</strong> When pointing out things that can be improved, remember to include phrases like “<em>in general it is better to…</em>” or “<em>if you have time, do…</em>” to soften the general feedback, and clarify what actually needs to be addressed before you can give your approval for the change, and which things you merely suggest as quality improvements without insisting on them. 
<strong>Also beware of submitters who address comments mechanically.</strong> Instead of giving your own proposed better implementation as a code snippet, simply ask the submitter to “<em>please rethink X so it always does Y without doing Z</em>”, which forces the submitter to write the better version themselves, and in the process train their brain muscle. Who knows, after thinking about the thing you pointed out, the submitter might even write a better fix than you initially thought of.</p><div class="relative header-and-anchor"><h2 id="h-nitpicking-is-not-only-ok-but-actually-important">Nitpicking is not only OK, but actually important</h2></div><p>Some may be hesitant to nitpick in code reviews, as nitpicking is not something civilized people tend to do in normal daily interactions. However, <strong>nitpicking is an integral part of code reviews</strong> and software engineering in general. Computers read every dot and comma literally, and these need to be exactly correct. The whole idea of a code review is to maximize quality and minimize the room for error, in both the short term and the long term. Thus, in the context of a code review, please nitpick as much as you can.</p><p><strong>The amount of feedback should be maximal, but the bar for approval does not need to be maximally high.</strong> The requirements for minimum quality should be suitable for the skill level of the developer team. Yes, over time the quality bar can be raised as the development team collectively learns about best practices, but if the quality bar is too high from the outset, it will discourage developers from making new submissions, and the iterative cycle that allows learning to happen won’t take place.</p><p>Managing quality is hard, but ensuring acceptable quality is central to the code review process. 
As a general rule, however, if you are in doubt about what is reasonable to require, err on the side of demanding higher quality rather than settling for something you feel is substandard. In the long term, you are more likely to be happy about having made such a decision.</p><p>Keep in mind that the main reason people settle for low quality is that <em>doing things at the highest quality level requires more effort</em>, and some people are lazy. However, if people are constantly pushed to do things at high quality, doing things correctly will soon become effortless.</p><div class="relative header-and-anchor"><h2 id="h-continuous-integration-and-tests-saves-everybodys-time">Continuous integration and tests save everybody’s time</h2></div><p>To support the human review process, all software projects should have a CI in place that runs automatic tests on every code change and helps detect <em>at least all easily detectable</em> regressions. Code submissions with failing CI typically don’t attract many reviewers, so submitters should review and fix all CI issues themselves as soon as possible.</p><p>On a related note, all high-quality code submissions that change the program’s behavior should also update and extend the associated test code. If, for example, the software project has unit tests, and a submitter sends new code without new unit tests, the reviewer should ask the submitter to write the missing tests. This feedback can even be automatic – a good CI system could detect a drop in test coverage.</p><div class="relative header-and-anchor"><h2 id="h-communicate-clearly-and-precisely">Communicate clearly and precisely</h2></div><p>In the review process, it is important that both the submitter and the reviewer communicate clearly and precisely.</p><p>From the reviewer’s side, this means that the reviewer should be clear on what things are required from the submitter and what things are just nice to have. 
If a reviewer thinks the whole approach is bad, they should be frank and reject the submission from the get-go.</p><p>From the submitter’s side, the initial submission should be as complete as possible. If code is submitted for review (e.g. Pull Request or Merge Request) but it does not have the final contents, the submitter should be explicit about this and prefix their git commits or the review title with <code>WIP:</code> to signify that it is work-in-progress.</p><p>In my experience, the typical reason for code submissions and reviews getting stalled is simply unclear communication. The code submitted might be fully correct, but the plain English part is lacking, leading to miscommunication. Typically, reviewers also postpone diving into submissions that they don’t understand at first glance, as having to do detective work to figure out an unclear code submission can feel overwhelming. <strong>Therefore, the best way to ensure submissions are reviewed promptly is to communicate clearly.</strong></p><div class="relative header-and-anchor"><h2 id="h-avoid-noise-maximize-signal">Avoid noise, maximize “signal”</h2></div><p>Needless to say, both submitter and reviewer should know how to properly use the review tool. For example, both GitHub and GitLab have “Start review” and “Finish review” features that allow the reviewer to write multiple comments without spamming the submitter with multiple emails. Pressing “Finish review” will send one single email with all the comments. Most systems also have buttons for requesting review and re-requesting review that the submitter should use to communicate clearly when the review feedback has been addressed.</p><p>When a submitter re-submits code for review, they should always use <code>git push --force</code>. The reviewer is always looking at the submission with the question “Is this ready to be merged?” in their mind, and the submission they look at should be the final polished code. 
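As a minimal sketch of that cleanup (the repository, file and commit messages below are made up for illustration), an intermediate WIP commit can be folded into the real one before re-submitting:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name Dev
echo "step 1" > feature.txt
git add feature.txt && git commit -q -m "Add input validation"
echo "step 2" >> feature.txt
git add feature.txt && git commit -q -m "WIP: address review feedback"
# Fold the WIP commit back into the real commit, keeping its changes:
git reset --soft HEAD~1
git commit -q --amend --no-edit
git log --oneline   # a single "Add input validation" commit remains
# On a real review branch, publish the rewritten history with:
#   git push --force
```

`git push --force-with-lease` is a safer variant that refuses to overwrite commits you have not seen, which matters if someone else could have touched the branch.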
There should be no <code>WIP</code> commits or multiple intermediate git commits – the reviewer is not interested in how the submitter ended up with the correct end result. <strong>Only the final version matters.</strong></p><div class="relative header-and-anchor"><h2 id="h-respect-peoples-time-prioritize-correctly">Respect people’s time, prioritize correctly</h2></div><p>Reviews require time. The submitter should be extra diligent not to waste the reviewer’s time. Likewise, the reviewer should try to review as quickly as possible, or at least share their initial impression as a short comment.</p><p><strong>If reviewers are short on time, they should prioritize re-reviewing submissions they’ve already given feedback on.</strong> People who already have context about something should continue on it, as it is most efficient for them. Other people on the team are more likely to review code submissions that have no reviews at all.</p><p>Likewise, submitters should prioritize responding to review feedback and updating and polishing their existing submissions as soon as possible. A reviewer is much more likely to re-review and approve a submission they have already looked at a couple of days earlier than submissions that have been lingering for weeks or months.</p><div class="relative header-and-anchor"><h2 id="h-honor-original-code-let-submitters-feel-ownership">Honor original code, let submitters feel ownership</h2></div><p>A code submission and its review serve as an initiation into the software project for a new submitter. Reviewers should keep this in mind, and not rush to grab the code, but put in a little extra effort to guide the submitter on the code base and the quality bar. Reviewers might feel frustrated at times, but that is not an excuse for bad behaviour.</p><p>Sometimes, the reviewer might be tempted to fix the issues in the submission themselves instead of giving the original submitter a chance to do the final polish. 
<strong>A reviewer must resist this, as it will kill the original submitter’s feeling of ownership.</strong> In the context of open source projects, grabbing somebody else’s submission and committing it yourself might also constitute a copyright violation, and it certainly does not encourage the submitter to continue making further submissions. The reviewer should at most just rebase the commit, nothing more.</p><p>If the review process takes too many rounds of back-and-forth, then as a compromise the reviewer could merge the submission as-is, and immediately follow up with their own changes on top of the commit.</p><p>In both open source and company-internal work, getting one’s code merged is a very valuable achievement, in particular if it was that person’s first accepted contribution to the project. Don’t rob this from code submitters; let them earn it and have their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Git">git</a> credits and purple GitHub merge badges in their profile.</p><div class="relative header-and-anchor"><h3 id="h-there-may-be-many-reviewers-but-never-more-than-one-code-submitter">There may be many reviewers, but never more than one code submitter</h3></div><p>A fundamental principle in code submissions is maintaining a single code submitter. While there may be <strong>multiple people reviewing</strong> and posting comments, there should never be more than <strong>one person submitting</strong> the code (and subsequently improved revisions). Having one author ensures a clear owner, who iterates the submission and decides how to address all feedback. If using a git branch, that person is the only one who amends commits and force-pushes the branch until the final version has been completed.</p><p>If the submission is a patch on a mailing list, having multiple submitters of a single email is impossible, and authorship and ownership are not an issue. 
On GitHub or GitLab, this problem might arise, as the submission is a git branch, and the branch permissions may allow multiple people to push to it. <strong>Having multiple people pushing to the same pull request, however, is plain wrong.</strong></p><p>In some projects, git pull requests are intentionally misused by having multiple authors push git commits to them and discuss the collaboration in the “review comments” section. <strong>The correct way for a group of people to write code together in git before the code goes on the mainline branch is to have a <em>feature branch</em>.</strong> Both <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request">GitHub Pull Requests</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html">GitLab Merge Requests</a> allow the submitter to select the <em>target branch</em> of the submission. With a feature branch model, each collaborator submits their own individual pull requests towards the feature branch, and once the feature is complete, the feature owner carries the responsibility of getting the feature branch accepted into the mainline branch of the git project.</p><p>Sometimes it may happen that there is disagreement or laziness, and the submitter refuses to properly address all feedback. In these cases, the recipient/reviewer either dismisses the submission completely or accepts it as-is (merges it onto the target branch), where work can continue in the form of follow-up submissions from other people to further improve it.</p><p>It may also happen that a submission has a great idea, but the implementation is bad and unacceptable. The submitter is asked to improve the implementation, but the submitter might fail to do so. 
In this situation, nobody else should rewrite that same submission, as it blurs the lines of authorship and ownership. Instead, another person with a better implementation should simply open a new pull request or submit an email with their own version and in their own name. After that, it will be up to the reviewers to decide which submission gets accepted and which one gets declined.</p><div class="relative header-and-anchor"><h2 id="h-code-reviews-are-opportunities-to-learn-for-both-the-code-submitter-and-the-reviewer">Code reviews are opportunities to learn – for both the code submitter and the reviewer</h2></div><p>Keep in mind that the reviewer does not need to be a superior developer. <strong>Anybody can be a reviewer, no matter how senior or junior they are as software developers</strong>. Reading code written by somebody else is a good way to get exposure to various coding styles and various ways that different developers solve coding problems. Reviewing code is a learning opportunity for the reviewer, and thus the review process ultimately leads to both the submitter and the reviewer writing better code in the future.</p><p>If you have an opportunity to do code reviews, do it, even if you don’t feel like an authority on a specific domain. You can still make a valuable contribution by giving feedback on some aspects, while leaving the final approval decision to a domain expert.</p><p>Remember to communicate clearly – the better the code is documented, and the better the feedback is explained, the more both submitter and reviewer learn from the experience.</p><p>If you have additional tips on best practices for code reviews, please add a comment below!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/b483f526e10a8fa527b1a4134c712f69.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[My 5 tips for efficient meetings]]></title>
            <link>https://paragraph.com/@otto/my-5-tips-for-efficient-meetings</link>
            <guid>52LvrkDz55G7UFaiuUHC</guid>
            <pubDate>Sun, 08 Oct 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[While large organisations scale best by emphasizing asynchronous communications, in-person or video meetings also have their place. As a manager who ...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/97b8a52230dc7917ffe1c0a9ba376ee6.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGeElEQVR4nBXNe1TShx4A8K9zrmWWOTNREZ+IyMNHgPLQwAcgiGECRoIo5uNHzudRQFMnikLTpaakIzWHqaM0U9ZsprmmmVQuXW27t23nntXa6d6u273n3sr7x7pn53z+/0A0TxafrWZIChlyDVtRjpeWgwABTh5EceBAiGtA1K4ImltEAoTFG0+f2bi9uLl203C6B0hp78aL3mYchWSlytDX3D145frN3/79cm1j682bN2/++GPN+fXCrfXXr3eAklWQnF8pKK7japrqWrvmri/KmvpAogWmDDDkt2O53lyVB1PyVrxYUKy9sbTUa/0ExSt4K039XqbGT9kQXWuxfLbylfPBjz8/e7G9vXJ3a2dn5387O8urTsfi6qtXr4GaU8pDdNnVbdl1p419Qw82N5dW11CICVJPAJ7tRs/y5Kv3cnL3UoW7onlR3GM+zCzXRKm/rCao1ExvtAo77bP3/rJ2f+vBtz88//s/l1bvv/zv65cvX11fXp1wLGxv/wsox8p4msasWpO0vsc8YBudcow5FnG1FhCVAznVJT5zN1uxNzblnV2eri6uAK5e3n6HKHR8ZhGxfojbfvHIR5fttx9+tb658ejx02fP579c//X59u+//+fqwrLVfu0fL34DolTDKq7nV3UItN0dg2MD4zP9kw5ywzDIdBDDB5oIEmXuhCQXcHMFFzdw8ffDCIWicIGK0GzjmsYyOi/Zbm0t3/n65t2Hj//2bOqLlZWNR7/8+uLizPyp3pHvfnwCQWKEnK9jnjTGV3x4qveT5dW7ny2vR+qH4PgHcEgIlAxgSNypAg9UmIdvsEcA1isshkRPDckuJzbZEttsHNNk//yd2eW1yYXVG85vLHbHF85vnN//1Dc+o9Kbz88swnvCYoy8Fqtu9j3RoTEPbzx8PDK/uruiF6R6IKQBRbSPWwBMOSTlQXKRbyL/YHoOyBqhsDO47mNm6xjLZD/7uXNqYeXspGNwer7VOmboHzFbRnSn+9X15tG5JXiXW7gvu8pTXu+hNmKQroNlPaAygsIAyhaiUiduGaFU9cYpa/Na+guN1pz8Qo2+RdM3Vdo3hS3vBr4GU9hWa50enVvqGp/tmZzV91iPva/dHxxJTBTwVBXUNBFAsgoyNCCuBnkjKFtB3oyrOYdDTLym89IPLzFOGJCWvrmFFadzw7l+f3Tyytq9rSc/P3365Omt9S2VvgtY8uD0IlqeVqE1lbad1Z8512q1JYrlbImKJS/1DIkCYMiAnQdpahCWg0QHwrJEnSXTfDlU1cQtax+yXdrcenR77d7AhGN4+ob9+srU4p3J+dWhK0tb3/7w5Jfn49Ofc9XauGNVXoyjEM6o7/rYNDxRojcoqvT8/IpkiRKAkALRPKAdgcNK4BYDFwkqMGB0Q77JCuX7zWW69hMVp6oM3TRxAWDpLtEpEJMCuMP45Cydsae8oeOs7Wq7ZZSnKINAGjMH+XR+eWbhlslqO16pzyiqzFCfBFcM2S2C/meTINmTXsJQN+EFKiAJCHxVOFN0ICrhAIFFlVUeUWpQRKY3gYmKPRwYL+AcyRUjjSyRQlxchzSapaVawMQcLaw09g07Nx49/OtPRssFaVm9StsGriicazjVnSLAikpS1A3HqzqCkiRAVwG3yI/I8sVR0LGcOL7iEE8KWCYQ2C7kFCALSMI8tqyEKK/mlZsaLlxjFzUC+KbnIh2Wi1KkbsBmH7rsqO85b5tbADc00ZfKJ/JV9GwkKaeMr6hEpymBU
wRkLgSQ3INJ+3E0PI0Tyj4KcdlAkwIjF1jqyOyTLL4sSlbFr+1Feu286o+AkMzL1UwtrHdfmJYiOpY4t7HTcmfrO/Ahc0IThDimMIqVQWCJopiCSFoKJoru4e2/xwfjHUZCRdLCSfH7ojkQwQFCKhDTgMDbfYhP5+ek55XHcaV0blZKdsEH5y6ZrJfP2Gb7xh3949fUWlNsmkSB1MBBLMU/khqIp2HwVHRknH8oyS+E4BuIcz+APhhCCoiI8wmMREfEBZEYbl7od7z/5BWMDyAwk8Qqdg4Sxz9OSsqUITrzyFWDZaJ90N5htbedmzBYJuo6rZycEvDyx3qjI3zQOB801scv1BsVvMcL5YkK3Y8K3R+ADYtlY2NYviFkvwgKmshEExICCIygGHY4jUdMlRFTJQmifGqGKl6ozCjU5tWaakwD+u4L+s7hGvNgtWmwwmj5P9B7gdsv8Un/AAAAAElFTkSuQmCC" nextheight="403" nextwidth="768" class="image-node embed"><p>While large organisations scale best by emphasizing asynchronous communications, in-person or video meetings also have their place. As a manager who is involved in a lot of planning and coordination work I’ve noticed that I’ve spent the majority of my working time in meetings in past years. These are my 5 tips to make meetings as efficient as possible.</p><div class="relative header-and-anchor"><h2 id="h-have-a-written-meeting-agenda-before-and-during-the-meeting">Have a written meeting agenda before and during the meeting</h2></div><p>While a simple <em>‘hey we should chat’</em> works for small informal meetings, having <strong>an agenda always makes the meeting much more efficient</strong> – even if there are just two participants. Put an agenda in every meeting invite you send. Explain what is the purpose of the meeting, who is attending and what should be the outcome. It does not have to be formal. Even a tiny agenda summary is enough to show you’ve put some thought into why the meeting is taking place.</p><p>When the meeting starts, show the agenda to remind everyone what the purpose of the meeting is so that attendees can help make the meeting productive. 
When possible, I prefer to organize meetings with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://calendar.google.com/">Google Calendar</a> because <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://meet.google.com/">Meet</a> will automatically show the invite text (the agenda) in the video call as a small popup in the lower left corner.</p><div class="relative header-and-anchor"><h3 id="h-fall-back-agenda-if-unprepared-whot">Fall-back agenda if unprepared: WHOT</h3></div><p>If you find yourself chairing a meeting unprepared, remember the acronym <em>WHOT</em> to improvise an agenda and meeting structure that works in most situations:</p><ul><li><p><strong>Why are we here</strong> - Explain why the topic of the meeting matters, and briefly cover the context or the latest developments around it.</p></li><li><p><strong>How attendees relate to the topic</strong> - Introduce participants if they have not met before, and even if they know each other, explain how they ended up being invited to the meeting or how they are expected to participate in the topic.</p></li><li><p><strong>Opinions (or orders)</strong> - Ask attendees for opinions on the topic. If the purpose of the meeting was to get input from everyone, your duty as the chair of the meeting is to make sure everyone has a chance to talk. Most meetings about non-urgent topics are about collecting and aligning views. If the meeting is about an urgent topic such as an operational issue, the O in the acronym stands for orders, and the main part of the meeting is to make sure all participants know what they should execute immediately after the meeting. 
Even when giving out orders, you should still verify that the participants understand and agree with the ask, rather than assuming too much.</p></li><li><p><strong>Timeframe</strong> - Conclude the meeting by summarizing the timeframe of the agreed actions, when the next event on the topic is expected to happen, or, if a follow-up meeting is expected, when it will take place.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-be-inclusive-with-open-ended-questions-and-dont-fear-silence">Be inclusive with open-ended questions and don’t fear silence</h2></div><p>Don’t be afraid of moments of silence when running a meeting. In fact, you should ensure that there are some pauses so that people who need more time to formulate their thoughts have an opportunity to speak up. This is important in particular if many participants are not native speakers of the meeting language. The best ideas might not be the ones that are voiced first, but the ones that come after some pondering on the topic.</p><p>If some participants talk too much, ask them politely to give space to others. If everyone seems silent, ask open-ended questions. Maintain a welcoming environment for discussion where speakers or their opinions are not directly criticised, as that might discourage some participants from sharing their honest opinions. Make sure all discussions have a respectful tone and opinions are voiced in a constructive manner with a focus on solutions; when discussing problems, encourage participants to put forward concrete data points instead of pure opinions. As the chair of the meeting, strive to be kind but firm.</p><div class="relative header-and-anchor"><h2 id="h-respect-the-peoples-time">Respect people’s time</h2></div><p>As the organizer you are responsible for allocating enough time. If the meeting is going overtime, end it, and schedule a follow-up meeting. 
If you think some participants talk too much or go off-topic, steer the discussion back on-topic and remind everyone of the time remaining. Even if there is allocated time left, don’t allow room for random mumblings. <strong>Respectful use of people’s time includes cutting the meeting short if the goal was met.</strong></p><p>To keep meetings efficient, they should not take longer than one hour. When planning the meeting and thinking about the agenda, make sure there are not too many items for one meeting. Also make sure that there are not too many participants. If the intent is to discuss something for an hour, there should be at most 12 participants (which means an average of 5 minutes of speaking time per person). If the meeting has 20 or more participants, it is not a meeting but mostly a one-way announcement to the audience.</p><p>One large meeting with 20+ people could alternatively be a series of meetings where the same chair talks to groups of 5 at a time, and after all the meetings sends out a summary of all of them with a final conclusion.</p><div class="relative header-and-anchor"><h2 id="h-send-a-summary-after-the-meeting">Send a summary after the meeting</h2></div><p>People have a tendency to forget what was discussed or agreed after 2-3 weeks. Meetings are at risk of being wasted time unless the outcome is recorded at least in a small written summary. Writing formal meeting minutes is not an efficient use of time for everyday business meetings, but a 5-minute investment in a short summary is always worth it.</p><p>Formal meeting minutes are justified if the meeting is making decisions with ramifications in the tens or hundreds of thousands of dollars. Formal meeting minutes should record who was present, what the decision was, and <em>why</em>. For formal meetings it is important to agree to have some kind of approval mechanism for the minutes, and to not execute the decisions until the minutes have been approved with e.g. 
signatures.</p><p>If the meeting topic is very contentious, the meeting minutes should be written during the meeting and shared on-screen so participants can object in real time during the meeting if they see that the decisions or their justifications are not written down correctly.</p><div class="relative header-and-anchor"><h2 id="h-dont-have-meetings-if-possible">Don’t have meetings if possible</h2></div><p>The last tip is to take some time and think about whether a meeting is really needed. If you just want an opinion on one specific thing, perhaps the meeting could be replaced with a chat thread where your first message is the question followed by a bit of context. This might even work better, as people can take as much time as they want to formulate their best possible reply in writing and with exact data points and references. To ensure everybody participates, you can request that everyone acknowledge the question with an 👀 emoji or a short reply. Such ‘meetings’ have the additional benefit that they don’t need a separate agenda or summary, as the chat is automatically recorded in writing.</p><p>As a replacement for a large 20+ participant meeting one could have a well-written announcement email, or a video recording, since most of the participants in a large meeting will not have time to speak anyway, and they will mostly just be receiving information without participating.</p><p>If you feel that a meeting is needed for social cohesion, perhaps get the core decision done in an asynchronous chat or email discussion, and organize a breakfast or lunch separately to get the best social effect in a setting that is free from debate and is purely positive social time together.</p><p>If the meeting is a one-on-one between two people, it could be replaced with a walk in the park. That will make sure you both get some fresh air, and who knows, maybe also some fresh ideas?</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/97b8a52230dc7917ffe1c0a9ba376ee6.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Pulsar, the best code editor]]></title>
            <link>https://paragraph.com/@otto/pulsar,-the-best-code-editor</link>
            <guid>qSW51ROvzmtb6oP1GhPD</guid>
            <pubDate>Sun, 24 Sep 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The key to being productive as a programmer is to have a great code editor. I have been an avid user of Atom since 2014, and its successor Pulsar sin...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/d3f5b7e456eb51206259b60b08150c38.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEDklEQVR4nL2TW2gjVRjHp7ikVBrbJiZpJul0ZnomczIzmZx0JjPJTDK5bDvNbZM23abbmF52qaGtaW274mJ166VgcRfLwj4oiwsFUZB9UF8EXxR81Bd9EATFN18E9VEFQZlMXequ4Aoi/PgYzsD3O+f7n4NhvS6H01upL4xBhGEPY70ueyUEEU4wjj7XMBFmI1qQCi8sb1y+cmCW5oI0RzJikOZkfWL/4NWdKy+tru0+8eQekg2SEZ2uANY7aDfBHEMYisWNbD6bzQUpgGF9Lh9JAFGM6Tktr0a1nJZPxjM8H7c7BmmBZESSEYnuR5Dm/AT04owvwHhwgFNhnGADFOcn2O4mEE5CbMAbSOVLBODP9LmxHqcHHwNQUnSzUG0mjaKWKyaNIoc0u68NgJIHBwtLa+9/8PGt2+8cHN64dfvtN47vHL/17p33Ptx/4ZUXX74x3VgK0hwBIpjD6R1Xs+fqrYQx5XB6h3yjAMqsoMBIAkCJQxqAEoDSaYG9dw5p5+qtYmW+UmuWa81S7UKl1tQyRSQbkppjBQVAaYTmLAGHkhxKevykw+ntnkBWdNMsN7ITNVZQitWmKKftud8jKFQv5Kfq6XxFlNMuH+knWHs4NgyvWAKsxzmuJGozlUG3B8POuH2jJIMU3ZxfWhflNM1ERTn9tydgBTWVr2bNul3L9cWkUTxNKl+lGBGLqbntvef3rx3V5haRbHSPJtstunFB3wgMUFa8dwlQ1q+IlD649tr1m8etS1sr7d21rb1C9bFKfalQbZZmWqWZ1kSpYQniutl56rmrh0dXD49WNi/H9UkAZRqOIzkdlZLxRDqeSAsoCQUlJusRpEaQmkrlaSYKoBzXTV4yQqLGIz0c1XjJ4FHKqpIBo1pcNy0B5hhAGQNljGgmpVUKIRSnGSQgfXtzc3Vl+emtzs7G+tb62urScqf9+OrFizsb6512m4HjAMpaplSuNc83lnNTM+Vas1A+Pz27WCw35pqXpmdbrNDNwOH0wqiq6CYBOKx30IePkQyanCx12u3G/KKiTwgoqepnJdVAclpSjXgiB6AcpDke6Tdff/P7H3/54aefP/v86y++/Parb7779bffP/rk0872M5u7z3JIw6kwZj/dE6x3AFhBYQWFl4yIlIbRFIxqIfEEhk9YA5EMACWGV0RLmRPllKTm7sIKyiNuwuUjT67pXwS9riHrFokPBIh0M+fux751AMr3Cwb/hYD5B/43QY/zNP+VgAB/Cjz4mG+E6Vbg9lND3qA1XCr8IBBWDCdz70ZiVRJERmiOoPmuhscUfWKqNMsKiqqfnSqUh0dCQQr2u/z3hH+ah/ofDY5x/QPDOBWm2SjFiKNASBgmzcZESefGUzE2XJqsuslQ0jD/AO9IPSM7WlE5AAAAAElFTkSuQmCC" nextheight="517" nextwidth="985" class="image-node embed"><p>The key to being productive as a programmer is to have a great code editor. 
I have been an avid user of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.archive.org/web/20221002074229/https://atom.io/">Atom</a> since 2014, and of its successor <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://pulsar-edit.dev/">Pulsar</a> since 2023.</p><div class="relative header-and-anchor"><h2 id="h-the-best-code-editor-came-from-github">The best code editor came from GitHub</h2></div><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Atom_%28text_editor%29">Atom the code editor</a> was created by Nathan Sobo, who joined GitHub in 2011 specifically to build the best possible code editor in the world. He also co-led the development of Teletype for Atom, which pioneered collaborative code editing, i.e. multiple developers writing the same code files at the same time. The team also created the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Electron_%28software_framework%29">Electron Framework</a>, which lives on as the user interface framework for e.g. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Visual_Studio_Code">Microsoft VS Code</a> (created a couple of years after Atom).</p><p>VS Code is actually the reason why Atom eventually <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.blog/2022-06-08-sunsetting-atom/">died</a>. In 2018, Microsoft acquired GitHub, the world’s largest open source hosting platform, to gain premier mind share among developers, and as part of that plan, Microsoft also wanted as many developers as possible to do all their coding in an editor controlled by Microsoft. 
To achieve this goal, Microsoft invested hugely in VS Code, and naturally discontinued Atom after gaining control of it through the GitHub acquisition.</p><p>However, being <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://opensource.org/licenses/">open source software</a>, Atom can’t fully be killed. Yes, Microsoft controls the name, trademark and website – which are all now shut down – but the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/pulsar-edit/pulsar/blob/master/LICENSE.md">source code is MIT licensed</a> and has been resurrected as the Pulsar editor.</p><div class="relative header-and-anchor"><h2 id="h-local-app-but-built-with-web-technologies-htmljavascript">Local app but built with web technologies HTML/JavaScript</h2></div><p>The key innovation in Atom (and now Pulsar) was to build the whole editor as a web application that runs locally (offline) inside a dedicated browser window (Electron). This radically lowers the barrier to entry for the millions of web developers out there to participate in the development of the editor. This is reflected today in Pulsar’s tagline “hyper-hackable text editor”, with reference to the original meaning of the word “hackable” as users’ ability to use the tool in ways the original designer had not anticipated.</p><p>To make Pulsar even more flexible, it also has a plugin system (called packages) with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.pulsar-edit.dev/packages">over 10 000 community-developed, easily installable packages</a> one can use to extend the features of the editor.</p><p>Pulsar is feature-rich and very powerful already out of the box, but thanks to the vast number of additional packages, a developer can easily optimize one’s workflow to be as productive as possible. 
If a package does not already exist for your use case, creating one yourself is moderately easy for most developers, as you only need to know web development tech (JavaScript/CSS/HTML) to do it.</p><p>Consider the screenshot below showing a simple C source code file being edited.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/508c7817-2f8f-4a6b-8ec7-b02cf9d9df57_1003x774.png" alt="Editing a simple C file in Pulsar" title="Editing a simple C file in Pulsar" class="image-node embed"><p>In this picture, you can see:</p><ul><li><p>a linter that automatically highlights in yellow a specific word the user should improve</p></li><li><p>a yellow vertical line showing which lines in the file have been changed but not yet committed in git</p></li><li><p>a file browser showing the files in the project, with modified files in yellow, new files in green, and files ignored by git greyed out</p></li><li><p>a rich status bar showing the git status, cursor location, file encoding and type, git branch name and even the GitLab CI status for the latest git commit</p></li></ul><p>This is perfect for my coding workflow – it does not feel bloated but has all the features I need without excess. Check out the post about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/develop-code-10x-faster/">How to code 10x faster than an average programmer</a> to see the Pulsar autosave feature being used to create the optimal code-test-repeat workflow.</p><div class="relative header-and-anchor"><h2 id="h-powerful-keyboard-shortcuts-without-the-need-to-memorize-too-much">Powerful keyboard shortcuts – without the need to memorize too much</h2></div><p>For maximum productivity, most developers prefer to be able to do everything using the keyboard, without lifting a hand to fiddle with the mouse. Pulsar has a keyboard shortcut for almost everything. 
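As an illustration of how such customization looks, user-defined key bindings live in a <code>keymap.cson</code> file in the Pulsar configuration directory. The binding below is a hypothetical sketch (the keystroke choice is mine, not from this post), mapping Ctrl+Alt+S to the command provided by the Sort Lines package mentioned later in this article:

```cson
# Hypothetical example binding, not one of Pulsar's defaults:
# in any text editor pane, make Ctrl+Alt+S run the Sort Lines command
'atom-text-editor':
  'ctrl-alt-s': 'sort-lines:sort'
```

The left-hand side is a CSS-style selector for where the binding applies, and the right-hand side is the package-provided command name, so the same mechanism covers both built-in and package commands.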
If something is missing, you can also add more yourself, and you can change the default key bindings if you have existing preferences. However, the best part is that you don’t really need to spend any time learning keyboard shortcuts – <strong>you just need to remember Ctrl+Shift+P</strong>. This opens the command palette, where you can type a keyword, select the action and press Enter, or see the keyboard shortcut on screen and use it until it settles effortlessly into your muscle memory.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/6c951648-9c25-4e02-93f6-babbdb63b0d3_1003x774.png" alt="Open the command palette in Pulsar by pressing Ctrl+Shift+P" title="Open the command palette in Pulsar by pressing Ctrl+Shift+P" class="image-node embed"><p>The two other keyboard shortcuts worth remembering are (same but without the shift) <strong>Ctrl+P</strong>, which allows you to type a partial filename and press Enter to quickly open any file in the project, and <strong>Ctrl+Shift+F</strong> to search for a string anywhere in the project. The search feature is amazing, and also comes with a powerful search-and-replace feature with interactive previews of what string it would replace in which file.</p><div class="relative header-and-anchor"><h2 id="h-why-not-use-vim-like-all-real-programmers-do">Why not use Vim like all real programmers do?</h2></div><p>I know several world-class programmers, and interestingly, the commonality among them is that they all seem to use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Vim_%28text_editor%29">Vim</a> as their code editor. 
Many people I know who merely <em>think</em> of themselves as world-class programmers use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Editor_war">Emacs</a>.</p><p>While both of these can probably compete with Pulsar in the number of extensions available and the amount of customization and keyboard shortcuts, neither of them has a graphical user interface, and they are completely incapable of doing things like showing images or, for example, a Markdown preview (which by the way you can open in Pulsar by pressing <strong>Ctrl+Shift+M</strong>).</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/f50b615c-f8a0-4329-8997-5e065f294ebb_1003x774.png" alt="Markdown preview in Pulsar by pressing Ctrl+Shift+M" title="Markdown preview in Pulsar by pressing Ctrl+Shift+M" class="image-node embed"><p>Also, Vim and Emacs use <em>modes</em> to allow users to either write the text in the file itself or to enter commands for the editor. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.theregister.com/2020/02/19/larry_tesler/">Modes are terrible</a> to use, and thanks to people like Larry Tesler, we haven’t had modes in any new software since the 1980s, and nearly all of humanity is blissfully unaware of what modes even are and will never be exposed to them. Instead of using keyboard shortcuts via modes, we today have keyboard shortcuts like Ctrl+C and Ctrl+V, which are easy to use.</p><div class="relative header-and-anchor"><h2 id="h-are-electron-apps-slow">Are Electron apps slow?</h2></div><p>Admittedly, Pulsar is a bit slow to start up. It takes maybe 6–8 seconds for it to fully load from scratch, but once it is open, using it feels pretty snappy. The slowness is due to being an Electron app, which is basically a web browser. Indeed, the original author of Atom/Pulsar, Nathan Sobo, started a new code editor project after Atom was shut down. 
The new editor <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://zed.dev/">Zed</a> is extremely fast, as it is written in Rust and utilizes the GPU for rendering. Zed, however, is not open source, and it is not available for Linux, so I have not even tried it. Instead, I tested another promising Rust-based blazingly fast editor, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://lapce.dev/">Lapce</a>. It is truly fast and pleasant to use. However, it lacks a lot of features, so I end up having to do extra manual work.</p><p>Thus, Pulsar holds up as my favorite editor, and for me <strong>Pulsar is the fastest one when measuring how quickly I am able to deliver code changes</strong> and write complete and correct text.</p><div class="relative header-and-anchor"><h2 id="h-my-pulsar-config-and-packages">My Pulsar config and packages</h2></div><p>If you want to replicate the Pulsar setup I have going, this is my <code>.pulsar/config.cson</code>:</p><pre><code>&quot;*&quot;:
  autocomplete-plus:
    confirmCompletion: &quot;tab always, enter when suggestion explicitly selected&quot;
  autosave:
    enabled: true
  welcome:
    showChangeLog: false
    showOnStartup: false</code></pre><p>These are some of my favorite packages:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/pulsar-edit/sort-lines">Sort Lines</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.pulsar-edit.dev/packages/linter-spell">Linter Spell (human languages)</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://web.pulsar-edit.dev/packages/linter-flake8">Linter Flake8 (Python)</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.pulsar-edit.dev/packages/linter-clang">Linter Clang</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.pulsar-edit.dev/packages/linter-shellcheck-pulsar">Linter ShellCheck</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.pulsar-edit.dev/packages/linter-markdown">Linter MarkDown</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/burodepeper/language-markdown">Markdown language grammar</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/tsbarnes/language-debian">Debian control and rules file linter</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/lloiser/language-ansi-styles">ANSI color styles</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/russpowers/nim-atom">Nim language support</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/T-Huelsken/gitlab-manager">GitLab Manager</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/Stepsize/atom-better-git-blame">Better Git Blame</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/atom-minimap/minimap">Minimap</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ansballard/minimap-autohider">Minimap 
autohider</a></p></li></ul>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/d3f5b7e456eb51206259b60b08150c38.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Unpacking Linux containers: understanding Docker and its alternatives]]></title>
            <link>https://paragraph.com/@otto/unpacking-linux-containers-understanding-docker-and-its-alternatives</link>
            <guid>FtvMwfOoYyf54xjpzHZO</guid>
            <pubDate>Mon, 08 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[In popularizing Linux containers, Docker brought about a new era of systems design based on these lightweight platforms, rather than heavy virtual ma...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/6d42c229da8ab4318030bb7513c796e9.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGfElEQVR4nAFxBo75AHErHG0oHj0VD0QdFnwyI1QZDgwcKhQ3TBU1SQM1T015lkhfbVSDn0iBoE5baMXMzefl5Ketrp+prejm4+zr6PTx7//79Pjw5/Dq49XSzry8usLAvcTCvcTCvWZtdDFCUQBYIRhDJSRLMjMiKzceLTxLQD4LMkcCLEAPMkYBMU1ghZ5tp8RHZnc/YHVHYnWKm6Zmh5V0jpuXoKWmsrjc393GztC5x8urub7CxcT/9+399eny6t/b1cvUzcR1eHwyQVIANiQkADlRFztRFkZgD05uZoGUVIKfM1JlGzNECjFKTH6chLLKXZ69O1doQW6JXImhHHKTPYChTX2TOnaRWIOYVXyMWX6QSWt9e4OI7uje2tTL19DGwby2r62nb3V5KTxPADIrMxVMaRJAXSZNYiRKXVl1hmSZtmWWtFqAlyU+T0pxhWalxXOxzUJpfDY5QmmXrVGEnYyjr6ezuWyCjcLFxerl4N/a1N3Wz8S/u83Iw9fRx+LZzdfPxdTLvoyLiyc4SgAiJTIAMU4VNEkUQVkSQFhZeYtcj6thjqpwn7p0m7FdanNEX3BIl7pRboKRTD1aXWQARGN7hIy6zdeUqLKQm5/u7Obv6+Xx6+T58ur58un/+O3/9ert49fk282el5MkLTsADyEyAzZQEzpTF0RdDEhjV3mNSIOnV4qkY5GqbZqzcZ22YnqJSGRxREdSp1xOwIt/JUteVWFsgY+Yy9fXpautvcLA8evl9u7l9+7k9ezk9Ovh7+Xb2dDGzce7mJKKGBwjAAgmPBYzRho1SRs7UBRGYFJ4jDiCp0GBpFCGpVuMqF+OqHakvY6yxENCSo9JPNKTg4yKhkJLUpp/douBg5GlrJ6kpM3LyOTe2N/Y0dfQyc3Hv8S/uLm2r7m2r5yZkiAwOwATFhwWIiobLTkSLUAxKi44Wm4dbZAuf6M5hapBh6pHh6dSi6l2q8QwVGYjEg53VEp5gYJafI+GbmaEVVCPoapzkZ6Nn6evu7+ns7mqsrK/vbjWy8DCvLKysKqUlJIJLUEACQADFB8qESg4ACI3Gx0jN0FIFj5SI1ZuIVJqJmuLLX2hMoSrNpS9GlZyACI+ODtEYEdDW3OBR2R3QENJj4iHX4edeJOfprW5lqWshZebjJmcy8S7tbGqsK2qm5yZBy5BAAAnPBkuQRgtPxI+VRVEXTtmfzeBpjZxjjJedihMXihOYCtWbTVpgRtOZwApTUZHVLhZS21iYhdJXkl2j4F6eGZ8iIyhq83IxeLaz8zKxJSfo8C+uWhVVX51dZqcmgMtQQAAM04MNEwONE0AQF0AQlwrX3ond50reqAygKY7gqVEfptCdZI8W24lKi8AIzc+NTu2UkKGjp8hUmk7R09IZndFTFmBlp6vubna0snAv7qPnaG7sqpDMTcuFBUyGRsAJTwAACU9EDBCCS1CBzZRAzRLJVx5AW6YGG6UH3KXJXSbLnidMH2hLH2kEyw7MGaEMltzRh8ZSkpREUBWIENVPExaUkxPZGVmm6it0M7Iv8C9ip6nn6CgKyw6JwMAIwAAAB80AAAuRAsrPQwsQAdAWAU2TR1YdwBplA5rkxVulh5wlSZxlCVyliRukgwlNjdxjz1qhEUZFIdBMFcxKQ05TnBVWlM4OktPVlZUWVRVWqGcmGp4fzNEUSElNCYCACAAAAAZLAAWLT4JLUMPNEgONksAM0gcVXIAY48AZIwEaJAOaZIabZQXbpcZbpQHIjMbZ
IIjWHVBDwaFPSxpKR8zJCNcSEVENTdTUVFgVFFZVFRtaGNKQj4rLTEdIy4hAAAcAAAAFicACBooESg6Fig3AC1EATpSFld2AGOPAGKJBl6DElh8FlRxKFJqJEpdFRodKTY+KzhBOi4tST05U0xMWldYYV5dcWtnfXVxfXl0gnp0cmtlS0I9T0dAHx8jFwAAFwAAABIjAAIFDwkVHQwZJAAZKhQdJB07SgMuQiUtOCspLCklJicnKUE6OEE9PUpJSk5JSlVRUVtZW1tbWmFeW3JoY3dqZmpiXmRcWmlfXHhtaFJMSDQuKzkwKhwWEwgAAAAAAAALGwAUFRgYGiAZHyUdICMoIR4uKissKCk5MzRLRUVSTk5VUVJSUVJQT1JST09XVFRXVFReV1VxZV98b2Z2amVlXVxfWFVoXVl0aWR3amRGQD0zLCgsJB42LyokJCQfHhwLDhQ71JGV8KpZjQAAAABJRU5ErkJggg==" nextheight="403" nextwidth="768" class="image-node embed"><p>In popularizing Linux containers, Docker brought about a new era of systems design based on these lightweight platforms, rather than heavy virtual machines. However, now that Docker is slowly declining, it’s time to learn about the next generation of Linux container tools.</p><div class="relative header-and-anchor"><h2 id="h-docker">Docker</h2></div><p>When <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Docker_%28software%29">Docker</a> officially launched in 2013, it was not the first containerization solution for Linux. For example, Linux already had <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/LXC">LXC</a> back in 2008 (early versions of Docker ran on top of it), and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/FreeBSD_jail">FreeBSD jails</a> had been around since 1999. Nevertheless, Docker <em>was</em> the first developer-friendly and complete end-to-end solution that let us easily create, distribute, and run Linux containers.</p><p>Not only was it technically sound and convenient to use, but Docker was also a great example of a successful and well-run open source project. 
I experienced this personally during a couple of contributions, where two people did the initial review within 24h of my Pull Request and a third person merged it in less than two weeks from the submission date. Docker developers also contributed <em>back</em> to Linux plenty of containerization-related improvements, drove standardization efforts, and spun off many subcomponents (e.g., <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://containerd.io/">containerd</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Open_Container_Initiative">OCI</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/moby/buildkit">BuildKit</a>).</p><p>Today, container-based system architectures and development workflows are extremely popular, as seen with, for instance, the rise of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Kubernetes">Kubernetes</a>. While we are <em>still</em> waiting for the <em>‘year of the Linux desktop’</em> to happen, Docker certainly did make more Windows and Mac users run a virtual Linux machine on their laptops than ever before.</p><p>The company Docker Inc was, from the start, a venture-funded endeavor centered around an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Open-core_model">open core model</a> and launched many closed-source products that drove revenue over the years. What used to be the core Docker software was renamed <em>Moby</em> in 2017, and that is where the open-source contributions (e.g., <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/moby/moby/commit/b619220ce11770ffaea068b54d3975c74f7c24f9">mine from 2015</a>) can be found. 
The founder <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/solomonstre">Solomon Hykes</a> no longer works for Docker Inc, and in recent years public sentiment around Docker has suffered due to various controversies. Yet at the same time, many similar (and some perhaps <em>better</em>) solutions have entered the space.</p><div class="relative header-and-anchor"><h2 id="h-what-actually-is-a-docker-container">What actually <em>is</em> a Docker container?</h2></div><p>To build a container, a software developer first writes a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>, which defines what Linux distribution the container is based on along with what software and configuration files and data it has. Much of the <code>Dockerfile</code> contents are basically shell script.</p><p>The build is done with command <code>docker build</code>, which executes the contents of the <code>Dockerfile</code> line-by-line and creates a Linux-compatible root filesystem (files under <code>/</code>). This is done utilizing a clever overlay filesystem, where each line in the <code>Dockerfile</code> amounts to one new layer. Thus, rebuilds of the container do <em>not</em> need to rebuild the whole filesystem, but can just execute the <code>Dockerfile</code> lines that changed from the previous build.</p><p>On a typical Linux system, the filesystem layers after a <code>docker build</code> execution can be found at <code>/var/lib/docker/</code>. If the container was based on Debian, one could find, for example, the <code>apt-get</code> binary of the image at a path like <code>/var/lib/docker/overlay2/c1ead1[...]d04e06/diff/usr/bin/apt-get</code>.</p><p>Additionally, some metadata is created in the process, which designates among other things the <em>entrypoint</em> of the container — i.e. 
what binary on the root filesystem to run when starting the container.</p><div class="relative header-and-anchor"><h2 id="h-unpacking-a-container">Unpacking a container</h2></div><p>To inspect what the root filesystem of the Docker image <code>debian:sid</code> looks like, one could create a container and inspect the mounted merged filesystem:</p><pre><code>$ docker container create -i -t --name demo debian:sid
2734eb[...]d18852
$ cat /var/lib/docker/image/overlay2/layerdb/mounts/2734eb[...]d18852/mount-id
2854c7[...]9dfe25
$ find /var/lib/docker/overlay2/2854c7[...]9dfe25 | grep apt-get
/var/lib/docker/overlay2/2854c7[...]9dfe25/merged/usr/share/man/man8/apt-get.8.gz
/var/lib/docker/overlay2/2854c7[...]9dfe25/merged/usr/share/man/pt/man8/apt-get.8.gz
/var/lib/docker/overlay2/2854c7[...]9dfe25/merged/usr/bin/apt-get</code></pre><p>The command <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.docker.com/engine/reference/commandline/export/">docker export</a> makes it easy to get the root filesystem into, for example, a tar package.</p><pre><code>$ docker export demo &gt; debian-sid.tar
$ tar xvf debian-sid.tar
.dockerenv
bin
boot/
dev/
dev/console
dev/pts/
dev/shm/
etc/
etc/.pwd.lock
etc/alternatives/
etc/alternatives/README
etc/alternatives/awk
...
var/spool/mail
var/tmp/
$ find . | grep apt-get
./usr/share/man/man8/apt-get.8.gz
./usr/bin/apt-get</code></pre><p>In theory, <strong>anything could create this root filesystem</strong>, and likewise almost anything could run a binary inside it – even the classic <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Chroot">chroot</a>. If you edit the files and want to get them back into Docker to run as a container, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.docker.com/engine/reference/commandline/import/">docker import</a> makes it easy.</p><p>To export a full container image with both the root filesystem and the metadata, the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.docker.com/build/exporters/oci-docker/">docker buildx</a> command offers some output format options, such as the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/opencontainers/image-spec/blob/v1.0.2/image-layout.md">Open Container Initiative standard format</a> or the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/moby/moby/blob/v24.0.5/image/spec/v1.2.md">Docker native image format</a>. 
To import a full container image with metadata, refer to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.docker.com/engine/reference/commandline/load/">docker load</a> command.</p><div class="relative header-and-anchor"><h2 id="h-orchestrating-a-container-start-with-dockerd-containerd-and-runc">Orchestrating a container start with dockerd, containerd and runc</h2></div><p>In the above example, a container was created, but not <em>started</em>. To start a container, one can try running:</p><pre><code>$ docker run -it debian:sid bash
root@c9a8e6c222ae:/#</code></pre><p>From a user experience point of view, you are basically dropped into a Bash shell in a Debian Sid container. Under the hood, the <code>docker</code> command-line tool sends an HTTP request to the <code>dockerd</code> daemon running on the local system, which in turn asks <code>containerd</code> to run the container, which <em>in turn</em> starts <code>runc</code> directly or (due to backwards compatibility reasons) a <code>containerd-runc-shim</code>. 
Inside this one, you can find the actual running Bash binary:</p><pre><code>$ ps fax | grep -C container
 1122 /usr/bin/containerd
 1660 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
55409 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c9a8e[..]0847e -address /run/containerd/containerd.sock
55428  \_ bash</code></pre><p>Anyway, if you’re fine with slightly <em>less</em> automation and having more of a “hands-on” experience, read the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/runc/runc.8.en.html">man page for runc</a> and try running the container directly with it.</p><div class="relative header-and-anchor"><h2 id="h-alternatives-in-the-linux-containers-stack">Alternatives in the Linux containers stack</h2></div><p>The Linux Foundation has a nice architecture schema to illustrate the various components and alternatives in the stack that originally evolved from Docker:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/84737ade-03bd-493e-8001-8adaf2ad57ad_1200x727.png" alt="Linux containers architecture diagram from containerd.io" title="Linux containers architecture diagram from containerd.io" class="image-node embed"><p><code>runc</code> is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Open_Container_Initiative">OCI</a> reference implementation of the runtime specification. 
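To get a feel for what <code>runc</code> actually consumes, it helps to know that a container at this level is just a “bundle” directory: an unpacked root filesystem plus an OCI <code>config.json</code> next to it. The fragment below is a heavily trimmed, illustrative sketch using field names from the OCI runtime specification; it is not a complete or valid spec file:

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "bash" ],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs"
  }
}
```

Given such a bundle, `runc run` starts the `args` binary inside the `rootfs` directory, much like `docker run` does several layers higher in the stack.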
Popular alternatives to <code>runc</code> include: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/containers/crun">crun</a> (implemented in C to be faster and use less memory than <code>runc</code>, which is in Go) and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cri-o.io/">CRI-O</a> (smaller and faster, with just enough features to be perfect for Kubernetes).</p><p>There are also container runtimes such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://katacontainers.io/">Kata</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.zerovm.org/">ZeroVM</a>, based on the idea of running each container inside a minimal virtual machine, which achieves better isolation between the containers compared to running them directly on the same host. This design aims to hit a “sweet spot” between the optimized performance of lightweight containers and the security of traditional full virtual machines.</p><div class="relative header-and-anchor"><h2 id="h-podman">Podman</h2></div><p>Missing from the diagram above is the current major competitor, the Red Hat-sponsored <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/containers">Podman, which offers a complete replacement</a> for the whole Docker stack.</p><p>The command-line tool <code>podman</code> is designed to be a drop-in replacement for <code>docker</code>, so one can run the earlier command examples by just changing the first word: <code>podman build ..</code>, <code>podman container create ...</code>, <code>podman export ..</code> and so forth. 
Even <code>podman volume prune --force &amp;&amp; podman system prune --force</code> does exactly the same as the Docker equivalent — which is nice, as I tend to run that frequently to clean away containers and free disk space when I’m not actively using them.</p><p>To start a container one can run (for example):</p><p><code>$ podman run -it debian:sid bash root@312cbccb5938:/#</code></p><p>When a container started like this is running, you would see in the process list something along the lines of:</p><p><code>87524 \_ podman 99902 \_ /usr/libexec/podman/conmon --api-version 1 -c 312cbc[...]93a0e1 -u 312cbc[...]93a0e1 -r /usr/bin/crun -b /home/otto/.local/share/containers/storage/overlay-containers/312cbc[...]93a0e1/userdata -p /run/user/1001/containers/overlay-containers/312cbc[...]93a0e1/userdata/pidfile -n naughty_dewdney --exit-dir /run/user/1001/libpod/tmp/exits --full-attach -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1001/containers/overlay-containers/312cbc[...]93a0e1/userdata/oci-log -t --conmon-pidfile /run/user/1001/containers/overlay-containers/312cbc[...]93a0e1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/otto/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1001/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1001/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 312cbc[...]93a0e1 99905 \_ bash</code></p><p>Unlike Docker, there is no <code>containerd</code> or <code>runc</code> at play, but instead <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/conmon/conmon.8.en.html">conmon</a> runs <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/crun/crun.1.en.html">crun</a>, which is the <em>actual</em> container runtime. 
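</p><p>To confirm which OCI runtime a given Podman installation invokes, you can ask <code>podman info</code> (a quick sanity check; the output below is what one would expect on a host using crun, and naturally varies by system):</p><p><code>$ podman info --format '{{.Host.OCIRuntime.Name}}' crun</code></p><p>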
Note also that the container runs with regular user permissions (no need for root) and that the default location for storing container images and other data is in <code>~/.local/share/containers/</code> in the user home directory.</p><div class="relative header-and-anchor"><h3 id="h-podman-desktop">Podman Desktop</h3></div><p>While I <em>personally</em> prefer to work on the command-line, I need to give a shoutout to Podman for also having a nifty desktop application for those who prefer to use graphical tools:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/0ba76ead-5885-4f11-99ab-5a8dc711e5f9_926x597.gif" alt="Podman Desktop demo" title="Podman Desktop demo" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-lxc-and-lxd">LXC and LXD</h2></div><p>The basic utility of Linux containers is to give system administrators a building block which behaves a bit <em>like</em> a virtual machine in terms of being an encapsulated unit — <strong>but without being as slow and resource-hungry as actual virtual machines!</strong> Although containers typically contain a full root filesystem, the Docker philosophy was that each container should run just <em>one</em> process — and run it <em>well</em> — and crucially, not have any process managers or init systems inside the container. Many system administrators, however, <em>do</em> in practice run Docker containers that use, for example, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Runit">runit</a> to ‘boot’ the container and manage server daemon processes inside them.</p><p>The Canonical-backed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ubuntu.com/lxd">LXD</a>, however, tailors itself <em>specifically</em> for this type of use case, building upon LXC. 
After installing LXD and running <code>lxd init</code> to configure it, you can run full containerized operating systems with:</p><p><code>$ lxc launch images:debian/sid demo Creating demo Starting demo $ lxc exec demo -- bash root@demo:~#</code></p><p>The host process list will show something along the lines of:</p><p><code>root 105632 lxcfs /var/snap/lxd/common/var/lib/lxcfs -p /var/snap/lxd/common/lxcfs.pid root 105466 /bin/sh /snap/lxd/24061/commands/daemon.start root 105645 \_ lxd --logfile /var/snap/lxd/common/lxd/logs/lxd.log --group lxd lxd 105975 \_ dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdbr0 --dhcp-rapid-commit --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.199.145.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.199.145.2,10.199.145.254,1h --listen-address=fd42:3147:bafe:37e5::1 --enable-ra --dhcp-range ::,constructor:lxdbr0,ra-stateless,ra-names -s lxd --interface-name _gateway.lxd,lxdbr0 -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd -g lxd ... 
root 107867 \_ /snap/lxd/current/bin/lxd forkexec demo /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/demo/lxc.conf 0 0 0 -- env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOME=/root USER=root LANG=C.UTF-8 TERM=xterm-256color -- cmd bash 1000000 107870 \_ bash root 106209 [lxc monitor] /var/snap/lxd/common/lxd/containers demo 1000000 106221 \_ /sbin/init 1000000 106372 \_ /lib/systemd/systemd-journald 1000000 106401 \_ /lib/systemd/systemd-udevd 1000997 106420 \_ /lib/systemd/systemd-resolved 1000100 106431 \_ /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only 1000000 106433 \_ /lib/systemd/systemd-logind 1000000 106436 \_ /sbin/agetty -o -p -- \u --noclear --keep-baud - 115200,38400,9600 linux 1000998 106446 \_ /lib/systemd/systemd-networkd</code></p><p>Notice how the daemon runs as root (and interacting with lxd/lxc requires root permissions). However, thanks to UID mapping, the root user inside the container is <em>not</em> a root user as found on the host system. 
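</p><p>You can inspect UID mappings yourself: every process exposes its user-namespace mapping in <code>/proc/&lt;pid&gt;/uid_map</code>. A minimal sketch follows; the PID 106221 is the container’s <code>/sbin/init</code> from the listing above, and both the PID and the ranges will differ on your system:</p>

```shell
# Read the UID mapping of the current process's user namespace.
# Format per line: (UID inside namespace) (UID on host) (length of range)
cat /proc/self/uid_map

# For the LXD container above, reading the map of its init process
# (as root on the host) would show root inside the container mapped
# to an unprivileged host UID, along the lines of:
#   sudo cat /proc/106221/uid_map
#   0  1000000  1000000000
```

<p>A single line such as <code>0 0 4294967295</code> means no remapping is in effect, which is exactly the situation UID-mapped containers avoid.</p><p>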
This is one of the key design differences — and <em>why LXD is considered more secure than Docker</em>.</p><p>The downloaded root filesystems are stored at <code>/var/snap/lxd/common/lxd/images/</code> while the filesystems of running containers can be found at <code>/var/snap/lxd/common/lxd/storage-pools/default/containers/</code> as long as the LXD storage is directory-based (as opposed to an LVM or OpenZFS pool).</p><p>The examples above all have <code>snap</code> in their paths because there is no native Ubuntu package for LXD; even running <code>apt install lxd</code> forces users to install the Snap.</p><p>As <code>lxd</code> controls the whole system, the command for managing individual containers is <code>lxc</code>:</p><p><code>$ lxc list +------+---------+---------------------+-----------------------------------------------+-----------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +------+---------+---------------------+-----------------------------------------------+-----------+-----------+ | demo | RUNNING | 10.199.145.6 (eth0) | fd42:3147:bafe:37e5:216:3eff:fe01:8da8 (eth0) | CONTAINER | 0 | +------+---------+---------------------+-----------------------------------------------+-----------+-----------+ $ lxc delete demo --force</code></p><p>Conveniently, the process of creating LXC-compatible container images is fairly simple. 
One can use any container builder to create the root filesystem (the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ubuntu.com/tutorials/create-custom-lxd-images#3-creating-basic-system-installation">LXC docs recommend using debootstrap</a> directly), and the basic metadata YAML file is so brief that it can be written manually. These are then imported to LXC with <code>lxc image import metadata.tar.gz rootfs.tar.gz --alias demo</code>.</p><p>The whole LXD stack ships with integrated tooling — even offering metal-as-a-service capabilities (MAAS) — so it goes <em>way beyond</em> what the Docker stack has.</p><div class="relative header-and-anchor"><h2 id="h-so-where-are-we-headed">So, where are we headed?</h2></div><p>To fully grasp how containers actually work, you should read the Linux kernel documentation on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://man7.org/linux/man-pages/man7/namespaces.7.html">namespaces</a> and permission control via <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://man7.org/linux/man-pages/man7/capabilities.7.html">capabilities</a>. Keeping an eye on the progress of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://opencontainers.org/">Open Container Initiative</a> will keep you right on top of the latest developments, and considering OCI compatibility in your infrastructure will enable you to migrate between Docker, Podman, and LXD easily.</p><p>Choosing the right container technology to use depends on <em>where</em> you intend to ship your containers. For developers targeting Kubernetes-compatible production environments, Podman probably makes the most sense at the moment. 
Or, if your infrastructure consists of a lot of virtualized Ubuntu hosts and you want to have more flexibility, LXD is probably a good choice.</p><p>Podman is certainly gaining a lot of popularity <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://trends.google.com/trends/explore/TIMESERIES/1693782600?hl=en-US&amp;tz=420&amp;date=today+5-y&amp;hl=en-CA&amp;q=%2Fg%2F11j4j_npvw,lxc&amp;sni=3">according to Google Trends</a>. Docker will, however, continue to have the largest mindshare among average developers for years to come. For now, my recommendation for all systems administrators and software architects is to understand how the tools you <em>rely on</em> actually work — <em>by getting your hands dirty with them</em>. Choose the solutions you understand best, and keep an eye on the horizon for what’s coming next!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/6d42c229da8ab4318030bb7513c796e9.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[The optimal home office]]></title>
            <link>https://paragraph.com/@otto/the-optimal-home-office</link>
            <guid>7MQEU1bEj5oNzvfsVbdV</guid>
            <pubDate>Mon, 17 Apr 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The perfect home office setup achieves two things: it helps you stay focused for extended periods, allowing you to be in “the flow” and it prioritize...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/b7a7c0ebdba99de925eb952cf44c9511.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGfElEQVR4nAFxBo75ABxEZiNHYzVRZjdRZDtRZjNIXzVOaTtQaS06TiUnMyUlMh4eKCMiKScnLBgTGColJy0tMyUkLCMjLiYmMiUrPDVIaDZJbDZRd0RfglRqjktliEBhgz1cezZScmFnep+MgQAaTHEfUnE3XnhFYnpFWm97mqqA1PZtwOppvOVxvONzueBvst1yr9t4sdp6rNGCtNZ/tdh9tdZ5tNZ5tNdtsNdstN55veKT0uxXcItRX4FQZoRDYIA/X34zVXldaH+rlIcAEFV+H1p9NWODQWaFQlx6hKW2pfP7muj6mOj6keT5iN74fdf3edP2dtD2fdP2ftP2gtb3htv4i934jd74leP5m+f6oOj6r/T7VXaVRVyDTGaNQmGFN2CDJVmAXGuFwKWMAAleihxgiTRliz9qjjpdhJKruM39/cfy/MHz/Lft+7Lq+6jl+qLh+azl+rTn+r7u+830/M71/MTx+8ny/Mfx/Mrz/Mnx/Mf0/FhzlDtgj0VsmDdkkCZjkhRbilxti9KwjgAAZJkMZ5orbpw5cZ43ZJE9iKtWxvRhwfF5zvOd3fW46vq+7/vM+P2n5fqY2PSO0vJ3xOyO0O6Ay+9twOqb1vCl4Paq4vbC8ftRdJovYps7bqMubZ8ab6MAYptXcZTdu4wAAHGqAHOsI3atLnWrMm2fAGeeAJLXAITJAILUAIzQAIrKAHa0L3emAGehAH+8AI/bAInWAIvSAIXKAILOAInQAIzSAJHWJ6bdJ2iZKnGtLHeyH3avF3yzAG6pU3af6cOMAAB/vwCBvg+Cvh2Cvht2rzKDuU2580Gx8TWt8UWy8VW28Ems6zuj5zmj5kGu8UGv8U2y8lCv4TCJtjmIs2+wyX22z2Wu0nKw0jlxnA59wiSFxAiEwwCJxQB6uVB+qfHJjgAAis4AjdEAj80Uj9EDgsIujsdJxPQ/uvNEvfNRwfRgxfRnxvVhxfRdwPSfzeyOyfNbvfNav/QmqO0AicdUsOJpsNhYm74PYYsAXpQVj9Uaj9EPk9QAltUAic1RhbT2zo8AAJXbAJfaEpfaH5vbFo7NLXakO57VPJvSMJzVM5vTOZvRQZ7WNZfSabDg2sOjucXITazlVqzbT6reUrDmWbTpbL3nV7DfSJvKInWtK5jcPJ7aMZzYL6PeAJnbUI+++dKTAACe5wCm7xqr8Cmr8Ciq8AlllwBXhABahwBWgQ1UeQtTeBdGYxtKahg6VQoDBxcZIyBPdRdSfBdRfBNSfxFTfhhSextSehxMdy6Fwk1efRoICBwWGiMsOQCf3lSbzvnVmAAiqOxAk8RIhKtPh6pWiq1jl7tpm7d3obJImMYAnuQAltgAWI0Ajc8ASHMAAAAAAAkAccIAid8AjuERmOs4o+9MqvBVrvFds/Jkv/RKSF8CAAAiGRMZGiAwqOBVo9v21qUAMJjdLFFxGCExFyIxDw8ZEAQKFhcdPzcuX5O0MqPlHXGhACRCBIzFFTdNAAAAAAMSA3W0E5TkOJvmUaXwX63xZrLyc7jzgL70fcf1RkNcAAAAAAAAHB0jV73xV7Dr8dq+ACiAwgBfnQBJfgBQigBJgQAjTQANNQBCcR5jki1ojyNHXyU7TSZBWBsdIwUBDQAAACc/Wy9ahzFaijNfjzBikzBom0pihTczQERYdS1CXRMYJw8RIhkqQlGg1HLA9O7n4ABbsvJcrvFqtvNzufN9wvVxs+olRmh5vfRutvFcsOUKdbIXfrsSgbIALEUAB
B8AABsAXagAle1Nn7K3xfrvzi8r2erbiKExol9Dqn9viR0PaKy/Z/xPV2xPCY5Pnv9vsAar30WbLyZrbzcLz0dcD0ecT1K1p/d8P1d8v2csDzlKOTwL52xb5gwbJQuaZHqo4iqKdmnrafoMHBv9XBwODVxePXqdDWSnaNs/n8rev7n+X5lNv4jdX3l+D55v7+/v7/AB2g6QCK3gCP3waU4Bqb5SWb5ABFdCeb3A+X3aepdOO5BuK5E+TCPezjg+rokObXYOXCL9qwCN++KNeyGs6yJsanAK6dUhlBbISlv21wbTJGVnOpzUORwGrA6vH//////wAAgdAAdswAfM4AgdEAhNQAgc4AO20AgcQZgcJyZ1aIYhmUcBuphSuykDyykkmniT+eeimLZRt8WS5yVjhfSzpPPSNBSFUAKllVZnVAKxkBAABcW14cPFkAicSIzPb2+v3zWT2/xdwEggAAAABJRU5ErkJggg==" nextheight="629" nextwidth="1200" class="image-node embed"><p>The perfect home office setup achieves two things: it helps you <strong>stay focused</strong> for extended periods, allowing you to be in “the flow” and it prioritizes ergonomic design to ensure that long hours at the computer <strong>don’t compromise your health</strong>.</p><div class="relative header-and-anchor"><h2 id="h-optimal-ergonomics">Optimal ergonomics</h2></div><p>I’ve spent a lot of time and thought creating my personal setup so that it optimizes home office efficiency and inspiration. Here’s a glimpse into my workspace:</p><ul><li><p>At the center of my workspace is an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/gp/product/B07SPHL3HF?_encoding=UTF8&amp;th=1&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=53c7573ad92d146ecde359c18f5bbeb4&amp;language=en_CA&amp;ref_=as_li_ss_tl">extra wide, curved monitor</a>. It sits on an elevated shelf, at eye level. This positioning keeps my neck straight, reducing tension in my shoulders.</p></li><li><p>The heart of my setup is an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/gp/product/B08QJLCT7D?_encoding=UTF8&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=e0686292e6c638e38be4a5147ab52671&amp;language=en_CA&amp;ref_=as_li_ss_tl">adjustable electric desk</a>. I can set it to the perfect height, so my arms maintain a comfortable 90-degree angle. 
This eliminates slouching, saving my lower back from unnecessary stress.</p></li><li><p>Paired with my desk is a comfortable chair, adjustable to optimal positions. The design ensures my feet and waist maintain the right angles. Plus, its wheels offer the flexibility to slide it away when I prefer standing.</p></li><li><p>At my fingertips is a wireless mouse and an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/Microsoft-A11-00337-Natural-Keyboard-Elite/dp/B0000642RX/?&amp;_encoding=UTF8&amp;tag=otto0ac7-20&amp;linkCode=ur2&amp;linkId=dd6e31ead13be91a09b593c1de118304&amp;camp=15121&amp;creative=330641">ergonomic split keyboard</a>. This design allows my wrists to rest, and my elbows to angle outward at a comfy 45 degrees, reducing the risk of wrist pain or carpal tunnel syndrome.</p></li><li><p>I keep my workspace feeling fresh and at a cool, consistent temperature with an air conditioner. Additionally, a higher ceiling ensures ample airflow, while my laptop sits on a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/Portable-Laptop-Cooling-Powered-Support/dp/B07WC31MHQ?_encoding=UTF8&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=7f4850ef17f119be4d21835cff821566&amp;language=en_CA&amp;ref_=as_li_ss_tl">cooling fan</a>.</p></li><li><p>Positioned above the keyboard and behind the monitor is a strategically placed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/gp/product/B08HMLKS2N?_encoding=UTF8&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=a5aee1defb3d0303c6734d6093c25ea8&amp;language=en_CA&amp;ref_=as_li_ss_tl">light source</a>. This minimizes eye strain and can also be used to cast a flattering light on me during video calls. 
Speaking of which…</p></li><li><p>My <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/MEE-audio-1080p-Webcam-Light/dp/B08PYWL6T6?_encoding=UTF8&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=7c3562906a7ab440c1d7a25574f6c129&amp;language=en_CA&amp;ref_=as_li_ss_tl">external high-resolution camera equipped with a microphone</a> and supplemental lighting mean I’m always ready to take video calls. It’s easy to adjust, so I can look my best on screen.</p></li></ul><p>As you can see in the pictures, my office doesn’t have windows. A distant view can provide a restful break for the eyes, so having windows would be good, but natural light isn’t always optimal. A room illuminated solely by engineered light ensures I always have optimal conditions even into the wee hours, as I tend to be a night owl. To avoid disrupting your <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Circadian_rhythm">circadian rhythm</a>, remember to use display settings that automatically decrease emitted blue light in the evenings.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/4bc639c0-392b-48cd-8a29-9f702c90ed69_1200x1600.jpeg" alt="Home office (click for larger image)" title="Home office (click for larger image)" class="image-node embed"><img src="https://substack-post-media.s3.amazonaws.com/public/images/4bdb0905-b5f4-451b-a162-368dcdff1cc4_915x791.jpeg" alt="Elevated laptop stand with cooling fan" title="Elevated laptop stand with cooling fan" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-cable-management-and-docking">Cable management and docking</h2></div><p>Ensuring cables are arranged efficiently and accessibly guarantees not just an aesthetically pleasing office but a functional one, too. 
Hidden from view (not pictured) is a metallic shelf beneath the desk with an extension cord boasting multiple electrical sockets and USB ports. This solitary extension powers the entire desk setup, with just a singular cord discreetly extending to the wall. Its design is flexible, moving in harmony as I adjust the desk’s height. A one-button power-off feature on the extension cord is particularly handy when I’m about to leave for a trip and want to quickly shut everything off.</p><p>My monitor isn’t just for visuals—it’s also a hub. Equipped with USB ports, it connects my keyboard, mouse, and monitor light. Docking my laptop is a breeze. A <strong>single USB-C cable</strong> handles both charging and connectivity to all peripheral devices (monitor, keyboard and mouse).</p><p>Integrated within my desk are two USB ports. One port for a USB-A to multi-device converter which charges essentials like my phone and headset. Despite having numerous USB-powered gadgets like my laptop cooler, watch winder, and LED light strip on the table, I always have ports at the ready for anything else that needs a charge.</p><div class="relative header-and-anchor"><h2 id="h-staying-in-the-flow">Staying in “the flow”</h2></div><p>Harnessing the elusive state of ‘flow’ is the holy grail of productivity for those engaged in creative endeavors. It’s that state of immersive focus where hours can feel like mere minutes and often where our most significant accomplishments emerge. Central to this deep concentration is the environment in which I work.</p><p>The foundation of my productivity temple is its isolation — a dedicated room free from distractions. This tranquility, essential for anyone looking to dive deep into their work, forms the cornerstone of my home office. I’ve chosen background lighting that sets the right mood to help you get ideas. Every item on my desk has been chosen intentionally and there is nothing extra. 
<strong>A clutter-free workspace translates to a clutter-free mind.</strong> With integrated USB outlets, a comprehensive cable management system, and a single power cord, I ensure my physical space remains pristine. A concealed drawer further houses miscellaneous items, keeping them out of sight but within reach.</p><p>The acoustics of my setup are the one area that still needs more work. The rug behind my computer and the carpet on the floor help cut down on echoes, but I want to add acoustic panels to further minimize the echo. The next time I can choose the room layout, I will try to avoid a wall behind my screen and narrow walls, as they create echoes too easily.</p><p>Over time, I tried integrating scheduled breaks into my routine, using apps like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://workrave.org/">Workrave</a> designed to prompt hourly pauses. However, these frequent interruptions shattered my flow state more than they helped. So, I’ve shifted my strategy. Instead of disrupting the rhythm, I ensure the ergonomics of my space support extended hours of comfortable, strain-free work.</p><div class="relative header-and-anchor"><h2 id="h-optimized-breaks">Optimized breaks</h2></div><p>Thanks to having my laptop docked with a single USB-C cable, it is convenient to unplug and transition from a stationary work setting to a mobile one, be it another spot in my home or stepping out for a change of scenery. </p><p>The audio experience shouldn’t be tethered either. Investing in a high-quality <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/Sony-WH-CH720N-Cancelling-Headphones-Microphone/dp/B0BS1QCFHX?_encoding=UTF8&amp;th=1&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=a2ebb298d67d6b584cb96d6b81a67c47&amp;language=en_CA&amp;ref_=as_li_ss_tl">Bluetooth headset equipped with noise-cancelling</a> capabilities is invaluable. 
While it’s an obvious tool during travel or cafe-work sessions, its utility extends to the home office too. With it, I can freely roam my living space during remote meetings, providing a much-needed break from prolonged sitting. </p><p>The importance of physical activity during break times can’t be overstated. Interspersing work hours with moments of physical exertion, whether through pull-ups or exercises with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.ca/gp/product/B08CMZMFCS?_encoding=UTF8&amp;linkCode=ll1&amp;tag=otto0ac7-20&amp;linkId=7f6ae13a779afae0819ccf902021b82e&amp;language=en_CA&amp;ref_=as_li_ss_tl">parallettes</a>, combats desk-bound fatigue and invigorates the mind.</p><div class="relative header-and-anchor"><h2 id="h-working-out-of-office">Working out-of-office</h2></div><p>One of the healthiest things we can do is keep active and go for a walk. On occasion, I’ve joined meetings with my headset on, walking outside, absorbing the content of the conversation and the rejuvenating effects of nature. Even though this may be limited to meetings that don’t require my visual attention, I find myself reenergized and coming to projects with fresh perspectives. </p><p>I can envision a not-so-distant future where technology may also play a role in this. Imagine augmented reality goggles paired with headsets that allow you to both view and participate in presentations as you walk a woodland path. Visualize crafting emails or designing projects using intuitive hand gestures while you’re perched on a park bench. This immersive, tech-driven experience could redefine our notion of a ‘workspace,’ transforming the outdoors into a potential office.</p><p>As technology continues to evolve and offers us new ways to work, the value of a dedicated workspace—especially a home office setup—remains undeniable. 
While the boundaries of ‘workspaces’ may expand and become more fluid in the future, a well-optimized home office is an investment that will stand the test of time. Here’s to many more productive days and better work habits!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/b7a7c0ebdba99de925eb952cf44c9511.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[How to make a good git commit]]></title>
            <link>https://paragraph.com/@otto/how-to-make-a-good-git-commit</link>
            <guid>iU8HnI1hMFPUFkHp6RjM</guid>
            <pubDate>Sun, 26 Mar 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[As a software developer, your core skill is how to improve an existing code base to make the software better iteratively, patch by patch.To be a good...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/6484c5abd19dfc0a5ea5c387f8b75dc7.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEK0lEQVR4nIWTW0zbVhiAvY5uTxuqJk3rXkYSwuWhmtbXvW7q0x6nXTp2ednT9rKXadpWraWoCoUC4VpGSDpuEc3FmMQ4TRNCIMTEmFuxY4xxguPYSRjOGKRpElJPsVvENKp9+h5+H5/j//zn/AaA55yrOgmBqipAp6up1Wkb6us0mhqNpkan02q1mtra2vo6fWNjQ32dXl9bq1ce6+v0Wq2muvq1qirglfMvq54/99KFC9Vvv/4qYLhx7beffmxruXnr+rXbN2+MmYeG+3ss/X2TE0PTDjNkN007hqcdwzBkmXGNuKF7M64/4OmKbsjihix2m8luMzltQ077sMtpVgdPtE3+DjR/8tH1jz9o+fTDls+ugL9862n+3tP83UzzDyNdV0aHa5zjDffu6hyj9UFI67W+EXS+E7Bf9Frf9N9/y3//IgrrZ+2XHOYG0NIIWhoWpy9t+C+v+95T3fC/G/G8D+AYgeMRHMdJkkinxCSfSKfEKEnaIQj0eh6GQ4EIGoigj5gtbGlxZQXf3Fjfpqkdht5h6EwmJctPy+VjVVkun1guF2VZpigSKBQK8inK5bIsy5lMJhxGk4LI8zzkcoHgFAwjNLOzSZIEGU2l0jyfFARRnXwm6iuSJIFcLldSKCuUSiVZlsVU6tHaWrlY3BfFAIwsuJEpqzUyN5+XpGKhQlGx/G+envJY+U4lwQsq2AsEvdzuZjJJM/HVpXUvzS5TTGSLiSSycfEolT5Ki0dirpj7/woomtpN7Ma5eJyLxXZjShDHlrHZYODPbDad2QsuL9gfQn408GDR5w37E3wilRIFQUgKz0idhSAIkiRhGAZc7Wj6ouvLqx1Nn3c0fd39TVPHVz9P/9q61GaFrU67E4KgyQmr4Zaho/2Osau7/XabwWAwm82wgksBhuHBwUGj0djX39fX39/b09vb29vT02M2mRAEAebxoM/nCy4E/QH/ArqARtAnhSf72f2xibE77e2dnZ2Dg3dDoRBBEBiG4Ti+qZDguJPN7mUyB/8hm806HA6Xy3X2HUiSND423traajAYTCYTy7KCINA0TZBkjI3R9NZuPE5RFMMwHMdJknTmHUAQVEmQz+dPd4LaRZm9DAzDRmOX0dhts9lIkoxzXJJPRqkowzAsG+O4OMuyPM+zLPvXwcFRLneo8HfFo8PDw1wu9yzBiyoAQXB0ZGRsbHxycnJuLohhGIqioVAIx1fUeHZ2FsOwUCgUnJ8Ph8ORSCQSCjGLizEU3VxZXV1bs9lsMAwDj/OPC4VC/jnqb8HzvMPhQBSmQJAgiChJEgRBK2zT9BZFRSmKZdktZUQ9LobZlsSUJIrHpVI+nwdBsFLBwMAACE75fD7vKRAEUdsDhmEQBNWGQRBkbm6uErrdCII8ODXf4/F4vV6fzwfPeCCXC8dxFEXVhf8Atpw9TJcVjSQAAAAASUVORK5CYII=" nextheight="617" nextwidth="1317" class="image-node embed"><p>As a software developer, your core skill is how to improve an existing code base to make the software better iteratively, <strong>patch by patch</strong>.</p><p>To be a good software developer you need to:</p><ul><li><p>understand the 
concept of a code patch,</p></li><li><p>know how to make code improvements in well-sized and properly documented patches, and</p></li><li><p>skillfully use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/">git version control software</a> to manage patches.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-what-is-a-patch">What is a patch?</h2></div><p>A <strong>patch defines the changes</strong> to be made to the code base. It is basically a list of code lines to be added, removed or modified. Every patch also has an <strong>author</strong>, a <strong>timestamp</strong> when it was written, a <strong>title</strong> that describes it and a longer text body that <strong>explains why</strong> this particular patch is good and applying it on the code base is beneficial.</p><p>Example:</p><p><code>Author: Otto Kekäläinen Date: June 22nd, 2022 08:08:08 Make output friendlier for users Add line break so text is readable and add a 2 second delay between messages so it does not scroll too fast. --- a/demo.c +++ b/demo.c @@ -8,7 +8,8 @@ int main() { for(;;) { - printf(&quot;Hello world!&quot;); + printf(&quot;Hello world!\n&quot;); + sleep(2); } return 0; }</code></p><div class="relative header-and-anchor"><h2 id="h-how-to-make-a-patch">How to make a patch</h2></div><p>You can make a patch by simply copying a file, changing something in it, and then comparing the copy to the original file using the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/diffutils/diff.1.en.html">command diff</a> and saving the output.</p><p><code>$ cp demo.c demo.c.orig $ nano demo.c $ diff -u demo.c.orig demo.c &gt; demo.patch $ cat demo.patch</code></p><p><code>--- demo.c.orig +++ demo.c @@ -8,7 +8,8 @@ int main() { for(;;) { - printf(&quot;Hello world!&quot;); + printf(&quot;Hello world!\n&quot;); + sleep(2); } return 0; }</code></p><p>The patch can be sent by email or uploaded somewhere. 
After that, anybody can download the patch, read it, and apply it to their copy of the code base using the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/patch/patch.1.en.html">command patch</a>.</p><pre><code>$ grep Hello demo.c
    printf(&quot;Hello world!&quot;);
$ curl -O https://…/demo.patch
$ patch -p0 &lt; demo.patch
patching file demo.c
$ grep Hello demo.c
    printf(&quot;Hello world!\n&quot;);
</code></pre><p>As this is not very fast nor convenient, software developers like to use git, a version control software that automates all of this. In git, we tend to talk about git <strong>commits</strong>, which basically just means a <strong>patch that has been applied on a code base</strong>.</p><div class="relative header-and-anchor"><h2 id="h-examples-of-a-good-git-commit-messages">Examples of good git commit messages</h2></div><p>A good git commit message typically has these characteristics (adapted from the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#_commit_guidelines">Pro Git book</a>):</p><pre><code>Capitalized, short summary of what the change is

More detailed explanatory text that focuses on the &apos;why&apos; to motivate
the change. Use present tense and imperative format (write &quot;Fix bug&quot;,
not &quot;Fixed bug&quot;). Wrap it to about 72 characters or so. The blank line
separating the summary from the body is critical.

Further paragraphs come after blank lines.

- Bullet points are okay, too

- Typically a hyphen or asterisk is used for the bullet, followed by a
  single space, with blank lines in between, but conventions vary here

- Use a hanging indent
</code></pre><p>Here are a couple of real-world examples in pure text form:</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/MariaDB/server/commit/ff1d8fa7b0fe473a6bacd23ac553e711b3a11032">MariaDB@ff1d8fa7</a>:</p><pre><code>Deb: Clean away Buster to Bookworm upgrade tests in Salsa-CI

Upgrades from Debian 10 &quot;Buster&quot; directly to Debian 12 &quot;Bookworm&quot;,
skipping Debian 11 &quot;Bullseye&quot;, fail with apt erroring on:

    libcrypt.so.1: cannot open shared object file

This is an intentional OpenSSL transition as described in
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=993755
</code></pre><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://github.com/MariaDB/server/commit/2c5294142382469a9ad48c44979b7fcb7b146417">MariaDB@2c529441</a>:</p><pre><code>Deb: Run wrap-and-sort -av

Sort and organize the Debian packaging files.

Also revert 4d03269 that was done in vain.
</code></pre><div class="relative header-and-anchor"><h2 id="h-five-requirements-for-a-good-git-commit">Five requirements for a good git commit</h2></div><p>In order of importance:</p><div class="relative header-and-anchor"><h3 id="h-1-commits-should-be-atomic">1. Commits should be <em>atomic</em></h3></div><p>The first and most important thing about <strong>a good patch or a commit</strong> is that it <strong>should be a </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/gitworkflows#_separate_changes"><strong>self-standing change</strong></a>. If a commit fixes a bug, it should not at the same time add a new feature or fix some other completely unrelated bug; otherwise it is not <em>atomic</em>. If you add a new feature, the same commit should ideally also add automatic tests for the feature to ensure it won’t regress, and update the documentation to mention the feature, as it is all related and should either go or not go into the code base along with the feature itself.</p><p>If your changes are not properly scoped and self-standing, you might end up in a situation later on where somebody decides to revert or reject the commit that introduced a new feature, but misses removing the tests or documentation about it, a mistake that could not have happened if everything related had been in the same commit.</p><p>There is no clear rule on what is the optimal scope for a commit; it is something you will learn by experience. 
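</p><p>To illustrate splitting unrelated edits into atomic commits, here is a small sketch using a throwaway repository - the file names and commit messages are made up for the example:</p><pre><code># Sketch: two unrelated edits sit in the working tree at once,
# but get committed separately so each can be reviewed or
# reverted on its own. Paths and messages are hypothetical.
rm -rf /tmp/atomic-demo
mkdir /tmp/atomic-demo
cd /tmp/atomic-demo
git init -q
git config user.name 'Demo Developer'
git config user.email 'demo@example.com'
printf 'code v1\n' > app.c
printf 'docs v1\n' > README
git add app.c README
git commit -q -m 'Initial import'

# Two unrelated changes land in the working tree...
printf 'code v2\n' > app.c
printf 'docs v2\n' > README

# ...but become two atomic commits:
git add app.c
git commit -q -m 'Fix crash on empty input'
git add README
git commit -q -m 'Document the empty-input behaviour'
git log --oneline
</code></pre><p>Now the bug fix can be reverted or cherry-picked without touching the documentation change.</p><p>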
Sometimes it makes sense to have several separate changes in one single commit simply because each one of them is so small. In other cases, one single logical change might span multiple commits, because it was perhaps clearer to move or rename files in one commit and then update their contents in another. This is something you will learn over time as you become a more experienced software developer.</p><div class="relative header-and-anchor"><h3 id="h-2-the-title-should-be-descriptive-yet-terse-and-not-too-long">2. The title should be descriptive, yet terse and not too long</h3></div><p>A title starts with a capital letter and has no trailing dot, just like the subject line in an email. The title should make sense when read in a list of commits. If the title is too long, it will be cut off. A limit of 72 characters is safe in all typical places where people will be reading it, such as in a terminal window or when browsing GitHub or GitLab, but striving for under 50 characters is even better.</p><div class="relative header-and-anchor"><h3 id="h-3-the-commit-message-should-explain-why-it-was-made">3. The commit message should explain <em>why</em> it was made</h3></div><p>The text should be verbose enough for anybody reviewing the commit to understand <strong>why</strong> it was made, and to be convinced that the change is good. Every commit must have a text body, even if it is very short. This forces the author to spend a few seconds thinking about the change before committing.</p><p>Note that the commit message is about the change itself, so it should answer the question ‘why’. If you want to explain how a certain line of code works, simply use an inline comment next to the code itself. That way, the <em>documentation is in the correct context</em>. <strong>The git commit description should have just a tiny bit of ‘what’ and ‘how’, and mostly focus on the ‘why’.</strong></p><p>The commit body should be wrapped at about 72 characters. 
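</p><p>One way to nudge yourself toward this structure is a commit message template that git pre-fills into your editor; the file path and wording below are just an example:</p><pre><code># Hypothetical template location; any path works.
printf '# Capitalized, imperative summary (aim for under 50 chars)\n\n' > ~/.git-commit-template
printf '# Explain here *why* the change is needed, wrapped at 72 chars.\n' >> ~/.git-commit-template

# commit.template makes git open every new commit with this text.
git config --global commit.template ~/.git-commit-template
</code></pre><p>From now on, every <code>git commit</code> starts from the template instead of an empty buffer.</p><p>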
Proper use of empty lines and lists that are indented with a dash or star makes the body more readable.</p><p>Remember to use the imperative format. Don’t write <em>Fixed bug</em> or <em>Added feature</em>. Instead write <em>Fix bug</em> or <em>Add feature</em>. The patch hasn’t added or fixed anything at the time you wrote it. Think of it as an order you give to the code base to follow. Also, keep your text in the present tense and imperative form. Don’t write <em>This commit makes X</em> but simply <em>Make X</em>. Don’t write <em>I changed Y</em> but simply <em>Change Y</em>.</p><div class="relative header-and-anchor"><h3 id="h-4-use-references-when-available">4. Use references when available</h3></div><p>If your code change is related to a previous commit, mention the commit ID. In most software, commit IDs will automatically become links. If the code change is related to something that was discussed or tracked elsewhere, please include the bug tracker ID or a URL to the discussion. However, the reference alone does not remove the need to write a git commit message. 
You cannot expect that somebody reading your commit has time or even access to open and read all references - use them only as pointers for more information.</p><p>Viewing one of the earlier git commit message examples in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/gitk">gitk</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitlab.com/ottok/mariadb/-/commit/2c5294142382469a9ad48c44979b7fcb7b146417">GitLab</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/MariaDB/server/commit/ff1d8fa7b0fe473a6bacd23ac553e711b3a11032">GitHub</a> illustrates how a referenced commit ID automatically becomes a link:</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/8c2b4a66-d288-458f-95db-b0ab661771e2_1440x424.png" alt="Same commit in gitk, GitLab and GitHub" title="Same commit in gitk, GitLab and GitHub" class="image-node embed"><div class="relative header-and-anchor"><h3 id="h-5-maintain-correct-authorship-and-copyright-credits">5. Maintain correct authorship and copyright credits</h3></div><p>The author name and timestamp are automatic if you configure git correctly, so this should be a non-issue. If you neglect to configure git with your real name and email, you will be muddying the waters for anybody who later wants to verify something about authorship. In the worst-case scenario, all your commits might be purged from the git repository due to unclear copyright.</p><p>Also keep in mind that if you commit code on behalf of somebody else, you must tell git that the author of a particular commit was somebody else and you only committed it. 
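</p><p>A hedged sketch of how this can look in practice, using a throwaway repository and made-up names:</p><pre><code># Sketch: commit a patch written by someone else while preserving
# authorship. Repository, names and emails are hypothetical.
rm -rf /tmp/author-demo
mkdir /tmp/author-demo
cd /tmp/author-demo
git init -q
git config user.name 'Committer Name'
git config user.email 'committer@example.com'
printf 'int main() { return 0; }\n' > demo.c
git add demo.c

# The GIT_AUTHOR_* variables (equivalent to the --author option)
# record the original writer; git keeps you as the committer.
GIT_AUTHOR_NAME='Original Author' GIT_AUTHOR_EMAIL='author@example.com' \
  git commit -q -m 'Add demo program'
git log -1 --format='author: %an, committer: %cn'
# prints: author: Original Author, committer: Committer Name
</code></pre><p>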
Read up on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---authorltauthorgt">git commit --author</a> for details.</p><div class="relative header-and-anchor"><h2 id="h-the-right-tools-makes-git-commits-easy">The right tools make git commits easy</h2></div><p>Using a good tool to craft your git commits goes a long way in making the commit flawless.</p><p>My personal choice is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/prati0100/git-gui/">git-citool</a>, which is distributed together with git itself, so anybody can use it on any operating system. It does not use the native graphics of each operating system but a cross-platform graphics library, which may look a bit ugly. It is, however, very easy and convenient to use, so I love it.</p><p>To make a new commit, simply run <code>git citool</code>. It starts off empty; you then select which files you want to stage, write the git commit message in the text box, and press Commit. Super easy, and it is very clear what changes you are committing.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/06e47569-1817-4d91-9279-eee07f92f5ff_1317x617.png" alt="git citool example" title="git citool example" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-dont-settle-for-bad-commits-amend-them">Don’t settle for bad commits - amend them</h2></div><p>If you are not happy with your commit and want to edit it, or to use git terminology, <em>amend</em> it, this is possible only for the topmost git commit that does not have any child commits yet. Run <code>git citool --amend</code>.</p><p>Here you can see a git commit that is really bad, so it really needs to be fixed. 
However, with git-citool fixing it is easy and fast.</p><div class="relative header-and-anchor"><h2 id="h-wip-commits-how-to-avoid-postponing-writing-the-perfect-git-commit-message">WIP commits: how to avoid postponing writing the perfect git commit message</h2></div><p>Remember that you don’t have to make a perfect git commit right off the bat. Write the final message only once you know what you actually want to say. While still working on the code and saving intermediate versions of it, I recommend using WIP commits where the title is simply <em>WIP</em>, or if you already have some commit text draft, prefix the title with <em>WIP:</em>.</p><div class="relative header-and-anchor"><h3 id="h-use-git-rebase-i-frequently">Use <code>git rebase -i</code> frequently</h3></div><p>When you are done with WIP commits, you can run <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/git-rebase">git rebase -i</a> to squash them together and write the final git commit message.</p><p>For a visual explanation, see the presentation <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=1NoNTqank_U&amp;t=76s">A Branch in Time (a story about revision histories)</a> by Tekin Süleyman, which shows how to use interactive rebase in git and why rebasing and amending commits makes code quality better in the long run.</p><div class="relative header-and-anchor"><h2 id="h-a-polished-git-commit-is-always-worth-the-effort">A polished git commit is always worth the effort</h2></div><p>Someone who is lazy might say that while they agree with the principles, they don’t have time to follow them. 
To that I respond that <strong>doing things correctly from the onset actually saves time down the road</strong>.</p><ul><li><p><strong>If your git commits are good, the job of the reviewer will be much easier.</strong> They won’t waste time just trying to understand your change, but will get it directly and can focus their energy on actually reviewing and spotting flaws in your code. If you avoid shipping a bug, you save a lot of work by not having to debug, write a fix and ship a new release.</p></li><li><p><strong>A great git commit is also useful even if it later turns out the commit had a bug</strong>, because whoever fixes that bug will have a much easier time reading in the commit what the change was supposed to do, understanding where it fell short, and then making the same change in the correct way. This leads to bugs being fixed much more quickly and with less effort - and most often the person doing the fix is a future you, who no longer remembers what the present you was thinking and would otherwise have to stare at the commit until it makes sense.</p></li><li><p><strong>You don’t have to rewrite anything when it comes time to submit the commit for review.</strong> Every single code review system I have ever used will automatically use the commit title and message as the review title and message if the review is a single-commit review.</p></li></ul><div class="relative header-and-anchor"><h2 id="h-now-go-and-build-great-software-patch-by-patch">Now go and build great software – patch by patch!</h2></div><p>Now you know how to make a good git commit message. If you are proud of your work and like doing things well, you will follow these guidelines. 
To further learn how to polish your git commit messages, see also the post on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/git-commit-message-examples/">git commit messages by example</a>.</p><p>The Linux kernel developer’s guide also has an excellent description of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.kernel.org/process/submitting-patches.html#separate-your-changes">how to separate changes into self-standing logical changes</a>, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.kernel.org/process/submitting-patches.html#describe-your-changes">how to describe the changes</a>.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/6484c5abd19dfc0a5ea5c387f8b75dc7.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Quick builds and rebuilds of MariaDB using Docker]]></title>
            <link>https://paragraph.com/@otto/quick-builds-and-rebuilds-of-mariadb-using-docker</link>
            <guid>oYHLN82gtAt2Q9Zyt1tw</guid>
            <pubDate>Sun, 12 Mar 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The MariaDB server has over 2 million lines of code. Downloading, compiling (and re-compiling) and running the test suite can potentially consume a l...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/13ba5746c4006c9cb9b9b971a635dc3d.gif" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAPoAAAD6AG1e1JrAAADEUlEQVR4nKWTzU/TYBzH+wcYEw++BTSSIL7Fi15NTIxGSTQhRF4CKgNk0AmsY2xtcW23tqx9Rtd269MK67ZuDN1iJOCi+BYMIRqjQiKKbxc9eCEhBC9L9GbGjPHgwbDke3iew/P95Pn8ngf58vXD6uq3QmGtUFhbX18pFNZ+/Py+vr7y5euHN29eLC29fLf86v3bheUnL+dvzWRDybmJ+7Opu89vP35iTT+MTy5Ozy89ePFpdtGkVJ2QdGJklJR1YsSk1KxgLuTnEZoDmMdnprOxdHaAZAZJf1snautCr3ShnY7+xssddY0ttivobcqaVe6+ij+9icdajtR5zjouH60navs6jjf5zjvdp3vO7jxRV3Wm9fCF5kMXanefvLi/9sSWY6ZDRWheQTEyomcyufzUzNzUzNx4bmZy6tGzZ6+npx7EzJQ+lkA7sCpkxwGk8jCyrwaprEZ2VSHbq5Gd1ciuP6lBKmuQij85iOzZi2xzn0N/A7zXhxXDMhJZxbBkI03xEQ7omMfnwHAX4b/U1FU6tlFU8T/ZAGx1nbIXAX2DNICJEd2SjDQLoCiqbpLr99AuLzuAc3hAbGm8+p+9/wY4vX5BTgDFDENriItQjBwIRgkmhJMCzUYFGG9ttG8eQLISFQhnUvlU5o5hJEmG9/qHI4mkNGoKmi6qmtfPtbWimwewABI+QY5Ybho4vQGCBRQXkvRYQFQCosKKUQCtqzbX5gGCPEZQID2ev/dwLn/v8fLnj/GJnGZakh4T4Q2aV6PJnAMly5mByrCKGBzlRBmaqVg6SwdlW7ero2eguxe393p6nHhDfVtZQ6YCcsqaHp+YjJkpM5mhglJQhTg9jGIkipF9g3RZr4gVo4RP0LQMCyAPYAiaRfuC/DeguaGzLACKkRQXZniFBVAxLBbAoDpW2vIAijBmb8fKUtTvoVkxWnTFhUuLUnXxckyIZKX2S9fKAnT34kN+SZDHFMOStKSkJRXD8l4vKmrvdvZgREO9rSxFQwyQoQXNCVaMaqPjwTCUjXgImkMM4IHmpflyPtov5ejeBN+ReZQAAAAASUVORK5CYII=" nextheight="611" nextwidth="1200" class="image-node embed"><p>The MariaDB server has over 2 million lines of code. Downloading, compiling (and re-compiling) and running the test suite can potentially consume a lot of time away from actually making the code changes and being productive. 
Knowing a few simple shortcuts can help avoid wasting time.</p><p>While the official build instructions on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.org/get-involved/getting-started-for-developers/get-code-build-test/">mariadb.org</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.com/kb/en/generic-build-instructions/">mariadb.com/kb</a> are useful to read, there are ways to make the build (and rebuild) significantly faster and more efficient.</p><blockquote><div class="relative header-and-anchor"><h2 id="h-tldr-for-debianubuntu-users">TL;DR for Debian/Ubuntu users</h2></div><p>Get the latest MariaDB 11.0 source code, install build dependencies, configure, build and run the test suite to validate that the binaries work:</p><pre><code>mkdir quick-rebuilds
cd quick-rebuilds
git clone --branch 11.0 --shallow-since=3m \
  --recurse-submodules --shallow-submodules \
  https://github.com/MariaDB/server.git mariadb-server
mkdir -p ccache build data
docker run --interactive --tty --rm -v ${PWD}:/quick-rebuilds \
  -w /quick-rebuilds debian:sid bash
echo &apos;deb-src http://deb.debian.org/debian sid main&apos; \
  &gt; /etc/apt/sources.list.d/deb-src-sid.list
apt update
apt install -y --no-install-recommends \
  devscripts equivs ccache eatmydata ninja-build clang entr moreutils
mk-build-deps -r -i mariadb-server/debian/control \
  -t &apos;apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends&apos;
export CCACHE_DIR=$PWD/ccache
export CXX=${CXX:-clang++}
export CC=${CC:-clang}
export CXX_FOR_BUILD=${CXX_FOR_BUILD:-clang++}
export CC_FOR_BUILD=${CC_FOR_BUILD:-clang}
export CFLAGS=&apos;-Wno-unused-command-line-argument&apos;
export CXXFLAGS=&apos;-Wno-unused-command-line-argument&apos;
cmake -S mariadb-server/ -B build/ -G Ninja --fresh \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DPLUGIN_COLUMNSTORE=NO -DPLUGIN_ROCKSDB=NO -DPLUGIN_S3=NO \
  -DPLUGIN_MROONGA=NO -DPLUGIN_CONNECT=NO -DPLUGIN_TOKUDB=NO \
  -DPLUGIN_PERFSCHEMA=NO -DWITH_WSREP=OFF
eatmydata cmake --build build/
./build/mysql-test/mysql-test-run.pl --force --parallel=auto
</code></pre><p>To rebuild after a code change, simply run:</p><pre><code>eatmydata cmake --build build/
</code></pre><p>For full details, read the whole article.</p></blockquote><div class="relative header-and-anchor"><h2 id="h-stay-organized-keep-directories-clean">Stay organized, keep directories clean</h2></div><p>The first step is to create the 
working directory and some directories inside it:</p><pre><code>mkdir quick-rebuilds
cd quick-rebuilds
mkdir -p ccache build data
</code></pre><p>The directory <code>ccache</code> will be used by the tool with the same name to store the build cache permanently. Build artifacts will be output in the directory <code>build</code> to avoid polluting the source code directory, so that git in the source tree will not accidentally commit any machine-generated files. The <code>data</code> directory is useful for temporary test installs.</p><p>The next step is to get the source code into this working directory.</p><div class="relative header-and-anchor"><h2 id="h-dont-download-the-whole-project-use-shallow-git-clone">Don’t download the whole project – use shallow git clone</h2></div><p>The oldest git commit in the project is from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/MariaDB/server/commit/7eec25e393727b16bb916b50d82b0aa3084e065c">July 2000</a>. Since then, MariaDB has had nearly 200 000 commits. To build the latest version and perhaps submit a Pull Request with your improvement, you don’t necessarily need to have all those 200 000 commits available in your git clone. You can use a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://git-scm.com/docs/shallow">shallow git clone</a> to, for example, fetch only the history of the past 3 months:</p><pre><code>$ git clone --branch 11.0 --shallow-since=3m \
    --recurse-submodules --shallow-submodules \
    https://github.com/MariaDB/server.git mariadb-server
Cloning into &apos;mariadb-server&apos;...
remote: Enumerating objects: 41075, done.
remote: Counting objects: 100% (41075/41075), done.
remote: Compressing objects: 100% (29333/29333), done.
remote: Total 41075 (delta 19706), reused 20092 (delta 10708), pack-reused 0
Receiving objects: 100% (41075/41075), 75.85 MiB | 8.48 MiB/s, done.
Resolving deltas: 100% (19706/19706), done.
Checking out files: 100% (24070/24070), done.
Submodule &apos;extra/wolfssl/wolfssl&apos; (https://github.com/wolfSSL/wolfssl.git) registered for path &apos;extra/wolfssl/wolfssl&apos;
Submodule &apos;libmariadb&apos; (https://github.com/MariaDB/mariadb-connector-c.git) registered for path &apos;libmariadb&apos;
Submodule &apos;storage/columnstore/columnstore&apos; (https://github.com/mariadb-corporation/mariadb-columnstore-engine.git) registered for path &apos;storage/columnstore/columnstore&apos;
Submodule &apos;storage/maria/libmarias3&apos; (https://github.com/mariadb-corporation/libmarias3.git) registered for path &apos;storage/maria/libmarias3&apos;
Submodule &apos;storage/rocksdb/rocksdb&apos; (https://github.com/facebook/rocksdb.git) registered for path &apos;storage/rocksdb/rocksdb&apos;
Submodule &apos;wsrep-lib&apos; (https://github.com/codership/wsrep-lib.git) registered for path &apos;wsrep-lib&apos;
Cloning into &apos;/srv/sources/mariadb/quick-rebuilds/mariadb-server/extra/wolfssl/wolfssl&apos;...
remote: Enumerating objects: 2851, done.
remote: Counting objects: 100% (2851/2851), done.
remote: Compressing objects: 100% (2124/2124), done.
remote: Total 2851 (delta 800), reused 1576 (delta 589), pack-reused 0
Receiving objects: 100% (2851/2851), 20.91 MiB | 10.43 MiB/s, done.
Resolving deltas: 100% (800/800), done.
...
Unpacking objects: 100% (3/3), done.
From https://github.com/codership/wsrep-API
 * branch  694d6ca47f5eec7873be99b7d6babccf633d1231 -&gt; FETCH_HEAD
Submodule path &apos;wsrep-lib/wsrep-API/v26&apos;: checked out &apos;694d6ca47f5eec7873be99b7d6babccf633d1231&apos;

$ git -C mariadb-server/ show --oneline --summary
f2dc4d4c (HEAD -&gt; 11.0, origin/HEAD, origin/11.0) MDEV-30673 InnoDB recovery hangs when buf_LRU_get_free_block

$ git -C mariadb-server submodule
 4fbd4fd36a21efd9d1a7e17aba390e91c78693b1 extra/wolfssl/wolfssl (4fbd4fd)
 12bd1d5511fc2ff766ff6256c71b79a95739533f libmariadb (12bd1d5)
 8b032853b7a200d9af4d468ac58bb9f4b6ac7040 storage/columnstore/columnstore (8b03285)
 3846890513df0653b8919bc45a7600f9b55cab31 storage/maria/libmarias3 (3846890)
 bba5e7bc21093d7cfa765e1280a7c4fdcd284288 storage/rocksdb/rocksdb (bba5e7b)
 275a0af8c5b92f0ee33cfe9e23f3db5f59b56e9d wsrep-lib (275a0af)

$ du -shc mariadb-server/.git/modules/{storage/*,extra/wolfssl,libmariadb,wsrep-lib} \
    mariadb-server/.git mariadb-server/
30M   mariadb-server/.git/modules/storage/columnstore
1M    mariadb-server/.git/modules/storage/maria
20M   mariadb-server/.git/modules/storage/rocksdb
40M   mariadb-server/.git/modules/extra/wolfssl
2M    mariadb-server/.git/modules/libmariadb
1M    mariadb-server/.git/modules/wsrep-lib
80M   mariadb-server/.git
548M  mariadb-server/
720M  total
</code></pre><p>With a 3-month history, the main git data for MariaDB is about 50 MB, and the submodules as shallow clones add 30 MB more. 
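</p><p>The effect of shallowness can be sketched with a tiny throwaway repository (sizes and commit counts here are illustrative, not MariaDB’s):</p><pre><code># Sketch: build a repo with 5 commits, then make a shallow clone
# that keeps only the newest one. Paths are hypothetical.
rm -rf /tmp/clone-src /tmp/clone-shallow
mkdir /tmp/clone-src
cd /tmp/clone-src
git init -q
git config user.name 'Demo Developer'
git config user.email 'demo@example.com'
for i in 1 2 3 4 5; do
  printf 'revision %s\n' "$i" > file.txt
  git add file.txt
  git commit -q -m "Revision $i"
done

# --depth=1 keeps only the newest commit, just as --shallow-since
# keeps only recent history. file:// forces the real transport so
# the depth limit is honored for a local path.
git clone -q --depth=1 file:///tmp/clone-src /tmp/clone-shallow
git -C /tmp/clone-shallow log --oneline | wc -l   # 1 commit kept
git -C /tmp/clone-src log --oneline | wc -l       # 5 commits in full history
</code></pre><p>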
If not using shallow cloning, the whole MariaDB repository and submodules would amount to over 1 GB of data, so using shallow clones cuts the amount of data to be downloaded by over 80%.</p><p>The checked out data is almost 550 MB, but that is unpacked from the git data, so the actual network transfer was at most 80 MB of git data.</p><div class="relative header-and-anchor"><h2 id="h-build-inside-a-throwaway-container">Build inside a throwaway container</h2></div><p>In addition to the source code, one also needs a long list of build dependencies installed. Instead of polluting your laptop/workstation with tens of new libraries, install all the dependencies inside a container that has a working directory mounted inside it. This way your system will stay clean, but files written in the working directory will be accessible both inside and outside the container, and will persist after the container is gone.</p><p>Next, start the container:</p><p><code>docker run --interactive --tty --rm \ -v ${PWD}:/quick-rebuilds -w /quick-rebuilds debian:sid bash</code></p><p>This example uses <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Docker_%28software%29">Docker</a>, but the principle is the same with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/OS-level_virtualization#Implementations">any Linux container</a> tool, such as Podman.</p><p>Inside the Debian container, use apt to automatically install all dependencies (about 160 MB download, over 660 MB when unpacked to disk) as defined in the MariaDB sources file <code>debian/control</code>:</p><p><code>echo &apos;deb-src http://deb.debian.org/debian sid main&apos; \ &gt; /etc/apt/sources.list.d/deb-src-sid.list apt update apt install -y --no-install-recommends \ devscripts equivs ccache eatmydata ninja-build clang entr moreutils mk-build-deps -r -i mariadb-server/debian/control \ -t &apos;apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends&apos;</code></p><p>The single biggest boost to the (re-)compilation speed is gained with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccache.dev/">Ccache</a>:</p><p><code>export CCACHE_DIR=$PWD/ccache ccache --show-stats --verbose</code></p><p>We also want to prime the environment to use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://clang.llvm.org/">Clang</a>:</p><p><code>export CXX=${CXX:-clang++} export CC=${CC:-clang} export CXX_FOR_BUILD=${CXX_FOR_BUILD:-clang++} export CC_FOR_BUILD=${CC_FOR_BUILD:-clang} export CFLAGS=&apos;-Wno-unused-command-line-argument&apos; export CXXFLAGS=&apos;-Wno-unused-command-line-argument&apos;</code></p><p>The first step in actual compilation is to run <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/cmake/cmake.1.en.html">CMake</a>, instructing it to look at the source in
directory <code>mariadb-server/</code>, output build artifacts in directory <code>build/</code> and use <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ninja-build.org/">Ninja</a> as the build system. This command also always forces a fresh configuration (discarding any previous CMakeCache.txt), uses ccache instead of calling gcc/c++ directly, and skips a bunch of rarely used large plugins to save a lot of compilation time.</p><p><code>cmake -S mariadb-server/ -B build/ -G Ninja --fresh \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache \ -DPLUGIN_COLUMNSTORE=NO -DPLUGIN_ROCKSDB=NO -DPLUGIN_S3=NO \ -DPLUGIN_MROONGA=NO -DPLUGIN_CONNECT=NO -DPLUGIN_TOKUDB=NO \ -DPLUGIN_PERFSCHEMA=NO -DWITH_WSREP=OFF</code></p><p>If you are interested in knowing all the build flags available, simply query them from CMake with:</p><p><code>cmake build/ -LH</code></p><p>Note that after the configure stage has run, there are no traditional Makefiles in ‘build/’, only a <code>build.ninja</code>, since we are using Ninja. Thus, running <code>make</code> will not build anything; with Ninja the equivalent is <code>ninja -C build</code>.
However, we don’t need to call Ninja directly either, but can just let CMake orchestrate everything with:</p><p><code>$ eatmydata cmake --build build/ [173/1462] Building C object plugin/auth_ed25519/CMakeFiles/ref10.dir/ref10/ge_add.c.o</code></p><p>In interactive mode, Ninja shows just one line of output at a time to indicate progress. The numbers inside the brackets show how many files have been compiled out of the total number of files to compile, and the filename after them shows which file is currently being compiled. Ninja runs by default on all available CPU cores, so there is no need to define parallelism manually. If Ninja encounters warnings or errors, it will spit them out but continue to show the one-liner status at the bottom of the terminal. To abort Ninja, feel free to press <code>Ctrl+C</code> at any time.</p><p>Re-starting the compilation will continue where it left off – Ninja is very smart and fast in figuring out which files need to be compiled.</p><div class="relative header-and-anchor"><h2 id="h-running-the-mariadb-test-suite-mtr">Running the MariaDB test suite (MTR)</h2></div><p>While the MariaDB server does have a small amount of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cmake.org/cmake/help/book/mastering-cmake/chapter/Testing%20With%20CMake%20and%20CTest.html#testing-using-ctest">CTest unit tests</a>, the main test system is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.com/kb/en/mysql-test-runpl-options/">mariadb-test-run script</a> (inherited from mysql-test-run).
Each test file (suffix <code>.test</code>) consists mainly of SQL code which is executed by <code>mariadb-test-run</code> (MTR), and the output is compared to the corresponding file with the expected output in text format (suffix <code>.result</code>).</p><p>To start MTR with CMake, run:</p><p><code>cmake --build build/ -t test-force</code></p><p>Alternatively, one can simply invoke the script directly after the binaries have been compiled:</p><p><code>./build/mysql-test/mysql-test-run.pl --force</code></p><p>This offers more flexibility, as you can easily add parameters such as <code>--parallel=auto</code> (as the default is to run just one test worker on one CPU) or limit the scope to just one suite or one individual test:</p><p><code>./build/mysql-test/mysql-test-run.pl --force --parallel=auto --skip-rpl --suite=main</code></p><p>Note that all commands in this example run as root, as it is necessary to start the whole container with a root user inside it to have permissions to apt install the build dependencies. However, mariadb-test-run is not actually designed to be run as root and will end up skipping some tests. Also, when run like this, a lot of the debugging information isn’t fully shown. To make the most of the mysql-test-run/mariadb-test-run script, read more in the post <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="grokking-mariadb-test-run-mtr">Grokking the MariaDB test runner (MTR)</a>.</p><div class="relative header-and-anchor"><h2 id="h-more-build-targets">More build targets</h2></div><p>As concluded above, the target <code>test-force</code> was for MTR, and the plainly named target <code>test</code> is for CUnit tests.
The equivalent direct Ninja command for running target <code>test</code> would be <code>ninja -C build/ test</code>. To list all targets, run <code>cmake --build build/ --target help</code> or <code>ninja -C build/ -t targets all</code>.</p><p>MariaDB 11.0 currently has over 1300 targets. There does not seem to be a very consistent pattern in how build targets are named or how they are intended to be used. One way to find CMake targets that might be more important than others is to simply grep them from the main level CMake configuration file:</p><p><code>$ grep ADD_CUSTOM_TARGET mariadb-server/CMakeLists.txt ADD_CUSTOM_TARGET(import_executables ADD_CUSTOM_TARGET(INFO_SRC ALL ADD_CUSTOM_TARGET(INFO_BIN ALL ADD_CUSTOM_TARGET(minbuild) ADD_CUSTOM_TARGET(smoketest</code></p><p>One of the standard targets is <code>install</code>, which can be run with <code>ninja -C build install</code> or with CMake:</p><p><code>$ cmake --install build/ -- Install configuration: &quot;RelWithDebInfo&quot; -- Up-to-date: /usr/local/mysql/./README.md -- Up-to-date: /usr/local/mysql/./CREDITS -- Up-to-date: /usr/local/mysql/./COPYING -- Up-to-date: /usr/local/mysql/./THIRDPARTY -- Up-to-date: /usr/local/mysql/./INSTALL-BINARY -- Up-to-date: /usr/local/mysql/lib/plugin/dialog.so -- Up-to-date: /usr/local/mysql/lib/plugin/client_ed25519.so -- Up-to-date: /usr/local/mysql/lib/plugin/caching_sha2_password.so -- Up-to-date: /usr/local/mysql/lib/plugin/sha256_password.so ...
-- Installing: /usr/local/mysql/support-files/systemd/mysql.service -- Installing: /usr/local/mysql/support-files/systemd/mysqld.service -- Installing: /usr/local/mysql/support-files/systemd/mariadb@.service -- Installing: /usr/local/mysql/support-files/systemd/mariadb@.socket -- Installing: /usr/local/mysql/support-files/systemd/mariadb-extra@.socket -- Up-to-date: /usr/local/mysql/support-files/systemd/mysql.service -- Up-to-date: /usr/local/mysql/support-files/systemd/mysqld.service</code></p><p>To better understand the full capabilities of the build tools, it is recommended to skim through the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/cmake/cmake.1.en.html">cmake man page</a> and the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/ninja-build/ninja.1.en.html">ninja man page</a>.</p><div class="relative header-and-anchor"><h2 id="h-run-the-build-binaries-directly">Run the build binaries directly</h2></div><p>Instead of wasting time on running the <code>install</code> target, one can simply invoke the build binaries directly:</p><p><code>$ ./build/client/mariadb --version ./build/client/mariadb from 11.0.1-MariaDB, client 15.2 for Linux (x86_64) using EditLine wrapper $ ./build/sql/mariadbd --version ./build/sql/mariadbd Ver 11.0.1-MariaDB for Linux on x86_64 (Source distribution)</code></p><p>To actually run the server, it needs a data directory and a user, which can be created with:</p><p><code>$ ./build/scripts/mariadb-install-db --srcdir=mariadb-server $ adduser --disabled-password mariadb $ chown -R mariadb:mariadb
./data $ ./build/sql/mariadbd --datadir=./data --user=mariadb &amp; [Note] Starting MariaDB 11.0.1-MariaDB source revision as process 5428 [Note] InnoDB: Compressed tables use zlib 1.2.13 [Note] InnoDB: Using transactional memory [Note] InnoDB: Number of transaction pools: 1 [Note] InnoDB: Using crc32 + pclmulqdq instructions [Warning] mariadbd: io_uring_queue_init() failed with errno 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB [Note] InnoDB: Completed initialization of buffer pool [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) [Note] InnoDB: Opened 3 undo tablespaces [Note] InnoDB: 128 rollback segments in 3 undo tablespaces are active. [Note] InnoDB: Setting file &apos;./ibtmp1&apos; size to 12.000MiB. Physically writing the file full; Please wait ... [Note] InnoDB: File &apos;./ibtmp1&apos; size is now 12.000MiB. [Note] InnoDB: log sequence number 47391; transaction id 14 [Note] InnoDB: Loading buffer pool(s) from /quick-rebuilds/data/ib_buffer_pool [Note] InnoDB: Buffer pool(s) load completed at 230220 20:28:45 [Note] Plugin &apos;FEEDBACK&apos; is disabled. [Note] Server socket created on IP: &apos;0.0.0.0&apos;. [Note] Server socket created on IP: &apos;::&apos;. [Note] ./build/sql/mariadbd: ready for connections. 
Version: &apos;11.0.1-MariaDB&apos; socket: &apos;/tmp/mysql.sock&apos; port: 3306 Source distribution</code></p><p>It is necessary to define the custom data directory path and custom user, otherwise mariadbd will fail to start:</p><p><code>[Warning] Can&apos;t create test file /usr/local/mysql/data/03727bdc8fe2.lower-test ./build/sql/mariadbd: Can&apos;t change dir to &apos;/usr/local/mysql/data/&apos; (Errcode: 2 &quot;No such file or directory&quot;) [ERROR] Aborting ./build/sql/mariadbd: Please consult the Knowledge Base to find out how to run mysqld as root! [ERROR] Aborting</code></p><p>To gracefully stop the server, send it the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/stop-senseless-killing/">SIGTERM signal</a>:</p><p><code>$ pkill -ef mariadbd [Note] ./build/sql/mariadbd (initiated by: unknown): Normal shutdown [Note] InnoDB: FTS optimize thread exiting. [Note] InnoDB: Starting shutdown...
[Note] InnoDB: Dumping buffer pool(s) to /quick-rebuilds/data/ib_buffer_pool [Note] InnoDB: Buffer pool(s) dump completed at 230220 20:29:05 [Note] InnoDB: Removed temporary tablespace data file: &quot;./ibtmp1&quot; [Note] InnoDB: Shutdown completed; log sequence number 47391; transaction id 15 [Note] ./build/sql/mariadbd: Shutdown complete mariadbd killed (pid 5428)</code></p><div class="relative header-and-anchor"><h2 id="h-quick-rebuilds">Quick rebuilds</h2></div><p>With this setup, you can invoke <code>eatmydata cmake --build build/</code> to have the source code re-compiled as quickly as possible.</p><p>The ‘screenshot’ below showcases how Ninja/CMake will only rebuild the file with changes and its dependencies. In the case of a simple MariaDB client version string change, only 5 files needed to be re-built, and it <strong>took less than a second</strong>:</p><p><code>$ sed &apos;s/*VER= &quot;15.1&quot;/*VER= &quot;15.2&quot;/&apos; -i mariadb-server/client/mysql.cc $ time eatmydata cmake --build build/ [5/5] Linking CXX executable client/mariadb real 0m0.992s user 0m0.374s sys 0m0.353s</code></p><p>A similar version string change in the server leads to having to rebuild over a thousand files:</p><p><code>$ sed &apos;s/MYSQL_VERSION_PATCH=1/MYSQL_VERSION_PATCH=2/&apos; -i mariadb-server/VERSION $ time eatmydata cmake --build build/ [0/1] Re-running CMake...
-- Running cmake version 3.25.1 -- MariaDB 11.0.2 -- Packaging as: mariadb-11.0.2-Linux-x86_64 -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE) == Configuring MariaDB Connector/C -- SYSTEM_LIBS: dl;m;dl;m;/usr/lib/x86_64-linux-gnu/libssl.so;/usr/lib/x86_64-linux-gnu/libcrypto.so;/usr/lib/x86_64-linux-gnu/libz.so -- Configuring OQGraph -- Configuring done -- Generating done -- Build files have been written to: /quick-rebuilds/build [377/1257] Generating user.t troff: fatal error: can&apos;t find macro file m [378/1257] Generating user.ps troff: fatal error: can&apos;t find macro file m [433/1257] Building CXX object storage/archive/CMakeFiles/archive.dir/ha_archive.cc.o In file included from /quick-rebuilds/mariadb-server/storage/archive/ha_archive.cc:29: /quick-rebuilds/mariadb-server/storage/archive/ha_archive.h:91:15: warning: &apos;index_type&apos; overrides a member function but is not marked &apos;override&apos; [-Winconsistent-missing-override] const char *index_type(uint inx) { return &quot;NONE&quot;; } ^ /quick-rebuilds/mariadb-server/sql/handler.h:3915:23: note: overridden virtual function is here virtual const char *index_type(uint key_number) { DBUG_ASSERT(0); return &quot;&quot;;} ^ [...] In file included from /quick-rebuilds/mariadb-server/storage/archive/ha_archive.cc:29: /quick-rebuilds/mariadb-server/storage/archive/ha_archive.h:163:7: warning: &apos;external_lock&apos; overrides a member function but is not marked &apos;override&apos; [-Winconsistent-missing-override] int external_lock(THD *thd, int lock_type); ^ /quick-rebuilds/mariadb-server/sql/handler.h:5153:15: note: overridden virtual function is here virtual int external_lock(THD *thd __attribute__((unused)), ^ 36 warnings generated. 
[1257/1257] Linking CXX executable extra/mariabackup/mariadb-backup real 2m7.786s user 12m56.232s sys 1m57.842s</code></p><p>The above example also shows how Ninja spits out warnings.</p><p>Despite the majority of the project files being re-built, it still <strong>took only two minutes</strong>, mainly thanks to ccache having a high hit-rate.</p><p><code>$ ccache --show-stats Cacheable calls: 3235 / 3235 (100.0%) Hits: 1932 / 3235 (59.72%) Direct: 49 / 1932 ( 2.54%) Preprocessed: 1883 / 1932 (97.46%) Misses: 1303 / 3235 (40.28%) Local storage: Cache size (GB): 0.11 / 5.00 ( 2.18%)</code></p><p>Without ccache, the build time in the same scenario is 6–8 minutes.
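As a sanity check of how to read those stats: the hit-rate percentage is simply hits divided by cacheable calls. The 59.72% figure from the output above can be reproduced with the numbers it printed (the awk one-liner is purely for illustration):

```shell
# Hit rate = hits / cacheable calls, using the numbers ccache printed above
hits=1932
calls=3235
awk -v h="$hits" -v c="$calls" 'BEGIN { printf "%.2f%%\n", 100 * h / c }'
# prints 59.72%
```
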
There are some extra flags in ccache (such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccache.dev/manual/4.7.4.html#_configuration_options">CCACHE_SLOPPINESS</a>) which can be used to further tune the ccache speed, but when I did some experimenting, I didn’t discover any that made a visible impact.</p><p>Without <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/eatmydata/eatmydata.1.en.html">eatmydata</a>, the build takes 10–20 seconds longer, as the system calls to disk will wait for <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/manpages-dev/fsync.2.en.html">fsync</a> and the like to complete. We are fine skipping that, since we don’t care about data durability and crash recovery in a throwaway environment anyway. Using regular <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gcc.gnu.org/">GNU GCC</a> instead of Clang adds another 20–40 seconds to the rebuild time.</p><p>The current two minutes of build time on my laptop with an 8-core Intel i7-8650U CPU @ 1.90GHz is not exactly instant, but it is fast enough that I can sit and wait it out without feeling the need to context switch and lose my focus.</p><div class="relative header-and-anchor"><h2 id="h-automatic-rebuild">Automatic rebuild</h2></div><p>As showcased in the post <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/develop-code-10x-faster/">How to code 10x faster than an average programmer</a>, as a high-performing software developer you don’t want to waste time manually running a lot of commands to build and test your code. Instead, you want a setup where you write code in your editor and have it automatically re-compile and run when the source code
file is saved.</p><p>For MariaDB, the automatic rebuild part can easily be achieved with:</p><p><code>find mariadb-server/* | entr eatmydata cmake --build build/</code></p><p>To automatically rebuild and also run a binary (in this case the <em>mariadb</em> client), define multiple commands in quotes with the <code>-s</code> parameter:</p><p><code>find mariadb-server/* | \ entr -s &apos;eatmydata cmake --build build/; ./build/client/mariadb --version&apos;</code></p><img src="https://substack-post-media.s3.amazonaws.com/public/images/385d82ff-e391-400d-b526-3bb583bdee69_1200x611.gif" alt="MariaDB client automatic compilation and re-run" title="MariaDB client automatic compilation and re-run" class="image-node embed"><p>When running the server, use the <code>-r</code> parameter to have Entr automatically restart it:</p><p><code>find mariadb-server/* | \ entr -r ./build/sql/mariadbd --datadir=./data --user=mariadb</code></p><img src="https://substack-post-media.s3.amazonaws.com/public/images/dc0ac3a7-b64f-43d1-8c7d-fd8edc2f7c07_1200x611.gif" alt="MariaDB server automatic compilation and restart" title="MariaDB server automatic compilation and restart" class="image-node embed"><p>If you are developing an MTR test by editing *.test files, there is no need to recompile anything, and you can simply have Entr re-run the test every time a file is changed:</p><p><code>find mariadb-server/* | entr -r ./build/mysql-test/mysql-test-run.pl main.connect</code></p><img
src="https://substack-post-media.s3.amazonaws.com/public/images/9ddce876-6a34-494b-8095-bc2c0bae1823_1200x611.gif" alt="MariaDB test run automatic restart" title="MariaDB test run automatic restart" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-conclusion">Conclusion</h2></div><p>The examples above are specific to MariaDB and illustrate in detail how to be efficient and avoid wasting <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://xkcd.com/303/">time compiling</a>, but the principles of utilizing ccache/clang/ninja apply to any software project in C/C++, and entr comes in handy in a myriad of situations.</p><p>Hopefully this inspires you to raise the bar on what to expect of speed and efficiency in the future!</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/13ba5746c4006c9cb9b9b971a635dc3d.gif" length="0" type="image/gif"/>
        </item>
        <item>
            <title><![CDATA[Grokking the MariaDB test runner (MTR)]]></title>
            <link>https://paragraph.com/@otto/grokking-the-mariadb-test-runner-mtr</link>
            <guid>NMEWnWk4NqddrakV385G</guid>
            <pubDate>Sun, 19 Feb 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[The main test system in the MariaDB open source database project is the mariadb-test-run script (inherited from mysql-test-run). It is easy to run to...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/2df0dcfb461473828e9f7e5869b80c41.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGaUlEQVR4nCXP/U/TdwLA8Y9bbpMwxoPIM7SWPtEH6BPtt4/0+ZmWFvtIS0srSKuALQJFhisgKnSAlLYUy6SgDsNUJhRYefBp5u5yJrddNLlddrng9icMMeeO5LK75P37K2+w+mh1YeHLiempsRtTvr5AJBbfSO8m791bfvhN//CIpLkNI2usEutwQjVXY1CctrIEslomp5Iu+BRLzzpVdbwCnYMmFVPYlVwZQW5gGBz0RidJbaqRakkCOYzMBEdHR+///f7tr78+efYEWUX96AQcjiETyBBPqBBIVXi2FMGWV8sN1SozVtaIFjVUiXUEsQbOkJwgMjMR+I9hmDwstZRWh+SrKQ12hs6mszk9nb7eSwN9gb5iJB4cHr49OHh7cHDw55cvyQwuztYpDYwLvIG6lk623onjKYrxjNzK6mx4VS68qgBNKsYzCsnsUho/n8j+BEU6jsDlERgVDCFW3EA3OPl62+zN+HDspsnX3x8MVhIh8O7du8PDw4ODgx9evaKy+LbPrsZ3nsc3H0+vPHr04i8vXv+4+OL7wMquO7rc8PkU1+3Hq5pgTGkeprYEXfNpEfyTgvIcGLq4BsKIdQyjS2pxhSIRzblujtU1ODSEpLB/P/g/8PrHvzN5knKFwRi87hmb1rf75YZmi7utpcPn7BvoGrvRn1i6dn8z8fiP/SvpifVdf/yO2D9Mb/JiRdqKaqiSLaGb3Ap760goRNPbxM2tI6NX0RQOOPrP0fv3vx0eHv7jp594IiWgykDzRdB1xbq057n72BR5wA/Gqf4xyDOo6hgw9gQd3YPWrt6OwGeOnn7L4Kg5GGocT0BtvbBaXq2hRWFvHZualrZ4SFLNYHCoisEHf3v1w/7+P/ffvPnXmzcimRrAKUBqB+YOYyLlXX7mWNyzLO0Zlp7qFp6ee/Ana3JPHdtQhVcZwQQtEKZ3X5f5hq7d+dp9JQSn8Ug6u8LeeiUUIkk1RLYgeGWEyBKC7ksdvoutT54/3f/lZ6lKB3IQoEYIWBrD7AP37cfm+fTpxFZ9LKWOpZzJbet8WhnbUMQ2lPEtUWyLH0uLYpuRldWJuXkUJMCrLQKjIxKLjH8RGg+Njwx/jqJAQKKF9BbB7rPd/V9+VmlPg4xSgOMAPEc7cdez/Mx+a8eysG1IfGtP7rbe3mtObjuTO47f27bNb9nm0+bE5t30zlcPvybzJCiRjmlwyMxOpd0lMjqYKhNV2QhqaKU6s/BRevOvr7/X6g2gGA3wHIAg148tdKw8dya36+MpZWxDMLPWsrijS2yJo+uimTVpdE0RXVfF1sTRtfYbtzqvT2EhPpynhIwuoatL1uaXt/fIzvpFLZ2AzkT6+87OLcyupu7L1Q0wMouibARwgvrql+fufdd+e6dpIa2Np9jhNWdyRzu3yQuvMadXOdOr7BsP2ZP3WeFVSecAT2euwNNgTAlVZxM4PNIzXdIzXRJXJ9/uAUYL3xdo6+k7OxebUKnUhUgCTVKPoLB0w7Pn733Xvrhju5VWxzZEM+vNyR1D4lthNMWYXmeHU5zoBieyXndzm+8fxTL5OeXIEiqHoDRyrG1ch4frPM+xt3NtZ4FSDRkNAl+7PnJtQCIUgGPZWIhPqzfZJu90rr60L+7xZtY4kXVu+BvDXEodWeN/scIZX+Zdv824fJM9OEv2hygt/pM03vESBI5MQDLrGFo9pLfgREqcygwZXKC0DK6pl7ubDF6XlVFLA5mFSCqTr7OqL1yWXrwm8w1xvZc4Z
7p5rguac72mC/2a1o4Gt1docTO0TWRxPZzCglfTT6LwGQXlfH4NhorFUfEVCFh+JSYLQ0GL9KASfapOQtBqxV63DWLQAcgqQBGLMJQiJLEYRSyCY4uR+DxEVQGShMDRkDgKDEPBkSAKS1Anb5BqTSZnq9Nzvs3XbfNc4AtoCEQuAnHyWEZWNqoa5BSfJEKgphZTBs8j1GAtp+X/A8AHmfmZeSUnCmFlMDQKW11DY3IFMrWmscnh9vr8Q6NXJyOxcHxu9tZidGFpNBL1D43avD5Bg4UOkeFluVkZx06giNlIPPgwOxOGAUhs+fEMUFiUbzI26LVqmVyu1+marBaXq6W9vbWz43x3tz8Q6Lt8+dLISHBsbDQyMzkZCXf19GrMNjKLV4EhlJ06lVtSAfLLxGIRHoP6GPyhEInPRhHBsawPckv+C7Y6aKHdahQuAAAAAElFTkSuQmCC" nextheight="402" nextwidth="768" class="image-node embed"><p>The main test system in the MariaDB open source database project is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.com/kb/en/mysql-test-runpl-options/">mariadb-test-run script</a> (inherited from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dev.mysql.com/doc/dev/mysql-server/latest/PAGE_MYSQL_TEST_RUN_PL.html">mysql-test-run</a>). It is easy to run and does not require you to compile any source code.</p><p>While writing MTR tests is relevant only for MariaDB developers, knowing how to run MTR is useful for any database administrator running MariaDB, as it is a <strong>quick way to validate that MariaDB can run correctly on your hardware and operating system version</strong>.</p><blockquote><div class="relative header-and-anchor"><h2 id="h-tldr-for-debianubuntu-users">TL;DR for Debian/Ubuntu users</h2></div><p>Run the full 6000+ test suite as the current user in a temporary directory, with multiple workers in parallel and detailed logging on failures, typically taking over 30 minutes on a modern laptop:</p><pre><code>apt install -y mariadb-test mariadb-backup mariadb-plugin-* patch gdb
cd /usr/share/mysql/mysql-test
export MTR_PRINT_CORE=detailed
./mtr --force --parallel=auto --vardir=$(mktemp -d) \
  --skip-test-list=unstable-tests.amd64 --big-test</code></pre><p>For full details, read the whole article.</p></blockquote><div class="relative header-and-anchor"><h2 id="h-install-mariadb-test-package-in-debianubuntu">Install ‘mariadb-test’ package in Debian/Ubuntu</h2></div><p>To avoid polluting your actual system with new packages, it is convenient to run MTR in a throwaway container. Start one with some RAM-backed disk allocated with <code>--shm-size=1G</code> so that running MTR with <code>--mem</code> later on is possible:</p><pre><code>docker run --interactive --tty --rm --shm-size=1G debian:sid bash</code></pre><p>This example uses <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Docker_%28software%29">Docker</a>, but the principle is the same with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/OS-level_virtualization#Implementations">any Linux container</a> tool, such as Podman.</p><p>Next, install the MariaDB test suite package. This will also pull in the MariaDB server and all the necessary dependencies.
Additionally, install <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/GNU_Debugger">the GNU Debugger (gdb)</a> for automatic stack traces and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/patch/patch.1.en.html">patch</a>.</p><pre><code>apt update
apt install -y mariadb-test mariadb-backup mariadb-plugin-* patch gdb</code></pre><p>The mariadb-test-run script is not intended to be run as root and will skip some tests when it is. Therefore, create a new user inside the container and grant it permissions to the test directory:</p><pre><code>adduser --disabled-password mariadb-test-runner
chown -R mariadb-test-runner /usr/share/mysql/mysql-test</code></pre><p>At minimum, the test runner user needs to be able to write to the path <code>/usr/share/mysql/mysql-test/var</code> (unless some other path is defined with <code>--vardir</code> and <code>--tmpdir</code>), but some tests also run <code>patch</code> to modify the test files on the fly, so just grant permissions to the whole test directory. As this is a throwaway container anyway, there is no need to be prudent with the permissions.</p><p>Next, switch to the test user and test directory and start the run with default settings:</p><pre><code>$ su - mariadb-test-runner
$ cd /usr/share/mysql/mysql-test
$ ./mariadb-test-run
Logging: ./mariadb-test-run
VS config:
vardir: /usr/share/mysql/mysql-test/var
Creating var directory &apos;/usr/share/mysql/mysql-test/var&apos;...
Checking supported features...
MariaDB Version 10.11.2-MariaDB-1
 - SSL connections supported
 - binaries built with wsrep patch
Using suites: main-,archive-,atomic-,binlog-,binlog_encryption-,client-,csv-,compat/oracle-,compat/mssql-,compat/maxdb-,encryption-,federated-,funcs_1-,funcs_2-,gcol-,handler-,heap-,innodb-,innodb_fts-,innodb_gis-,innodb_i_s-,innodb_zip-,json-,maria-,mariabackup-,multi_source-,optimizer_unfixed_bugs-,parts-,perfschema-,plugins-,roles-,rpl-,stress-,sys_vars-,sql_sequence-,unit-,vcol-,versioning-,period-,sysschema-,disks,func_test,metadata_lock_info,query_response_time,sequence,sql_discovery,type_inet,type_uuid,user_variables
Collecting tests...
...
main.subselect_innodb &apos;innodb&apos;            w4 [ pass ]   3359
main.subselect_sj2 &apos;innodb&apos;               w1 [ pass ]   2737
main.subselect_sj2_jcl6 &apos;innodb&apos;          w3 [ pass ]   2933
main.parser_bug21114_innodb &apos;innodb&apos;      w8 [ pass ]  15364
innodb_gis.rtree_search &apos;innodb&apos;          w5 [ pass ]  52379
--------------------------------------------------------------------------
The servers were restarted 1736 times
Spent 8527.863 of 1659 seconds executing testcases

Completed: All 5131 tests were successful.

995 tests were skipped, 280 by the test itself.</code></pre><div class="relative header-and-anchor"><h2 id="h-defining-what-tests-to-run">Defining what tests to run</h2></div><p>If no test suite is selected, MTR will run about 6000 tests, which on my laptop takes about 30 minutes. If MTR is started with the additional <code>--big-test</code> parameter, it will also run tests that are resource intensive, for example consuming a lot of RAM or taking a long time to run, for a total of about 6100 tests and a 43-minute run (on my laptop). To <em>only</em> run big tests, use <code>--big --big</code>.</p><p>If there is a need to limit the scope, such as in build systems that want to validate that the built binary works without running all tests, typically <code>--suite=main --skip-rpl</code> is used.
This results in about 1000 tests being run, which on my laptop takes about 3½ minutes.</p><p>Even when running without any limitations on which tests are run, many tests have code that makes them opt out automatically when some precondition is not met, and on my laptop about 1000 tests end up being skipped. Some examples:</p><pre><code>main.connect2                        [ skipped ]  Requires debug build
main.mysql_client_test_comp          [ skipped ]  No IPv6
main.connect-abstract                [ skipped ]  Need Linux
main.grant_cache_ps_prot             [ skipped ]  Need ps-protocol
main.fix_priv_tables                 [ skipped ]  Test need MYSQL_FIX_PRIVILEGE_TABLES
main.no-threads                      [ skipped ]  Test requires: &apos;one_thread_per_connection&apos;
main.udf_skip_grants                 [ skipped ]  Need udf example
main.lowercase_mixed_tmpdir_innodb   [ skipped ]  Test requires: &apos;lowercase2&apos;
main.innodb_load_xa                  [ skipped ]  Need InnoDB plugin
mariabackup.alter_copy_excluded      [ skipped ]  No mariabackup
binlog.binlog_expire_warnings        [ skipped ]  Test needs --big-test
encryption.innodb-spatial-index      [ skipped ]  requires patch executable
plugins.pam_cleartext                [ skipped ]  Not run as user owning auth_pam_tool_dir
rpl.rpl_gtid_mdev4474 &apos;innodb,row&apos;   [ skipped ]  Neither MIXED nor STATEMENT binlog format</code></pre><p>If you are interested in one particular test, just give it as the last argument:</p><pre><code>$ ./mariadb-test-run main.connect
Logging: ./mariadb-test-run main.connect
Creating var directory &apos;/usr/share/mysql/mysql-test/var&apos;...
Checking supported features...
MariaDB Version 10.11.2-MariaDB-1
 - SSL connections supported
 - binaries built with wsrep patch
Collecting tests...
Installing system database...

==============================================================================

TEST                                      RESULT   TIME (ms) or COMMENT
--------------------------------------------------------------------------
main.connect                             [ pass ]  14206
--------------------------------------------------------------------------
The servers were restarted 0 times
Spent 14.206 of 20 seconds executing testcases

Completed: All 1 tests were successful.</code></pre><div class="relative header-and-anchor"><h2 id="h-optimize-the-mtr-run-for-speed">Optimize the MTR run for speed</h2></div><p>There are three parameters that can greatly improve how quickly MTR runs.
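</p><p>As a sketch, the three flags covered one by one below can be combined into a single invocation for a fast run of the <em>main</em> suite:</p><pre><code>./mtr --parallel=auto --fast --mem --suite=main</code></pre><p>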
The primary one is <code>--parallel=auto</code>, which will run MTR in parallel with as many workers as there are CPUs (by default MTR runs with just one worker). On my laptop, going from one MTR worker to 8 in parallel reduced the total run time for the <em>main</em> suite from 17 minutes to about 3 minutes.</p><p>Another parameter is <code>--fast</code>, which will make MTR kill all server processes violently without waiting for them to shut down gracefully. The test run restarts the MariaDB server hundreds of times, so saving half a second on every shutdown results in the main suite completing 20 seconds faster.</p><p>The parameter <code>--mem</code> instructs MTR to make the directory <code>var/</code> a symbolic link to a subdirectory on the shared memory device (<code>/dev/shm</code>). This works if the container was started with <code>--shm-size</code> and has at least 350MB of space on the ramdisk. On my laptop, this further reduced the main test suite duration to 2½ minutes.</p><div class="relative header-and-anchor"><h2 id="h-optimize-the-mtr-run-for-logging">Optimize the MTR run for logging</h2></div><p>When running a single test, one might add <code>--verbose</code> as an additional argument to see the commands that run in the test. It is also possible to pass <code>--verbose</code> twice, but that makes the test run so verbose that it is unusable.</p><p>When running a suite of tests, you only want to have extra output visible when a test fails. If the server fails to start and a particular test doesn’t run at all, using <code>--verbose-restart</code> might be beneficial.
If running tests in an automated system, one might want to save the results in a JUnit-compatible XML file that can, for example, be <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.gitlab.com/ee/ci/testing/unit_test_reports.html">rendered by GitLab CI</a>.</p><p>This is the command that several CI systems run MTR with:</p><pre><code>export MTR_PRINT_CORE=detailed
eatmydata perl -I. ./mariadb-test-run \
  --force --testcase-timeout=120 --suite-timeout=540 --retry=3 \
  --verbose-restart --max-save-core=1 --max-save-datadir=1 \
  --parallel=auto --skip-rpl --suite=main \
  --xml-report=mariadb-test-run-junit.xml</code></pre><p>The command above also showcases <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/eatmydata/eatmydata.1.en.html">eatmydata</a>, which makes <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/manpages-dev/fsync.2.en.html">fsync</a> and similar system calls skip their memory-to-disk guarantees; in my testing with MTR, however, it did not affect speed.</p><div class="relative header-and-anchor"><h2 id="h-running-more-tests-for-mariadb">Running more tests for MariaDB</h2></div><p>The above summarizes everything one typically needs to know for <em>running</em> mariadb-test-run.</p><p>For a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Database_administration">DBA</a>, there are also several other tools that ship with MariaDB which can help with testing and validating one’s own environment,
such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dyn.manpages.debian.org/unstable/mariadb-test/mysql-stress-test.pl.1.en.html">mariadb-stress-test</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/mariadb-client/mariadb-slap.1.en.html">mariadb-slap</a>.</p><div class="relative header-and-anchor"><h2 id="h-writing-mtr-tests-for-mariadb">Writing MTR tests for MariaDB</h2></div><p>If you want to <em>write</em> a new test or fix an existing one, there are many more parameters to learn, such as <code>--record</code> and <code>--gcov</code>. The official <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.org/get-involved/getting-started-for-developers/get-code-build-test/">contribution docs at mariadb.org</a> list some of the most useful MTR parameters. The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.com/kb/en/generic-build-instructions/">mariadb.com knowledge base article</a> lists them all. As with all commands in Linux, there is also the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/mariadb-test/mysql-test-run.pl.1.en.html">man page for mariadb-test-run</a>.</p><p>The structure of the tests is actually quite simple, and <strong>requires no C/C++ skills to write</strong>.
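</p><p>To make this concrete, here is a hypothetical minimal pair of files (the names are invented for illustration). The test file, say <code>main/example.test</code>, contains plain SQL:</p><pre><code>SELECT 1+1;</code></pre><p>The matching expected-output file, <code>main/example.result</code>, contains the statement echoed back followed by the column header and the result:</p><pre><code>SELECT 1+1;
1+1
2</code></pre><p>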
Each test file (suffix <code>.test</code>) consists mainly of SQL code that is executed by <code>mariadb-test-run</code> (MTR), with the output compared (with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/diffutils/diff.1.en.html">diff</a>) to the corresponding file with the expected output in text format (suffix <code>.result</code>).</p><p>In my development <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/develop-code-10x-faster/">workflow</a>, writing a test and running MTR follows the same rapid write-and-re-run cycle described there.</p><p>If you want to contribute to the MariaDB open source project, extending the test coverage is a great place to start. To <em>scratch your own itch</em>, think of a MariaDB bug you have encountered, and consider whether it could be reproduced as a test and submitted upstream, so that it becomes part of the body of tests and any regression is easy to catch.</p><p>To learn more about writing mariadb-test-run tests, read the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mariadb.org/get-involved/getting-started-for-developers/writing-good-test-cases-mariadb-server/">test case authoring guide at mariadb.org</a>.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/2df0dcfb461473828e9f7e5869b80c41.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[How to code 10x faster than an average programmer]]></title>
            <link>https://paragraph.com/@otto/how-to-code-10x-faster-than-an-average-programmer</link>
            <guid>kWXXRmReva5NH7xVwPNz</guid>
            <pubDate>Sun, 29 Jan 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[What is the key to being an efficient programmer? Well, the answer is surprisingly simple. Having a setup where you can write and test your code over...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/1f448aaeb828d368e8f3fb5ece675ed4.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGQklEQVR4nCWUW0wbhgFFb+ImCxBIAgl+gfHbGLCxMQb8AGyCHxiMgwGDAQeoCamJIalpDCU8DaY8AgQDLhQoDRC8khASRGmjpohmMNjYpnbqGrR1U9VpU9OpVTSp+5gqMdFJ9/983KMDvBIJCh8SPUixIHMhyoQoC6JsJOciw3w2wwyxBvHp4KUhKRt0EZgScJJPsIRn4tLI/GS+XMtISJVlaC5QGWGkGBqDy2LHRpIZxCgm8TwJCAEoPIQQQeZAU/6Kewz1/aj1wNFz2tkf7BxE4zgc/bjixRUPanp0vpXmRzvHrw2gpovnGpS3jxs8/oKuiZybvTpXt+Z6l/56l6l5kJWkACEUhDCAAESyEEIBTkCiC6/pRIEDaiu0NuS+Kmrxq2/fT+mcUQ8GsoYDbRv76z8cPvjq5cruF8b+uRzvrH12vXJoztTUp7E3KHIKFSZrmtFysdTOT8nAsVBCKDGURPsZcIaGIDKk+sjyNwhF15BVCoUJ8rxE77zyvS2pf53hezz8/F/bPx22735dubi59rvnf/3hR/nCtmzm48u+QN34Uu3o3crucZtnvLJ7wu4ZlWiM5xk8Cis+gsoEkrSQ5oASB7mJaHOftDQQdDZkmE+pi2hNfs7IQ7gnWb33Vl/8VLn2e/X0k/TRNWvr0Jeffe58uEPo+WXZ8ILNO2G4+kZGcXVGga3Q2WRr6pLr8onRjKAIKiGMCNAEUBgRr4CqJKqyKbjYSdBWINMcnFVMavJTuwNw+uo/OWj99M/c4fsoc0eW36BU3VpZmNv6zV7o4Fr5SKCk/119U7/icr28yHapocV2q1+pN1NjxWS2MILJB5jiow9CKJBfIlpvhFtdwYZKqIuCNCXUttnE6U3ayNrtg39nj62izE1uewfGq8FK41yL/atfrcqmN8vGl20j75UNzhW3Dhbe9FR7Rmq9E+q84liRlMETkNjxQGjUkaaRLKQYkGmBshBqCzKLkVXC7piR+jdSJz9s/cO3mvlNOAchy4dEH0Vjee2XvnkyW+FbzvPO2t5629Y9XujutjT3VnUO13SPyHUmEo1DiZWQWP8HhNNBZENpRkEddFUwOWCqO2mslYyspC7saJf3yp88D++cO8KTODhNoXPiOuzmP931eFaeZvTO57SNyeu7hJZaobFCc60lr7FHpjUx4pNY4jR6rAiITkCMGNFCGKq5bh/z9QGW6zarYUDgGqL3vX925LHA/0HNs6/DPHehLIXUCLr4AiuuLl/1tL/B45/mNA7rO/xKV19SmTPBWJHtbNO7emSGIl6SnCmWsUQpQKwcbBn4KuS/Ri6qiyq4wihy0IrqYstvRHoWf9H7AG3zgsknkb41qMpAS0SMBHShXp56v7nC9dYQvdl/sW1M4ujkGCti88qyHC3a17uT80p5Ci09VU1LVgI0IfhpkOhgbbxgaYgqvhZjqedWuOIuu8J7lsJurx33BNA6B/ckKjqQlA+WlMzgqaWixqLs/K4JZus76c2jCVfbGQU1bFO1ynErq75DqL1E5QkoXGEUgwtQE8CWgJcCk4Noc0eXNsTYbtJfbeHb36T61smTW+furJk3/qha2pZNrB/zzGs8UwOe7hlPy8ZQS97AAq15SmhxJJiqhHllIkOJNK8k2VCccDGfyEs8xxb8rCkj6Wh8JeT5UBUjs+DIoqzSIE0pLjfjitfx0eeDX35/9aMvoLHRWqe4tuu1uSpXrX1peur51qNct/d4so6pyOEocziyi7xUVZxSy1Low
8Xp58SZp7liIF5JkKiPGml15Szv5d771BB4lru8qwnsZC98MrH3l42Xhycax1Ddnup7AH0lzkbXFurqnQ1mbfZCb/vLvx30BdYyR1dMbz/OH3uov7Oc3T/PVBkj2IJwTmIoMx4R8dKzSengpp5vvFO59611+5+W3RfW/e/rD3789eHh4ovDwgf7XP8GEnWQF4CVjPMxLCabeOpURILstaHFnZ39w/98t/3Nd6qlbU1gV//oM0Ngl5lljuKJqXwJiSMAjkWCFo8YEao7InwfBvWtRs9uCRZ33tz/x7t//y/P9xj1wye876O6AycpoIlBFwKhOEcPqWpn21sbfPd++2z74OmqefqRYGbTurqvmNqgpemCLkSdieGGUhj/AxDyHgB+TSoLAAAAAElFTkSuQmCC" nextheight="403" nextwidth="768" class="image-node embed"><p>What is the key to being an efficient programmer? Well, the answer is surprisingly simple. Having a setup where you can write and test your code over and over in an uninterrupted flow will dramatically increase your productivity.</p><p>The cost of doing <em>just one more tweak</em> to make the code perfect should be as close to zero as possible. The developer should not feel any drain when doing <em>just one more test</em> to ensure everything is absolutely correct. The experience should be fast and frictionless.</p><div class="relative header-and-anchor"><h2 id="h-instant-run-change-re-run-cycle">Instant run, change, re-run cycle</h2></div><p>I always try to set up my development environment in a way that I can write <strong>code in one window, and <em>immediately</em> see the result in another</strong>. 
It does not matter if I am doing front-end or back-end development – I insist on having the code in one window and the result updating in another window as soon as I press <em>Ctrl+S</em> or switch focus between windows.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/d2a25f44-a618-4e9d-920f-368b9a5e6250_1200x615.gif" alt="Atom/Pulsar autosave and Entr in action" title="Atom/Pulsar autosave and Entr in action" class="image-node embed"><p>I achieve this with a combination of two great developer tools:</p><ul><li><p>The <s>Atom</s> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/pulsar-best-text-file-and-code-editor/">Pulsar code editor</a> with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/atom/autosave">autosave</a> to automatically save code files.</p></li><li><p>The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eradman.com/entrproject/">Entr command-line tool</a> to restart programs automatically when files change.</p></li></ul><p>The basic usage of Entr is to list all files in your coding project with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/findutils/find.1.en.html">find</a> and pipe the list to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://manpages.debian.org/unstable/entr/entr.1.en.html">entr</a>, telling it what command to run when any of the files are updated:</p><pre><code>find * | entr python3 demo.py</code></pre><p>If you are working on a long-running process that does not exit on every run, such as a server app, you might want to use the <code>-r</code> and <code>-z</code> parameters to make Entr restart the program:</p><pre><code>find * | entr -rz node server.js</code></pre><p>Occasionally the workflow needs two commands, such as in this example compiling and running a demo program in C. That can be achieved with the <code>-s</code> parameter, passing the commands as one quoted string:</p><pre><code>find * | entr -s &quot;gcc demo.c -o demo; ./demo&quot;</code></pre><img src="https://substack-post-media.s3.amazonaws.com/public/images/ca0dabda-c4da-4808-ad6a-118c076ad411_1200x615.gif" alt="Atom/Pulsar autosave and Entr in action with GCC compilation and execution" title="Atom/Pulsar autosave and Entr in action with GCC compilation and execution" class="image-node embed"><p>But what if the full development cycle includes uploading the files to a remote server? No problem, Entr and Rsync can handle that as well, and with the addition of <code>ts</code> you will also see the timestamps of when Rsync last ran.</p><pre><code>find * | entr rsync -avz --delete-after * example.com:/path-to-target-dir/ | ts</code></pre><div class="relative header-and-anchor"><h2 id="h-browser-window-auto-reload">Browser window auto-reload</h2></div><p>The same principle applies to all development workflows, not just command-line stuff.
For example, this very blog is written in Markdown and converted to static HTML pages with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gohugo.io/">Hugo</a>, which has a built-in command <code>hugo server</code> that serves the pages locally and automatically reloads them thanks to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.npmjs.com/package/livereload-js">livereload.js</a>.</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/026d6bf9-0eb2-408b-8558-3b0d6dbc4a81_1200x615.gif" alt="Hugo server automatically reloading the page in the browser" title="Hugo server automatically reloading the page in the browser" class="image-node embed"><div class="relative header-and-anchor"><h2 id="h-real-time-feedback-from-code-editor">Real-time feedback from code editor</h2></div><p>While optimizing the <em>code-change-compile-run cycle</em> is the key to productivity, additional gains can be achieved when the code editor gives real-time visual clues about what is going on and what might be an issue.</p><p>The screenshot below shows how the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/AtomLinter/linter-gcc">GCC linter in <s>Atom</s> Pulsar</a> adds a red dot to the line with an issue and underlines the exact section on the line. A popup shows in-context information when the cursor is on the problematic function call.</p><p>In the same screenshot, note also how nicely <s>Atom</s> Pulsar shows (with a yellow hint) the filename that has uncommitted changes, and inside the file we can see the lines that have changed.
The dark grey communicates that a file is excluded from git tracking with <code>.gitignore</code> (in this case the <em>demo</em> binary, as we only want to have source code in git).</p><img src="https://substack-post-media.s3.amazonaws.com/public/images/fa298369-b22d-4c0c-9561-cfdaef92505e_894x558.png" alt="Atom/Pulsar git and linter integrations in action" title="Atom/Pulsar git and linter integrations in action" class="image-node embed"><p>In <s>Atom</s> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://optimizedbyotto.com/post/pulsar-best-text-file-and-code-editor/">Pulsar</a> I have linters for all the programming languages I use, and also <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.shellcheck.net/">Shellcheck</a> for bash scripts and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://yamllint.readthedocs.io/">yamllint</a> for configuration files.</p><div class="relative header-and-anchor"><h2 id="h-what-sets-some-programmers-apart">What sets some programmers apart</h2></div><p>During my career in software engineering, I’ve noticed that some coders simply have an instinct for what is <em>too slow</em>. While most people keep grinding along the path they first chose when trying to solve a problem, the more passionate programmers feel frustrated if their progress is too slow. An effective programmer will switch to optimizing the speed of their progress – and only once they are happy with their velocity will they switch back to solving the original problem. This means they are eventually much faster at it, and at all similar future problems. Smart programmers have a great sense of when it is appropriate to stop, improve their tooling, and only then let the process run much faster.</p><p>Next time you catch yourself wasting time on something repetitive and slow, stop and ask yourself why you are doing it.
A good programmer will always raise the bar for what they consider acceptable development velocity.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/1f448aaeb828d368e8f3fb5ece675ed4.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Resist the urge of the first solution]]></title>
            <link>https://paragraph.com/@otto/resist-the-urge-of-the-first-solution</link>
            <guid>lOgSxEBlBlObOMJiCc86</guid>
            <pubDate>Sun, 11 Dec 2022 00:00:00 GMT</pubDate>
            <description><![CDATA[This is the biggest and most common mistake I see that prevents people from thinking clearly, and it is also the most difficult one to unlearn.Too of...]]></description>
            <content:encoded><![CDATA[<img src="https://storage.googleapis.com/papyrus_images/c2b2fb93ab68f9c1f6245aef8ea1e0be.jpg" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFH0lEQVR4nI2SfUxTZxjFn4iIEaEfWkopBSy0hRZLC5SP9hZppS0Fyi2trSLF1tJiCyirUFq+hVYRRFsFNBMQmQpMWeaMomRRHCNxE0OmybJNtviXC3FZlmXZkiXL7nJFZ8ww2Zvc5M1NnvN73nMOoCQw0cHBDvNkEH0IbaAwYczA/diSOXcQWfTIn/o0K4Fdf50zYpN27E7b3/NBbCGIzbZiU3bsfPmLM8blo6VLzYq5g8gn1syLBu5QEbMnL8abSa5J3lTJAJQMgJKggg7VrA3udJIPiT5dGD+m50ybhXdrxF96Cr7vVq/0ly1O+Sbu3+z54vHog3tDDz7rXlgcuX3181Hvi4DuB79msbngX8CAetvRvBhPBtHBDjPRXwI0RCinQTUrtEFA6EaiA0rGBV3ydGXajFP8sEG61I0Gr43V3nvU9Ol8+41bfy4P/fjdB13Xp+1Xb+2ZmukI+pfai5bc+bNO8XRl+qiec7ow3p9HcwtJDnZYBR3fHgcYqWBLXO/iR3aKqacUjPc1rIly/m1bxoxLefDcOcPETMfk+Nj1YezZAPbHDez5MPa4u+9Cf23/MXFXsKq5cb5Bfqc6e7JSOIyyA0rGETHVxSfYktaX03Bx/NNRwBIfUsfd1Cwi9+TTBouZFw3c6xah1X8k/+QFZ/+JySk/9o3n56+H539a8Y4M3xo9gC3VBnvtakcdp67V7DTf3C8YN/KGNIm98pi2bMohXngVM8QQ9RqAkl/l7BaSXroUN66Jb67WpvsH9G73UJ8NW3BiD6tbm8qztQZRiXb3XiV227ByBfXWaxHj7qi9Du+evMs6VkAZ50NongxiDWfjPgZot7wGrLpkTQypT41oydrak493Sdro5XiO9bXof5kq+/XD0uWzBT1NKAuRs8Qys15yt0X0NCj99qTEYpSGlzukxl3nixOOy+jtORQXP9KW+MqfNwDtllePaBAQfTkkt5pP8/awPD31LuNvlwpn27PuN7FuXmos6+xWHHJdG3RMVsVO1SQvtG6XFeUTDVaKztS2M8kvoTYI8P7sY0AZ5W2AhgiGKNifsM6ZEt4uCK9U5UTXtWWb7YNe9ZMT4kkn50q35urS3LVny9NPnwzMzZzymfu0sZM2plmbEydTR6h0lUhyZyahhou7b6S+kX1zQ8l4X/cnbjjMWaeViZKMFpOp8FEfMvses6OzuuP57zG9Y1t56cTUjM1dZ81frdg7m4Z1xBE7R5KbShVJtVlsNy/Mmhi6l4ZLrQHAGSQop0NNAhTnplKMVglaEnBkXXJy+c3HhZfnwBuA1dPUT+0d3+Dy+/fxa3ZGb0vhbBJJi9PiaxPx8dVs1wZoiFBKBhMFSvhxESq9x5S1dIQXcCvA1gYqc9LZj0BUCCIVIzABYhRKD9hcphE9QZvDgGRBMWuLOeat3dcG4IGTQBsbGlVQul0qyePHIM468A5CvAjEOkhBQCgHRAsbGVDRwLTXMzdDVEICmZ2io+OD/1Vb41cJEfSRwBUKIF+TmsrmNLTDzgoACiSJoaQCVAZIUwCEQ7oactWEiM0Qy0phROtfDv5fAEoAZQyE58jjOSzIVID7JKQXQUVjsrsr1nYYRCikyoi+UeDuAAghUKhqCqBrqa8NWGWUEUASHapKCgF6EljdO1wtkFsGvHzIVIFILbQcAJEcAFgAskgoI629/jsBeNoEUCAZgirnxso6UJqAksAtUEp3V6Qp1MysHLxLsWlQao0zWORZPDTinTr/ADPy0Ki9Y
GisAAAAAElFTkSuQmCC" nextheight="630" nextwidth="1200" class="image-node embed"><p>This is the biggest and most common mistake I see that prevents people from thinking clearly, and it is also the most difficult one to unlearn.</p><p>Too often people rush to the first solution they see. <em>It is just too easy.</em> But the risk of the first solution not being the best one is way too high. If you want to think clearly, innovate, and solve problems in engineering or in life in general, learn this.</p><p>Always resist the temptation to define the problem so that it fits the solution. It must always be the other way around: first learn as much as possible about the problem. Define it well. You must first know what problem you are solving before you can even start to think about potential solutions. If you rush to the first solution you come across, you will most likely fixate on that particular solution, become blind to the real problem, and lose the ability to see a solution that actually addresses it.</p><p><strong>Focus your effort on the problem.</strong> Once you fully understand the problem, the solution will follow almost naturally and without too much effort, since the insight you will have gained into the problem will automatically direct you towards the right solution.</p><div class="relative header-and-anchor"><h2 id="h-how-to-resist-the-temptation">How to resist the temptation</h2></div><p>Resisting the urge is easier said than done. There are three things I practice to break away from this bias:</p><p><strong>1. Write it down.</strong> Putting it in writing forces you to produce coherent sentences and thus think the issue through. Seeing it in writing is a quick way to distance yourself from the issue and become your own first critic.</p><p><strong>2. Go for a walk.</strong> Take a timeout. Grab some fresh air to elevate your mood and attentiveness. Try to forget the issue for a while. 
This will allow you to approach the issue from a new angle when you return to it. Sleep on it, and maybe your unconscious mind will keep processing it overnight. Let time pass to increase the odds that you come up with a fresh revelation about the issue. Even if no new thoughts come, the fact that time passed without any new aspects surfacing increases the odds that you have understood the issue well.</p><p><strong>3. Present to or teach somebody else.</strong> Find somebody who is smart and honest. Then, either in writing or in person, explain the issue and all the relevant data points. When you try to convince somebody else of your point of view, your own brain will try to anticipate counterpoints, which forces you to do your research. When you actually present to the other person, their feedback will help you solidify your solution. The feedback from the other person does not even need to be correct. If it is, that is great, but even the mere act of defending an idea will let you know how you really feel about it.</p><p>If you still feel inclined after this, <em>go for it</em>.</p>]]></content:encoded>
            <author>otto@newsletter.paragraph.com (Otto Kekäläinen)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/c2b2fb93ab68f9c1f6245aef8ea1e0be.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>