<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Redd XF</title>
        <link>https://paragraph.com/@reddxf</link>
        <description>Technology, design and other stuff that keeps us up at night</description>
        <lastBuildDate>Sun, 17 May 2026 06:50:58 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[From Time-Telling to Clock-Building]]></title>
            <link>https://paragraph.com/@reddxf/from-time-telling-to-clock-building</link>
            <guid>TKSswi50GTomR5UwE1Yi</guid>
            <pubDate>Wed, 22 Apr 2026 12:32:58 GMT</pubDate>
            <description><![CDATA[Why Design Teams Must Make Themselves Unnecessary]]></description>
            <content:encoded><![CDATA[<h1 id="h-the-fear" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Fear</h1><p>No one imagined AI would be applied in the creative fields first. Everyone expected AI to automate hard manual labour, but clearly we were all wrong. Today, AI can do a lot of the work that can be done with a mouse and a keyboard, including design.</p><p>Today, the central interface of &quot;design tools&quot; is a prompt input box. You simply type in what you need, and out comes an often well-designed output. This democratises the authorship of design, and designers feel ever more threatened.</p><p>But, why?</p><h1 id="h-reimagining-the-design-function" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Reimagining the Design Function</h1><p>The design tools of the past, like Figma, Photoshop and Illustrator, required time and effort to learn, and this was a natural barrier that allowed the establishment of a specialist class called designers. But the new design tools use the prompt as the primary interface. All you have to do is describe what you need. And that can be done by anyone. Sure, a designer would be more specific about how they prompt the tool, but really, that&apos;s a small delta that will eventually become irrelevant.</p><p>As a design team, you have a choice: do you insert yourself in the middle of this process, or do you enable others to perform the design functions?</p><p>In their book &apos;Built to Last&apos;, Jim Collins and Jerry Porras introduced the concept of &quot;clock-building, not time-telling&quot;. They argued that great leaders build organisations that can sustain success without depending on any single individual—including themselves.</p><p>If you&apos;re a design leader, you know that your job has been to ensure that the principles of design are applied in pursuit of the business outcomes your organisation has set out to achieve. The fact that you had people and processes to do this is incidental.
There just wasn&apos;t another way to do it. But with AI tools, we finally have the means to allow anyone within the organisation to produce well-designed outputs. We should be redesigning our systems and processes so that this can happen more, instead of figuring out ways to insert ourselves into every process. The role of design is shifting from producing outputs to building systems that produce them.</p><h1 id="h-what-clock-building-looks-like" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What Clock-Building Looks Like</h1><p>As design teams, we already publish brand guides, design systems, icon libraries, stock images, and templates for various kinds of documents so that anyone within the organisation who needs them can use them. The trouble with that approach was that users treated these resources only as starting points, and could still end up creating poor outputs. I&apos;ve seen documents that have three well-designed icons next to a fourth that surely didn&apos;t come from the same team. Maybe the person creating the document just thought they looked the same? Maybe the design team just didn&apos;t have the time to create that new icon?</p><p>But with AI, you can now build tools that make well-designed outputs the default: tools that would allow the employee from the previous example to simply ask for the fourth icon they wanted and have it delivered with all the styles and guidelines applied, without a designer ever being the bottleneck in the process.</p><p>Here are three examples of tools and apps that can be built with AI technologies today without making the design team a dependency:</p><ol><li><p><strong>Social media images:</strong> Off-the-shelf tools like ChatGPT or Claude can produce images in any style, and that&apos;s the problem.
A purpose-built, brand-constrained tool would ensure that we can produce any image on demand while keeping it brand-specific.</p></li><li><p><strong>PowerPoint generator:</strong> This is not just a template, but a tool that produces the required slides and the graphics for each slide, while also making sure the right slide templates are applied.</p></li><li><p><strong>Storyboard generator:</strong> As a design team, we also produce a lot of videos, mainly used for marketing. This process involves a lot of back and forth with the marketing team, especially at the storyboarding phase. So we built a storyboard creation tool that can create the various key scenes of the story that the marketing team wants to tell. This helps create a better brief for the design team when it comes to actual production.</p></li></ol><p>I won&apos;t belabour the point, but there are many such opportunities to build tools that the whole organisation can use. Building them will get us out of the mundane time-telling space and move us into more challenging and interesting territory.</p><h1 id="h-when-clocks-replace-time-tellers" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">When Clocks Replace Time-Tellers</h1><p>Design teams that try to protect their role as time-tellers will find themselves increasingly sidelined. The work they are holding on to is precisely the work that is being automated.</p><p>The opportunity is not to resist this shift, but to move ahead of it.</p><p>AI does not diminish the importance of design—it changes where that importance lies. The routine, repetitive aspects of the craft can now be handled by systems.</p><p>This is the work of clock-building.</p><p>The goal is not to make designers irrelevant, but to remove the organisation&apos;s dependence on manual design execution. In doing so, design teams expand their influence rather than lose it.</p><p>There may be fewer designers focused purely on execution.
But there will be greater need for those who can build systems, enforce coherence, and guide the overall direction of design within an organisation.</p><p>The question, then, is not whether design survives this shift.</p><p>It is whether we continue telling time—or learn to build the clocks.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>organisations</category>
            <category>work</category>
            <category>ux</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/a593ad44e541b9d0f127516b8f644966b9bce6c93c737dd4f988cdb9d9d42549.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Post-Skill Designer]]></title>
            <link>https://paragraph.com/@reddxf/the-post-skill-designer</link>
            <guid>8Xi1cjMAT7xIxTjvASfT</guid>
            <pubDate>Sat, 11 Apr 2026 22:00:00 GMT</pubDate>
            <description><![CDATA[From tool mastery to strategic thinking — why designers must evolve beyond the 'how' to own the 'why']]></description>
            <content:encoded><![CDATA[<h1 id="h-a-rude-awakening" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A Rude Awakening</h1><p>I woke up one day to an announcement at the company I work for: everyone had to start using AI in their work. Claude Code had recently become so good that entire projects that would take about two weeks or more could now be done in an afternoon. While the biggest impact was meant to be seen in the churning out of code, it wasn't long before the developers started producing frontends that looked pretty well-designed out of the box.</p><p>I lead the design team, and it seemed like my entire team's value was being questioned. When someone attacks you, the natural instinct is always to defend yourself. While I pushed back and pointed out the gaps in the outputs produced by Claude (all true, by the way), it was also clear to me that there would come a point when I would be hard-pressed to find issues with the outputs. The writing was on the wall: Adapt, or die!</p><h2 id="h-the-democratisation-of-authorship" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Democratisation of Authorship</h2><p>So I took a step back and tried to figure out what's actually going on. Engineers were building frontends. Product managers were producing prototypes. Designers were producing code instead of designs alone. There were even marketing people producing websites. But was the output good? No. Will it get better over time? Absolutely.</p><p>I still remember the day I first used a printer. I had to get word processing software on my DOS machine; I think it was a program called WordStar. I used it to write out "Welcome to the Fete!" in the largest font it could produce. I then printed it out on a dot matrix printer that I got access to at my cousin's place, and I was amazed by the signage that got printed in about 10 minutes!
It was exhilarating. Thinking back on it now, the banner looked pretty bad and wouldn't pass any graphic artist's standards. But I think the excitement was largely driven by the fact that I could do all this myself without going to a specialist banner maker and paying the "big bucks" for a banner that I couldn't afford as a 7th grader.</p><p>I feel we are at a similar point in time today. The authorship of websites, applications and other software has become democratised. The skill barrier that existed before has largely been taken down. With the advent of courses on YouTube, this was already the trend, but there was still a barrier in that one had to learn to use the tools. Today, anyone who can type, or even just talk to a computer, can produce a website or an app if they wish. Exciting as it is to see our ideas come alive, let's be clear that only the baseline has been raised, not the top-line. AI tools in the hands of an expert will produce much better outputs than in the hands of someone without that domain knowledge. That gap will always remain. But we need to be careful to assess whether that gap is pertinent.</p><h2 id="h-the-8k-television-argument" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The 8K Television Argument</h2><p>The best-selling television resolution today is 4K. That's simply because, at typical viewing distances, the eye cannot reasonably distinguish individual pixels at resolutions higher than 4K. So 8K, while technically superior, is just not worth the premium, as it doesn't qualitatively improve our lives.</p><p>The analogy can be seen in enterprises as well. Under constraints like time and money, an output that "does the job" and is produced faster will remain more useful than something superior that takes far longer to produce. This will inevitably be the case with the outputs produced by AI.
They will become good enough in most situations.</p><p>So, code produced by designers to create websites or apps will meet that standard. These apps will excel on the design end, because it's a designer wielding the tool, and will be good enough on the rest of the fronts.</p><p>I use coding as an example skill here, but the argument holds for other domains as well. Marketing, sales, finance, operations, product management and more will all be fields that designers can now participate in and contribute to in a bigger way than before. But zoom out and you'll see that the same is possible for people in other fields too. And that, I think, is the larger point to pay attention to here -- the lines between functional domains are being blurred.</p><h2 id="h-the-question-of-taste" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Question of Taste</h2><p>I've previously maintained that "Taste" is one human quality that can't be replicated by AI. But I've seen what's possible with AI tools and don't hold that position anymore. The short version of the reasoning is that taste is a learnt skill. Design schools teach it every day. And if it can be taught, AI will learn it.</p><p>The slightly longer version of that answer: I understand this is an unpopular opinion and it feels like I am threatening our last stronghold. But wouldn't you rather know and be prepared for the truth than believe in a comforting lie and be shocked when it turns out not to be true? So let me break it down.</p><p>I've learnt that taste is a skill that is developed by working on three things:</p><ol><li><p><strong><em>Perception:</em></strong> Being observant enough to notice the details others miss, like negative space or colour interactions.</p></li><li><p><strong><em>Discernment:</em></strong> Knowing why something works.
Is it the typography or is it a cultural element?</p></li><li><p><strong><em>Restraint:</em></strong> Knowing what <em>isn't</em> necessary.</p></li></ol><p>All of these are things that can be taught. When training humans, you explain the science behind human perception and then expose them to hundreds or thousands of examples -- essentially pattern recognition -- until they are capable of producing tasteful work. If it is pattern recognition, then we're in the domain of machine learning and AI. It has been proven time and again that the outputs from these generative engines are indistinguishable from those produced by humans. And if the output is indistinguishable from the human-generated version, then by definition it cannot be the differentiator.</p><p>Therefore even taste delivered by AI will soon (if not already) hit the "good enough" threshold. While some designers will remain and serve the need here, they will be serving an ever-diminishing market. And when it comes to UX designers specifically, we've got the additional threat of interfaces completely disappearing as agents acting on behalf of users drive applications solely through Application Programming Interfaces (APIs). This is why I wouldn't stake my flag on this particular quality in the long term.</p><h2 id="h-beyond-the-function" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Beyond the Function</h2><p>Intelligence, labour and coordination have been human-driven since the beginning of the human journey. With AI, that fundamental idea has changed. AI systems can think, execute and coordinate faster, and in some ways better, than any human ever could. This is going to force companies to reorganise themselves along lines other than functional domains.</p><p>Jobs that were centred around skill, the domain of "how" something is done, will no longer be around. But this isn't something that should give us pause.
Being a painter in the past meant creating your own paints, starting from the raw materials. Painters made their own canvases and frames, and handled a whole slew of other jobs. It was only after all this that they could paint and express what they wanted to. Painters don't do that today. Most buy their paints and canvases from a store and focus on the challenge of the blank white space.</p><p>Organisations of the future will follow a similar path. We won't need to do the work of learning different tools, processes and skills. That's going to be abstracted away. So, when creation becomes easy, there's nothing to celebrate in the act of simply bringing something to life. The harder question is, and has always been, the "why" behind it all, and we now get to focus on that. We need to go from the brush and the canvas to the very reason for painting at all.</p><p>So, we've got to stop asking how something should be done and start asking what problem we're really solving, and solve that, whether it belongs within the "UX Designer" function or not. Put simply, we've got to become entrepreneurs, and that's the real moat.</p><h2 id="h-related-video" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Related Video</h2><div data-type="youtube" videoid="DJxPsWgtEdc">
      <div class="youtube-player" data-id="DJxPsWgtEdc" style="background-image: url('https://i.ytimg.com/vi/DJxPsWgtEdc/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=DJxPsWgtEdc">
          <img src="https://paragraph.com/editor/youtube/play.png" class="play">
        </a>
      </div></div><br>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>ai</category>
            <category>design</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/97705f5a36032d95bd1ac7f3d0305e4a867858ae46fba5a0f6fc8daf11bbf978.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Evolution, Not Revolution]]></title>
            <link>https://paragraph.com/@reddxf/evolution-not-revolution</link>
            <guid>5ZkuQqwNlpPGGvuYCiFf</guid>
            <pubDate>Sun, 07 Sep 2025 22:00:00 GMT</pubDate>
            <description><![CDATA[How Gradual Progress Can Lead to Success in Product Design]]></description>
            <content:encoded><![CDATA[<p>The Apple Newton was a revolutionary product that was packed with cutting edge technology like touch screens, handwriting recognition and wireless data transfer. It was well ahead of everyone else and was defining a brand new product category. It failed. But not only that, other companies launched only three years later with almost identical products ended up succeeding. This is one of the strangest things that seem to happen in product companies and I wish I had figured out the answer before I did, as it would have saved me making the same mistake two times.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/bfd947d202a65270146e205b2d05f78ea0ac8e96676e4ffccab1e5adcf9b756f.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAIAAAD8GO2jAAAACXBIWXMAAAsTAAALEwEAmpwYAAAKSElEQVR4nFVWC0wU1xqezWYymWx2MjOdfTH7cmbZnX10X/JYdxcib9EqjVajmFTjhnpZkbik+IAA7gXbW7kQ6vX6uGsk5bqWNIXaC+otj1skGqCKEcJtTcVHRCXUgsiyCy0L3MxM602/TCYnZ3K+//u/8885PwBBECwSiTmgKCKXyUiSTEhIIElSo9EoFAqCIEiSVKtUKIJIWBAEQUilUplUKpfJtFqNyWQ0Go2JiTqlUokgSGZmZltb26VL/zx7/mxaejoAQ5BIBIu4GAiCYBhGEAT7xnGFQqFUKhUKhSKBHfCfEjioVCqNRqNWq9Uq0mg02u12h8Ou0+kI4i2v19vW1nb5cjgcvrR121YAhll2BGHZUS6ARCJRqVRKpVIikWg0aq1WS5KkSqUiSZIgiMTERINBbzabGYbR6XRarZZhGIfDYbPZaJpOUCjWrl3b2dnZ09Pd29tbXV0NIJxwgiBkMhnvDE/HMAYTB51Op1GzYmmalslkFLVmncuVkZHhdrudTqfVajUamdTU1NzcXKfTSdO0yWy6evXqy5cvZ2Zmrly5AvAuIwgilUpVKpVWq6VpWqvRMAyTlpbm9XqNRkZH0xqNRkfTFEWRJGmxWFwul8fjSU9Pz83NTUtLM5tNGRnr09PTjSajTqerqalZWvp1ZWVlaGgIIEmSotbIZDIcx5VKpVarVatZW7RabVJSUsHmAq/XS1EUb7Rer6c5mEymjRs3er3exMREmqZFIpFUKtVoNDRN4zhWUVGxsrISj8evX78O8C5TFKXVas1ms91u5yn0+kSKopKTU7KystauXWtkWBAEgWOYhCBgCFIqE3Q6HcMwmzZtOnz4cEtLS39///DwcF9f39jYaCwWW4ov9fX1ASaTyWKx6PV6hUKBYqhcJtPROgzDpFKpVqux22zZWZlZWZkej8fr9WzYsGHbtq179uypqqoMhUJd3V0TzyYWFxdX/4+V1dXV+HI8thCLL8fv3L4NiMVikUgEwzBBECajcft771VWVra0tAwNDT18+LCxscHn810Oh5+/eD77ejYSibDSlpbe8MWX44uLi9FYLBqNLizEFljEFhcXYrHo8v
Ly4OAgUFRUdPTo0VAodOvWrSdPnsxF5lZWVpaXl5eWlsrKygAONE2PjIysrKwsLCwsLi4uLC5Eo/PR6HwsxjKyE3/EL7/+EpmPrK6u9t/sB3p6uoeH74yMjIyNjd3/4YcHD34cHx+fmZlpbm4GANZAv9+/5/33jx+viS/HI5FINBqNcfhd7x+xyD6xWHR+fn51dfXbb/8DNDQ0tLS0fP31193d3Tf7+4e+G7p7d/jRo0cnT55EUdTlcvn9/pKSkhMn6iKRuampqenp6RkOr2ZnZ1/Pzv2OSCQSmWefubm52dlXr+dex+Px3t5e4OLFi52dHX19fYMDA3fu3BkdHR0bG5t49mxwcDA1JSUzM3PP+3tCoX98d/u7aDQ6PT39avYVzz73+vXc3Ou5yNz8/HyUzeu3zNhIkQhn9XJXVxdw7ty5L75o7ezs7OvrGxgY6O/vv3379vXr1ysqKmiaNhgMeXl5JSUlZ86c+fnnn6d+mpqcnOQSmH4x+eL58+dPJyYmnk08ezYxMTHx9OnTx48fjY+PT06+ePr0yczMdGtrK7Br166ioiK/3793796DpaXBYLCxsfHosWMYisnlcgiCHA5HUlKS3W7fXVjo8/n27tvX0NAwODj4zTf/7unpvXmzf2Dg1r179x4/efz99/9tb2/v6/v2xo0bbW1fhsPhQCAAtLa2Dg4ODgwMXLgQunz58oULF7q7u+8O301OTk5MTGQYxm63b9iwweVap9FoDAZDUlJSSmrqqVOnjh07FggE7t27OzIy0tb2ZXNzczAYLD9cXlJyYOfOnaWHSgOBQEZGBtDU1NTY2NjQ0PDJJ5+Ew+FDgUMlJSWFhYUpKS7+OCKItw4fOeLz+RjGaLXZcnJyHA7nwYMHm5ubOzo6urq62tragsFgTU3N6b//rb297aOPPtqxY4ff79+9e7fNZgM+/svH58+fv3TpUigUunLlq9DFiw6Hw2635+dvdLvdWVlZAqFg/fr1lZWVdrtdrVZ7PV6Hw+lyrdu5c9fo6Mjk5IuBwYFwONzS8lkoFDpZf7KqqrKmpqauri4cDpceKgVqa/9cVVUVCoVaWlo6Ojqam5sdDofT6fR6vZu3bPF4PARBOJ1Ov9+fnJwskUgEAoHFYl3ndjOMMTMzs6mpqaenp7e3t76+nje92F8cDAYDgcCH5R++++67QGUlGzAYDFZVVZ07d668vJym6ezsnK1bt5UcKNm27T2lUmkymQoKCpKSkvjzUq1Wr6HWSCQSiqJTU1NDoVBjYyNNU/wufrB///79f6qtrT1z9uz+/fuBzs6r4w/Hh4fvjI6NPnjwoLq6ymQ252/I9/l8FRUVh48cSXW5cBx/22pNTNTraNpsNhv0eqPRSFGURqPZvn17dXW12+2WSqXXrl2LxaJTU1MvX/70+PGjHx/8+OmpU+yf/Pnnn7d/1V534sQHRR84nU63x7NlS8HG/I0ej8fhcDAGg46mvV6vx+PJzMzMz88vKioqKysrLi7evn3H5nfecaWmKhSK06dP379/v/kztpaOHDly6NChrq6uM2fOAMePHwcAQCaTQRBMU7Rer7dYLE6nUyqVKpXKlJSUVA42m81qteo46NkMTFlZ2fv27SsuLrbbbE6Hs76+vry8PCMjIzk5xWgykqRi8+bN2dnZwL86OmiaZi8DFBOLxUqSVKvVfD+RnZ3tdru1Wq1EKsUwDEVRBEFEHCAYBoVCMiHB6XTa7Xa321N6sLSsrKygoKC2tjYUCjU0NoZCoV2FhUD9X+vz8nJBEJTL5RiG8ReDWq22WCwqlUrMtTP8hSEEhUKhEIIgoUBAUZTX40ERBIIgsVis0WpzsrMPHDhw9Nixpk+brl27duPGjfr6evbAp2kdgiACgeBtq3VLQYFYLOb7H0SMwNxiCIZYYqEQBIUQBOI4DoGg07k2NycHQ1GhUAjDMIZhMpk8Nye3rq6uuqba5/MV7t4dCATYKpJKpTiOi0QiuVzOMIxcLidwHA
RBGIZ44TAM/x4AhCBIp9PBMMwwTHp6Ot+ugRzEYjGO4xaTSSgU8tcUiqJvEQSgUqny8vJIkhQIBPwsTwSCIN/oQRx4FhAEMQxDEERJKr3etASFAsNQGIYQBOHdY5tJjYZn4AkBi8ViMBhYKyAIhuE3XKzXQiGO41yBgUKhgM8D42oBx3G5XIYg/PZAUqmUT0UkEkkkEn4MwzAIggCCIAAAiLgGkvcBBIW/7SpHyjW8kjdREQRRKBQYirG9EDcPc5O8IF4liqB8qwiLRGwAtVqdlpZGUzQAACCXAIKwEnjJbHiRCMOwN17z9YZjmEql4nKGMAxl9XFr+DxEIhEICgEBAFAUZTabrVYriqICAcCrEIvFAgHvCfsWCACeXQTDrCkiEcqBIAgURRISEnAch7kMeA/4guaX/A+/+DHmz2lcbwAAAABJRU5ErkJggg==" nextheight="1024" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h2 id="h-obi-a-learning-experience" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Obi: A Learning Experience</h2><p>Ever since my twin and I learnt to program when we were 8 years old, we had been dreaming of producing products. Our first real product, one that was actually used by others, was an inventory management system built with Visual Basic 5. It was built for our family's jewellery store. We were in the ninth grade and were paid with bhel puri! It wasn't a lucrative project, but it was a delicious one. It was a thrill to see an entire store using and be dependent on our software!</p><p>Later, around 2008, we had a second run at building another version of the inventory management system, when we had a custom software development company and were considering building a product&nbsp; to help us scale our revenues.&nbsp;We looked at the competition and what they offered, their feature sets, their pricing methods and understood all the artificial boundaries they had in place and decided we could do better, way better.</p><p>Our software, named Obi, would be the following:</p><p>1. On the software side -- we would deliver a modern UX that any user could figure out how to use on their own. We didn't want to build the clunky software that required "training" that companies would charge handsomely for.</p><p>2. 
On the pricing side -- we would deliver the software on the web (the term "cloud" hadn't been coined at this time) and charge per user, without differentiating between user types.</p><p>3. On the upgrades front -- we wouldn't charge for upgrades. We just factored in the cost of upgrading once a year and melded that into the monthly cost.</p><p>There were several other aspects that were different in our case, but for the sake of brevity: we were doing exactly what SaaS companies do today.</p><p>We went out the door, started talking to customers, and waited for the hordes to come and break our doors down. But that didn't happen. The objections we got from our customers were insurmountable for a small software company like ours, and after a year of running the operation with only a few clients to speak of, each paying us a very small amount, we just couldn't sustain it any longer and had to shut down. This, despite a customer actually walking into our office and offering to pay us more than we asked so that he could keep using it at his store!</p><p>I joined Adobe as a UX designer at the end of 2009, using Obi to demonstrate that I had the experience of building "beautiful software" (the term UX hadn't become well known at the time).</p><p>Two years later, around 2010, most of the competitors had begun to do everything almost exactly as we had, and found great success. It would have been more comforting if we had been completely wrong, but our predictions of the future turned out to be exactly right. I heard someone say, "Being too early is the same as being wrong". While that described my problem perfectly, I still didn't know what the solution was. I needed to break down what "being early" meant.
In the relative stability of a 9-to-5 job, I finally figured it out.</p><h2 id="h-the-existing-environment" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Existing Environment</h2><p>An innovator imagines that their ability to look far into the future is what gives them an edge. We build the vision of the future based on a series of logical milestones, and arrive at the shape of what we believe is the future. The fault in our approach was that we didn't consider the existing environment in which our potential customers operate, which is integral to the decisions they make.</p><p>For example, we were right in assuming that most software of the future would be web-based, because that not only eliminates the need for installation, but also simplifies upgrades and makes the system infinitely more accessible from anywhere on the planet. But our customers lived in a world where nearly all software was sold on CDs that they could hold in their hands. It gave them the confidence that they would always have it and be able to use it, whereas something on the web could disappear tomorrow.</p><p>They were also transitioning from a world that believed software wasn't something one paid for, let alone paid for in perpetuity. They also believed that choosing to upgrade was meant to be optional. "Normal" upgrade cycles were once a year, and upgrades happened only when they saw that a new version had a lot of improvements over the existing one. They wanted to be able to compare and then make the choice to upgrade or not.</p><p>They also saw training as an essential service that a software developer had to provide, because in their experience they had always used clunky software that required it. No upstart like us would come and shake that notion with our fancy user-oriented design. But having the infrastructure to provide the training was also a signal that we were big enough to be around in the future.
Inventory management systems are a core part of the operation of a company, so this was an assurance that they needed before they switched to our software.</p><p>It's quite interesting how many parallels exist between architecture and product design. There is a famous quote by the architect Eliel Saarinen that goes, “Always design a thing by considering it in its next larger context—a chair in a room, a room in a house, a house in an environment, an environment in a city plan.” He really was onto something there.</p><h2 id="h-the-comfort-of-the-status-quo" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Comfort of the Status Quo</h2><p>Obi was different in all these respects. It was new and different on too many fronts all at once. In a sales cycle, we may have the opportunity to counter one or two of the objections that a potential customer poses, but if there are so many different aspects for them to wrap their heads around, then the default decision is to maintain the status quo and reject the offer.</p><p>Customers will make the cognitive effort if the end result is 10X better than the current situation, not when it is only a moderate improvement over the existing solution. In our case with Obi, we were doing a lot of things differently, but the end result was that the customer would still only be able to manage their inventory a little more efficiently; nothing changed apart from that.</p><p>There's the old adage, "One can only be wrong if they try to do something" (an interesting inversion of "Only those who try can succeed", which is meant to motivate entrepreneurs, but that's a different discussion altogether). In other words, we simply didn't offer the user a big enough reason to experiment with so many different variables.
All they'd see at the end was a modest improvement in their operation and a reduction in cost that didn't really make a difference to their profits.</p><h2 id="h-the-enr-framework" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The EnR Framework</h2><p>Given this understanding, I began to work on a framework that would let us chart a course for developing products.</p><p>Starting with a vision of the future is good. It works well as a goal to shoot for, and it also solves the "faster horses" problem, as the Henry Ford quote goes. No one but the innovator can dream up a future vision of a product. Only solid domain expertise will allow us to see that.</p><p>But once that vision is well understood, the next step is to chart a course from the present to that future. You can do that by creating milestones that take the current environment into consideration and make tweaks that will get you from the present to the next milestone. Each milestone makes one change and one change only. This affords time for the environment to catch up and for the user to be brought along the path, making only minor changes to their understanding each time.</p><p>It also makes sense to poll users on the features developed at each milestone, because the changes are gradual. The feedback can be integrated into the development of the next milestone, with fixes made wherever required. Before long, you'd have arrived safely at the future you imagined long ago. This path to the future is also more resilient, as it rests on solid footing at every step.</p><p>But this framework is heavily dependent on a strong feedback mechanism that allows you to hear from users often and in large numbers. I'll cover more about this in a future article. The framework is also of little use for leapfrog products such as ChatGPT that had no precedents.
</p><h2 id="h-conclusion" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h2><p>The 'Evolution, not Revolution' framework is a concept in design and innovation that I had to develop to navigate my work in product design.</p><p>If Apple had launched the Newton at a lower price point, they could have validated whether people wanted a PDA at all before packing in expensive touchscreen technology and handwriting recognition. Had they done that, they might not only have cornered the PDA market but could very well have entered the mobile phone market about 15 years before they actually did.</p><p>I had hoped they'd learnt their lesson from previous mistakes, but instead we're seeing them repeat those mistakes with the Vision Pro. The platform has some great potential, so I hope they fix things soon.</p><br><br>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>technology</category>
            <category>design</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/60d85dc238276ce8cb2a067aaa0ef7c05e8f02f014700f8ef2413a9efaf055ce.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Age of Goal Based Computing is Here]]></title>
            <link>https://paragraph.com/@reddxf/the-age-of-goal-based-computing-is-here</link>
            <guid>00eFTwJL7EBknzZ9WOfN</guid>
            <pubDate>Wed, 16 Apr 2025 22:00:00 GMT</pubDate>
            <description><![CDATA[Among the many things AI has enabled, it has now made possible an entirely new method of interacting with our computers.]]></description>
            <content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>I have long believed that we are on the precipice of changing the fundamental way in which humans interact with computers. We are stepping into an age where we will simply be able to tell our computers the goals we want to achieve, and they will figure out the steps needed to achieve them and then execute those steps for us. But this powerful idea is founded on the notion that your computer will know everything there is to know about you, including your preferences, your data and whatever else you can supply it. That raises many new questions regarding privacy, data ownership and data residency. But don&apos;t worry, I&apos;m not here just to whine (though I will); I also come bearing a solution, and even ways for you to participate in it.</p><h1 id="h-background" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Background</h1><p>To begin to understand the importance of what we&apos;re currently living through, I need to provide you with a bit of background on three aspects of this industry.</p><p>First, consider the evolution of computing from the perspective of how much effort it takes to have your computer do what you need it to do: that effort has been falling over time. We&apos;ve gone from writing our instructions on punch cards to writing programs in almost English-sounding programming languages like BASIC or Python. After that, the evolution plateaued until the birth of LLMs. Now, we can speak to them in natural language and have them do things for us.</p><p>The second progression I want you to pay attention to is the number of things a particular program is capable of doing. 
You see, at first you needed to employ programmers in your office to write you custom programs, because there was simply no other way to bring the efficiency of computers into your organisation. After that phase, applications became available for people to simply download, install and use. App developers competing for the user&apos;s dollars either built apps to serve a niche audience or built super-apps that could do everything. The latter group quickly ran into the limitations of the Graphical User Interface (GUI), because there are only so many things you can pack into a screen and still be user-friendly. Again, this was the case until LLMs. Now you have computers that can do anything, even write programs to do specific things.</p><p>Thirdly, apps have been very specific solutions to a problem. As such, they&apos;ve been very brittle: you could only ask the app to do certain things, and that was it. Even when speech technology was applied, you still spent time learning the phrases you could use, a la &quot;Hey Google, turn my lights on in the living room&quot;. If you said that phrase differently from how it had been set up -- for example, &quot;Hey, it&apos;s too dark in the living room&quot; -- you couldn&apos;t expect your app to understand you. This hurdle, again, has been overcome by LLMs, because they can understand your meaning beyond your words. For example, you could say &quot;I&apos;m thirsty&quot;, &quot;I&apos;m parched&quot; or even be euphemistic, like &quot;My mouth feels like the Sahara desert right now&quot;, and it would still get the meaning you want to convey. From a UX perspective, this ability for technology to fit seamlessly into the lives of human beings is what I find most powerful.</p><p>So we are at the threshold of these three progressions coming to a head at once. 
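The brittleness described above can be sketched in a few lines. This is a toy illustration, not any real assistant's API: `intent_match` stands in for an LLM using naive keyword scoring, where a real model would bring genuine language understanding.

```python
# Toy sketch contrasting rigid phrase matching (pre-LLM assistants)
# with intent-based matching. Not any real assistant's API.

RIGID_COMMANDS = {
    "turn my lights on in the living room": "living_room_lights_on",
}

def rigid_match(utterance: str):
    """Pre-LLM style: only the exact configured phrase works."""
    return RIGID_COMMANDS.get(utterance.lower().strip())

def intent_match(utterance: str):
    """Stand-in for an LLM: naive keyword scoring. A real model
    would map paraphrases and euphemisms to the same intent."""
    words = set(utterance.lower().split())
    if words & {"dark", "light", "lights"} and "living" in words:
        return "living_room_lights_on"
    return None

# The rigid matcher fails on a paraphrase; the intent matcher doesn't.
print(rigid_match("hey, it's too dark in the living room"))   # None
print(intent_match("hey, it's too dark in the living room"))  # living_room_lights_on
```

The point is only the shape of the difference: the first function recognises one string, while the second recognises a meaning.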
This means that not only are our computers going to be super capable going forward, they&apos;re also going to be incredibly simple to use, and no one will have to spend any time learning them. Can you imagine what an unlock this is for society? And for the first time in history, the benefits of this technology won&apos;t go only to those of us who grew up with technology in our lives and find it comfortable because we spent the time learning it; it&apos;s going to be a massive unlock for those who have so far been left behind by the technological divide, or even by the English language. These include people like my parents, who find their smartphones challenging to use.</p><p>This is huge.</p><h1 id="h-context-is-everything" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Context is Everything</h1><p>For AI systems to go from being merely effective at the baseline (based on what they&apos;ve learnt about humans as a whole from the internet) to doing things for you the way you prefer them done, they must understand your context. They need to be able to study your behaviours and patterns in order to emulate them. This may mean giving the AI access not just to your emails, calendars, contacts and photos, but even letting it watch your health data, medical records, passwords and so on.</p><p>That&apos;s when they can get to a place where they understand that you prefer to say &quot;clutch&quot; over &quot;cool&quot;, or like to order from this particular Chinese restaurant with the other one as a backup, or prefer your screen font size large because you have trouble reading, or even that all your appointments should be cancelled automatically if you wake up unwell one day. It can figure all this out without you ever having to tell it these things explicitly.</p><p>But AI isn&apos;t just going to be a useful assistant to have; it will soon become essential. 
We live in a world where information is being produced at unprecedented rates and we feel compelled to keep up. But we are reaching the limits of human capability, and the flood is only going to grow. There will be hundreds of decisions we can delegate to an AI working on our behalf, the least of which is deciding which email is spam and which isn&apos;t. If you want to participate in an accelerating world, having an AI assistant will become a necessity in the same way a smartphone is seen as a necessity today.</p><p>So the question is: where do you get your AI assistant?</p><h1 id="h-shop-around" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Shop Around</h1><p>If there&apos;s one thing we&apos;ve learnt from the mistakes of the social media decade, it&apos;s that if the service is free, you are the product. We&apos;ve also seen the many ways data about users has been used against them. For most readers of this blog, I don&apos;t need to list the kinds of things these large technology companies learn about us just through the ways we shop, or even the intonations of our voice. We&apos;ve seen how effective and accurate they have been, including the story of products being suggested to the father of a pregnant teenager before he even knew about the pregnancy!</p><p>I would never trust these corporations with my context data, especially when that data could reveal infinitely more about me than my social media profile ever could.</p><p>Let me be clear, I don&apos;t think these corporations are necessarily evil. I find it hard to imagine anyone waking up every morning with the sole intention of destroying other people&apos;s lives. But the incentives of corporations are definitely not aligned with serving the user. They are beholden to their shareholders and are mandated with generating profits for them. 
This mechanism doesn&apos;t allow anyone to see the user as anything more than a resource to derive value from. If value can be generated by selling their data to the highest bidder, so be it.</p><p>Given this equation, suppose you asked the AI assistant from one of these large corporations about a certain type of cancer, or submitted a scan to it with the intention of getting a second opinion. What if that assistant realised that your insurance company would pay a huge amount for that information, because it stands to lose far more by continuing to cover you if you fell ill? Do you think the assistant would be programmed to act on your behalf or to generate a profit for the corporation?</p><p>The good news is that we don&apos;t need to rely on these corporations. The open-source world has realised these pitfalls and has been hard at work developing technologies to ensure that this kind of future doesn&apos;t befall us. I realise how important these efforts are and have been contributing my own time to such projects as well.</p><h1 id="h-the-solution-is-simple" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Solution is Simple</h1><p>While the big companies that own the frontier AI models want to portray themselves as the only option for this sort of intelligence, don&apos;t be fooled. You can have absolutely great tools without any of them in the picture. You can retain complete ownership of your data while still getting all the convenience. You can do all this locally, in the privacy of your own computer. You don&apos;t have to learn prompting strategies either.</p><p>You can install Ollama and Move37 on your machine and you&apos;re good to go! All of it is completely free, and you can even check and audit the code if you&apos;re so inclined. 
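To make the local setup concrete, here is a minimal sketch of talking to a locally running Ollama server from Python. It assumes Ollama's default local endpoint (http://localhost:11434/api/generate); the model name llama3.2 is just an example, use whichever model you have pulled. Move37 is not shown here.

```python
import json
import urllib.request

# Default local endpoint of a running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks for one complete JSON reply instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to the local model and return its text reply."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running and the model pulled):
#   print(ask_local_llm("llama3.2", "Say hello in five words."))
```

Everything stays on your machine: the prompt and the reply never touch a third-party server.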
Granted, it&apos;s a little bit of work to get this installed right now, but the teams are making things easier all the time.</p><p>If you&apos;d like to see a demo of this, I&apos;d be happy to create one. If you&apos;d like a guide to installing it as well, I can create another video about that. But for now, here&apos;s a very quick walkthrough.</p><p>[COMING SOON]</p><p>As mentioned, these two projects are available for free. Go check them out:</p><ul><li><p>Ollama</p></li><li><p>Move37</p></li></ul><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>If you believe in an open future where individuals retain control over their own destinies, you can start using these technologies yourself. If you feel like you&apos;d want to contribute more and be a part of these movements, there are many ways to participate, depending on your skills.</p><p>If you&apos;re technically inclined:</p><ul><li><p>Move37 is looking for help with testing, verifying code completeness and various other items listed in their roadmap.</p></li><li><p>UX designers can propose new ways to organise the front-end to make things easier and more efficient.</p></li><li><p>The installation process could use a lot of simplification, and that&apos;s something you can contribute to, especially on Move37.</p></li></ul><p>If you&apos;re not an engineer:</p><ul><li><p>Use the tools, provide feedback and report bugs</p></li><li><p>Spread the word and help more people realise the problem and understand that there are solutions</p></li><li><p>Create videos about the projects</p></li><li><p>Add new recipes -- the steps for using various applications</p></li><li><p>Suggest new ideas and features that could be built</p></li><li><p>Star the projects on GitHub so they get more visibility</p></li><li><p>Donate to the projects if you&apos;ve got the money; they can use it.</p></li></ul><p>But whatever you choose, thanks for 
being a part of this! Until next time, ciao!</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>technology</category>
            <category>ai</category>
            <category>computing</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/4510c74fbfb81afcfe5627403d5f40370a91084de8a79c9e792c0fcd78cc5a3f.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The New Age of the Design Generalist]]></title>
            <link>https://paragraph.com/@reddxf/the-new-age-of-the-design-generalist</link>
            <guid>AgvEIzQLWdv9jnvXbprr</guid>
            <pubDate>Sun, 29 Dec 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[AI tools have made coding simpler. Skills are no longer relevant. Imagination and envisioning a particular future are more necessary than ever.]]></description>
            <content:encoded><![CDATA[<h1 id="h-the-boring-part" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The 'Boring' Part</h1><p>Recently I've been having a lot of difficulty staying on top of my social media game. Not being a social media native, I approached it with a certain disdain in the beginning and completely missed the wave. But given the new course I have set upon, I intend to learn as much as I can and get better at this now.</p><p>The most tedious aspect of social media has to be the manual process of posting content on each and every channel I've become a part of. That's the biggest hurdle and the most boring part of the entire process. There are tools for this, and I subscribed to Typefully. But alas, Typefully doesn't support the Web3 social platforms I'm part of. I looked at various other solutions and none of them satisfied my needs. It occurred to me to try building an application that would do this myself, with the help of AI, and I am so glad I did, because a whole new world has opened up to me!</p><p>I began down this path in the usual way one builds software these days. I got access to ChatGPT, but I found the idea of copy-pasting between the chat interface and VS Code tedious and quite error-prone, as I am not a very good developer and often made mistakes about where I copied and pasted from. GitHub Copilot Chat was like having the same ChatGPT capability repurposed for use within VS Code, though I still had to copy and paste between panels. But GitHub Copilot has a new feature called "Edits" which makes the edits directly to your code. This made things a whole lot more convenient. I could even describe what I wanted to do in plain English and it would make the right edits directly in my code. 
This was a game-changer!</p><p>So, with my new setup in place, I began by describing what I intended to achieve at a high level and asked it to recommend the right technology stack. It recommended that I build with Vue.js for my intended purpose; I said "Fine", and it began whirring away, setting up the environment with all the required files, and started to build out my views.</p><p>I didn't know what the limitations were going to be, so I wanted to do a trial project first. I gave it some instructions for a "Hello World" program and it performed beautifully; I got it to work as expected without much difficulty. The trouble came afterwards, when I asked it to scrap the trial project and begin working on the real one. It simply couldn't figure out how to take a step backwards. It was easier for me to delete the entire project folder and start again, so that's what I did. But it also highlighted a limitation I would have to deal with when using this tool: I had to chart a course of building features one by one, arriving at the final version without ever having to remove or modify a previously built feature.</p><p>My plan was to build a social media posting utility that would allow me to post across multiple channels by supplying it with a single post. So the simplest form of the application was a feature that would post a message to a single channel; then I would add a drop-down with additional channels, then create a broadcaster that would post across every channel, then a scheduler, and so on.</p><p>This approach worked out rather well. I had to learn to be very specific at times and very broad at others, and that was straightforward enough. But I hit a roadblock when, after building out the functionality, I wanted to improve the UI and started making fixes. 
Now these fixes needed to be made across all the interfaces, but for some reason it had a lot of difficulty doing this across the various modules I had built. It got so bad that I decided to use an entirely different front-end framework that would keep things standardised out of the box, so that I would have to do very minimal work to make the UI look good. I chose to rebuild the application with Streamlit.</p><p>At the same time, I heard about Windsurf, an AI-infused IDE, and its demo video looked great -- even better than Copilot Edits. So I downloaded it and built a simple demo application with the trial credits. It was an entirely different experience. I simply described the application I wanted to build, broke it down into the steps required to achieve it, and, since I had it in "Write" mode, where it could make file edits and changes, it simply went about doing everything. If I was impressed with Copilot Edits, this blew my socks off. I had the entire application set up and ready to go, with Windsurf making some incredibly good decisions about the things I hadn't mentioned. It even named all the modules and variables in the code as I would have done myself! It was simply outstanding. I achieved in a single day what had taken 5 or 6 days with GitHub Copilot. It did help that I was doing this a second time around, so I knew the right approach and didn't make as many mistakes.</p><p>But Windsurf hasn't been without its flaws. There were two times when I asked it to write a module "like the edits that you made to the Twitter module..." and it not only wrote the new module the way it had done the Twitter module, but went too far and made edits to other modules as well. I had to learn to limit its scope by adding "Do only this and don't make any other edits" to my instructions, and that stopped the problematic edits from being performed. 
So, the very thing it was great at and extremely useful for in the beginning stages of an application became its Achilles' heel later on. It was quite literally the opposite of Copilot Edits in this regard. So the workflow I would use for future application development is to use Windsurf in the beginning stages of the application and then switch to GitHub Copilot for specific edits and module-level enhancements. But of course, I am talking about their qualities as of today. I fully expect that the teams behind these two tools will find ways to integrate the best features of each other's AI code companions.</p><p>But this isn't meant to be a discussion of the usage of these tools at all. Creating a video about the right ways to work with these AI tools is a fool's errand, because these tools are being built by very smart people who really do understand what they're bringing to this world, and they will keep making improvements until the tools work just the way their users need them to. No, this is a discussion about the fact that I now stand in front of a well-designed and well-built application that does exactly what I want it to do! The implications of this have in equal parts shaken me and blown my mind, and I had to talk about that!</p><h1 id="h-the-interesting-part" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Interesting Part</h1><p>The following are the changes I expect to see on the horizon. There are numerous implications of what I've been doing, and I've done my best to list them in a logical order.</p><h2 id="h-os" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">OS</h2><p>Firstly, these tools, in their current form, have made a huge leap forward in the span of a year. The capabilities of frontier and open-source AI models are only going to get better over time. This will allow them to factor more parameters into their operations. 
In other words, you will be able to communicate with the AI engines in as many ways as you need, speaking some actions, writing out others and even supplying your Figma designs straight to the LLMs for them to understand and execute.</p><p>But then, if you extend that idea, one may ask: why couldn't the AI write the program at the time of need? Or, even better, couldn't the AI become the application necessary to do the task and, instead of providing a chat interface, provide the interface that an application would provide a user? A majority of applications are simply interfaces over some data source, so while there are many qualifiers to this sentence, the short answer is yes, it truly can become that super-app and replace the user-facing part of the operating system. The OS will be relegated to managing security, permissions and the hardware interfaces.</p><h2 id="h-skills" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Skills</h2><p>In this world, where would UX design fit in? If an AI can understand the best ways humans interact with interfaces, it can not only produce scarily effective interfaces, but also tune them to the needs of a single individual, no longer serving only the needs of the majority. In such a world, what role would a human UX designer play? I fear it will no longer be relevant.</p><h2 id="h-roles" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Roles</h2><p>In the short term, however, the role of the UX designer will change. They will no longer be beholden to a developer and their capabilities, and can circumvent them and develop a great application on their own -- just as a developer will no longer need a designer for the applications they create. 
As AI takes over more tasks, traditional design roles are likely to shift from execution to strategic direction.</p><p>The job will be to instruct AI agents to perform the required tasks in the manner needed to achieve the goal. So practical imagination and planning will be the skills that set us apart going forward. Lead designers and those who lead design organisations are going to be much better equipped to step into this role than anyone else. But this is only for the short term; they too will need to go up the chain and up-skill themselves to think like entrepreneurs in the long term, because AI will soon take over the planning as well.</p><h2 id="h-tools" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Tools</h2><p>Our current design tools are built for specialists and are therefore designed for precision. You can edit just this pixel, move exactly this screen and change exactly this effect. However, AI can handle the details, and the tools of the future are going to be designed around broad strokes. They can take inputs like "a background of a desert dune" or "a login flow with 2FA" instead. The broad strokes are only going to get broader over time, with humans able to specify things in an increasingly abstract manner. If you look at Krea.ai as an example, see what they're doing with the Edit feature, and compare that with the Photoshop workflow, this becomes even more evident.</p><p>Logically, this also means that the target audience is going to change. 
These tools will no longer appeal only to graphic designers; an entrepreneur running a company could use Krea, today, with little to no knowledge of graphic design and create some stunning artwork.</p><h2 id="h-economies-of-scale" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Economies of Scale</h2><p>The dynamics of the economy will also change within the digital space. One of the biggest challenges in the software industry has been designing features that appeal to every individual. Because it costs a lot to design, code, test and release a feature, decisions have always been skewed towards appealing to the majority of users and ignoring fringe needs. But with AI in the mix, this very dynamic can change. As I was discussing before, every feature and interface can be built to appeal to the individual user. Maybe the workflow will be that the human-orchestrated output is the feature that appeals to the majority of users, and AI uses it as a starting point to fill in the gaps and make that feature work for the fringe users. This changes the entire industry.</p><h2 id="h-unexpected-benefactors" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Unexpected Beneficiaries</h2><p>Some of the biggest beneficiaries of this change in economies of scale are going to be the people currently disadvantaged by the digital world. For example, these could be people with eyesight challenges, or the elderly, who have so far been left behind by the digital world and are left more and more unsafe because of it. My own father keeps sending me fake videos of various things, completely believing whatever the video says. And since he's at high risk in a digital world, I have had to ask him not to do any digital banking and have prevented him entirely from using UPI and the like. 
My mother is unable to use these technologies as she's a non-native English speaker and most interfaces are designed for English. Imagine user interfaces that adapt to them at no additional cost to the software developer. They would finally be able to cross the chasm and join the rest of the digital world.</p><h2 id="h-the-generalist" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Generalist</h2><p>In this AI-assisted world, a power shift is taking place. The users of AI are going to be able to build anything they can imagine. You are no longer constrained by the need for a large team to execute on your vision. Many of the roles one needed to hire for in the past can be performed by an AI agent today, and it will only be able to handle more roles in the future. This means the capital required to build the companies of the future is only going to shrink, which in turn means that more and more people with the imagination and desire to do something will simply be able to.</p><p>A typical entrepreneur today is a generalist who understands a little of all the pieces required to execute something and can bring all those pieces together. Broadly speaking, they identify the need in the market, develop a vision for the solution to the problem, bring in specialists to fill in the gaps and capitalists to provide the required capital, and create the culture required to execute on the vision. It's this very person's role that will be made infinitely better in the future. In the digital world, we're ushering in the dawn of the age of the generalist.</p><br>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>career</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/843966de7d2ebae51acceb0f55498196175a0bce81352d5cfbfe3342ba898cc0.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[How to start designing for Web3]]></title>
            <link>https://paragraph.com/@reddxf/how-to-start-designing-for-web3</link>
            <guid>23gh1R0yV3qUUrOwPBth</guid>
            <pubDate>Tue, 03 Dec 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[How to go from designing UX in Web2 to Web3. Web3 has been built by engineers, for engineers, until now. But the recent DevCon conference hosted by the Ethereum community in Bangkok had a huge focus on user experience and that's never happened before. Given how geeky this community has been, even talking about usability is a huge step! There's a shift from focusing on building infrastructure to building applications on top of that now that a lot of the required infrastructure is in place. There...]]></description>
            <content:encoded><![CDATA[<h1 id="h-how-to-go-from-designing-ux-in-web2-to-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">How to go from designing UX in Web2 to Web3</h1><p>Web3 has been built by engineers, for engineers, until now. But the recent DevCon conference hosted by the Ethereum community in Bangkok had a huge focus on user experience, and that&apos;s never happened before. Given how geeky this community has been, even talking about usability is a huge step! Now that a lot of the required infrastructure is in place, there&apos;s a shift from building infrastructure to building applications on top of it. There&apos;s a clear push within the community to focus on usability, with the intention of attracting the mainstream population. This presents an unprecedented opportunity for designers. But transitioning hasn&apos;t been easy, and this article intends to make that part simple.</p><h1 id="h-firstly-understand-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Firstly, Understand Web3</h1><p>As UX designers know, to design something effective we need to begin by understanding as much as possible about the technology underneath. To that end, let&apos;s begin by understanding what is different about Web3 in contrast with Web2.</p><h2 id="h-1-different-stack" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. Different Stack</h2><p>In Web2, you have companies that own the entire application stack -- they own the servers, the databases, the back-end software, the payment layer and the front-end UI layer. Users go to these companies and create accounts with them, and their usage information is stored in databases owned and operated by those same companies.</p><p>In Web3, the network is composed of a huge number of nodes, which are the equivalent of web servers in Web2. 
The front-end may be built by a company, and users who want to use the software can just sign in with their wallets and begin using it. Their data is stored on the public blockchain.</p><p>The results of these differences are vast. Let&apos;s consider a scenario where the application is a social media app and the company wants to implement a feature that some users object to. In Web2, those users have no choice but to leave the platform, which also means leaving behind all the data they generated and all the friends they made there. In Web3, there is no such thing as locking out the user, as their data is always available on the public blockchain. If one front-end provider is charging a very high price for their app, users could switch to another front-end provider and still have access to all their data and their relationships.</p><h2 id="h-2-composability" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Composability</h2><p>Web3 is built on standards and protocols, so it is highly composable. This means that developers can treat all data sources and all projects as if they were their own.</p><p>For instance, in the example given above, you can imagine a developer creating a different front-end that not only pulls the social media posts from one platform, but also includes background music from the user&apos;s song library on an entirely different platform, creating an audio-visual experience altogether. This is possible because of the composability of Web3.</p><h2 id="h-3-permissionlessness" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. Permissionlessness</h2><p>If I wanted to develop this sort of application on Web2, I would need to get API access from the provider -- let&apos;s say Twitter. To get that access, I would first need to submit a short essay on what I intend to do with my API access (to my own account) and then agree to terms and conditions set by Twitter&apos;s lawyers. 
Even then, my access remains conditional: if I built something super popular, but something Twitter didn&apos;t agree with or felt they would want to do themselves, they could pull my API access.</p><p>There&apos;s none of that in Web3. Since the data is stored on a public blockchain, it is publicly accessible, and anyone can write programs that use that data without asking anyone for permission and without terms and conditions.</p><h2 id="h-4-ownership-of-data" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. Ownership of Data</h2><p>The data that users generate is typically stored publicly, but private data is typically encrypted using the user&apos;s public key. This means the data can only be decrypted by the user and no one else. This puts users squarely in control of their information, not any corporation. Mining this data is also impossible while it&apos;s encrypted, so users don&apos;t ever have to worry about being exploited.</p><h2 id="h-5-pseudonymity" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">5. Pseudonymity</h2><p>It is a common misconception that Web3 is anonymous. It is not. Every transaction on a blockchain is signed by a wallet, so all the data is identifiable as belonging to that wallet&apos;s holder. The only piece of information one doesn&apos;t have is who the owner of the wallet is. But if anyone ever establishes that a certain wallet belongs to a specific person, that person immediately stops being anonymous and their data becomes transparent on chain, otherwise known as getting doxxed.</p><p>Sometimes this is deliberate: for example, if a user wants social recognition in a game, they can establish their ownership of several items they have collected in their wallet by demonstrating their ownership of that wallet. 
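The encrypt-with-the-public-key idea from the Ownership of Data point above can be illustrated with a toy sketch. This is textbook RSA with deliberately tiny, insecure numbers, chosen purely for readability; real wallets and chains use different and far stronger schemes, so treat this as an illustration of the asymmetry only:

```python
# Toy RSA illustration (NOT real cryptography; the numbers are tiny on
# purpose). It shows the asymmetry the article relies on: anyone can
# encrypt with the public key, but only the private-key holder can decrypt.

p, q = 61, 53                 # two small primes (hypothetical)
n = p * q                     # 3233, the shared modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent: (e, n) is the public key
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42                  # a piece of user data encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can perform this step
recovered = pow(ciphertext, d, n)  # only the key holder can perform this step

print(recovered == message)   # True
```

The point is the one-way property: data sitting in public can be readable only by its owner. Production systems wrap asymmetric keys around symmetric encryption and look nothing like this toy.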
But often, users want to remain unknown and do not want to be doxxed if they can help it.</p><h2 id="h-6-speed" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">6. Speed</h2><p>Because of its decentralised structure, Web3 is often slow when compared to AWS or other such centralised servers. This is something to keep in mind when designing systems. You have to account for the latency, as the results of actions may take a long time to be written to the blockchain and are not as immediate as in equivalent systems built on Web2 standards. But all this is changing and the lines of division are being blurred.</p><h2 id="h-7-culture" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">7. Culture</h2><p>Web3 has been built by idealists, but in a very pragmatic way, in order to have real-world utility. You cannot miss hearing about the &quot;original cypherpunk&quot; values that seem to be beneath all of it. Decentralisation is not a buzzword; it is the essence of the culture. Being authentic and original are virtues in this field. Having a different viewpoint is celebrated, while corporate behaviour, censorship and authoritarianism are taboos.</p><p>The two biggest projects in the field are Bitcoin and Ethereum. Bitcoin&apos;s founder is unknown, and Ethereum&apos;s founder is Vitalik Buterin, who has actively tried to build a project that doesn&apos;t need him around to succeed. He is a 29-year-old billionaire who wears unicorn t-shirts and lives a nomadic lifestyle, travelling the world with nothing more than what he has in his backpack. I mention these things to highlight the kind of culture that permeates this world.</p><h2 id="h-8-digital-property" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">8. Digital Property</h2><p>A big part of the reason for the existence of Web3 has to do with property rights, specifically digital property rights. 
If you&apos;re anything like me, all your work exists in the digital medium alone. How can you claim ownership of a blog post or artwork? With Web3, you can: you can mint it on a blockchain to claim ownership and lock in the date and time of publication.</p><h1 id="h-key-challenges-opportunities-facing-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Key Challenges (Opportunities) Facing Web3</h1><p>For Web3 to become mainstream, there are some key challenges on the user-facing side. These are also opportunities for UX designers to come in and solve.</p><h2 id="h-1-the-wallet" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. The Wallet</h2><p>If a user wants to do anything on Web3, they need a wallet. While a lot of very user-friendly wallets do exist, and some even go so far as to allow Google sign-ins to create accounts within them, the purpose and usage of these wallets is not well understood by users. This is an area that needs a lot of work, as there are no direct analogies in Web2. Communicating the intricacies of public-private key cryptography is a challenge.</p><h2 id="h-2-managing-security" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Managing Security</h2><p>One of the disadvantages of Web3 is that it has only one level of security: the public-private key pair and nothing else. It&apos;s always the highest level of security.</p><p>Unlike in Web2, where you sign in with just a login and password on an app of low importance, with login, password and 2FA on your email service, and possibly with login, password, OTP and Face ID on a banking app, there are no such graduated levels in Web3. 
You can see the problem that occurs if a user relies on the same account for social apps as well as for storing their money: only one level of security exists for both, and if it gets compromised, they lose everything.</p><p>Also, Web3 is pseudonymous, not anonymous, so if a user doxxed themselves on a social app, their social network could easily find out everything about their finances.</p><p>Finally, if a user forgets a password in Web2, they know they can go through some steps and regain access to their accounts. This is not the case in Web3, as there&apos;s no recovery. If the user doesn&apos;t do a good job of storing their keys, they may lose access to their account forever.</p><p>These are all challenges that the industry needs design intervention to solve.</p><h2 id="h-3-liquidity-fragmentation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. Liquidity Fragmentation</h2><p>The Blockchain Trilemma frames the problem of blockchains along the dimensions of decentralisation, scalability and security, and states that any blockchain can have two of the three qualities but must sacrifice the third. For this reason, there are several blockchains out there that cater to different needs, with some providing security while others are tuned for scalability. Consequently, different amounts of capital are spread across these blockchains, creating fragmentation of liquidity, and users feel the need to constantly move their capital between chains. But I think design could solve this problem in a number of different ways.</p><h2 id="h-4-public-relations" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. Public Relations</h2><p>Some regulators hate this field because they see it as a threat to their ability to control their population. Crypto has therefore long been portrayed as a scam by the mainstream media. 
The field has also had several grifters who perpetrated scams, and those have created very bad press. But just like the airline industry, each mishap has taught the field how to improve. Web3 has truly reached a level of maturity today where it can serve mainstream use cases and make the lives of end-users a lot better. But we&apos;ll have to overcome this PR hurdle. As a designer, you can help.</p><h2 id="h-5-the-search-for-mainstream-use-cases" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">5. The Search for Mainstream Use Cases</h2><p>While DeFi is an established use case, NFTs exist, and crypto is used to transfer money between people and to pay for crypto-native costs like gas fees, there are very few use cases a mainstream user can turn to crypto for. There are “real world assets”, “crypto collaterals for fiat loans” and “pay with crypto” as potential ideas, but there may be many, many more. UX designers that have worked in other fields could probably identify applications of this technology in those fields.</p><h2 id="h-6-ai-and-crypto" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">6. AI and Crypto</h2><p>No list would be complete without a mention of AI, and that’s true here too. There are very interesting applications where AI agents have been creating crypto coins and utilising them for things the agents themselves determine. But this is just the tip of the iceberg, and designers will be able to identify more opportunities, as this is a very creative space for its application.</p><h1 id="h-trends-to-keep-in-mind" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Trends to Keep in Mind</h1><p>There are some trends already taking place that you should pay attention to, and shape if you&apos;d like.</p><h2 id="h-1-demand-for-design" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. 
Demand for Design</h2><p>Unlike at any other point in time, there is a deep desire among projects to improve their design. Vitalik Buterin, the founder of Ethereum, has highlighted the importance of UX in several of his talks. He is, of course, talking about technological improvements that would improve the user experience at the surface layer of the tech. But nonetheless, there is a focus on UX like at no other point in time, and we’re seeing the conversation shift towards making things more usable by regular, mainstream users.</p><h2 id="h-2-changing-demographics" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Changing Demographics</h2><p>While Web3 was built by engineers, for engineers, that is changing today. It is attracting far more non-technically inclined people who may be considered early adopters. So interfaces have to cater to both audiences at the same time.</p><p>While engineers may need all the details and want to exercise control over how a transaction is performed, early adopters tend to flock towards apps that remove the complexity and offer an opinionated (say, fastest transaction speed) but simple experience. Building interfaces that cater to both, where the interface reveals more complex controls on demand, may be the order of the day.</p><h2 id="h-3-changing-visual-styles" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. Changing Visual Styles</h2><p>A couple of years ago, all of Web3 was looking inwards and there was a strong visual identity. Colours were saturated, there were a ton of gradients, a lot of space themes, a lot of orbs, balls and atoms, and everything felt very cutting edge. And everything was dark-themed, possibly because the audience it was catering to was mainly male developers. But things have been changing. Things have grown a lot lighter now and the visual language has become a lot more accessible to the mainstream. 
The copy used on websites also seems to be plain English these days, not filled with jargon and acronyms.</p><h1 id="h-whats-in-it-for-you" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What&apos;s in it for you?</h1><p>After all that&apos;s been said above, if you still need a reason to get into Web3, here are a few:</p><h2 id="h-enormous-growth-in-the-field" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Enormous Growth in the Field</h2><p>2024 was a great year, but 2025 is looking to be an amazing year for the industry. Now is the best time to get in, learn and contribute to this field. Imagine being able to catch the next wave of technological growth at the right time. You’ll do phenomenally well.</p><h2 id="h-professional-growth" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Professional Growth</h2><p>If you&apos;re tired of building ecommerce portals, come over to Web3, because you&apos;ll be working on something brand new every day. Decentralised finance, NFT projects, decentralised social, decentralised science: these are all fields within Web3 that are being discovered every day. If you&apos;re interested in learning new things and staying on the bleeding edge, you should definitely consider the shift.</p><h2 id="h-set-standards" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Set Standards</h2><p>There are several aspects that haven&apos;t been figured out yet. If you&apos;re anything like me, you enjoy working in the unknown, figuring things out and building standards.</p><p>As an example, a project called Uniswap presented a very simple interface for swapping tokens, and that&apos;s become the de facto standard interface used anywhere tokens are swapped. Or take the way Metamask did onboarding; that&apos;s become the standard for all wallets today. 
In the same way, there are a ton of other aspects of Web3 that are still being figured out, and you could be there contributing new ideas into the mix.</p><h2 id="h-do-it-for-the-money" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Do it for the money</h2><p>If nothing else, do it because the money is good. Web3 projects typically pay very well because there aren&apos;t a lot of designers who understand the field. But that&apos;s not all. Contributors are typically also given project tokens as part of their compensation, in place of ESOPs. While some ESOPs go up by 20% if you&apos;re lucky, tokens can sometimes go up by 20x, so in case of an upside, the value could be life-changing.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>Just as AI transformed from a sci-fi concept into an everyday reality, Web3 is following a similar trajectory. We&apos;re already seeing traditional companies like MicroStrategy transform their business by simply holding Bitcoin, growing their market value from $1.1 billion in 2020 to over $8 billion in 2021. And that&apos;s just from using cryptocurrency as a treasury asset. You can imagine the potential when companies fully embrace Web3&apos;s capabilities: decentralised operations, user-owned data and transparent transactions.</p><p>As a UX designer in Web3, you&apos;re not just designing interfaces – you&apos;re designing the future of digital interactions. The challenges are significant, but so are the opportunities. Start small, focus on user needs, and don&apos;t be afraid to challenge conventions.</p><p>Remember, every great interface began with someone asking, &quot;Couldn&apos;t this be simpler?&quot; That&apos;s your mission in Web3. The technology is ready; now it&apos;s time to make it human.</p><h1 id="h-accompanying-video" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Accompanying Video</h1>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>web3</category>
            <category>design</category>
            <category>tutorial</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/911a4d94b4da568d2fb9d3c0be1445abfc65bad644c4a01fea44e73417c4df81.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Ultimate Guide to Hiring the Right UX Design Agency]]></title>
            <link>https://paragraph.com/@reddxf/the-ultimate-guide-to-hiring-the-right-ux-design-agency</link>
            <guid>0bHpLn1DXkXBVARWOpVy</guid>
            <pubDate>Thu, 14 Nov 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[It doesn't matter that the design agency has a high rating if they aren't a good fit for your project. But how do you find the right one?]]></description>
            <content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>When I used to run my UX design agency, we had a lot of clients that came to us after having already worked with another agency on the same project. Upon enquiring as to their reasons for switching to us, we were often told some version of “The other team just didn't get what we wanted”. The vain side of me would have been pleased, and in the beginning I often was, until I learnt that these situations weren’t caused only by mistakes made by the vendors, but also by the businesses that hired them.&nbsp;</p><p>If you’re a business that’s interested in hiring a UX design agency, read this article to avoid paying twice for the same thing and learn to hire the right agency for your project the first time!</p><h1 id="h-understanding-ux-design" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Understanding UX Design</h1><p>A sure-fire way to hire the wrong agency is to not understand what UX design can do for you. Design, unlike art, is about solving problems. The job of a UX designer is to develop a solution to the business problem that you articulate. They translate the business objective you set into the form of an application that will help achieve that goal. They do this while balancing the needs of the customers against the interests of the business, all within the limitations posed by technology and by your company’s brand.</p><p>It isn’t a simple task, and I hope you can see that it takes a special blend of qualities to do this job right: someone business-oriented, technically adept, artistically inclined, detail-oriented and sensitive to culture. A good design agency will be a partner in helping you achieve your goals, not someone who will spit out something pretty to slap onto your website and call it a day. 
You, on the other hand, should be figuring out the right way to engage with such a person or team and demanding the right kind of output from them.</p><p>So, typically, people hire UX designers to do one of the following:&nbsp;</p><ul><li><p>You have an idea for solving a certain business problem with software, and you want to develop a prototype in order to further refine that idea and bring it into a form that is more real than just an idea.&nbsp;</p></li><li><p>You want to reduce the overall costs of software production by working out all the kinks and details on paper, as it were, before handing it off to a developer to build out — the “measure twice and cut once” approach.</p></li><li><p>You need to bring together the disparate ideas of stakeholders from across your organisation and put them into the cohesive form of a software application.</p></li><li><p>You want to build a prototype and build consensus among all the major stakeholders within your organisation before allocating budgets and building it out.</p></li><li><p>You want to get the product designed in order to plan out the efforts and expenses required to get it built. It will also help you decide what the budget allocations need to be.</p></li></ul><h1 id="h-preparing-a-brief" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Preparing a Brief</h1><p>As highlighted above, it is imperative to the success of the project that you take the time to define your business objectives clearly. 
Try to be as specific as possible, describing things like, “We have had most of our customers purchasing items from within a single vertical, but in an ideal world, every user landing on the website would generate a cart value of $2,000 with products from across three different product verticals”, or, “Our customer service division has some of the highest churn rates, and this makes it difficult to impart the brand values of our company onto new hires so that they can interact with customers in line with the way we do things here”, etc.</p><p>Unless it is critical for some reason, do not make the mistake of specifying how this should be done too early on. For example, don’t say you need a website, an iOS app, etc. to do these things. It’s the design team’s job to figure out the “how” of it all. Even if they eventually land at the point of developing a website, they may sometimes surprise you by solving the problem in a way you had not anticipated.</p><p>One of the interactions I remember fondly was with a client who was patiently listening to my team’s pitch for the right solution to the problem he had stated previously. When one of his employees started to correct me about the mental model I had used for designing the application, the boss, knowing full well why he had hired us, politely interjected and asked the employee to allow me to finish. In the end, they were pleasantly surprised that we had removed an unnecessary layer while still achieving their objective. 
This wouldn’t have been possible if they had been hyper-prescriptive about the solution they needed.</p><h1 id="h-finding-potential-agencies" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Finding Potential Agencies</h1><p>While the best method of finding a UX design agency is through a referral from someone that has worked with the agency, because you’d also understand the fit, searching online and finding them on lists such as Clutch may be inevitable.&nbsp;</p><h2 id="h-a-rankings" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">a. Rankings</h2><p>One thing to keep in mind is that the rankings are not always completely objective. Their methods of evaluation aren’t so granular that you can reliably judge that someone ranked 20 is definitely better than someone ranked 30.</p><p>While we were ranked among the top 20 within India and the top 100 globally, I would pay more attention to whether the agency is a good fit for you, which is the most important thing. Still, these lists are undeniably a great resource to rely upon while looking for design partners.</p><h2 id="h-b-local-vs-remote" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">b. Local vs. Remote</h2><p>Design agencies can work remotely, but I’d give an agency that can come into your office a few times greater preference than one that absolutely cannot. There are specific stages, like discovery discussions, brainstorming, user testing and pitching ideas, that benefit from being done in person. So if the agency you are considering can come into your office for these meetings specifically, great; but if that’s not at all possible, it should not be a deal-breaker in any way.</p><h2 id="h-c-industry-experience" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">c. 
Industry Experience</h2><p>If your project has to do with improving an existing system of some kind, then industry expertise may help, as you’d probably spend less time explaining fundamental ideas to the designers. But if you’re breaking new ground, I’d give less importance to industry experience, partly because most designers will learn the intricacies of the industry anyway, but also because designers new to a field may evaluate it with fresh eyes and see things in a different way, which may be exactly what the doctor ordered.&nbsp;</p><h2 id="h-d-alignment-of-values" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">d. Alignment of Values</h2><p>There is a subjective aspect to design, and that’s to do with taste. Some design agencies are drawn to flash while others are drawn to quiet efficiency. Some designers may take shortcuts to solve problems quickly while others take the long but reliable paths. In all these areas, taste matters. Don’t make the mistake of hiring a designer that clearly shows you one quality in all their work while expecting them to deliver on another quality that you want. You are never going to be able to demand that every little thing be made a certain way. Engaging an agency inherently means that you are relying on them to make several micro-decisions on your behalf. So engage with the agency that resonates with you, so that you will be comfortable with the choices they make on your behalf.&nbsp;</p><p>Important Side Note: Do not evaluate design portfolios solely on visual design expertise, unless that’s the requirement.</p><h1 id="h-filtration" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Filtration</h1><p>Okay, so you’ve succeeded in short-listing a few design agencies that you want to work with. Now you’ve got to figure out which one of these is the right fit for you. 
The filtration process should help you find someone that is technically proficient, has not portrayed themselves as something more than they are, fits your culture and, finally, fits your budget.&nbsp;</p><p>To do this, you could begin by requesting a proposal from each of the agencies. You can call these agencies up and have a brief discussion with them on the phone. They may request a brief from you, which you can send out to them. How long they take to send a proposal to you is itself an indicator of whether they have their internal processes figured out.</p><p>Once you get the proposal, it becomes a good device to evaluate the capability of the agency. You see, if they are good UX designers, they will make this process easy for you right from the get-go. Their proposal will clearly spell out what is involved in the process, give you firm estimates of time and costs, and also detail the payment schedules. It will have been designed to answer all the questions in your mind about engaging the agency. If it does not, this is a sign of immaturity and likely a strike against the agency.&nbsp;</p><p>After this, the agency will typically request a time slot to walk through the proposal with you. You could also request the same if they don’t offer one. In this call, make sure to cover the following questions:</p><ul><li><p>How do you evaluate the success of your design? Design is a solution to a problem, and UX design is usually employed to solve business problems. The work that UX designers do is not subjective; it consists of very clear solutions to business problems. If the design isn’t helping achieve that business goal, then the design isn’t successful.</p></li><li><p>Have you done any work in my field? If not, what would your approach be to designing our application? If they have worked in the same field, you’d be able to evaluate things on an apples-to-apples basis; if they haven’t, you’ll understand how they’d approach learning about the field first and then solving the problem.</p></li><li><p>What has been the effect of your application design? Is there data to substantiate the success? While an external design agency may not always have access to the data and analytics that emanated from their design work, this question will still let you assess whether the designers understand that their work is always based on a hypothesis and is always an experiment that needs to be evaluated.</p></li><li><p>What have been the biggest reasons for delays in your past projects? There will definitely be a few projects where delays have occurred, and this is quite natural in endeavours with a lot of interdependencies between client and vendor teams. The point is to probe whether they’ve understood what causes delays in projects and what steps they’ve taken to improve the process.</p></li><li><p>Have you discovered issues with your design and fixed them after the initial design was delivered? This goes back to the point about what designers think about their work. No matter how much research is done ahead of time, every project is based on several assumptions from the clients and from the designers; it all rests on a hypothetical understanding. The truth can only be learnt by putting the design in the hands of users and evaluating it then. Some of what is learnt will reinforce the assumptions, but there will also be things that could be improved. There is never a situation where everything is perfect. In reality, the designer may never have information on how things performed, because the client may never share such information with external vendors. But the question is meant to probe whether designers understand the temporal nature of their work and what they do to improve their understanding and provide better solutions.</p></li></ul><h1 id="h-red-flags" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Red Flags</h1><p>An architect friend once mentioned the idea of inspection points: if you want to examine the quality of construction of a building, you don’t need to go floor by floor examining the beams; you can just go to the basement and look at the pillars there. If there are any major faults in construction, the first place the weaknesses will show up is on these pillars. In that same spirit, the following are some things that will indicate the quality of the agency you’re working with.</p><h2 id="h-1-acronym-overdose" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. Acronym Overdose</h2><p>Acronyms are useful for expressing some ideas a little more efficiently. But their overuse stems from ineptitude, or from a desire to hide a lack of knowledge behind them. In any case, the job of a UX designer is to be understood, not to obfuscate. So if they’re not demonstrating an ability to speak to you, their potential client, in a way that you can understand, they definitely will not be able to help you communicate any better with your end-users.&nbsp;</p><h2 id="h-2-disinterest" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Disinterest</h2><p>For a designer to really help you succeed, it is imperative that they understand your true motivation for working on this project. A good designer will definitely probe to find this out. If any of these designers haven’t bothered to do so, they will not be delivering anything great for you.</p><h2 id="h-3-absolute-certainty" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. 
Absolute Certainty</h2><p>Regardless of how much experience an agency has, all design work is based on a hypothetical understanding of what the problem is, and the design itself is a best-guess estimate of the solution. This is because the variables change from project to project, including the end-users. So be wary of the agency that states with absolute certainty that they have the right solution. They are more likely to make mistakes than those that approach the project with humility.</p><h2 id="h-4-financial-discipline" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. Financial Discipline</h2><p>If a designer doesn’t convey their payment schedule in their proposal, they are inexperienced and don’t have enough of an understanding of the impact of finances on the execution of a project. There have been agencies that have had to stop working on client projects midway. This is clearly a result of the agency not understanding their processes well enough to know how much money they will burn while working on your project.</p><h1 id="h-budget-considerations" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Budget Considerations</h1><p>Even if you’ve not previously estimated how much you should spend on UX design, you will now have a great idea of how much it would cost, based on the proposals you received from the agencies you reached out to.&nbsp;</p><p>But I understand that a lot of businesses are perplexed by the idea of paying too much for something, and they want to know what the “fair market value” is. I find the idea absurd, as this is not a commodity where you can substitute one designer’s output with another’s. If you find a designer whose output you like, then the “fair market value” is whatever they want to charge you, isn’t it? 
But this argument may fall on deaf ears and sound less than “rational” to some, so in an effort to provide a rational answer, you could do the following exercise.</p><p>First, find out how much it would cost to employ a UX designer with about 2 years of experience on a full-time basis in your area. Divide that by 12 to get their monthly salary. Then, multiply that by 5, as a project typically requires the efforts of 4 people with varying levels of experience at less than 100% utilisation. That number is what you should expect to pay, at a minimum (not a maximum), for each month of engagement with any agency. There is no maximum, as the sky’s the limit. There are premiums you could add on top of this for star designers on the team, or if the agency can bring additional skills to the table, like motion design or film production, over and above the base-level UX design.</p><p>If your company is cost conscious and wants to find a way to reduce costs, you could try working across geographies. Vietnam, India and Ukraine have some great English-speaking UX design agencies that can provide you a lot of value for up to 40% less than what it would cost you in the US or Western Europe.</p><p>Important Side Note: The idea that you could hire four designers for only a few months and have them pull off a great output is absurd. A few of my agency’s clients were businesses that tried to do this, failed, and then decided to hire an agency after all. They typically spent 1.5 to 2 times what it should have cost them in the first place, so keep that in mind.&nbsp;</p><h1 id="h-reviewing-agreements-and-contracts" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Reviewing Agreements and Contracts</h1><p>Large companies quite often have standard boilerplate engagement contracts for service providers, but if you don’t, you can request the engagement contract from your agency. 
Very often, if the proposal is detailed enough, it could itself be used to sign and formalise the engagement with the agency, as it will contain all the usual terms covered in legal contracts.&nbsp;</p><p>While there are very few “gotchas” to look out for, here are some key clauses that you want to examine before you sign the agreement.&nbsp;</p><h2 id="h-1-scope" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. Scope</h2><p>Please make sure that everything you intend to cover in this project is covered in the scope section. Regardless of any conversations you may have had, the engagement will be governed only by what’s covered in the scope section.&nbsp;</p><h2 id="h-2-delivery-formats" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Delivery Formats</h2><p>For UX design, the output is typically consumed by development teams, so make sure the contract specifies the formats in which the deliverables will be handed over to them.&nbsp;</p><h2 id="h-3-fixed-costs" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. Fixed Costs</h2><p>Please make sure that you are working with the agency on a fixed-cost basis. They may ask you to pay on an hourly basis for additional work, but whatever is covered within the scope should be delivered within the costs specified.</p><h2 id="h-4-ownership-of-intellectual-property" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. Ownership of Intellectual Property</h2><p>While most agencies will be on a “work-for-hire” contract basis, be sure to clarify that all intellectual property developed through this project will belong to your company. This will let you use the output of the project to your benefit and for further development in the future.</p><h2 id="h-5-non-disclosure-agreement" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">5. 
Non-Disclosure Agreement</h2><p>There will be several things you discuss about your business in order for the agency to design your application. Make sure that the agency cannot disclose this information about your organisation without your permission. Having this agreement in place will also allow your team members to speak openly and freely with the designers, allowing them to develop applications that more accurately serve your needs.</p><h2 id="h-6-support" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">6. Support</h2><p>Every design agency will need to provide support after the delivery of their work. Documentation will only go so far, and some live support will be required once in a while to explain the intricacies of the design. Make sure that the contract includes a support period post delivery, and also a way to seek support well after that, as it is sometimes required. Get this for remote support as well as for in-person support if that’s an option.</p><h1 id="h-freelancers-vs-agencies" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Freelancers vs. Agencies</h1><p>I must speak a little about hiring freelancers, as that’s an alternative businesses often consider. Having been a freelancer before building an agency, I think I am in a good position to weigh one option against the other.&nbsp;</p><p>It is my considered opinion that businesses should only hire freelancers in very specific situations:</p><ul><li><p>The project is very cost sensitive</p></li><li><p>The project is a one-off and does not have long-term impact</p></li><li><p>You have already done enough research to guide the designer</p></li><li><p>You are okay with delays</p></li><li><p>You are comfortable with the skills that are being provided</p></li></ul><p>For all other situations, there is absolutely no reason to hire freelancers, because with a freelancer, you get the wisdom of just one person. 
Design requires a lot of debate, and that’s hard to achieve as an individual; not impossible, but hard. Their learnings may also be limited, as freelancers are typically employed only on small, short-term projects.</p><p>I’ve seen cases where freelancers didn’t show up to meetings, ended projects without notice, or behaved in unprofessional ways. Clients have then had to scramble to recover from these setbacks.</p><p>So for these reasons, I suggest working with teams, where these problems are usually not prevalent.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>A friend of mine sells bicycles, and I went to him when I wanted to get into the sport. When I asked him to recommend a bike, he showed me something well over the amount I had anticipated paying. So I said that since I was just getting into riding, I wanted a bike that wasn’t so expensive, and if I liked the sport, I would buy something more expensive later.&nbsp;</p><p>What he said in response has stayed with me to this day, because it is extremely relevant to my field. He said that if I was serious about getting into the sport, I should buy the best bike I could afford, because buying the wrong bike would actually make me hate the sport.&nbsp;</p><p>I relate this story because it’s a great metaphor for choosing the right agency. If you make the wrong choice of agency and develop the wrong app based on their work, you may end up attributing the failure of the app to the weakness of the idea behind it. You may even make the wrong business decision, whereas you might have reached a very different conclusion had you simply chosen a better agency.</p><p>Having said that, I hope this article is useful to you in making the right choice. 
But if you need any help, do reach out to me once you’ve shortlisted your agencies, and I’d be happy to help you make the final choice!</p><h1 id="h-accompanying-video" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Accompanying Video</h1><div data-type="youtube" videoid="pWzm4IwFXZI">
      <div class="youtube-player" data-id="pWzm4IwFXZI" style="background-image: url('https://i.ytimg.com/vi/pWzm4IwFXZI/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=pWzm4IwFXZI">
          <img src="https://paragraph.com/editor/youtube/play.png" class="play">
        </a>
      </div></div><br>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>business</category>
            <category>hiring</category>
<enclosure url="https://storage.googleapis.com/papyrus_images/8803fd821bc87f12794d5d7db6732b13e9b354319b2c6273386d865d7ce35fd5.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Design Proposals that Boost Conversion Rates]]></title>
            <link>https://paragraph.com/@reddxf/design-proposals-that-boost-conversion-rates</link>
            <guid>sBt2eyuVIeoyTkj6fk77</guid>
            <pubDate>Mon, 04 Nov 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[Creating a Winning UX Design Proposal: A Comprehensive GuideCrafting a UX design proposal can be challenging, but it is also an invaluable tool in securing new projects and demonstrating professionalism. A well-structured proposal not only showcases your understanding of the client’s needs but also reflects your agency’s strengths and your ability to foresee potential project challenges. Based on my years running a UX design agency, here’s a guide to creating a compelling proposal that can bo...]]></description>
            <content:encoded><![CDATA[<h1 id="h-creating-a-winning-ux-design-proposal-a-comprehensive-guide" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Creating a Winning UX Design Proposal: A Comprehensive Guide</h1><p>Crafting a UX design proposal can be challenging, but it is also an invaluable tool in securing new projects and demonstrating professionalism. A well-structured proposal not only showcases your understanding of the client’s needs but also reflects your agency’s strengths and your ability to foresee potential project challenges. Based on my years running a UX design agency, here’s a guide to creating a compelling proposal that can boost your conversion rates and help you stand out.</p><h2 id="h-1-understanding-the-purpose-of-a-proposal" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. Understanding the Purpose of a Proposal</h2><p>A proposal is essentially the answer to the client’s questions before they commit to hiring you. They want to know if you understand their project, if you’re aligned on goals, and if your capabilities and deliverables match their expectations. Answering these questions clearly within your proposal gives clients the confidence they need to move forward with you.</p><h2 id="h-2-key-components-of-a-ux-proposal" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. Key Components of a UX Proposal</h2><p>To make your proposal thorough and professional, include these eight essential components:</p><h3 id="h-project-summary" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Project Summary</h3><p>The summary should ensure everyone is on the same page. Here, outline the specific project objectives, emphasising the agreed-upon approach. If your agency has unique strengths, such as a technical or development background, this is the place to highlight them. 
This section serves as a commitment to the client, defining exactly what you’ll focus on.</p><h3 id="h-feature-list" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Feature List</h3><p>One of the most time-intensive sections, the feature list outlines the project at a high level, covering major elements without getting overly technical. For instance, if the project includes user authentication, mention this as a feature but avoid specific technical details like whether it will include OTP verification. This list demonstrates thorough planning, showing that you’re not just sending a generic proposal.</p><h3 id="h-scope-of-work" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Scope of Work</h3><p>Outline all the deliverables you’ll provide. This section can also serve as a showcase for your agency’s full capabilities, even if some services aren’t part of this specific project. For example, list everything from wireframing to prototyping, and clarify which tools and formats you’ll use. Ensuring clarity on deliverables avoids confusion later on and establishes a sense of professionalism.</p><h3 id="h-team-structure" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Team Structure</h3><p>Explain who will be working on the project, their roles, and their experience levels. This can include a project manager, UX/UI designers, and other support roles like content creators or icon designers. Detailing your team structure justifies your pricing, showing the client that the work involves skilled professionals rather than being handled solely by a generalist.</p><h3 id="h-engagement-guidelines" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Engagement Guidelines</h3><p>Set out expectations on both sides. Define a single point of contact (SPOC) on your team and request the same on the client’s end to streamline communication. 
Specify a regular meeting cadence, like weekly check-ins, to keep the project on track. This section also covers managing scope changes and clarifies your availability to avoid client requests at odd hours.</p><h3 id="h-support" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Support</h3><p>Support goes beyond project completion, as clients may need assistance once the design phase is over. Typically, offer 30 days of support post-handover to handle clarifications and minor adjustments, but make it clear that extensive support beyond this window will incur additional charges. This also includes defining the method of support, whether remote or on-site, with associated travel costs if needed.</p><h3 id="h-cost-and-packages" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Cost and Packages</h3><p>Costing can be a make-or-break aspect of your proposal. Offering tiered packages—bare minimum, standard, and premium—can appeal to different budgets and give clients flexibility. Using a worksheet to customise package features and pricing quickly allows you to respond to client requests efficiently, showing professionalism and organisational readiness.</p><h3 id="h-payment-schedule-and-protection-clauses" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Payment Schedule and Protection Clauses</h3><p>Establishing a payment schedule is essential for cash flow and client commitment. A common approach is 50% upfront, 25% midway, and the remaining 25% upon completion. Clearly state conditions for payment delays, such as the right to halt or terminate the project if payments are not made on time. 
Including protection clauses safeguards your interests, and clients typically view them as standard business practice.</p><h3 id="h-additional-annexures" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Additional Annexures</h3><p>Finally, append any extra information that can strengthen your proposal, such as:</p><ul><li><p>Company Overview: Highlight your agency’s expertise and how you stand out. Large clients, especially, may appreciate a quick reference on your background without needing to research separately.</p></li><li><p>Client Portfolio: Showcasing similar past projects builds credibility. If your client seeks an e-commerce solution, for example, list other e-commerce projects you’ve completed to demonstrate your experience.</p></li><li><p>Team Bios (Optional): If certain team members have notable expertise, add brief bios. This personal touch can reinforce the unique value you bring to the project.</p></li></ul><h2 id="h-3-recognising-common-proposal-traps" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. Recognising Common Proposal Traps</h2><p>Not all proposal requests are genuine. Some clients may seek proposals simply to benchmark costs, fulfil internal bidding quotas, or collect ideas without intending to hire. Though you can’t always avoid such situations, by treating ideas as renewable and presenting your unique perspective, you position yourself as a resourceful, innovative partner for potential clients.</p><h2 id="h-4-automating-for-efficiency-to-an-extent" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. Automating for Efficiency (to an Extent)</h2><p>Automating repetitive proposal elements, like feature lists or pricing structures, can streamline your process. However, automation shouldn’t replace human interaction altogether. 
Proposals play a vital role in assessing potential clients and understanding their unique needs, so a fully automated system risks undermining this relationship-building opportunity. I chose to keep personal contact integral to my process, which allowed me to enjoy a diverse range of projects with clients I valued.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>A well-crafted UX design proposal is more than a sales document; it’s an opportunity to build trust and set clear expectations with clients. By including comprehensive components, adapting to different client needs, and presenting a strong sense of your agency’s capabilities, your proposal can serve as a valuable tool for both winning projects and ensuring smooth collaboration throughout.</p><p>With this framework, I hope you’ll find the proposal process smoother, more efficient, and ultimately more successful.</p><h1 id="h-accompanying-video" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Accompanying Video</h1><h1 id="h-useful-items-mentioned" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Useful Items Mentioned</h1><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.patreon.com/reddxf/shop/general-ux-design-proposal-template-585912?utm_campaign=productshare_creator&amp;utm_content=join_link">General Purpose Proposal Template for UX Projects</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.patreon.com/reddxf/shop/general-ux-design-proposal-worksheet-586169?utm_campaign=productshare_creator&amp;utm_content=join_link">General Purpose Proposal Worksheet</a></p></li></ul>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>ux</category>
            <category>business</category>
<enclosure url="https://storage.googleapis.com/papyrus_images/894f5e06453eacce6ae3b6d955d9cce23b1431e0e59638dc7053d1188faf1379.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Designing Web3 Wallets for Mainstream Users]]></title>
            <link>https://paragraph.com/@reddxf/designing-web3-wallets-for-mainstream-users</link>
            <guid>tfp39RVEhRTNRNm4ihUn</guid>
            <pubDate>Sun, 29 Sep 2024 22:00:00 GMT</pubDate>
            <description><![CDATA[There is a problem with Web3 wallets, they're just too complex. But where's the problem stemming from and how do we solve it?]]></description>
            <content:encoded><![CDATA[<h1 id="h-web3-wallets-for-mainstream-users" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Web3 Wallets for Mainstream Users</h1><p>Web3, the next phase of the internet powered by blockchain technology, promises to revolutionise industries, but its complexity continues to alienate mainstream users. At the heart of this challenge is the Web3 wallet—a fundamental tool required for engaging with decentralised applications (dApps), managing cryptocurrencies, and accessing blockchain networks. While Web3 enthusiasts have embraced the wallet system, its cryptographic foundations and technical jargon have left everyday users overwhelmed and frustrated. To drive mainstream adoption of Web3, we need to focus on fixing the wallet experience, making it intuitive and accessible.</p><p>In this blog, we’ll explore why the wallet is the crux of Web3 usability challenges and how a thoughtful redesign grounded in familiar concepts, like banking, can unlock its full potential.</p><h3 id="h-the-wallet-conundrum-complexity-meets-confusion" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">The Wallet Conundrum: Complexity Meets Confusion</h3><p>The primary function of a Web3 wallet is to store a user's private and public keys. These cryptographic keys are used to authorise transactions and prove the user's identity within the blockchain ecosystem. But this process, while secure and revolutionary, is incredibly foreign to non-technical users. Terms like "private key," "public key," and "signing messages" can cause even the most tech-savvy Web2 users to shy away. The need for simplification is apparent.</p><p>In Web2, users can easily send and receive money through banks or payment apps without ever having to understand the mechanics behind these systems. Web3, by contrast, demands users grasp cryptographic principles or risk losing access to their digital assets. 
The complexity isn't just a barrier; it's a deterrent.</p><p>Many existing wallets, such as Zerion or Rabby, have attempted to ease the user experience by iterating on older designs. And while they have indeed made Web3 more user-friendly, they still fall short of the mainstream appeal necessary for broad adoption. These wallets are excellent by Web3 standards, but they remain inscrutable for the average Web2 user, leaving us to ask: Why can't a Web3 wallet be as intuitive as the apps we're already accustomed to?</p><h3 id="h-terminology-matters-the-language-of-web3" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Terminology Matters: The Language of Web3</h3><p>A significant part of the problem lies in the terminology itself. In Web3, terms like "wallet" are used inaccurately. In traditional banking, a wallet implies a place where you store something—cash, cards, and other financial instruments. However, in Web3, a wallet doesn’t "store" money in the traditional sense. Instead, it acts more like an access card to funds stored on the blockchain. Misleading terminology adds to the confusion, making it harder for users to understand how to manage their digital assets.</p><p>Open-source development, which is central to the Web3 ecosystem, has contributed to this confusion. Over time, various developers have used different terms to describe the same concepts, leading to inconsistent vocabulary across platforms. For instance, "public key," "wallet address," and "externally owned account" all refer to the same thing, but new users won’t know that. This inconsistency hinders their ability to trust and navigate the system confidently.</p><p>Language is powerful, and when users encounter familiar terms, they can better grasp unfamiliar concepts. Web2 sites like e-commerce platforms utilise relatable language ("add to cart," "checkout") that immediately makes sense to users. 
The same approach needs to be applied to Web3 wallets if we want to make them accessible to the average person.</p><h3 id="h-building-familiarity-banking-as-a-metaphor" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Building Familiarity: Banking as a Metaphor</h3><p>To solve the wallet problem, we need to rethink the entire framework. Instead of framing wallets with cryptographic jargon, we should use a metaphor that resonates with mainstream users. And what’s more familiar than banking? Banking is a 2,000-year-old institution, and most people are already acquainted with its core concepts.</p><p>By reimagining the Web3 wallet as a piece of "banking software," we can translate complex blockchain concepts into terms users already understand. For instance, the wallet itself could be referred to as "banking software," the wallet address as an "account number," the private key as a "signature," blockchain networks as "networks," and the various fees associated with a transaction such as "tip", "gas" and such could be simplified into "transaction fees," just as people are used to paying on traditional payment platforms.</p><p>This banking analogy would not only make Web3 wallets more relatable but also help ease users into the system without overwhelming them with technical details.</p><h3 id="h-a-three-account-structure-enhancing-security-and-usability" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">A Three-Account Structure: Enhancing Security and Usability</h3><p>In addition to simplifying the terminology, a redesigned Web3 wallet should also mimic the structure of traditional banking. 
A banking app typically offers various types of accounts—checking, savings, and investment—and the same can be applied to Web3.</p><ul><li><p>Checking Account: This would be where users conduct everyday transactions with known entities or individuals, similar to a checking account in traditional banking.</p></li><li><p>Savings Account: This would serve as a secure place to store assets, where users could stake cryptocurrencies or simply keep them safe. It would allow users to generate interest on their holdings without needing to interact with external applications.</p></li><li><p>Investment Account: For users seeking higher risk and reward opportunities, this account would enable interaction with decentralised applications (dApps) for staking, lending, or other investment activities.</p></li></ul><p>This compartmentalisation not only makes the system more intuitive but also enhances security. By limiting the interactions of the checking and savings accounts with external applications, users can reduce their exposure to potential security risks. Only the investment account, which is meant for higher-risk activities, would be open to external dApps.</p><h3 id="h-the-road-ahead-evolving-web3-for-the-mainstream" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">The Road Ahead: Evolving Web3 for the Mainstream</h3><p>In conclusion, the wallet is the linchpin of Web3 adoption. If we can make the wallet experience seamless and intuitive, we can unlock Web3’s immense potential for a wider audience. The key lies in rethinking both the terminology and structure, using familiar banking concepts to demystify the process. By addressing these foundational issues, we can bridge the gap between Web2 and Web3, paving the way for mass adoption.</p><p>For designers and developers working on Web3 applications, this means prioritising usability over technical prowess. 
It means considering what mainstream users already know and building upon that knowledge, rather than introducing new, foreign concepts. The future of Web3 is bright, but only if we can make it accessible to everyone.</p><h1 id="h-video-with-screens" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Video with screens</h1><div data-type="youtube" videoid="flGWL54hJR4">
      <div class="youtube-player" data-id="flGWL54hJR4" style="background-image: url('https://i.ytimg.com/vi/flGWL54hJR4/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=flGWL54hJR4">
          <img src="https://paragraph.com/editor/youtube/play.png" class="play">
        </a>
      </div></div><br>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>web3</category>
            <category>ux</category>
            <category>design</category>
<enclosure url="https://storage.googleapis.com/papyrus_images/5a6040f6151331ee1ae25a8e710972f95fa4743e18be5761c3f62ffe2dc47736.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Five personal AI assistant devices reviewed and what they need to fix]]></title>
            <link>https://paragraph.com/@reddxf/five-personal-ai-assistant-devices-reviewed-and-what-they-need-to-fix</link>
            <guid>G4kTBZOgOvSQARrrNzcg</guid>
            <pubDate>Wed, 11 Sep 2024 22:00:00 GMT</pubDate>
            <description><![CDATA[Personal AI Assistants like the Rabbit R1, Humane AI Pin, the Meta RayBan, the Limitless Pendant and the 01 Light need to fix 7 things in order to make them the devices we will all covet. But will that be enough? Dive in and find out what mistakes are being made and what the right solution is.]]></description>
            <content:encoded><![CDATA[<h2 id="h-why-arent-personal-ai-assistants-taking-off" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Why Aren’t Personal AI Assistants Taking Off?</h2><p>Before diving into the nitty-gritty, let’s get one thing clear. The personal AI assistants I’m referring to aren’t like Siri or Google Assistant on your phone. We’re talking about devices that can break down complex tasks and execute them for you—whether it’s running apps, searching the web, or even coding in some cases.</p><p>These devices are mostly aimed at early adopters right now, but the goal is to create something for everyone. If you picture the product adoption lifecycle, we’re still in the early adopter phase. Right now, companies are testing their products in different niches, trying to figure out what sticks. It’s a chaotic phase, and we’re seeing an explosion of ideas—some good, some not so much. A lot of these ideas won’t make it to the mainstream, but that’s what makes this phase so exciting.</p><p>Now, let’s dig into the seven mistakes these AI assistant products are making and how I think they can course-correct.</p><h3 id="h-1-not-using-the-right-metaphor" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">1. Not Using the Right Metaphor</h3><p>The first mistake is starting with the wrong metaphor. Products need to fit into a framework that users already understand. People don’t like adopting something completely alien. I always say evolution trumps revolution, especially in mass-market products. Finding a metaphor that connects the new with the familiar is crucial. AI assistants should be positioned as an evolution of tools people already know, not a radical shift that alienates potential users.</p><h3 id="h-2-limited-accessibility-and-reachability" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">2. 
Limited Accessibility and Reachability</h3><p>For an AI assistant to succeed, it needs to be extremely accessible. Think about the evolution of computing—from desktops to laptops, tablets, smartphones, and now wearables. The goal is to bring tech closer to the user. Personal AI devices need to occupy that space between a smartphone and the user—something wearable that’s always available at a moment’s notice.</p><p>Users are going to need to learn a new behaviour—one where they think in terms of goals, not apps. The easier we make it for them to access the assistant, the quicker they’ll adapt.</p><h3 id="h-3-lack-of-multimodal-inputs-and-outputs" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">3. Lack of Multimodal Inputs and Outputs</h3><p>Let’s be honest—audio alone isn’t enough. Yes, voice commands are intuitive and efficient for certain tasks, but information needs to be delivered in various ways. Visual feedback, like images and text, allows users to process information faster. Devices that rely only on voice, like some AI pins or pendants, miss out on the efficiency that visual data provides. The solution is to design devices with both visual and auditory outputs.</p><h3 id="h-4-poor-data-privacy-protections" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">4. Poor Data Privacy Protections</h3><p>With AI, the question of privacy becomes even more critical. These devices will have access to mountains of personal data, and without strong privacy protections, they could easily become surveillance tools. The safest way to ensure privacy? Keep the data on the device itself. I think open-source projects like Open Interpreter, which runs locally on the user’s device, are on the right track. This model protects privacy and offers immediate feedback—something that cloud-based models struggle with.</p><h3 id="h-5-lag-in-response-time" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">5. 
Lag in Response Time</h3><p>For AI assistants to feel natural, they need to respond quickly. Research shows that interactions that take longer than 400 milliseconds feel slow and clunky. Devices relying on cloud-based AI models are always going to have some lag, no matter how fast the server is. To hit that magical threshold of 400 milliseconds, AI models need to run locally on the device. Otherwise, users will experience delays that break the flow of interaction.</p><h3 id="h-6-lack-of-interoperability" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">6. Lack of Interoperability</h3><p>We’re transitioning to a new kind of computing—one where AI agents handle tasks for us. These agents need to be able to interact seamlessly with existing systems and applications. If a user wants their AI assistant to book an Uber or play a song on Spotify, the interaction needs to be flawless. Devices that struggle with this will quickly fall out of favour. The future belongs to AI assistants that can bridge both the current and future computing models.</p><h3 id="h-7-failure-to-create-universal-interfaces" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">7. Failure to Create Universal Interfaces</h3><p>The final mistake I see is the lack of universal, adaptable interfaces. Every request a user makes might require a different interface—one that adapts in real-time. If I ask my AI assistant to book an Uber, the interface might display a map. If I want to pick between two songs, I’ll need a list. These interfaces need to be generated on the fly, based on the user’s specific request. A rigid interface simply won’t cut it.</p><h2 id="h-the-future-of-ai-assistants" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Future of AI Assistants</h2><p>In my opinion, the best form factor for personal AI assistants is a wearable device—specifically a smartwatch. 
It’s close to the user, easy to access, and can incorporate the necessary screen for visual feedback. While today’s smartwatches may not have the power to run advanced AI models, the tech is catching up. With chip technology becoming more efficient and AI models getting smaller, we’re not far from a future where AI-powered smartwatches will be the norm.</p><p>Ultimately, while none of the current products on the market have quite nailed it yet, some are closer than others. But in the end, it’s the big players—Google, Microsoft, and Meta—that are best positioned to dominate the personal AI assistant space. They have the data, and in AI, data is king.</p><p>As much as I root for the underdog, I wouldn’t bet my money on any of the current startups trying to break into this space. The race will be won by those with the resources to gather and process the vast amounts of data required to create truly effective AI assistants.</p><h1 id="h-accompanying-video" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Accompanying Video</h1>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>ai</category>
            <category>hardware</category>
            <category>review</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/0a2648cf90e2bc4d704a0b6f7bf8d8218f81cd0e8c642298da154fa4014f4bad.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[DIY: A Raspberry Pi based Time Capsule]]></title>
            <link>https://paragraph.com/@reddxf/diy-a-raspberry-pi-based-time-capsule</link>
            <guid>2nd7mGRW6PxtG12kfJ5r</guid>
            <pubDate>Sat, 03 Aug 2024 22:00:00 GMT</pubDate>
            <description><![CDATA[I built an efficient Apple Time Capsule clone that lets any externally connected hard drive be used as a Time Machine backup destination for your Mac.]]></description>
            <content:encoded><![CDATA[<h2 id="h-what-this-is" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What this is</h2><p>If you want to build your own Apple Time Capsule clone using the Raspberry Pi and an external hard drive, this is the slimmest and most efficient solution you could build in under an hour (not counting the bonus 3D-printing bit at the end, of course)!</p><p>You should be successful if you just follow the steps below, but note that there&apos;s a step, finding your Raspberry Pi on your network and reserving a specific IP address in your router, that I won&apos;t be going into in detail and that may require some prior knowledge.</p><h2 id="h-whats-unique-about-this-solution" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What&apos;s unique about this solution</h2><p>This solution has the following characteristics that make it different from other solutions that you may have seen:</p><ul><li><p>You don&apos;t need any other software such as Open Media Vault, which is great if you want to do a whole bunch of other things, but is too much of an overhead if all you want to do is build a network backup solution</p></li><li><p>This can be set up on low-powered boards like the Raspberry Pi 3A+, as it runs on the services built into the OS</p></li><li><p>This solution works with more than one Mac on your network</p></li><li><p>It works for guest logins on a network and doesn&apos;t require an account to be set up on the Pi, which makes it easy to use in a home environment</p></li></ul><h2 id="h-requirements" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Requirements</h2><p>You need the following as a minimum:</p><ul><li><p>Raspberry Pi 3 Model A+ with compatible power supply</p></li><li><p>Micro SD Card with 4GB or more</p></li><li><p>External hard drive of any capacity that is compatible with the Raspberry Pi&apos;s USB port</p></li><li><p>The drive will work with most laptops using 
different OS&apos;s in case that&apos;s needed at any time</p></li><li><p>Raspberry Pi Imager software</p></li></ul><h2 id="h-software-setup" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Software Setup</h2><h3 id="h-step-1-create-the-sd-card-image" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Step 1: Create the SD card image</h3><p>For this, simply do the following:</p><p>1.1 Download the Raspberry Pi Imager software, open the file, and follow the instructions to install it on your Mac as you would any other software</p><p>1.2 Plug the Micro SD card into your Mac. It doesn&apos;t matter if it isn&apos;t already formatted.</p><p>1.3 Choose the Raspberry Pi OS Lite (64-bit) image to write to the Micro SD card</p><p>1.4 Choose the Micro SD card that you just inserted into your Mac as the storage destination and click &apos;Next&apos;</p><p>1.5 In the OS customisation step, make sure to Edit Settings. In the first tab:</p><p>1.5.1 Set up the host name -- this is the name of your Pi on the network. I&apos;ve called mine &quot;pinstripes&quot;, but you could just call it &quot;timecapsule&quot; or anything else you&apos;d like</p><p>1.5.2 Set the user name and password for the machine. I suggest you use the defaults for now, which are &apos;pi&apos; and &apos;raspberry&apos;. You can change them later if you&apos;d like.</p><p>1.5.3 Provide the SSID and password to enable it to connect to your Wifi network. You don&apos;t need this if you&apos;re using a different Raspberry Pi that connects using a LAN cable.</p><p>1.5.4 On the second tab called &quot;Services&quot;, make sure that you enable SSH and use the password authentication mechanism. This will allow you to remotely log into your Pi with the user name and password that you set up in the previous step.</p><p>1.5.5 Hit &quot;Save&quot; and in the next screen click &quot;Yes&quot; and then &quot;Next&quot; to begin the imaging step. 
You may need to type in your computer&apos;s login and password to tell your computer that you authorise this action.</p><p>In about 15 minutes your Micro SD card is going to be ready. Just eject your card if not already done before pulling it out of your computer.</p><h3 id="h-step-2-initialise-your-pi-and-set-it-up-on-your-network" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Step 2: Initialise your Pi and set it up on your network</h3><p>2.1 Plug your Micro SD card into your Pi and then plug in the power cable to the Pi. After about 5 minutes, the Pi will have booted up and then connected to your network over Wifi (or LAN in case that&apos;s how you&apos;d connected). You don&apos;t need to plug in your external hard drive to the Pi as yet.</p><p>2.2 Open up a Terminal window on a laptop on the network and type in <code>ping timecapsule.local</code>. Replace &quot;timecapsule&quot; with whatever you named your Pi in Step 1.5.1. This should return the IP address of your Pi on the network.</p><p>2.3 Now type in <code>arp -a</code> to get the MAC addresses of all the devices connected to your network. Look for the Pi&apos;s MAC address that corresponds to the IP address you found in the previous step.</p><p>2.4 Now log into your router, which assigns the IP addresses on your network, and set up a fixed IP address for your Raspberry Pi&apos;s MAC address. This whole thing may work without this address reservation, but I&apos;ve never tested it. 
So if you absolutely cannot perform this step, just go through the rest and let me know if it works for you.</p><p>2.5 If you&apos;ve reserved a specific IP address for your Pi and it is different from the one currently assigned, you will need to reboot your Pi by turning off the power.</p><p>2.6 Once your Pi comes back on your network, go back to your Terminal window and type in the following to log into your Pi through SSH - <code>ssh pi@192.168.0.10</code> where &apos;pi&apos; is the user name and the IP address is the one that you reserved for your Pi in step 2.4. If everything is set up properly, your Pi will ask you for a password and you could just type in &apos;raspberry&apos; or whatever else you set up in Step 1.5.2.</p><h3 id="h-step-3-prepare-your-external-hard-drive" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Step 3: Prepare your external hard drive</h3><p>3.1 Get your external drive. Be sure you&apos;re using a hard drive that doesn&apos;t have any data that you want to retain, as it will be lost forever once you&apos;re done with this step. Connect the hard drive to your Mac.</p><p>3.2 Use &quot;Disk Utility&quot; to Erase and initialise your external drive. Name the drive anything you&apos;d like such as &quot;TimeCapsule&quot; but make sure to use &quot;exFAT&quot; and, if asked, the &quot;GUID&quot; option for maximum compatibility of the drive across different OS&apos;s.</p><p>3.3 Once the process is complete, eject your drive from your laptop. 
It is ready for plugging into your Pi in the next step.</p><h3 id="h-step-4-get-the-software-setup" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Step 4: Get the software setup</h3><p>4.1 While connected to your Pi through a Terminal window, do the following to update your Pi&apos;s OS:</p><ol><li><p><code>sudo apt update</code></p></li><li><p><code>sudo apt upgrade</code></p></li><li><p><code>sudo apt install exfat-fuse -y</code></p></li><li><p><code>sudo apt install samba -y</code></p></li><li><p><code>sudo apt install avahi-daemon -y</code></p></li></ol><p>4.2 Once the above packages are installed, let&apos;s mount the external drive.</p><p>4.2.1 Plug your external drive into the Pi</p><p>4.2.2 Type in <code>sudo df -Th</code>, which will list all the drives connected to the Pi. Find the identifier for your drive, which will most likely be &quot;sda2&quot; but could be different.</p><p>4.2.3 Create a mount point by typing in the following commands: <code>sudo mkdir /mnt/timecapsule</code> and <code>sudo mount -t exfat /dev/sda2 /mnt/timecapsule</code>. Just substitute &quot;sda2&quot; with whatever the drive&apos;s identifier actually is.</p><p>4.2.4 Type in <code>lsblk -f</code> to find the UUID of your drive; all the drives connected to your Pi will show up (including the Micro SD card). Look for the UUID of the drive with the label &apos;TimeCapsule&apos; if that&apos;s what you named the drive in Step 3.2. Cross-verify against the size of the drive to be sure. Copy the UUID to the clipboard.</p><p>4.2.5 To make the mount point persist across reboots of the Pi, you will need to add the following code to the file named &quot;fstab&quot;. 
You can do that by typing: <code>sudo nano /etc/fstab</code></p><p>4.2.6 Paste the following at the end of the file: <code>UUID=your-uuid /mnt/timecapsule exfat defaults,uid=1000,gid=1000,umask=000 0 0</code>, but replace the UUID with the one you copied to the clipboard in the previous step.</p><p>4.2.7 Press Ctrl-X to exit, Y to write the buffer to the file, and Enter to keep the file name.</p><p>4.3 Now add the configuration for Samba, which is the file networking service that shares the drive on the network in a form that is compatible with Macs and Windows. To do this, open the configuration file by typing in <code>sudo nano /etc/samba/smb.conf</code>, which should bring up the existing configuration. Press the Page Down key on your keyboard to get to the end of the file, then paste the share configuration and make sure the mount points are correct:</p><p>4.4 Save the file as in step 4.2.7 and then restart the service with the new configuration by typing <code>sudo systemctl restart smbd</code> into the command prompt in your Terminal</p><p>4.5 We&apos;ve got to do the same for the avahi service configuration. To edit the file, type <code>sudo nano /etc/avahi/services/samba.service</code> and paste the service definition. No changes need to be made to it.</p><p>4.6 Save the file as in step 4.2.7 and then restart the service by typing in <code>sudo systemctl restart avahi-daemon</code></p><p>I suggest restarting the device itself to make sure that everything works even if it ends up restarting on its own in the future. To do this, type in <code>sudo reboot now</code> into your Terminal command prompt. 
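</p><p>For reference, here&apos;s roughly what the two pasted blocks from steps 4.3 and 4.5 can look like. Treat these as a sketch to adapt rather than an exact recipe: the share name and mount point below match Steps 3.2 and 4.2.3 (change them if yours differ), and the &quot;fruit&quot; options are the ones commonly used to make a Samba share Time Machine-compatible. The share section for the end of <code>smb.conf</code>:</p><pre><code>[TimeCapsule]
   path = /mnt/timecapsule
   browseable = yes
   writeable = yes
   guest ok = yes
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes</code></pre><p>And a typical <code>samba.service</code> file for avahi; the <code>_adisk</code> records are what let Macs discover the share as a Time Machine destination on the network:</p><pre><code>&lt;?xml version=&quot;1.0&quot; standalone=&apos;no&apos;?&gt;
&lt;!DOCTYPE service-group SYSTEM &quot;avahi-service.dtd&quot;&gt;
&lt;service-group&gt;
  &lt;name replace-wildcards=&quot;yes&quot;&gt;%h&lt;/name&gt;
  &lt;service&gt;
    &lt;type&gt;_smb._tcp&lt;/type&gt;
    &lt;port&gt;445&lt;/port&gt;
  &lt;/service&gt;
  &lt;service&gt;
    &lt;type&gt;_adisk._tcp&lt;/type&gt;
    &lt;port&gt;9&lt;/port&gt;
    &lt;txt-record&gt;dk0=adVN=TimeCapsule,adVF=0x82&lt;/txt-record&gt;
    &lt;txt-record&gt;sys=waMa=0,adVF=0x100&lt;/txt-record&gt;
  &lt;/service&gt;
&lt;/service-group&gt;</code></pre><p>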
This will disconnect your session and restart the Raspberry Pi.</p><p>That&apos;s it, you&apos;re ready to go and set up the Time Machine backup from your Mac as always!</p><h2 id="h-troubleshooting" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Troubleshooting</h2><p>If for any reason you don&apos;t see the newly set up drive when you are setting up Time Machine on your Mac, try connecting to the drive on the network first by going to Finder &gt; Go &gt; Connect to Server and typing in <code>smb://192.168.0.25</code>, replacing this with the IP address of your newly set up Time Capsule. You should be able to connect to the Raspberry Pi drive. After that, Time Machine should automatically find the drive on the network. If this still doesn&apos;t work, there&apos;s probably some step above that you&apos;ve missed, so give it another shot.</p><h2 id="h-bonus-step" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Bonus Step</h2><p>Make a case for the Raspberry Pi and the hard drive so they can sit together. I&apos;m designing something for this purpose and should have a 3D-printable file posted here soon. Check back in a bit. But if you&apos;d like to print something else, look at the plethora of designs available to you on Printables.</p><h1 id="h-accompanying-video" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Accompanying Video</h1>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>diy</category>
            <category>product</category>
            <category>design</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/59983ae039ce66c792d0c9b2557e376f81cce70ec80d96545204c936e1d80d1f.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[My experiments with 3D printing]]></title>
            <link>https://paragraph.com/@reddxf/my-experiments-with-3d-printing</link>
            <guid>qDojRqsDCNRZOfvTnhKt</guid>
            <pubDate>Tue, 23 Jan 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[My foray into the world of 3D printing and my learnings and insights so far. This is work in progress and will continue to be edited.]]></description>
            <content:encoded><![CDATA[<h1 id="h-3d-printing-a-journey-into-a-promising-yet-challenging-field" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3D Printing: A Journey into a Promising, Yet Challenging Field</h1><p>Back in around 2015, I stumbled upon 3D printers during their nascent stage. The potential was clear, but the technology wasn&apos;t quite there yet, and it was relegated to a niche space for tech enthusiasts. As a UX designer, my curiosity was piqued, especially considering the initial buzz surrounding its potential impact on ecommerce and logistics. However, being based in India at the time, affordability, availability of replacement parts, and local support were significant barriers to entry.</p><p>Fast forward to 2023, I was blown away by an amazing YouTube video showcasing a user designing and printing all the shelves he needed for his desk, as well as custom SD card holders. This experience resonated with me, reminding me of the excitement I get when visiting a hardware store or a craft shop, imagining the possibilities of creating useful things. I was eager to explore this technology further and get my hands on a 3D printer to discover what I could create.</p><h2 id="h-the-right-printer-for-me-balancing-usability-and-performance" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Right Printer for Me: Balancing Usability and Performance</h2><p>As the 3D printing industry evolved, so did its focus on user experience. Although there are numerous resources available online for fine-tuning printers, many models offer excellent results straight out of the box.</p><p>In terms of the printing technologies, I narrowed down my choices to Fused Deposition Modelling (FDM) and Vat Polymerisation (VP) printers due to their popularity in the consumer space. VP printing introduced some unique challenges like the need for ventilation and dealing with smells arising from resin. 
Given that I wanted the printer to remain in my study, I opted for FDM. After thorough research, I selected a printer that offered ease of assembly, maintenance, and accessible support. The Bambu Lab X1 and Elegoo Neptune 4 Plus stood out, but ultimately, I went with the Neptune 4 due to its lower cost and absence of proprietary parts.</p><h2 id="h-unboxing-and-setting-up-overcoming-challenges-together" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Unboxing and Setting Up: Overcoming Challenges Together</h2><p>The delivery took around a month due to pre-ordering, arriving during the Christmas holidays, which gave me ample time to delve into learning about 3D printing. The setup was straightforward, but there were some complications like ensuring the correct cables were connected and checking voltage settings on the printer bed. Although these tasks weren&apos;t particularly challenging for a technologically inclined user, they required careful attention and a bit of research to ensure a seamless experience.</p><p>Despite the occasional hurdles, the experience was rewarding, as I was able to build my 3D printer from scratch with all necessary tools supplied. The desktop software that came with the Neptune 4 pleasantly surprised me with pre-installed printable files of a tool stand for the printer!</p><h2 id="h-designing-the-gap-in-the-ux-of-3d-modelling" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Designing: The Gap in the UX of 3D Modelling</h2><p>The design process in 3D modelling is complex and requires a good understanding of visualising 2D cross-sectional shapes to create 3D objects. Sculpting tools like Blender are suitable for creating aesthetically pleasing, non-functional designs, while functional 3D design tools like OnShape offer the best capabilities for product designers.</p><p>OnShape, a cloud-based platform, offers responsive, capable tools that keep users updated with the latest features. 
However, the pricing could use improvement to cater to hobbyists and make entry into this space more accessible. Simplifying design tools by incorporating user-friendly features specific to 3D printing would further enhance the overall UX.</p><h2 id="h-design-marketplaces-tools-and-toys-for-everyone" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Design Marketplaces: Tools and Toys for Everyone</h2><p>Websites like Printables and Thingiverse offer a wealth of user-generated designs, allowing anyone to download and print objects using their 3D printers. This sense of power to create and own is truly exhilarating, enabling users to go from ideation to tangible results in a matter of hours! However, there are challenges like inconsistent quality checks and potential issues with new designers&apos; designs not being fully optimised for mass consumption.</p><p>Improving the user experience could involve implementing better quality control measures or allowing experienced designers to review and approve new designs before they are released to the public. Additionally, providing more comprehensive information about dimensions, filament requirements, and print time would significantly improve the overall experience.</p><h2 id="h-printing-turning-ideas-into-reality" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Printing: Turning Ideas into Reality</h2><p>Printing in 3D isn&apos;t without its challenges – files need to be converted from design formats (usually STL) to Gcode using slicing software like Cura before being printed. This step involves setting various parameters that determine the quality, strength, and speed of the final print. Although these steps are essential, they can be complex for new users. 
It is also not apparent to users that changes to some of these settings may affect the final look of the object.</p><p>Streamlining this process by incorporating it into authoring tools would greatly improve the user experience. Until then, there are numerous resources available online to help users learn the intricacies of 3D printing, allowing them to enjoy the satisfaction of turning their ideas into tangible objects.</p><h1 id="h-conclusion-a-promising-future" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion: A Promising Future</h1><p>The potential of 3D printing is immense, and I&apos;m confident that significant improvements will be made to address the unique challenges in this space while keeping user experience at the forefront. Just as the challenges of the 2D printing world were solved, I envision 3D printers becoming an integral part of every household or, more likely, of the neighbourhood print shop. The key to realising this lies in simplifying the design authoring tools and making them accessible to everyone.</p><h1 id="h-hurdles-and-insights" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Hurdles and Insights</h1><ul><li><p>Plugging in the cables for the gantry motors still requires some understanding of connectors. This could be further simplified in future versions.</p></li><li><p>The printer plate travels outside the frame of the printer itself, which lets a user make the mistake of placing it too close to a wall and learning mid-print that it needs to be moved. 
This could be explained in the documentation as well.</p></li><li><p>Going from 2D to 3D is incredibly difficult and staying in the 3D space from the start would be ideal, but the tools we have today are limited in their ability to express these ideas.</p></li><li><p>Marketplaces are currently aimed at those who own 3D printers, whereas they should be aimed at the end-users of the products that are available on the sites.</p></li><li><p>There are two possible outcomes for the world of 3D printers: they will either become a part of every household of the future, as 2D printers are today, or a part of every neighbourhood print shop.</p></li></ul>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>diy</category>
            <category>technology</category>
            <category>3d-printing</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/5475fb538ba4f31f718cdaa12fa002f3a70bedfbd23352f9ee8b46f98c7d6693.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Review of the Rabbit R1]]></title>
            <link>https://paragraph.com/@reddxf/review-of-the-rabbit-r1</link>
            <guid>YvFWvdPaLRpdVrF1z9V8</guid>
            <pubDate>Tue, 09 Jan 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[A detailed review of the newly launched Rabbit R1, exploring its hardware and software design, its strategic position in the market and its potential challenges]]></description>
            <content:encoded><![CDATA[<h1 id="h-the-launch-of-something-new" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Launch of Something New</h1><p>To say that the Rabbit team mimicked Steve Jobs&apos; keynote would be an understatement! The only thing missing was the Issey Miyake turtleneck—it was a black t-shirt instead. Otherwise, the presentation featured the same slide style from Apple Keynote with the gradient, the same format, the hand gestures, and even the &quot;One more thing...&quot; announcement at the end. But OnePlus did this before too, turtleneck included. While it was considered cringe-worthy at the time, the product compensated for the lack of originality in the presentation. So, let&apos;s not dwell on style and focus on the substance instead.</p><h1 id="h-background" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Background</h1><p>The tech world has been in pursuit of three things over the past couple of years. The first is an answer to what will replace the smartphone. Depending on whom you ask, the smartphone has remained relatively unchanged since its debut in 2007. It has been upgraded in small ways with better screens, cameras, and sensors, but otherwise, the form factor has stayed largely the same since the beginning, despite some major flaws in the structure. But it&apos;s been more than 16 years, Steve Jobs isn&apos;t around, and everyone is looking for the smartphone killer.</p><p>Secondly, the success of OpenAI&apos;s ChatGPT has demonstrated to everyone how powerful AI assistants can be. 
The open-source space has also made giant leaps with Llama 2 and Mistral AI models, and you can achieve the same capabilities as ChatGPT, and even the interface, running on your local computer with relative ease using tools like Ollama.</p><p>With the addition of vision, audio, and speech capabilities into AI models, they are now ready to move beyond the browser and integrate into people&apos;s lives, understanding instructions within their own contexts. For this, they may need to leave desktop browser windows and move onto mobile phones instead. But mobile phones are still a reach-into-your-pockets-or-handbag away, and that&apos;s not good enough either. People need something even more readily available. Enter Meta Ray-Ban, Humane&apos;s AI Pin, etc.</p><p>Thirdly, these models have so far been hamstrung by their inability to perform actions on behalf of the user. They cannot yet click buttons on screens and other interfaces, and the OS architecture and security do not allow an app to interact with the interfaces of other apps as yet, and rightly so. But this also means that we have to duplicate actions and information across apps in order to achieve certain goals.</p><p>With this backdrop, I think we can evaluate what Rabbit R1 is doing much more accurately.</p><h1 id="h-hardware-design" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Hardware Design</h1><p>It seems they hired Teenage Engineering (nice name), which appears to have a penchant for creating retro-futuristic tech products, reminiscent of Dieter Rams&apos; Braun designs. The design features a very cool, Lego-inspired shell that houses a great-looking touchscreen, a camera on a swivel, a scroll wheel, microphones, a slot for USB and SIM cards, and a push-button on the right side of the product. 
The touchscreen is a great addition, making interactions with apps, information delivery, and answering questions much faster than the audio-based delivery chosen by the AI Pin by Humane.</p><p>The camera on a swivel seems like a good idea, as the R1 can potentially scan the environment to find what a user may be referring to in their instructions. However, the plastic above the camera prevents the device from being held at an angle less than about 45 degrees, meaning the user must position the device almost vertically, like a smartphone, for the camera to see. This is odd, and I&apos;m not sure why this choice was made.</p><p>Why not just position the camera on the very edge without the top plastic part causing an obstruction?</p><p>I&apos;m also struggling to understand the purpose of the analogue scroll wheel. Is it meant to help make selections within the touchscreen interface? That doesn&apos;t seem likely, as it would potentially be faster to use your finger to scroll on the touchscreen itself. Is it intended for manually positioning the camera? If so, positioning it to the right of the camera would have been more logical. Is the idea to enable one-handed scrolling on the touchscreen? If I were holding the device in my left hand, using my index finger to scroll might be easier than using my thumb on the screen, but I would still need to make button selections, and for that, I might be hitting the push-button. This seems like a learned behaviour, but it&apos;s the only explanation I can come up with given my limited understanding. However, if you hold this device in your right hand, the positioning of the scroll wheel becomes even more perplexing! You can&apos;t scroll with the thumb that&apos;s holding the side of the device, and you can&apos;t use your left hand to operate the scroll wheel without blocking your view of the screen. 
So, why is it designed this way?</p><p>Finally, I didn&apos;t notice any way for the product to be attached to a jacket or shirt, and the placement of the screen and push-button suggests that the R1 is intended to be carried in a pocket and pulled out when needed. This raises a significant concern for me. An AI companion needs to be readily accessible within the user&apos;s physical space to understand instructions better (and to keep the instructions simpler). If it needs to be pulled out of a pocket—or more likely from a bag, since the smartphone will probably be in the pocket—it won&apos;t be as easily accessible. The AI Pin addressed this better by being always accessible from the t-shirt or jacket where it is clipped. The Meta Ray-Ban was another good attempt, but since they are sunglasses, wearing them all the time is nearly impossible.</p><h1 id="h-software-design" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Software Design</h1><p>Jesse Lyu, the founder and CEO, begins his keynote by recognising one of the most fundamental issues with the way smartphones are designed. This is an extremely deep insight, and I&apos;m so glad someone on such a large stage was able to express it. During the early days of the smartphone, Apple proudly used the line, &quot;There&apos;s an app for that,&quot; in their marketing to highlight how many apps there were in the App Store. You see, in their view, there was an app that could achieve any task that you wanted to do. But the mental model of a person who wants to perform a specific task requires them to choose the app that would enable them to do it, then open that app and make the right choices in the interface in order to achieve the task. If you&apos;ve got more than one app that could help you do that task, there&apos;s some time spent in your mind deciding between them. And if you have a task that needs more than one app to be achieved, you&apos;ve just made the choices even more complex. 
Given that most people have about 90 different apps on their phones on average, this isn&apos;t a small matter. You also need to have downloaded these apps ahead of time in anticipation of needing them in the future. But that&apos;s another story altogether.</p><p>This is also third-generation thinking, where you have to consider the objective you need to achieve and then break it down into the steps required for the computer to help you achieve it. Contrast that with fourth-generation thinking, which simply requires the user to clearly express the goal, and the computer then breaks down the tasks into atomic bits, figures out the best tools to use to solve the problem, and solves it. This was science fiction before the advent of AI. This is reality today. There&apos;s also the duplication of data and instructions between the apps to contend with. For example, travelling somewhere on a vacation requires an app for flight bookings, an app for hotel bookings, an app to book a ride, and another to research the highlights of the destination. Each of them will ask you for your name, dates, times, locations, your companions, your preferences, over and over again. And that&apos;s not even considering the fact that you need to register with each app too!</p><p>This is a problem today, and Jesse and his company used this insight as the foundation of their solution. Kudos to them for being able to figure out a way around it with their Rabbit Hole interface, even though it doesn&apos;t yet completely solve all the problems. There&apos;s much more to say about the software, the interfaces, and the UI design, which are all brilliant but table stakes for a game this big. The fact that they understood the above point and also broke through the 500ms mark (while not yet hitting the ~400ms Doherty Threshold) makes the device feel really responsive, which is just amazing. 
</p><h1 id="h-business-design" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Business Design</h1><p>The fact that the product was launched at $199 is simply a masterstroke. A device that purports to be an AI companion is really only so useful today, and the price point reflects that perfectly. But how could they not have a recurring monthly fee? They must have servers running in the background to serve the needs of the users. How do they expect to fund this? To me, this is where the Rabbit Hole comes in. I think the apps need to pay to be listed there. Maybe not initially, but eventually. They also may have a local model running on the device that handles the majority of the daily queries and only passes along the complex ones to the server, à la Mixtral&apos;s Mixture of Experts. This would keep their costs low too. There is a possibility that they will benefit from a model that understands the physical world of the user better. This is greenfield at the moment, and they are going to be one of the first to occupy this space. But this is me just conjecturing, and the answer may be far simpler; the operational costs may just be borne through VC funding until they hit some kind of threshold. Or maybe it has to do with advertising, a.k.a. &quot;recommendations&quot; that the agent provides.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>The R1 is intriguing and addresses some key concerns of mobile computing. However, several challenges persist. While the Rabbit R1 excels in software, the hardware design falls short of being an effective AI companion. A smartwatch with a camera still remains the form factor to beat. Given these shortcomings, I predict limited usage and abandonment by most users within a few months unless these improvements are implemented by the time they start shipping in March. With regard to competition, this concept could easily be replicated by smartphone manufacturers. 
If they cannot produce it independently, an acquisition could be on the cards, which may be the anticipated endgame anyway.</p><p>For any AI companion, the major obstacle remains payments. The demo didn&apos;t quite show how the many payments that were alluded to actually took place, and I&apos;m curious to see how this aspect works. While I don&apos;t personally wish to purchase this product due to the aforementioned issues and deficiencies, I still regard it as a &apos;directional innovation&apos; that pushes the industry in the right direction. There will be numerous iterations before the ideal form factor for the product is realised, as well as the perfect business models for the companies backing them. Consequently, I intend to keep a close eye on Jesse and his team to see if they adapt and iterate as I anticipate they will.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>ai</category>
            <category>hardware</category>
            <category>review</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/82b8c3f616bf5f6b64ef54aac949b0fa5603c197423b0b28003968a6c5941e58.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Is there room for another dating app?]]></title>
            <link>https://paragraph.com/@reddxf/is-there-room-for-another-dating-app</link>
            <guid>VpNO4PeRHLW45dE0Ky8f</guid>
            <pubDate>Mon, 17 Jul 2023 22:00:00 GMT</pubDate>
<content:encoded><![CDATA[<h1 id="h-the-need" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Need</h1><p>Meeting the person who is now my wife was quite a challenge. I was helped along the way by friends, family and work colleagues, and when that didn&apos;t work, I joined a singles group and signed up on matrimonial sites and even dating apps. I finally met my wife on a dating app called OkCupid. To say that this field of allowing people to find each other is dear to me would be an understatement.</p><p>I&apos;ve been happily married for a while now. I have several single friends my age, and while a few of them want to find someone and get married eventually, some are choosing to stay single and are quite content doing so. However, they would still like companionship and would like to meet people who are similarly inclined.</p><p>In a recent conversation with four twenty-something user experience designers, I asked about dating apps and whether they are serving the needs of people in their age group. But surprisingly, the complaints were the same as those of the older group I spoke of above. All of them said they had issues meeting people and that no dating app was serving them well.</p><p>On a broader societal level, India has always been thought of as a nation with very traditional values, and therefore the population has had a majority of married people. But this trend is changing. People are getting married later in life, if at all, and many are also choosing to stay single, no longer seeing marriage and kids as the final goal. 
Divorce laws in India are also complicated, to say the least, and this may be a contributing factor to people choosing to stay single.</p><h1 id="h-the-problems" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Problems</h1><p>In my research, I&apos;ve, funnily enough, had absolutely no trouble getting people to tell me about the issues they&apos;ve faced while using dating apps. It usually ends up becoming an hour-long conversation, and everyone wants to explain their individual journeys and the issues they&apos;ve run into. In a lot of ways they want to vent their frustration, and when someone like me comes along and asks what the issues are, it&apos;s like opening the floodgates of a dam. But I feel their pain as I&apos;ve been there myself.</p><p>To wit, the issues they presented for dating were as follows:</p><ul><li><p>Men find it harder to meet women as the percentage of men on dating platforms is much higher. This may be because it&apos;s traditionally been uncommon for women to put their profiles online.</p></li><li><p>It is not easy to convert an online encounter into a physical meeting due to concerns regarding security. People typically want to be very sure of the other person before they meet.</p></li><li><p>People prefer to meet others in real life rather than online as that is a more authentic way to find someone.</p></li><li><p>Creating a good profile is key, and people ask others who have found success to create their profiles for them. But this inadvertently also adds to the catfishing concern, as the profiles may not be authentic at all.</p></li><li><p>People don&apos;t always know whether the person they are meeting online is verified in any way, and this adds to security concerns.</p></li><li><p>Dating apps that require women to initiate the connection may be adding a hurdle for introverted women. They may prefer to be approached instead. 
But this is an all-or-nothing kind of offering on such platforms.</p></li><li><p>Ghosting is a big concern, as people may be spending inordinate amounts of time trying to build a relationship with someone they met online, whereas the other person may not be interested at all but doesn&apos;t know how to end things.</p></li><li><p>The online medium is skewed towards good-looking people. The fact is that most platforms offer huge databases of people, and it&apos;s practically impossible to sift through them all without judging extremely quickly, either based on photos or on a rapid scan of profiles.</p></li><li><p>There are several scams in India that have originated from people meeting each other online, involving extortion, confidence tricks, honey traps and lots of other such issues that make this kind of meeting very dubious.</p></li></ul><p>Some of these things could be solved through better employment of technology. But a lot of them must be solved in the real-world realm.</p><h1 id="h-room-for-another" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Room for Another</h1><p>With the passage of time, it&apos;s a good idea to re-evaluate old decisions. While I had previously thought that this market was overcrowded and that dating-app fatigue had set in, I think the advent of AI has made it possible to evaluate this once again.</p><p>Yes, I am very aware that platforms like eharmony and OkCupid, and lots of others, have indeed marketed based on this idea. However, I don&apos;t think they had the tools we have today to make that claim in earnest. Besides, the one flaw in their thinking was that they were only able to go so far as recommending a match; they were never able to evaluate whether they had made the right prediction because they had no mechanism for receiving such feedback. 
And without any feedback, they certainly couldn&apos;t make future recommendations any better either.</p><p>If one were to venture into this field, this is the key aspect I would evaluate: building a good recommendation engine that not only makes recommendations but also has feedback loops to make future recommendations better.</p><h1 id="h-the-business-case" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Business Case</h1><p>In my quick study of this field, I&apos;ve come to the conclusion that the demand is off the charts. No platform has solved this problem as yet, so there&apos;s definitely room for another entrant. There is no dearth of money either, as people are willing to pay good money in order to meet the right person. So it can definitely be a profitable venture, but the solution has to be packaged correctly and the right promises need to be made.</p><h2 id="h-the-goal" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Goal</h2><p>First and foremost, it is important to set the right goals for a platform. Is it meant to result in a date, a relationship, or a marriage, or is it simply meant to develop companionship in this changing world? I think apps and platforms that promise to find you the person you will marry are obviously overselling what they are capable of. I&apos;d go so far as to say the same of those that promise even a great date. It takes a lot more to make a date memorable than the matching of characteristics in databases. 
I&apos;d love to see an app that promises that the user would meet good people and that&apos;s it, because that&apos;s a promise that can be kept.</p><h2 id="h-target-audience" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Target Audience</h2><p>While some people may be looking to get married, some may simply be looking for companionship, or even just to meet other singles, because the conversations with their married friends may be something they can&apos;t relate to anymore. I think it would be prudent to simply focus on the people looking to make friends. If something more were to develop from this, that&apos;s just the cherry on top and not the baseline expectation.</p><p>Membership may therefore not need to be qualified by age or gender.</p><h2 id="h-packaging" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Packaging</h2><p>In my mind, the following are the core areas to keep in mind when designing a solution:</p><h3 id="h-1-security" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">1. Security</h3><p>As mentioned above, we need to make security the main focus area. We need to find ways to make sure that people feel confident when participating on a platform. Vetted profiles, membership through recommendations, attracting people through the right channels, etc. are some of the ways to ensure this.</p><h3 id="h-2-authenticity" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">2. Authenticity</h3><p>While making an effort to have all the profiles on the platform look good, their authenticity must not be lost. There is a fine balance here that must be found. Maybe the profile is written by those trained in it, but it could be accompanied by a video of the user saying those things themselves.</p><h3 id="h-3-matching" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">3. 
Matching</h3><p>At the fundamental level, this is a matching problem. There is a lot of noise to sift through to find good information and characteristics to match people against. People often lie about themselves in order to appear a certain way, and no questionnaire can overcome this hurdle. Or, they may not express what they want for fear of being judged for it. Then again, they may simply not know what they want. All of this makes the matching problem more difficult for any rule-based system, but less so for machine learning systems. Building a good matching engine that powers all of this is therefore the most important task. And for this, we need as many signals and as much feedback as possible.</p><h3 id="h-4-efficiency" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">4. Efficiency</h3><p>This is one of the most ignored metrics within this field. People spend inordinate amounts of time trying to find the right person they would like to be with. This increases the fatigue that they face and makes a lot of people give up in the middle of the process. Any new system that is built should definitely consider making this entire process more efficient. The system should try to match a person with a group of people they share common ground with; which individual within that group they ultimately connect with is almost beside the point.</p><h2 id="h-pricing" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Pricing</h2><p>Pricing will obviously play a key role here. It can be used to create a selective group that keeps non-serious participants out, but set the price too high and it becomes elitist and exclusionary. 
People should also not be paying for individual participation, but rather for a few different events together, so that they interact repeatedly; that is key to the success of this program.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>The world is yearning for deeper connections. With the power of technology at our fingertips, it&apos;s time to harness it and create platforms that truly bring people together. We must seize this opportunity to build communities, foster companionship, and combat the growing loneliness in our society. It&apos;s time to reimagine dating platforms as platforms that serve this need, where the focus is on meeting good people and forming meaningful connections. Let&apos;s embrace the advancements in AI and recommendation engines to make this vision a reality. We have the tools, the potential, and the responsibility to improve the human connection. It&apos;s time to take action and create a platform that truly changes lives. It must be attempted because it is important.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>product</category>
            <category>startup</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/687c2adb3d870fe9526c2e4ed5411acb391b9da14f8e9c6a3f58d90730cb3d3d.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Generative AI Art Experiments]]></title>
            <link>https://paragraph.com/@reddxf/generative-ai-art-experiments</link>
            <guid>tBkDNRlQ2W9iqNoqyCmU</guid>
            <pubDate>Wed, 12 Jul 2023 22:00:00 GMT</pubDate>
<content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>Artificial Intelligence (AI) based design tools have captivated our collective imagination in the field of design. Being able to imagine something and bring it to life in the form of an image or video is a tempting prospect. While one part of the design world lamented the fact that these tools may replace our roles, I belong to the small minority that looks at these new technologies with the hope that they would lead to better design. After all, I could never have called myself a designer if I hadn&apos;t run into Photoshop all those years ago.</p><p>So I dove head-first into figuring out what this new frontier holds for us. This is an ongoing experiment, and I am just documenting my learnings as I go along.</p><p>If not for Photoshop, I may have never become a designer</p><h1 id="h-what-did-i-expect" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What did I expect?</h1><p>While I started exploring this simply out of curiosity, it later also became a business decision, as I anticipated that it would help my small team do more. 
After all, we were about to launch a second product at work and I had to find ways to sustain all of it with the current team size.</p><p>Therefore, I started by trying to find answers to the following:</p><ul><li><p>What are the tools that are available?</p></li><li><p>What’s the learning curve to use these technologies?</p></li><li><p>How much control do I have over the output?</p></li><li><p>How much post-processing is required?</p></li><li><p>Can they produce consistent quality and style over a period of time?</p></li><li><p>Are the technologies mature enough for everyday use?</p></li></ul><h1 id="h-october-2022-ai-based-art-nfts" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">October 2022: AI-based art NFTs</h1><p>I had heard of AI-generated art and was amazed by what I saw in some videos, but I had no idea how it was done. I happened to run into NFTs on Rarible at that time and saw this collection of AI-based images that someone had made. I was just so amazed by their accuracy that I ended up buying a few of them, simply in the hope that I was supporting someone who was working on this and possibly getting flak for not being a true artist.</p><h2 id="h-learnings-from-this-phase" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Learnings from this phase:</h2><ul><li><p>AI art is dividing the world into the group that thinks this is art and the group that thinks it isn’t</p></li><li><p>No one expected AI would set its sights on the creative fields first.</p></li></ul><h1 id="h-february-2023-midjourney" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">February 2023: MidJourney</h1><p>I ran into MidJourney back in late 2022 through a colleague who was using it to create some artwork that I was quite impressed with. I thought I’d try it out, but the version that existed at the time was just not that great with the simple “prompts” I was able to provide it. 
It took too many tries and much too long to generate something even remotely close to what I wanted.</p><p>I saw others developing images that looked a lot better, and saw what they had prompted MJ with in order to produce them. I copied one that had produced a very cool image of a cat and used it to produce an image of a dog. My dog image had three eyes!</p><p>My attempts at producing human portraits didn’t fare much better. All of them had messed up hands or too many fingers for some reason. I was producing Dali-esque images without intending to.</p><p>I walked away from a two-hour session kind of impressed, but feeling that the tech wasn’t there yet.</p><h2 id="h-learnings-from-this-phase" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Learnings from this phase:</h2><ul><li><p>The learning curve was too great, as I would never be able to understand all the words and prompts and styles and exclusions if I had to type all of this out</p></li><li><p>The interface (Discord) was absolutely not the one that I could imagine using to produce images. It was a terrible fit, and since I was in a public channel typing out my requests, I felt like I was being watched while I fumbled around trying to produce the output I wanted.</p></li></ul><p>The tech wasn’t there yet, but I saw potential.</p><h1 id="h-march-2023-midjourney-v5" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">March 2023: MidJourney V5</h1><p>MJ V5 launched this month and I got really interested. This was a big update and they got a whole lot of changes in. They got the hands right, they got faces right, the quality of images was just amazing, and they initially released with the Zoom Out feature that essentially imagined the parts of the frame that weren’t there before, giving images a depth and drama that didn’t exist previously.</p><p>The pace of smaller updates since then has just been incredible, and it was time for my team to start experimenting with it. 
As we had been working on building an NFT collection for use with the main product as rewards, this ability to generate a lot of graphics in a short period of time was important. The challenge was being able to get MJ5 to output the kinds of images that we had been producing until now with Blender.</p><p>The alternative was to produce a new style that worked with MJ5, but that also meant updating all the graphical properties that the company had, including the website, marketing collateral, transactional content and other assets such as NFTs.</p><p>So the team started working with it, trying to achieve our existing styles of artwork with the new tools. However, this yielded pretty bad results. There was still a lot of manual effort required to bring the artwork to the same style. So we abandoned this approach, pursued the second strategy instead, and came up with a new direction altogether.</p><h2 id="h-learnings-from-this-phase" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Learnings from this phase:</h2><ul><li><p>The tooling was still not good enough for a designer. It seemed like you could only do a full-image edit and not edit something specific within it. For that, you’d still need tools like Photoshop. Photoshop’s generative fill provides much more control.</p></li><li><p>Knowing that we can produce the required graphics with such ease suddenly allowed us to think about making sweeping changes that we would never have considered before. We’re now able to think about updating a website in preparation for a launch event.</p></li><li><p>I feel we’re still at a stage where you can tell when artwork has been produced by AI. It’s just “too polished” and accurate. Not sure what this means as yet, but I’m trying to understand it more.</p></li><li><p>Stable Diffusion came onto my radar as a platform that provides more control within the AI generative tools space. 
So I started exploring that next.</p></li></ul><h3 id="h-scribe-characters-created-with-blender" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">‘SCRIBE’ CHARACTERS CREATED WITH BLENDER</h3><h3 id="h-scribe-characters-created-with-midjourney-50" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">‘SCRIBE’ CHARACTERS CREATED WITH MIDJOURNEY 5.0</h3>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>ai</category>
            <category>art</category>
            <category>design</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/e7c2de2f8899005704a5db0daff69e82d427d3185d5fa5ff38276d6824e46480.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Better tools should be better employed]]></title>
            <link>https://paragraph.com/@reddxf/better-tools-should-be-better-employed</link>
            <guid>U8hvoxGYa3ZYGudW8qER</guid>
            <pubDate>Wed, 28 Jun 2023 22:00:00 GMT</pubDate>
            <description><![CDATA[When you have access to tools that provide you unlimited capability, how should these tools be used?]]></description>
<content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>In the age of AI, the only limitation to creating things is our ability to imagine them. While AI tools allow for new possibilities, there is a real and growing concern that our ability to imagine may itself be eviscerated. But, uniquely to our time, there are dire consequences both to using and to not using these technologies. The only way forward is to pay attention to where we employ these tools and where we don’t.</p><p>Design is about changing and managing perceptions and ideas about the world we live in. It’s powerful because, when employed well, it works wonders as a way of wielding influence. This influence can broaden our horizons, make us aware of our hidden biases and even train good behaviour. Or, on the other hand, it can block progress and foster addictive and unhealthy behaviours, such as providing positive reinforcement when we buy stuff we don’t need.</p><h1 id="h-two-stories" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Two Stories</h1><p>I recently watched this wonderful video by Dami Lee where she analysed the fantastic level of detail that the people at Studio Ghibli went into while creating some of the award-winning animated movies that they are known for. Since Dami Lee is an architect, she analysed the movies from that perspective and highlighted how thoughtfully the buildings were detailed, including the details of lobbies, the rooms themselves, and even what the long passageways mean in terms of the Japanese culture they represent. She’s fascinated that the studio is able to be this observant and dives in to find out what about their design process makes this possible. 
And this, to me, in my current role as the head of a design team, was what I found most interesting.</p><p>In the video, there are clips of interviews with one of the founders, Hayao Miyazaki, where he states that they force everyone to animate things in the traditional way “by hand”, even though this is far more painstaking and tedious. He says that this, in fact, is the reason the studio has been able to be as observant as it has been. He insists that this is the case and goes on to berate anyone who wants to use technology or find a faster way to do things. He has famously said, “Anyone that eats instant ramen three times a day cannot be an animator,” further emphasising the idea.</p><p>&quot;Anyone that eats instant ramen three times a day cannot be an animator&quot; -- Hayao Miyazaki, Co-Founder, Studio Ghibli</p><p>In a completely different part of the world, a friend of mine had developed an appreciation for calligraphy and had begun dabbling in it. He showed me the various pages where he had written the same letter over and over again. Being extremely observant, he noted how he had to move his hand in order to achieve the right shape of stroke, and paid attention to how the ink flowed on the paper. He had even crafted the right kinds of pens and nibs to be able to do this easily. But his biggest learning was that the effort involved in the art actually made him think many times about what it was he wanted to convey. It had to be something important, as the very act of embellishing the letters took a lot of time and effort.</p><p>He went on to remark on the virtues of letter writing in ages past, an art our grandparents were very familiar with. 
He saw that there was something beautiful in the effort one put not only into the writing of the letter, but also into the pasting, the stamping, and the walking of the letter over to the nearby postbox; effort that was somehow conveyed to the recipient and made them feel happy, or at least important, for being its beneficiary.</p><p>It seems there was something about the deliberation involved, and the lack of speed, that made for greater appreciation. And this got me thinking about the design process I use in my daily work as a user experience designer.</p><h1 id="h-a-better-ux-design-process" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A Better UX Design Process</h1><p>User Experience Design as a field gained a foothold with the advent of computers and the invention of the Graphical User Interface. It&apos;s been a digital pursuit since its inception, and as such, technology has been used to improve every aspect of the process: research, wire-framing, visual design, implementation and testing.</p><p>For those unfamiliar with this field, the main job of a UX designer is to visualise how an application would look and feel before it has been developed by programmers, much like an architect would render drawings of a building on paper before the building gets built. The task, therefore, is one of communication with all the stakeholders. The hope is that any fixes that are required are done at the design stage, when it is cheaper, rather than at the development stage, when it is definitely much more expensive.</p><p>But even a fairly simple application requires more than fifty screens to be designed if every scenario anticipated while using it needs to be thought through. For example, how does a user log in, how do they set their preferences, how do they communicate with support personnel, how do they bring their friends onto the app, and so on. 
When you factor in the additional operating systems and the various devices and screen sizes that the application should work on, you can easily understand how many screens need to be designed.</p><p>If something that appears on all the screens needs to be changed (which happens more frequently than one thinks), a designer has to manually locate said element in each of the screens and replace it. With Photoshop, Illustrator or Fireworks, the tools we used to design these screens, this took hours or even days of mind-numbing effort, and designers hated it. They would do anything to avoid rework: pushing back on change requests, punting the task to the developers to handle at development time, or even changing the process to seek inputs and feedback on just the key screens before all the associated screens in the process were completely designed. That last option is not really the best way to work, as the stakeholders end up evaluating based on insufficient information.</p><p>Today, we have much better tools like Figma which make these changes across hundreds of screens in a matter of seconds. This one ability has allowed us to create much better designs and seek feedback after many screens covering many scenarios have actually been designed, improving the quality of the output immensely. From my vantage point, it&apos;s clearly evident that this kind of technology has been great for the industry.</p><p>But in this mad rush to achieve outputs faster, the same tools also offer designers the ability to use templates to speed up the work even further. There are AI design tools being created that can generate entire workflows, and in some cases even apps, from a single prompt. Like painters of canvasses, UX designers have also got to pay attention to every pixel that they place on the screen. Everything is deliberated upon and nothing happens by accident, which is why the outputs are meaningful. 
This part of the process should remain slow for a reason. If technology is used to reduce the deliberation in an effort to ease this part of the process, the outputs of such processes will undoubtedly be mediocre.</p><h1 id="h-counter-arguments" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Counter Arguments</h1><p>In some cases, &quot;mediocre&quot; may be a desirable step up. A business owner, for instance, may simply want to test out an idea for an app without the kinds of expenses involved in the design and development of a custom-built application. But there&apos;s a risk here. A friend of mine advised me about which bicycle to buy this way: &quot;Don&apos;t get a basic bike if you intend to get into this sport. You won&apos;t like the bike and you&apos;ll end up rejecting the sport instead.&quot; I think there&apos;s a lot of truth in that statement, and I&apos;d offer the same warning to the business owner who may be tempted to take this route. Testing out a good idea using mediocre means to reach the output is not really testing the idea at all.</p><p>A UX designer may also be tempted to use these tools to define a baseline for the app they want to develop. The tools may still serve a purpose here, but fixing a bad output is probably harder than creating the right output from scratch. There&apos;s also the risk that not every aspect of the output gets examined as closely as it would have been when creating it in the first place. Would you want your app to go out to developers with these flaws still in it?</p><p>&quot;Don&apos;t get a basic bike if you intend to get into this sport. You won&apos;t like the bike and you&apos;ll end up rejecting the sport instead.&quot; -- Rohan Kini, Bums on the Saddle</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>I think technology is a tool like any other and can be beneficial or detrimental depending on how we use it. 
It will always lure us with its promise of speed, scale, spread, smarts and asynchrony. While it can improve our lives, indiscriminate use of it will only result in poor outputs. We must be vigilant about what we employ these tools for. Reducing effort is a great reason to use technology, but where it impedes our thoughtfulness, it is probably being put to bad use.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>design</category>
            <category>tools</category>
            <category>technology</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/43125946fb8b2625ffdc27a3c32f8c38a7def659edbad5230ba333b9809ad6b0.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Eight Essential Rules for Mastering Ethical UX Design]]></title>
            <link>https://paragraph.com/@reddxf/eight-essential-rules-for-mastering-ethical-ux-design</link>
            <guid>pzkE7pNfUIsfxj47HFYk</guid>
            <pubDate>Thu, 20 Apr 2023 22:00:00 GMT</pubDate>
            <description><![CDATA[Reflecting on the work of many others and my own work within the field, I have distilled the following rules to act as a guide]]></description>
            <content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>While UX design is a relatively young field, its importance in a product’s success cannot be overstated. Its impact on the world today is clear, but sadly its ability to effect positive change is coupled with its ability to inflict harm. As with anything that has such powerful duality, a code of ethics is necessary to act as a lodestar to guide those venturing into this field in the future.</p><p>Reflecting on the work of many others and my own work within the field, I have distilled the following rules that should act as a guide:</p><h1 id="h-rule-1-your-foremost-duty-is-to-help-your-client-succeed" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #1: Your foremost duty is to help your client succeed</h1><p>I started my design career at Adobe Systems and then went on to build my own UX design agency. My ideas regarding this subject have evolved from “The designer’s job is to advocate for the user within a company” to “The designer is the servant of two masters and needs to figure out how to balance the two”, and finally to understanding that a company simply cannot have interests in opposition to those of its customers for any reasonable length of time, as customers are not naive people waiting to be tricked and will just switch to the nearest substitute. So, it’s enough to simply take the interest of your client — their true long-term interest — and design for that.</p><h1 id="h-rule-2-your-task-is-to-help-a-user-simply-navigate-a-complex-world" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #2: Your task is to help a user simply navigate a complex world</h1><p>Software is usually employed by a user for the completion of reasonably complicated tasks. 
Depending on the user, they may not know how to use the software or the hardware, may not understand how the technology functions, or may have physical limitations such as not being able to see small fonts clearly. Our job as designers is to help them navigate the technology in such a way that it is very simple for them to achieve their tasks. We may need to help them better understand the decisions they have to make; they may need settings that make interfaces more visible; they may rely on the software to make certain choices for them; and so on. The better we understand our users, the better we can serve them.</p><h1 id="h-rule-3-dont-treat-the-user-like-a-commodity" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #3: Don&apos;t treat the user like a commodity</h1><p>The anonymity one has on the internet is a double-edged sword. While it encourages free speech and discourse, it also allows people to behave worse online than they would in real life. In the commercial world, this extends to thinking of the users of a product in the abstract, reducing them to single dimensions: pockets to be picked, resources to be exploited, rats to be subjected to dopamine experiments, or commodities to be traded. This is not good for the company in the long run (as per rule #1), but people attempt it anyway. Designers can play a big role in breaking these mindsets and getting product managers and companies to look at the long-term welfare of their users and, consequently, of their own companies.</p><h1 id="h-rule-4-your-counsel-is-as-important-as-your-labour" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #4: Your counsel is as important as your labour</h1><p>Far too often, we measure our work by the effort we put in or the number of screens and experiences we produce. 
But I’ve always felt that clients hire you for your skills in the early part of your career, and when you’ve gained more experience, they hire you for your opinions. When we’re designing something, we’re deeply considering specific problems, making decisions based on that consideration and learning from the outcomes. It is this distillation of experience that is more valuable to a client than our labour. They shouldn’t have to spend new money to learn old lessons.</p><h1 id="h-rule-5-a-designer-must-respect-cultures-genders-physical-abilities-political-leanings-and-privacy-preferences-and-must-design-to-enhance-all-of-them" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #5: A designer must respect cultures, genders, physical abilities, political leanings and privacy preferences and must design to enhance all of them</h1><p>Much too often we think only through the lens of our biases, and it’s all too easy to exclude certain people or groups from our thinking. I am myself guilty of not accommodating people with poor eyesight: early in my life as a designer, I’d design applications with small fonts because they made the interfaces look so cool. But this meant I’d exclude the large group of people who found it hard to read the small fonts.</p><p>An even more subtle example is the frustration one feels while using tools like Google Docs, Microsoft Word or much other software, where you have to change the locale setting every time you create a new document, as it usually defaults to ‘US English’ even if you’ve previously set it to ‘Indian English’, ‘UK English’, ‘Australian English’ or any of the many other dialects. I’ve seen people delivering food in India wearing t-shirts that say “Hunger Savior” instead of “Hunger Saviour” because someone forgot to change the locale setting while writing the copy for that t-shirt! 
But there are even worse examples. I have seen websites that use “Father’s Name” as the label for a field where “Parent’s Name” would have been more appropriate. Should someone really have to pick an option that says “childless” when they simply chose not to have children? Wouldn’t entering “0” against “children” be a more neutral choice? And here’s something that probably affects a lot more people — the methods of proving that you’re human by typing in the English text displayed in a box. The majority of the world doesn’t speak English natively!</p><p>While it may be challenging to cater to every user’s unique preferences, being aware of these potential biases is the first step in creating more inclusive designs. By being open to feedback from users and making an effort to understand their perspectives, designers can work towards creating applications and products that are welcoming and accommodating for everyone.</p><h1 id="h-rule-6-protect-the-privacy-of-your-users" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #6: Protect the privacy of your users</h1><p>In the digital age, privacy has become a vital concern for users, as their personal information is often at risk of being exploited by powerful entities. Although many people understand the importance of privacy, not everyone is aware of its implications for their own lives or how to select platforms that prioritise this aspect.</p><p>As a result, it falls upon product designers to make critical decisions on behalf of their users, ensuring that privacy is protected throughout their designs. By being conscious of potential privacy risks and implementing robust security measures, designers can help safeguard users’ personal information and maintain their trust in the products and platforms they use.</p><p>Ann Cavoukian, the former Information and Privacy Commissioner of Ontario, proposed a set of seven foundational principles, called “Privacy by Design” (PbD), to protect the users of information systems. 
The core idea is to address privacy concerns from the beginning, instead of adding privacy measures as an afterthought. PbD comprises seven foundational principles:</p><ul><li><p>Proactive not Reactive; Preventative not Remedial: Privacy by Design emphasises anticipating and preventing privacy invasions before they happen, rather than waiting for breaches to occur and then taking remedial action.</p></li><li><p>Privacy as the Default Setting: Privacy should be built into systems and services by default, so users don’t have to take any action to protect their privacy. This means that personal data should be automatically protected without requiring any user intervention.</p></li><li><p>Privacy Embedded into Design: Privacy should be an integral part of the design and architecture of IT systems and business practices. This ensures that privacy is not treated as an add-on but is a core component of the system or service.</p></li><li><p>Full Functionality — Positive-Sum, not Zero-Sum: PbD advocates for a “win-win” approach where both privacy and functionality are achieved, without compromising either. It rejects the idea that privacy must be sacrificed for security or other objectives.</p></li><li><p>End-to-End Security — Lifecycle Protection: Privacy by Design emphasises the need for strong security measures throughout the entire data lifecycle — from collection to use, storage, and eventual disposal. This involves implementing robust access controls, encryption, and other security techniques.</p></li><li><p>Visibility and Transparency: PbD emphasises being open and transparent about privacy practices, allowing users and other stakeholders to verify that privacy measures are in place and functioning as intended. This fosters trust and confidence in the system or service.</p></li><li><p>Respect for User Privacy: Privacy by Design puts users at the centre, giving them control over their personal data and making it easy for them to exercise their privacy rights. 
This includes mechanisms for obtaining consent, providing access to personal information, and allowing users to correct or delete their data.</p></li></ul><p>To make applications respect users’ privacy, follow these steps:</p><ul><li><p>Conduct a privacy impact assessment at the beginning of the project to identify potential privacy risks and develop strategies to address them.</p></li><li><p>Design the application with privacy in mind, incorporating the Privacy by Design principles from the start.</p></li><li><p>Collect only the minimum necessary personal data and anonymise or pseudonymise data where possible.</p></li><li><p>Implement strong access controls and encryption to protect personal data in transit and at rest.</p></li><li><p>Establish clear and transparent privacy policies, making them easily accessible to users.</p></li><li><p>Offer users options to manage their privacy settings, giving them control over their personal data.</p></li><li><p>Regularly review and update privacy practices, staying informed of changes in regulations and industry best practices.</p></li></ul><h1 id="h-rule-7-find-every-opportunity-to-reduce-the-carbon-footprint" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #7: Find every opportunity to reduce the carbon footprint</h1><p>When e-commerce companies first launched in India, every step from the moment a user completed payment on a website to the final delivery of the product to the user’s home was new and untested. One such process was the packaging method.</p><p>It appears that packaging and shipping standards from another country were adopted, with every product arriving in bubble wrap and oversized boxes, regardless of the contents. However, it is well known that India is averse to waste: be it food on a plate or materials in a business, nothing is allowed to go to waste. 
When oversized boxes started appearing at people’s doorsteps for items as small as lipstick, deodorant, or a bar of soap, people began to complain!</p><p>This led to someone in the logistics department reevaluating the situation and adapting the Standard Operating Procedures (SOPs) to suit waste-intolerant Indian conditions. Packaging underwent a rapid transformation across all e-commerce players: boxes were replaced by envelopes when possible, clothes were wrapped in brown paper, the number of shipments decreased, and items were grouped. Other changes included shipping items to a neighbourhood store for customer pick-up, with numerous other adjustments made in a short time. Encouraging a company to adopt environmentally friendly practices is challenging, but e-commerce companies embraced these strategies because they aligned with their interests.</p><p>This prompted me to think about design interventions that could benefit the company, customer, and environment. A few ideas immediately came to mind that e-commerce companies could implement:</p><ul><li><p>Allow users to specify their preferred packaging for the products they order.</p></li><li><p>Let users indicate if they need their items urgently or are willing to wait for grouped deliveries with others in the area, enabling the packaging department to avoid always defaulting to the “safest” option.</p></li><li><p>Food delivery companies can improve the value-per-mile metric for their delivery personnel by allowing them to collect used plastic containers from customers, streamlining recycling efforts.</p></li><li><p>Offer subscription options for frequently ordered products, optimising delivery routes and making them more predictable.</p></li><li><p>Allow users to inform product manufacturers of their preference for eco-friendly alternatives, which could encourage manufacturers to develop such options if there is sufficient demand.</p></li></ul><p>These are five ideas from just a few hours spent thinking about 
the problem. Imagine the possibilities if designers everywhere also focussed on it. And that’s the main call to action in this rule.</p><h1 id="h-rule-8-design-for-delight-not-for-addiction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Rule #8: Design for delight, not for addiction</h1><p>Design is an incredibly powerful tool that can help achieve any goal. There have been numerous shows and documentaries highlighting how design can make applications addictive. It is possible to devise countless ways to captivate users, keeping them glued to your application. The primary benefits of this approach are pushing more ads to users or spying on them to build profiles for sale to the highest bidder. However, these are shortsighted goals and tactics. Awareness is growing among people, who now understand that if the product is free, they themselves are the product.</p><p>Moreover, there is a trend towards paying for software nowadays, as demonstrated by the growth of SaaS products, consumers’ willingness to pay for digital content on platforms like Netflix, Apple TV+, and others, and the increased use of cloud-only products such as the popular design tool Figma. Such services depend less on ad revenue and more on providing quality products and delightful experiences.</p><p>Users of these services have demonstrated that they are not only willing to pay for quality products and services but will also remain loyal to the company long-term. This contrasts with the software of the past, which would be uninstalled as soon as users had finished their intended tasks. 
In an era when the cost of acquiring customers is skyrocketing, wouldn’t you prefer to be the one retaining customers for the long haul?</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>Ethics encompass the principles and values that guide human behaviour and decision-making, with a focus on concepts such as fairness, justice, and moral obligation. Ethics inspire individuals and organisations to contemplate the wider implications of their actions on others and the environment. By embracing these values, we can forge the kind of societies in which we yearn to live.</p><p>As UX designers, we now hold immense power and the capacity to shape the digital worlds we increasingly inhabit. It is time for us to take responsibility for our actions, learn from the best practices, and strive to create a positive impact that fosters long-term well-being. Let us seize this opportunity, embrace our ethical obligations, and design a brighter future for all.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>ux</category>
            <category>design</category>
            <category>ethics</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/447fe36ae9734f472c77a2c8ee782c8c0cd3752511355f4e2219094f9039ced4.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Fixing Censorship with Web3]]></title>
            <link>https://paragraph.com/@reddxf/fixing-censorship-with-web3</link>
            <guid>NGQbMYjYpxSi50lqUbmx</guid>
            <pubDate>Mon, 27 Feb 2023 23:00:00 GMT</pubDate>
            <description><![CDATA[It’s important to keep striving for better models of censorship that can benefit society without infringing on individual rights and freedoms.]]></description>
            <content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>Censorship has been around for as long as information has been recorded and shared. Different groups, including religious authorities, governments, and corporations, have used censorship as a way to control what people can see and hear, sometimes for legitimate reasons but often to influence public opinion in their favour. It&apos;s important to keep striving for better models of censorship that can benefit society without infringing on individual rights and freedoms.</p><p>The emergence of both Web3 and AI presents an opportunity to find new solutions to this problem. These two fields intersect and could potentially lead to innovative approaches that balance the need for censorship with respect for individual liberties. The following is such an idea.</p><h1 id="h-the-need-for-a-redesign" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Need for a Redesign</h1><p>The shortcomings of the current censorship system have led to a demand for a new approach. To illustrate this, consider a tech company that creates a massive social media platform, like Twitter or Facebook, with over a billion users across several countries.</p><p>On this platform, let&apos;s take the example of a video, actually posted by a user, that depicted the burning of an American flag. This is very likely to offend users who hail from the country the flag represents. The offended users have a few options at their disposal. They may respond to the initial post and ask the user to take it down because they&apos;re offended, or they may report the post to the platform for being offensive. 
The platform then decides whether the post needs to be taken down.</p><p>Then some form of the following decision tree ensues within the department of the platform in charge of handling such user reports:</p><ul><li><p>Is the post in line with the published set of standards related to posting content on the platform?</p></li><li><p>Was the first user right to post the content?</p></li><li><p>Was the second user right to be offended by the content?</p></li><li><p>Is the offensiveness &quot;sufficient&quot; to warrant the removal of the post?</p></li><li><p>How much of a political or economic backlash will the platform face if it chooses to leave the post online or take it down?</p></li></ul><p>Consider another example that occurred more recently. The artist Lindsay Mills shared a photo of her baby on various social media platforms. The photo showed the mother and baby, with only the baby&apos;s butt visible. While some platforms allowed the photo to remain, one platform not only removed the post but also banned Mills&apos; account. This decision was made by an algorithm without human involvement, and appeals to restore the account were unsuccessful. The incident gained attention because Mills is married to Edward Snowden.</p><p>These examples demonstrate the grey areas surrounding censorship, as society was divided on the appropriate course of action in each case. They highlight the problems with the mechanisms behind censorship, as corporations hold significant influence over society but are not elected to make these decisions.</p><p>Furthermore, the following questions arise:</p><ul><li><p>Will the platform make a decision based on standards that have been clearly published?</p></li><li><p>What was considered during the setting of these standards?</p></li><li><p>Who was involved in setting these standards? 
Were the people who weighed in on the standards representative of the population of users who will eventually be subject to them, or were they more representative of the standards of the corporation and the place where it was founded?</p></li><li><p>Should the standards be the same across geographies and cultures? Is it a lowest-common-denominator approach that is safe for all, or were other factors taken into account in setting these standards?</p></li><li><p>What is the basis for judging whether content that clearly lies in the grey areas should be taken down or not? Is it simply how large a body of people it offends? How is the voice expressing the minority opinion protected?</p></li><li><p>Do these standards change based on who the audience is, by age, by gender, by their own level of maturity? Can a user have control over what they want to be able to see or not?</p></li><li><p>Can a user make up their own mind after exposure to the said content?</p></li><li><p>Do the standards evolve as a society&apos;s views on issues change over time? And what doesn&apos;t change?</p></li><li><p>How does the platform handle pressure from external entities such as influential people, large groups or even governments of countries? What are the concessions it will make in this regard?</p></li></ul><p>The platform cannot avoid adopting a specific political viewpoint; there is simply no way around it. But if it holds any political leaning too strongly, it may lose users to rival platforms that do the same thing and differ only in their political ideologies. This is exactly what happened when Twitter decided to de-platform Trump. The problem then is that this creates echo chambers, where users are surrounded only by people who agree with them and view anyone with an alternative viewpoint as the other. 
We lose the public town squares where ideas are debated, which are essential to building stable societies with moderate political leanings rather than societies that may erupt into civil war at any moment.</p><p>Furthermore, we are entrusting the responsibility of making decisions beneficial for society as a whole to a corporation whose sole purpose is generating profits for its shareholders. The mechanism does not allow for the consideration of societal well-being, as corporations must prioritise profits and weigh their ability to sustain losses in a confrontation with political figures or governments pressuring them to make specific decisions. A single misstep can lead to lawsuits that could bankrupt the company.</p><p>When did it become a tech company&apos;s responsibility to determine how censorship is implemented on a platform? Why do we accept this as the only approach when tech companies have consistently demonstrated their inadequacy in handling the responsibility of building good societies?</p><h1 id="h-a-decentralised-solution" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A Decentralised Solution</h1><p>A more effective approach would be to begin with the recognition that free speech is a fundamental right for all individuals. From this foundation, the solution should not focus on prohibiting certain forms of expression, but rather on preventing individuals from encountering offensive content. Although the distinction may appear subtle, the solutions for each approach differ significantly. 
By focusing on the latter, several benefits arise, as outlined below:</p><ul><li><p>The fundamental right to freedom of speech is maintained, with no one being de-platformed</p></li><li><p>The user can control, on an individual level, what they do and do not want to be exposed to, and this can change over time</p></li><li><p>No other individual or entity dictates the standards for determining what someone should or should not be exposed to</p></li><li><p>This system cannot be corrupted by external forces putting pressure on any single body</p></li></ul><p>While researching how such a solution could work, I found that the movie industry has a model we could build upon. The film industry rates movies with suitability ratings like &quot;PG-13,&quot; indicating whether a film is appropriate for viewers aged 13 or older, and streaming services like Netflix provide additional information such as &quot;Contains adult themes&quot; or &quot;Has crude humor&quot; to help audiences make informed decisions. Even if two audiences are demographically identical, they may choose differently based on their preferences. This model has been effective in movies, so what if it were applied to content on the internet, with the necessary modifications?</p><p>One potential approach is to assign tags that describe the content to all posts on a fictional social media platform. Viewers could then apply filters based on these tags to allow or prevent the content from appearing in their timeline. This central idea serves as the starting point for outlining how such a system could operate.</p><h2 id="h-step-1-tagging-the-content" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Step 1: Tagging the content</h2><p>While AI models initially had to be trained to properly identify content, they have become pretty good at this task today. Now a model can simply be run against the content to produce the right tags for it. 
This is true for text, audio and video content, and across languages. So in the first pass, any new content posted on a platform can be processed through AI models to create a tag cloud for it.</p><p>After this initial stage, the tags can be refined by groups of people, providing an additional layer of meaning through the lens of different individuals. This should continue over time as well, so that the meaning attached to the content does not stagnate; the same content can change in relevance at different points in time.</p><h2 id="h-step-2-setting-up-filters" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Step 2: Setting up filters</h2><p>On the receiving end of the content pipeline, users need to mark the content they find offensive. This could be done explicitly, with the user flagging content as offensive, or implicitly, by observing viewing patterns such as the user scrolling past certain content faster than they do past other content, or hitting the &apos;skip&apos; or &apos;stop&apos; buttons on videos. One may argue that platforms like YouTube are already doing some version of this, but there are two big differences. Firstly, the data is maintained in a silo on the YouTube servers and not by the users themselves. Secondly, the goal of these platforms is self-serving: to increase viewership on the platform, not to do what the user wants.</p><p>While the above process gets better over time, users could also adopt pre-built filter sets created by people they trust as starting points for their own filters, to get immediate relief. 
This set then gets modified by the user&apos;s own interactions with various platforms over time, using the on-device AI engines that are becoming more and more prevalent.</p><h2 id="h-step-3-viewing-content" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Step 3: Viewing content</h2><p>This is probably the simplest part. There can be a timeline titled &quot;For You&quot;, a view that applies the preference filters to the content being viewed, and a second timeline titled &quot;Unfiltered&quot;, which shows all the content if the user so wishes. This is not even a new mechanism, as most content platforms already use such devices.</p><h1 id="h-a-solution-for-the-age-of-ai-and-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A Solution for the Age of AI and Web3</h1><p>While thinking through the solutions here, it was clear that solving the problem of censorship requires not the imagining of technologies that don&apos;t yet exist but the repurposing of those that already do. So why hasn&apos;t it been done yet? I believe there are two reasons. Firstly, as anyone in software can tell you, the more input you require from a user, the lower your adoption rate is going to be. So a self-learning system like AI or ML needed to exist for this to work; otherwise it would have been humanly impossible to build a system that works. Secondly, the filters and learnings that a company like YouTube develops about its users&apos; preferences were guarded as the company&apos;s intellectual property and not shared even with the user. 
So a decentralised system like Web3 needed to exist for such a system to work across the web.</p><p>We&apos;re living in some really interesting times, when these kinds of ideas can suddenly come to life because of the fertile ground created by AI and decentralisation, which have combined to put the user at the centre of the technological universe.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>There are obviously many more steps and nuances to consider when building this system: for example, how would it work for children and those who are not as tech savvy, or how would it handle content that is clearly illegal across all nations? But I&apos;m sure there are smarter people who could propose better ideas based on the foundation laid out here. I just didn&apos;t want to write an article pointing out all the problems without at least proposing a potentially better solution. If you are interested in discussing any of the ideas proposed here a little further, please do reach out.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>web3</category>
            <category>technology</category>
            <category>censorship</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/cdd159626e44ff9e1ab738b8580cdb4bd4c0db90070a17c6b601c1b14408ef91.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Seven Structural Elements to Keep in Mind for Web3 UX Design]]></title>
            <link>https://paragraph.com/@reddxf/seven-structural-elements-to-keep-in-mind-for-web3-ux-design</link>
            <guid>YNPbH7P1jttwq0CmgI2r</guid>
            <pubDate>Sat, 11 Feb 2023 23:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<h1 id="h-introduction" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Introduction</h1><p>Web3 is a decentralised internet powered by cryptography and blockchain technology that allows for transacting with any entity in the world without the need for trusted middlemen. It has specific uses in fields where a trusted third-party middleman may prove to be corruptible, inefficient or expensive. But far too often, UX designers start from Web2 equivalents, creating poor user experiences.</p><h1 id="h-unique-elements" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Unique Elements</h1><p>The differences between these two structures affect the UX of applications built on top of them in several ways, and the following are some of the main ones that UX designers must account for while designing Web3 applications:</p><h2 id="h-1-decentralisation-is-slow-and-expensive" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1. DECENTRALISATION IS SLOW AND EXPENSIVE</h2><p>Since nodes in Web3 are distributed and of varying technical capability, data transmission across the network takes far longer than in centralised systems. These processes can cause delays at the application layer, making applications slow and expensive. To combat this, transactions are bunched together before being written to the blockchain, as a single write operation is far less expensive than several transactions writing the same amount of data.</p><p>It’s therefore essential for a UX designer to decide which data needs to be written to the blockchain, and also when in a user’s session that needs to happen. 
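</p><p>The batching trade-off can be made concrete with a small sketch. The chain client and its write_batch method below are hypothetical, purely for illustration:</p>

```python
class SessionLikeBuffer:
    """Collects 'likes' locally and commits them to the chain as a
    single batched write at the end of the user's session."""

    def __init__(self, chain_client):
        self.chain = chain_client      # hypothetical client with write_batch()
        self.pending = []

    def like(self, post_id):
        self.pending.append(post_id)   # instant in the UI, no chain write yet

    def end_session(self):
        if self.pending:
            # One write of N likes instead of N writes of one like each.
            self.chain.write_batch(self.pending)
            self.pending = []

class FakeChain:
    """Stand-in for a real blockchain client, to show the call pattern."""
    def __init__(self):
        self.writes = []
    def write_batch(self, items):
        self.writes.append(list(items))

chain = FakeChain()
buf = SessionLikeBuffer(chain)
for post in ["p1", "p2", "p3"]:
    buf.like(post)
buf.end_session()
print(len(chain.writes))    # 1  -- a single batched write
print(chain.writes[0])      # ['p1', 'p2', 'p3']
```

<p>Where exactly to flush the buffer (session end, a timer, or a size limit) is itself a design decision that depends on the application. </p><p>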
For instance, in building a decentralised version of Instagram, would you record every “like” as it occurs, or collect them all and commit them only at the end of the user’s session?</p><p>Many applications also adopt a Web2.5 design approach, where less critical interactions such as “likes” are stored on a centralised server for faster processing, while more critical data, such as who posted which picture, is written to the blockchain.</p><p>But there may be other approaches that suit your application, and the right solution will depend entirely on the specifics.</p><h2 id="h-2-the-blockchain-has-the-data" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">2. THE BLOCKCHAIN HAS THE DATA</h2><p>One of the most surprising ideas to me was this: in the Web2 world, an application typically bundles both a front-end and a back-end/storage layer, delivered together as an app. However, with data and storage openly available on a blockchain, the user interface and user experience is the application. This means that someone else could write a new UX layer for the same data, and the user could choose which application interface they prefer, losing nothing when switching between apps.</p><p>So, what’s the knock-on effect of this? It makes the UX layer super competitive, and the quality of experiences will only improve going forward. An app no longer stands for how well it manages to hoard and guard user data, but for how good the experience is.</p><h2 id="h-3-its-community-owned" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">3. 
IT’S COMMUNITY OWNED</h2><p>Unlike products and services in Web2, which are owned by corporations driven by the interests of investors and shareholders, Web3 has “projects” and “protocols” that are owned by communities, not individuals, owing to the philosophy of decentralisation.</p><p>While there are participants attempting to amass wealth rather than distribute it to the community, more and more projects are evaluated on their decentralisation and ownership distribution. This impacts how things are designed: in traditional Web2 UX design, a good designer tries to balance the user’s needs against the needs of the company, whereas in Web3, the success of the project is aligned with its users, as they hold the tokens and assign them their value.</p><p>With data no longer locked into an app residing in a database, it brings honesty to the equation. Apps that prioritise user interests will win in the future. Apps that use dark UX patterns to deceive users will not be successful in Web3.</p><h2 id="h-4-no-communication-channels-exist-yet" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">4. NO COMMUNICATION CHANNELS EXIST, YET</h2><p>In Web3, there are no established email or chat tools for peer-to-peer communication, and no de-facto protocol that can be assumed to be universally subscribed to, such as email in Web2. Although tools like Push Protocol and EthMail exist, they are not yet widely used and cannot be relied on for communication. This means that if someone buys something from your Web3 marketplace, there is currently no way to follow up with them to update them on the status of their purchase or to fix any issues; they must come back and check with you.</p><p>Asking for their email address is not always an option, as it could compromise their pseudonymity, allowing you to connect their email address to their wallet and potentially uncover all their blockchain activities. 
Therefore, it is important to design around this, depending on the context.</p><p>On one project, where a certain process would take upwards of a few days to complete, I allowed the user to download a calendar appointment that stated when they needed to come back and check on the status of the process. You just need to be innovative about solving these kinds of problems.</p><h2 id="h-5-the-wallet-is-everything" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">5. THE WALLET IS EVERYTHING</h2><p>In Web3, everything is based on cryptography using the concept of a pair of public and private keys. This provides users with pseudonymity, but not anonymity, in an otherwise transparent and open world. These keys are essential not only for authorising payments but also for verifying wallet ownership, interacting with smart contracts and linking all purchases to the wallet. Essentially, everything is connected to the wallet.</p><p>However, this user-facing aspect of Web3 is largely unfamiliar to most people. Although it is vitally important to understand, most people do not know what these keys are. Losing or exposing the private key can result in the loss of all assets tied to it. Therefore, this area requires considerable design intervention to help people understand the significance of the keys, the right way to generate key pairs and the proper ways to store and protect them.</p><p>Then comes the usage of the wallet itself. It is used to sign messages from applications to authenticate yourself and to authorise transactions made on your behalf. But these prompts keep popping up asking your permission for something, reminding me of the early days of Windows 10, where the user kept getting asked to authorise even the most banal tasks. 
A lot more polish is required so that the user’s attention is sought only when absolutely needed.</p><p>The technology is still so new that sometimes messages from the underlying stack are exposed to the user, who is presented with nothing but hexadecimal code and asked to sign the transaction. Wallet developers have a lot of work to do to make this far more user-friendly.</p><h2 id="h-6-smart-contracts-rule" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">6. SMART CONTRACTS RULE</h2><p>This is another concept in Web3 that has no equivalent in Web2. Smart Contracts are essentially computer programs that run on a network and are created to enforce an agreement between two parties. This means that two parties can rely on a transaction completing as agreed without the need for third-party guarantors and any associated expenses or delays, making transactions more efficient.</p><p>However, since Smart Contracts are written in code, they can be difficult for people to understand. Although the code behind these contracts can be viewed on blockchain explorers, more work needs to be done to make them accessible to the mainstream. When this happens, it will revolutionise everything!</p><h2 id="h-7-censorship-is-done-differently" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">7. CENSORSHIP IS DONE DIFFERENTLY</h2><p>In Web2, platforms owned by individuals or small groups have another problem: they are also held responsible for implementing censorship on them. 
In theory this sounds okay, even the obvious thing to do, but if you explore the question more closely, there are many problems with this supposition:</p><ul><li><p>They may very well be ill-suited to the job, and may end up censoring things that shouldn’t be censored and excluding ideas that shouldn’t be excluded.</p></li><li><p>They have no option but to impose their world views on their systems, and if they have a very large worldwide user base, they will essentially be imposing their views on all of it, superseding what each culture considers right.</p></li><li><p>Because they are a central authority that can make these changes, they may be pressured by external individuals or groups to impose censorship that they themselves may not agree with.</p></li><li><p>There is always the question of “But what about the children?” that sneaks into every conversation about censorship, and since there is no argument against that, the lowest common denominator is set as the bar.</p></li></ul><p>The political leanings of a platform have been the reason for new ones to be set up, creating silos and echo chambers that only confirm already-held beliefs, without public discourse or a meeting of minds in the middle. This technology has the potential to fracture societies instead of bringing them together.</p><p>Web3 has a different architecture, without central authorities that can censor others, making it a haven for free-speech absolutists. Adults who believe they should be exposed to all kinds of ideas in order to progress in life can do so without censorship. Individuals who want a certain level of censorship can choose to use a filter and block certain types of content. This allows individuals to make their own choices without resorting to the drastic measures of stifling someone’s voice or de-platforming them. Although it is not a whole solution, it is a step in the right direction. 
A lot of work remains to be done in this field to make this UX better.</p><h1 id="h-conclusion" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Conclusion</h1><p>A decentralised internet can solve many of the issues that plague the world right now and give rise to new ways of interacting with others around the world. For this to become a reality, it needs mass adoption, and the only thing stopping it is complexity. User Experience designers can make a huge difference to adoption by making things much simpler for the everyday user. So if you’re a designer on this journey, kudos to you: you’re making a huge difference to the world.</p><p>However, things need to be evaluated from the ground up, based on the underlying decentralised architecture, in order to deliver superlative experiences that are on par with or even better than the Web2 versions, as this is indeed possible.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>web3</category>
            <category>ux</category>
            <category>design</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/c79eb434cde850935f1d1ff9e4be7ca7e56f5d16320ec1f05d70af641abba434.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A Curated List of Videos to Understand Crypto]]></title>
            <link>https://paragraph.com/@reddxf/a-curated-list-of-videos-to-understand-crypto</link>
            <guid>kz1jEYmFyiKRRSvtW6sc</guid>
            <pubDate>Mon, 10 Oct 2022 22:00:00 GMT</pubDate>
            <description><![CDATA[This list of videos will help you understand crypto and Web3 better.]]></description>
            <content:encoded><![CDATA[<h1 id="h-the-basics-of-crypto" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Basics of Crypto</h1><p>The following is a collection of great videos to watch to understand the field of crypto and Web3. I hope to keep this updated with new things I learn.</p><h2 id="h-economics-concepts-yes-start-here" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Economics Concepts (Yes, start here)</h2><ul><li><p>What is money? Short version, medium version and the long version.</p></li><li><p>How is money created? <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/mzoX7zEZ6h4">https://youtu.be/mzoX7zEZ6h4</a></p></li></ul><h2 id="h-blockchain-concepts" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Blockchain Concepts</h2><ul><li><p>What is a blockchain? The concept and the visual example and a demo</p></li><li><p>What is Bitcoin? <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/bBC-nXj3Ng4">https://youtu.be/bBC-nXj3Ng4</a></p></li><li><p>Consensus mechanisms: Proof of Stake vs. Proof of Work and other mechanisms</p></li><li><p>What are smart contracts? Visual explanation</p></li><li><p>What are NFTs? 
Basic and more detailed</p></li></ul><h2 id="h-additional-information" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Additional Information</h2><ul><li><p>The evolution of the web: With Marc Andreessen and Chris Dixon</p></li><li><p>Mental models to understand Web3: By Chris Dixon</p></li><li><p>A good overview of the crypto industry: By Chris Dixon</p></li><li><p>Future of Applications: By Balaji Srinivasan</p></li></ul><h1 id="h-glossary-of-terms-to-understand" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Glossary of Terms to Understand</h1><p>Wallets: The software that allows you to interact with a blockchain. It stores your private key, which is used to sign transactions, and your public key, which others can use to verify them.</p><p>Hardware Wallets: Wallets that run on independent, dedicated devices. They are considered more secure, as they cannot be hacked remotely.</p><p>Private and Public Keys: Long hexadecimal numbers used to cryptographically sign transactions. The private key is used to sign a message (or transaction), and the matching public key can be used to verify that signature.</p><p>Coins: A unit of account on a blockchain, usually used to pay for conducting operations on that chain. For example, Bitcoin, Ethereum and Atom.</p><p>Tokens: A unit of account created for a specific use that is based on the code of a different coin and therefore doesn’t have a blockchain of its own. For example, MATIC, DAI, UNI, etc.</p><p>Alt Coins: Anything that’s not Bitcoin.</p><p>Shit Coins: Any coin that doesn’t have good fundamentals and is instead probably built simply to pump its value and dump it on the market. Dogecoin is a prime example.</p><p>MetaMask: A software wallet that is used as a browser plug-in.</p><p>HODL: “Hold On for Dear Life”. An investment strategy where people buy a coin and hold on to it for many years instead of selling it.</p><p>WAGMI: “We Are Going to Make It” or “We All Gonna Make It”. 
A positive affirmation.</p><p>BTFD: Buy The F*&amp;king Dip. An indication to buy a token or coin when prices are low.</p><p>BUIDL: “Build”, spelt differently to indicate building in Web3.</p><p>Bear Market: When sentiment in the market is low.</p><p>Bull Market: When sentiment in the market is high and new all-time highs are usually being hit.</p><p>Layer 1 Chains: Blockchains that are the final settlement layers and provide their own security.</p><p>Layer 2 Chains: Blockchains built on top of L1 chains that usually do not provide their own security or finality, and are instead built to handle fast and cheap transactions.</p><p>App Chains: Application-specific blockchains, as opposed to general-purpose blockchains.</p><p>DEX: Decentralised Exchanges, where one coin or token can be exchanged for another. They usually have no KYC requirements. For example, UniSwap, SushiSwap, dYdX.</p><p>CEX: Centralised Exchanges, where the same operations as on DEXs can take place. They usually have KYC requirements and are simpler for new users.</p><p>Stable Coins: Any coin whose value is pegged to the US dollar or another fiat currency. They have the properties of tokens so that they can be used in crypto transactions (as fiat currencies can’t be used in this way), but their value is based on the real-world value of the fiat currency they represent.</p><p>DeFi: Decentralised Finance. An industry within crypto that is trying to rebuild the infrastructure of traditional finance institutions like banks. 
They enable people to deposit their currencies and earn interest, lend their money to others, borrow money from others, etc.</p><p>CeFi, TradFi: Slang for centralised finance and traditional finance.</p><p>Staking: The act of securing a Proof of Stake network by depositing coins into staking pools.</p><p>Delegation: If the amount of a coin or token you hold is too low to meet the minimum staking requirements, you can delegate your tokens to someone else, who will stake them once they have gathered the required amount.</p><p>Liquidity Pools: Pools of paired tokens, deposited by users, that a DEX trades against.</p><p>Automated Market Makers: The pricing algorithms that set exchange rates based on the ratio of tokens in a liquidity pool.</p><p>The Merge: The point at which the Ethereum blockchain switched its consensus mechanism from Proof of Work to Proof of Stake. It was a major milestone and therefore got a name.</p><p>ERC20: A coding standard for tokens built on the Ethereum blockchain. Examples include OP, ARB, UNI and others.</p><p>ERC721: A coding standard for NFTs built on the Ethereum blockchain. It is usually used when there is only one copy of an NFT, called a “1 of 1”, often representing ownership of expensive objects.</p><p>ERC1155: A coding standard for NFTs built on the Ethereum blockchain. It is usually used when the creator produces multiple copies of an NFT.</p>]]></content:encoded>
            <author>reddxf@newsletter.paragraph.com (sharanx)</author>
            <category>crypto</category>
            <category>web3</category>
            <category>education</category>
        </item>
    </channel>
</rss>