<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>0x125c</title>
        <link>https://paragraph.com/@0x125c</link>
        <description></description>
        <lastBuildDate>Wed, 22 Apr 2026 12:17:57 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>0x125c</title>
            <url>https://storage.googleapis.com/papyrus_images/f1ff3b2b456038e76e14ab4807c9181624a5ceec35bdbafaff50b3dc2abf134f.jpg</url>
            <link>https://paragraph.com/@0x125c</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[8 billion stories and the preservation of Humanity’s Will]]></title>
            <link>https://paragraph.com/@0x125c/8-billion-stories-and-the-preservation-of-humanity-s-will</link>
            <guid>wAJpkZJrdqLgCSsA4eBO</guid>
            <pubDate>Wed, 11 May 2022 01:19:55 GMT</pubDate>
            <description><![CDATA[ProposalHere’s my half-baked proposal — that all 8 billion of us start writing down our thoughts and experiences, from our deepest, darkest secrets to our most inane, frivolous musings, and use time-lock cryptography to send these fragments of our lived experiences into the future. That all of humanity begins fostering a culture that promotes the continual recording and eventual transmission of collective human wisdom for posterity’s sake. That we recognize the inherent uniqueness and value o...]]></description>
            <content:encoded><![CDATA[<h2 id="h-proposal" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Proposal</h2><p>Here’s my half-baked proposal — that all 8 billion of us start writing down our thoughts and experiences, from our deepest, darkest secrets to our most inane, frivolous musings, and use time-lock cryptography to send these fragments of our lived experiences into the future. That all of humanity begins fostering a culture that promotes the continual recording and eventual transmission of collective human wisdom for posterity’s sake. That we recognize both the inherent uniqueness and value of each person’s lived human experience and the importance of recording the collective human experience, as an idea and as a lived practice.</p><p>I view this proposal as a moral imperative that was previously impossible from a technological standpoint, now made possible through near-ubiquitous penetration of personal computing devices, planetary-scale computation, and sufficiently decentralized cryptographic protocols.
What was a technological impossibility for all of human existence is now a possibility, limited only by social and cultural adoption — put simply, it is possible for all 8 billion of us to digitally record, securely store, and conditionally transmit our thoughts into the future.</p><h2 id="h-feasibility" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Feasibility</h2><p>The feasibility of this proposal depends on four factors, three of which are technical and more or less solved, with the last unsolved factor being social and cultural adoption:</p><ul><li><p>[1] <strong>near-ubiquitous penetration of personal computing devices</strong></p><ul><li><p>smartphone and internet penetration in 2022 is around 80% and 57%, respectively, depending on what data source you consult</p></li><li><p>smartphone penetration varies widely by country (Sources: NewZoo, Wikipedia) and, given that the countries with lower smartphone/internet penetration also tend to be the most underrepresented in the media, I would argue this factor is more solved for developed countries and less solved for underdeveloped countries</p></li></ul></li><li><p>[2] <strong>planetary-scale computation</strong></p><ul><li><p>centralized cloud service providers (CSPs) have more than enough computation and storage capacity to handle this proposal</p></li><li><p>assuming that it takes 1 megabyte to store 500 pages’ worth of text and all 8 billion of us write down 500 pages’ worth of thoughts, this proposal would require only 8PB (petabytes) of storage in total</p><ul><li><p>depending on the storage provider, storing 8PB of data costs somewhere on the order of $280,000 (2019 quote of $35,000/PB from Backblaze); while this does not take into account extra storage costs associated with data duplication or ingress/egress fees, I’d bet that every major CSP would eat the full cost of storage/transport to be able to boast about storing it</p></li><li><p>8PB of
data would fit inside hard drives amounting to a medium-sized closet</p></li><li><p>it must be said that 1MB per person is unrealistic given that I would expect people to store image and video files in addition to text files; however, the extra storage requirements would be at least partially offset by employing file compression algorithms</p></li></ul></li></ul></li><li><p>[3] <strong>sufficiently decentralized cryptographic protocols</strong></p><ul><li><p>four concepts are worth mentioning here: time-lock encryption, dead-man switches, Proof of Humanity, and decentralized storage</p><ul><li><p>[i] <em>time-lock encryption</em></p><ul><li><p>the idea of &quot;timed-release cryptographic protocols&quot; was first proposed by Timothy May (a founding member of the Cypherpunks mailing list) in 1993; May explicitly mentions the use case of “’In the event of my death’-type messages” (see: “<em>Time-Release Crypto</em>” by <strong>Timothy May</strong>)</p></li><li><p>the goal of “timed-release crypto” is “To encrypt a message so that it can not be decrypted by anyone, not even the sender, until a pre-determined amount of time has passed” (Rivest et al., 1996)</p></li><li><p>a comprehensive and rigorous survey of time-lock encryption is beyond the scope of this note, but gwern provides a succinct overview of different techniques in this space (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gwern.net/Self-decrypting-files"><em>https://www.gwern.net/Self-decrypting-files</em></a>) [I believe using “Bitcoin as a clock” or some other method of decentralized timekeeping (e.g., a decentralized oracle network or a self-executing program that tracks Ethereum block height) is the most promising for our purposes]</p></li></ul></li><li><p>[ii] <em>dead-man switches</em></p><ul><li><p>From <em>Sarcophagus: Decentralized dead-man switch — a self-sovereign inheritance protocol for the metaverse</em> by
<strong>Felix Machart</strong>: “The user that encrypts a set of data and saves it on Arweave for access by the recipient at ‘resurrection time’ (e.g. in 1h or 1 year) in case she fails to perform a sign of life (the re-wrapping message).”</p></li><li><p>while the primary utility of this mechanism stems from self-executing wills to redistribute digital assets to loved ones, it can also be used to ensure execution of other programs, like sending messages to loved ones (e.g., if there are things that you want people in your life to know in the case of an accident)</p></li></ul></li><li><p>[iii] <em>Proof of Humanity</em></p><ul><li><p>proving personhood will be required to ensure that the collective human diary is not Sybil-attacked for political and ideological purposes</p></li><li><p>various solutions towards this end are currently being developed, from iris scanning to decentralized verification protocols to trust/reputation-based protocols to POAPs; all of these solutions can be composed together (i.e., the diary repo will accept some combination of PoH and/or POAPs and/or DID and/or reputation score, etc.)</p></li></ul></li><li><p>[iv] <em>decentralized storage</em></p><ul><li><p>the existence of decentralized file storage mechanisms and protocols, powered by cryptoeconomic assurances of data availability and persistence, provides an alternative to storage guarantees from centralized CSPs</p></li></ul></li></ul></li></ul></li><li><p>[4] <strong>social and cultural adoption</strong></p><ul><li><p>While the aforementioned technological primitives more or less solve for the technical feasibility of this proposal, the social feasibility of the proposal is as yet unsolved. Why doesn’t our society do a better job promoting the recording and sharing of lived human experience?</p></li></ul></li></ul><p>How these journal entries will ultimately be used is still an open question.
It may be that people in 2100 can spend their free time reading through the billions of [anonymized or pseudonymized (using ZK identity mechanisms)] journal entries either randomly or with particular interests in mind — e.g., individuals could explore “What did people in 2050 think about issue XYZ?” by reading their experiences. It’s very likely that these journal entries would also be ingested into ML models. The level of permissioning would ultimately be up to the individual recording their experiences, to a certain extent — I might permit only my family to see the entirety of my entries, but I might permit a ZK (zero knowledge) ML model [or another kind of privacy-preserving ML technique like one utilizing homomorphic encryption], for example a “Private Input, Public Model” (see: <strong>0xParc</strong>, “<em>ZK Machine Learning</em>”), some restricted level of access to the data (e.g., sentiment, word frequency, certain standardized variables, etc.) in my journal entries. Suppose that, in the future, there exists a hypothetical protocol that allows each verifiably unique individual (to prevent Sybil attacks, in this case to minimize data harvesting) to unlock one random (or non-random and interest-dependent) journal entry a day/week/month. While the identity of the journalist could be obscured, one would have to expect that this data would be recorded unless we discover some way to enforce “read but don’t record” for content. More on this as this proposal goes from half-baked towards fully-baked.</p><h2 id="h-the-imperative" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Imperative</h2><p>I call this distributed “project” a moral imperative because I believe that our unfiltered and uncensored human experiences will be of value to a humanity that is accelerating towards what many (including myself) expect to be a radically different future.
There will be a day in the relatively (in historical terms) not too distant future when the last person who grew up without ubiquitous global information and communication technologies (i.e., smartphones) dies, their lived experiences potentially forever lost to time. Unless, that is, they are written down.</p><p>In more concrete terms, somewhere on the order of 150,000 people around the world die every day (Sources: WEF, worldpopulationreview.com) from various causes — I suspect that only a minuscule percentage of these people have recorded their lived experiences for posterity. I am writing and publishing this note following Mother’s Day. My mother is still around; my mother’s mother is no longer with us. Towards the end of my grandmother’s life she was afflicted with dementia and she would often fail to recognize who I was between my visits to see her. I often think about all the stories and wisdom she could have shared with me had I met her earlier, had I been at an age where I began considering such things. I often think about all the things my mom can’t find the right opportunity to tell me and vice-versa – the vicissitudes of daily life seem to perpetually get in the way of candid, sincere heart-to-hearts, often growing into a garden of words forever unspoken and unshared. This is a tragedy in the most real sense of the word.</p><p>What I am advocating for is a widespread cultural practice of journaling, with a particular focus on the elderly, each of whom contains a lifetime’s worth of wisdom but often has no one to impart it to. The potential civilizational value of recording our collective human knowledge and wisdom for current and future generations is immeasurable; the cost is de minimis.
More concretely, imagine the sense of purpose and community that a project like this could restore to the elderly – letting them know that their perspectives and life experiences are valued and could be preserved throughout time would be a good in and of itself.</p><p>I believe that future generations will look back at this moment in human history and wonder why we did not start engaging in this practice of preservation earlier. Such a project is at or nearing technical feasibility; the real barrier to adoption is communicating the idea that people have thoughts, feelings, and stories worth passing on to future generations. I think this is an idea worth communicating and I hope you will help me do so.</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
        </item>
        <item>
            <title><![CDATA[Velocity and the n-body problem]]></title>
            <link>https://paragraph.com/@0x125c/velocity-and-the-n-body-problem</link>
            <guid>W8nvQ8BwEItxBWtKzZ2O</guid>
            <pubDate>Tue, 01 Mar 2022 17:48:31 GMT</pubDate>
            <description><![CDATA[Part 4 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation: O...]]></description>
            <content:encoded><![CDATA[<p>Part 4 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><em>A Cloudy History: Four Histories of Cloud Computing</em></p></li><li><p><em>Primer on the Economics of Cloud Computing</em></p></li><li><p><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></p><ol><li><p><em>Initial Positions and Laws of [Competitive] Motion</em></p></li><li><p><em>Mass and the Law of [Economic] Gravitation</em></p></li><li><p><strong><em>Velocity and the n-body problem</em></strong></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><p>Table of Contents for <strong><em>Velocity and the n-body problem</em></strong>:</p><ul><li><p><em>The n-body problem: An Allegory for the Cloud</em></p></li><li><p><em>The n-body Problem: The Cloud’s Food Chain and Trillion Dollar Frenemies</em></p><ul><li><p><em>The Cloud’s Food Web</em></p></li><li><p><em>Public Cloud Value Matrix</em></p></li><li><p><em>Trillion Dollar Frenemies</em></p></li></ul></li><li><p><em>Velocity and Vertical Integration in the Hyperscale Cloud</em></p><ul><li><p><em>Feels Like We Only Integrate Backwards</em></p></li><li><p><em>Shingeki no Hyperscaler: Keep Integrating Forward</em></p></li></ul></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/da6a4e78f2cd8bd1e46a0b647c3893adf85902ca7127fe265d6288a73c76bb9e.png" alt="[From NASA via EarthSky.org] Artist’s rendering of a supermassive black hole consuming a star" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From NASA via EarthSky.org]
Artist’s rendering of a supermassive black hole consuming a star</figcaption></figure><blockquote><p>The fundamental axiom of economics is the human mercenary instinct. Without that assumption, the entire field would collapse.</p><p>— <strong>Cixin Liu</strong>, <em>The Dark Forest</em></p></blockquote><hr><h2 id="h-the-n-body-problem-an-allegory-for-the-cloud" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The n-body problem: An Allegory for the Cloud</h2><p><em>Let’s imagine a fictional place — we’ll call it “ECON”.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/37a6666b281a0f558e26da2501537e5f8f20a3bd066a9ece0c87ef183a95ddac.png" alt="From Microsoft: Multi-engine n-body gravity simulation" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Microsoft: Multi-engine n-body gravity simulation</figcaption></figure><p>Imagine that we’re playing through the starter tutorial for an immersive, multiplayer VR game from an alien civilization called <em>The n-Body</em> <em>Problem.</em></p><p>From our VR headset, we log into a command center in which we can observe this fictional game Universe, which operates under Universal laws similar to, but different from, those under which our own Universe operates. In this fictional Universe, matter is either dumb, smart, or dark — dumb matter passively adheres to the same laws of physics that exist in our Universe, smart matter is able to actively exert force to shape its surroundings and form connections with other smart matter, and dark matter is an inferable but unobservable theoretical construct that is some function of the dumb and smart mass within a system.
Furthermore, dumb matter can be converted into smart matter and smart matter can degrade back into dumb matter, but trends indicate that the ongoing transition from dumb matter to smart matter is a monotonically increasing function.</p><p>Gamers here can make a career out of placing bets on the amount of dark matter that they estimate is present in particular regions of Space. Since different market participants have differing information and there exist various methodologies for evaluating the measure of this theoretical construct, [<em>some like to measure flows, others estimate using multiples of comparable masses, and still others just buy and sell whatever they see is popular</em>] there exists an active in-game financial market of people expressing their opinions through buying/selling at whatever price others are willing to sell/buy.</p><p>Our observation of this <em>otherrrr</em> Universe is limited (so far) to a single galaxy which is composed of various stellar systems. In this galaxy, which has been named “ECON”, there exists one particular galactic sector, the “IT” sector, that has been growing faster than the other sectors in ECON through its faster absorption of surrounding matter in Space. As an aside, the ECON galaxy itself has been accumulating mass and energy from the broader Universe at a rate of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.imf.org/en/Publications/WEO/Issues/2021/07/27/world-economic-outlook-update-july-2021">5-6% per year</a> (Earth years, that is).</p><p>The defining characteristic of this “IT” sector is the relatively high ratio of smart mass in the system, mass that seems to be forming connections to the concentrations of smart mass (smart mass communicate by streaming patterns of electrons at each other) in the Economy’s other systems and is also accelerating the pace at which stellar system “IT” is gathering matter.
While ECON has historically exhibited dispersed concentrations of carbon-based smart matter, there has been a relatively recent burst of silicon-based smart matter in “IT” that has better computing, networking, and storage capabilities than the carbon-based smart matter that still dominates ECON (we suspect, too, that there might be other forms of smart matter in this Universe that exist outside of our current observational capabilities). Observations show that larger, concentrated masses of carbon-based smart matter have historically utilized and instrumentalized silicon-based smart matter to communicate, but increasing concentrations of silicon-based smart matter seem to be networking and communicating without carbon-based mediation, leading some to believe that silicon-based smart matter will one day replace carbon-based smart matter.</p><p>In the game’s lore, only a few decades following the formation of the “IT” sector, observers noticed an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Interstellar_cloud">interstellar Cloud</a> (which we’ll just call “The Cloud”) within the “IT” sector that contained the highest mass and concentration of highly networked, silicon-based smart matter observed in ECON.
Whereas the <strong>T</strong>otal <strong>A</strong>stronomical <strong>M</strong>atter (TAM) of IT is growing at a rate of only <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gartner.com/en/newsroom/press-releases/2021-10-20-gartner-forecasts-worldwide-it-spending-to-exceed-4-trillion-in-2022">5-10%</a> per year given IT’s already large size, the size of the Cloud is growing at a rate of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gartner.com/en/newsroom/press-releases/2021-04-21-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-grow-23-percent-in-2021">around 20%</a> per year through the steady absorption of less connected, less smart matter within “IT” — some people predict that the Cloud will eventually engulf the entirety of “IT”, leaving some observers with the view that the TAM of “The Cloud” will converge towards the TAM of IT.</p><p>The Cloud’s center of gravity is comprised of three bodies of highly concentrated masses of smart matter, both carbon and silicon-based — the first of these bodies to coalesce is currently the largest of the three, with the second-largest body possibly approaching the size of the first and the smallest body lagging behind the other two in terms of mass. The carbon-based smart matter of these three bodies is exceptionally proficient at the rotation of variously shaped silicon-based matter required for the instrumentalization of silicon for computation and so these carbon-based masses are widely known as “shape rotators” to observers.
While shape rotators exist throughout ECON, there seem to be especially high concentrations of shape rotators within the Cloud and particularly at these three compan-, I mean <em>celestial bodies</em>.</p><p>Normally, a system of three sufficiently large celestial bodies comprised of dumb mass would devolve into a chaotic system, but since these three bodies are smart, they exist in a stable configuration because, unlike theoretical three-body systems in physics textbooks in which unchanging mass bodies indiscriminately follow the <em>physical</em> laws of motion and gravity, these smart masses in the Cloud follow <em>competitive</em> and <em>economic</em> laws that seem to be maximizing for the quanta of dark matter (darK matter henceforth referred to as a <em>capital</em> “K”) that the bodies possess. These three bodies are pushed and pulled by each other, constantly jostling for a position that gives them access to more flows of smart matter (carbon-based and silicon-based) as well as dumb matter to convert into networked silicon-based smart matter that feeds into the maximization of their bodies’ overall estimated measure of “K”.</p><p>Stellar systems in other galactic sectors like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.investopedia.com/terms/c/cpg.asp">CPG</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.investopedia.com/ask/answers/05/investmentbankfig.asp">FIG</a>, and NRG (particularly in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.datacenterdynamics.com/en/news/greenpeace-details-how-major-cloud-companies-are-partnering-big-oil/">OIL subsector</a> of the NRG sector) increasingly network their internal smart matter with the Cloud’s smart matter, utilizing information derived from the Cloud’s computational capabilities to better accumulate K for their own sectors of the ECON galaxy and
transferring a portion of their increased flows back to the Cloud as payment for utilizing the Cloud’s smart mass — the Big Three are increasingly competing with each other to connect with mass bodies in other sectors of ECON.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.figma.com/file/wyf5VPpxThGTIDDvgMArRV/map-of-n-body-problem">https://www.figma.com/file/wyf5VPpxThGTIDDvgMArRV/map-of-n-body-problem</a></p><p>The overall mass of the Cloud, despite being anchored by the Big Three, is comprised of assemblages of other masses which in aggregate are comprised of two to three times more K than the Big Three <em>despite having much less matter, both dumb and smart.</em> These dematerialized, <em>“softer”</em> bodies — dubbed Interstellar Volumes, or ISVs — began emerging in the Cloud in the wake of the Big Three’s formation, orbiting around the Big Three to utilize computational resources and sell resultant information to other sectors of ECON whilst simultaneously attempting to avoid getting too close to the trio lest they risk being devoured. While each of the Big Three would prefer not being disintermediated by these softer satellite bodies, each of the three is too preoccupied with competitively positioning against the other two to <em>also</em> fight back against the soft bodies, which end up paying to use their computational resources anyways. It should be noted here that other mass bodies with large concentrations of silicon-based smart matter exist in the Cloud but these other compani-, <em>masses</em>, have decided that selling the utilization of their silicon isn’t the optimal path towards accumulating K.
The Big Three are therefore referred to as the “<em>Public</em> Cloud” in that they network with and sell to whichever mass bodies will pay them for use of their silicon-based smart matter.</p><p><strong><em>Each of these n bodies in the Cloud, from the dematerialized soft masses to the Big Three, are variable-mass bodies seeking to maximize their own long-term K and are constantly engaged in positioning themselves to benefit from flows of information and matter that serve this purpose, potentially at the expense of other mass bodies in this ecosystem — this is the Cloud’s n-body problem.</em></strong></p><p>Observers of ECON from our Universe place bets on the predicted K value of the various mass bodies in ECON and use historical measures of position, velocity, and mass with their knowledge of ECON’s laws of motion and gravity so as to try and predict values of K better than other observers. Financially interested observers are motivated to iterate on their calculations of these masses’ dynamically-adjusting velocities because it informs them as to whether or not their favored mass bodies might eventually occupy a strategically enviable position that precludes existential risk (i.e., collision with other bodies, consumption by a much larger celestial body, etc.) while also maximizing for the accumulation of matter that results in larger K values.</p><p>During the Cloud’s nascent stages, prevailing consensus among observers was that the Big Three would end up sucking up all of the Cloud’s matter leaving no room for independent softwar- <em>softer</em> mass bodies within the Cloud. 
However, it turns out that the Cloud’s smaller and more numerous independent bodies are able, through the inherent nimbleness of dematerialization and their relatively smaller size, to leverage the computational infrastructure provided by the Big Three and quickly reposition themselves in order to exploit competition <em>between</em> the Big Three and stay competitively viable in their own niches. The predictability of the competition among the Big Three has enabled the rise of an ecosystem of other masses in the Cloud that don’t need to construct their own silicon-based smart matter because the central three-body structure ensures favorable terms that wouldn’t exist if the Cloud was dominated by a single body of mass.</p><p>But despite the Cloud’s structural stability, each of the ecosystem’s mass bodies is seeking relative advantage and reforming its <em>chains</em> of flows and connections to accelerate in trajectories with more matter to absorb and more unintermediated connections with masses in other sectors. Some of these <em>n</em> bodies even form connections with each of the Big Three and seek to position themselves as neutral, equidistant intermediaries that try to utilize select aspects of each of the Big Three’s computing capabilities, siphoning off a proportion of flows for themselves. Each body seeks to engulf the connections and flows of other bodies (observers of the Cloud like to say they try to “eat” each other) for their own benefit, in order to reconstruct the networked chain of flows to their own advantage and minimize their own contributions to mass bodies that aren’t their own.
Among the more consequential celestial events that are ongoing within the Cloud ecosystem are ...</p><ul><li><p>Satellite bodies of the Big Three are increasingly exploring methods of minimizing their reliance on any particular one of the three in a wholesale attempt to minimize the proportion of flows that they have to transfer to the trio for use of their silicon</p></li><li><p>To increase the capabilities of their silicon-based smart matter, the Big Three are improving the design of their silicon (unfortunately the rate of improvement for carbon-based smart matter is slower by orders of magnitude) and the connectivity properties within separate concentrations of smart matter</p></li><li><p>To stave off disintermediation from the dematerialized, more agile masses that they themselves enable, the Big Three are attempting to form stronger connections with the masses in other sectors that are looking for the informational capabilities provided by silicon-based smart matter by tailoring to the unique properties of mass bodies in these sectors</p></li><li><p>The silicon-based smart matter of the Big Three is dispersing itself to connected locations on other mass bodies in what observers are calling “hybrid cloud” and “distributed cloud”; recently, an <em>n</em>-th body that is comprised of a connected network of mass at various edges of ECON has begun populating these edge locations with increasing concentrations of silicon-based smart matter</p></li></ul><p>...
and much, much more.</p><p>At this point the tutorial ends and a line of text appears:</p><blockquote><p>For observers of the Cloud, competitive and economic principles serve as the foundation for fundamental analysis of the motion, dynamic masses, and changing positions and velocities of the <em>n</em>-bodies.</p></blockquote><p>The goal of <em>The n-Body Problem</em> is this: Use your knowledge to predict the movement, positioning, and mass of these celestial bodies.</p><blockquote><p><em>We invite you to log on again.</em></p></blockquote><hr><h2 id="h-the-n-body-problem-the-clouds-food-chain-and-trillion-dollar-frenemies" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The n-body Problem: The Cloud’s Food Chain and Trillion Dollar Frenemies</h2><p><em>What does it mean to “eat”? Who’s eating who?</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/97d65e86562db5781b2bbb82320ea95b2ceedc5891458f562064890bc19025fc.png" alt="frenemy goals 2022 xoxo " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">frenemy goals 2022 xoxo</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/228ab79a0c67005be53dc8445815b1ca49c78aa36e3a7fa9b1f4e03c03e1d565.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>As penance for the contrived allegory I subjected you to in the previous section, let me begin this section in concrete fashion. 
Whether explicit or implicit, most discussions about the Cloud industry can usually be reframed as discussions about Christensen’s <strong><em>law of conservation of attractive profits</em></strong> and specifically about which stages and subsystems of the technology value chain are undergoing commoditization versus decommoditization — companies that are well-positioned in stages that are being commoditized are unable to capture the value of the revenues that flow through them; companies that control differentiated/decommoditized, “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Skate-to-Where-the-Money-Will-Be-809a0ccc27c54cfaae2b9fc295350878"><em>interdependent links in the value chain capture the most profit</em></a>.”</p><p>Discussions around whether <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/">the cost of cloud represents a Trillion Dollar Paradox</a>, questions around <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.platformonomics.com/2019/11/dining-preferences-of-the-cloud-and-open-source-who-eats-who/">“Who eats who?” between Cloud providers and open source software</a>, proclamations about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.fabricatedknowledge.com/p/the-tech-monopolies-go-vertical"><em>The Tech Monopolies Going Vertical</em></a> (more on this later), or predictions about the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://erikbern.com/2021/11/30/storm-in-the-stratosphere-how-the-cloud-will-be-reshuffled.html">reshuffling of the Cloud</a> (i.e., reshuffling the Cloud’s <strong><em>value chain</em></strong>) — these are all discussions about the relative power of different factions 
within the Cloud ecosystem and how one faction’s relative power is de/increasing, thereby justifying lower/higher margin capture along the value chain. When people talk about X eating Y, what they’re ultimately talking about is X treating Y as a modularized, “good enough” commodity and integrating Y into X’s subsystems to form a new interdependent, differentiated value proposition.</p><h3 id="h-the-clouds-food-web" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">The Cloud’s Food Web</h3><p><em>Eat and be eaten.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/274e99e5a432a89bbe4cc381070012b1897fec5f8cdf106dd32871fb99ac08d1.jpg" alt="From @cats_conscious_creations" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From @cats_conscious_creations</figcaption></figure><p>Marc Andreessen’s contribution to the tech canon, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://future.a16z.com/software-is-eating-the-world/"><em>Why Software Is Eating The World</em></a>, while unfortunately not giving an explicit definition of what it means for X to be “eating” Y, leaves hints as to viable interpretations of what it means for software to “eat” the world — in the article he writes that “<em>Software is also eating much of the value chain of industries that are widely viewed as primarily existing in the physical world.</em>” and also refers to markets and industries being eaten by software.</p><p>It was difficult to portray “eating” in my <em>n</em>-body allegory because the closest analogy to eating in a celestial context [<em>that is, when black holes consume stars in a process usually accompanied by </em><a target="_blank" rel="noopener noreferrer nofollow ugc" 
class="dont-break-out" href="https://en.wikipedia.org/wiki/Spaghettification"><em>spaghettification</em></a>] is inappropriate for the “eating” being done in the Cloud, which is the simultaneous commodification of competitors’ offerings [<em>as well as the offerings of players in adjacent value chains</em>] and reintegration of proprietary solutions into interdependent subsystems along the value chain — the three/<em>n</em>-body analogy wasn’t well-suited to illustrating this idea. A celestial body analogy that is more appropriate but impossibly contrived would be if the company with control over an interdependent link in the value chain were represented by a black hole that was continually spaghettifying [<em>ayyy i’m making up words ova here</em>🤌] the outer gaseous layers of multiple stars [<em>“stars” that are themselves gathering flows of mass from other sources</em>], with the stars representing companies/subsystems being commodified and mass accumulation by the black hole serving as the metaphorical value capture along the value chain. In other words, the more accurate analogy would be a dynamic, fractal web of value chains.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/28d1041d2ea2c5bfba30141c307e4f54d3c5abc122a911c366ed9c8872cec8a6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The “stars” can live on indefinitely; their growth will just be limited by the black hole capturing most of the excess mass in the system. Therefore Y doesn’t have to completely vanish to have been “eaten” by X (though this <em>can</em> be the case as in Andreessen’s example of Amazon bankrupting Borders). 
X bankrupting Y isn’t necessarily X “eating” Y, nor is X acquiring Y equivalent to X “eating” Y — IBM didn’t “eat” anything through its acquisition of Red Hat but AWS has certainly taken big bites out of open source without bankrupting or acquiring companies that monetize through OSS. Rather, X can “eat” Y by keeping Y alive in order to produce increasingly commodified modules that X reintegrates into X’s own processes and subsystems to capture value/margin along the value chain. The symbiote-parasite spectrum (where the one doing the “eating” = parasite) is a more useful analogy than predator-prey. Cloud “eating” open source does not mean AWS or Azure write a new operating system kernel to replace/“kill” Linux, nor does multi-cloud [tools/software/companies] “eating” cloud mean that the cloud dies in <em>any</em> world in which multi-cloud ends up succeeding. That which is “eaten” continues to live on, earning returns that <em>theoretically</em>, eventually converge towards their cost of capital until fortunes reverse or the host dies and the “parasite”/symbiote finds other nutrient sources.</p><p>[<em>Sidenote: The word “parasite” has negative connotations but this is wholly unintended; AWS commodifying basic compute through its “parasitic” use of OSS has unlocked untold surplus value into the world and, like, what else is OSS there for if not for enabling the actual democratization of large-scale computation?</em>]</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d0c751ff3101fb600da28e828d361468859706254541632e2e7dfe2b0e93abd5.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/dd289163caaa5e0ba969205152247caa0447256e4ab834ed48f44f32894631ae.png" alt="I don’t agree with how this depiction frames the cloud landscape but it does a good job conveying the idea of the Cloud ecosystem’s food chain in a memorable way. I wonder what a fish-stuffed fish dish would taste like?" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">I don’t agree with how this depiction frames the cloud landscape but it does a good job conveying the idea of the Cloud ecosystem’s food chain in a memorable way. I wonder what a fish-stuffed fish dish would taste like?</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a9ccdfc7ea52089c33793393127f5c0331f5053edfbc04aff5355d770cc70c6a.png" alt="[From an Nvidia blog post]  A sign of how pervasive Andreessen’s “X eating Y” metaphor has become in tech." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From an Nvidia blog post] A sign of how pervasive Andreessen’s “X eating Y” metaphor has become in tech.</figcaption></figure><p>So the “eating” metaphor is imperfect but intuitive in much the same way that the “Cloud” is, and any phrase in which the “Cloud” is “eating” this or being eaten by that ends up sounding like technobabble for people who don’t already have mental models of the concepts underlying “Cloud” and “eating” at their disposal. The purpose of this exposition was to attempt to draw connections between the “X eating Y” throughline and the commoditization-differentiation &amp; modularity-interdependence frameworks we’ve been working with so far. 
Looking through a Christensenian lens, what all of these articles/posts/tweets ...</p><ul><li><p><em>[software ← <s>world</s>]</em> [Aug ‘11] <strong>Marc Andreessen</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://future.a16z.com/software-is-eating-the-world/">Why Software is Eating the World</a></p></li><li><p><em>[AI ← <s>software</s>]</em> [May ‘17] <strong>Nvidia</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blogs.nvidia.com/blog/2017/05/24/ai-revolution-eating-software/">The AI Revolution is Eating Software</a></p></li><li><p><em>[GPUs/TPUs ← <s>linear algebra</s> ← <s>deep learning</s> ← <s>machine learning</s> ← <s>AI</s> ← <s>software</s>]</em> [Nov ‘17] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/reza_zadeh/status/929957611348180993">@Reza_Zadeh</a></p></li><li><p><em>[multi-cloud ← <s>cloud</s> ← <s>open source</s> ← <s>software</s>]</em> [May ‘18] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/bassamtabbara/status/996899379964399616">@bassamtabbara</a></p></li><li><p><em>[OSS ← <s>cloud computing</s>]</em> [Aug ‘18] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://coss.media/oss-will-eat-cloud-computing/">Open Source Will Eat Cloud Computing</a></p></li><li><p><em>[public clouds ← <s>OSS</s>]</em> [Nov ‘19] <strong>Platformonomics</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.platformonomics.com/2019/11/dining-preferences-of-the-cloud-and-open-source-who-eats-who/">Dining Preferences of the Cloud and Open Source: Who Eats Who?</a></p></li><li><p>[<em>cloud ← <s>software</s></em>] [Apr ‘20] <strong>Bessemer Venture Partners</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://www.bvp.com/atlas/state-of-the-cloud-2020">State of the Cloud 2020</a></p></li><li><p><em>[cloud ← <s>software</s>]</em> [May ‘21] <strong>nikitha suryadevara</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.lafemmenikitha.com/2021/05/cloud-is-eating-software/">Cloud is eating software</a></p></li><li><p><em>[Software-defined, hardware-accelerated infra ← <s>original software-defined infra</s>]</em> [Aug ‘21] <strong>Nvidia</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blogs.nvidia.com/blog/2021/08/23/software-ate-world-hardware/">Software Ate the World — That Means Hardware Matters Again</a></p></li></ul><p>... are saying when they say “X is eating Y” is just that X has modularized Y and Y is becoming increasingly commoditized so that the majority of excess returns will eventually go to X instead of Y if the dynamic is left unchanged.</p><p>To be clear, I’m not saying that “Cloud eating Software” means that software companies sacrifice all of their margins to cloud providers [<em>they don’t, despite IaaS costs constituting 50+% of SaaS COGS; also I don’t think cloud is eating software but software is becoming cloud-like</em>] but that this <em>would be</em> the case <em>if the dynamic was left unchanged</em> and if not for the countervailing trends of “Multi-cloud eating Cloud” or “OSS eating IaaS/PaaS”, which help modularize cloud infrastructure at the same time that Cloud is eating Software. These are continual <strong><em>processes</em></strong> that offset and interact with each other.</p><p>Since nature doesn’t typically exhibit examples of one animal eating another while itself being eaten by two other animals (who are, themselves, being chewed on by even others), this “eating” example is only accurate if the predator-prey relationship is unidirectional, totalizing, and discrete — it never is. 
A more faithful representation of the competitive reality might be a fractal web of celestial mass bodies, each of which is being spaghettified while also spaghettifying others, with the mass body gaining mass at the fastest rate representing the player that controls interdependent links in the value chain — but that’s hardly a useful marketing metaphor that people can instantly <em>get</em> like “eat” and “cloud”.</p><p>Instead, here’s what I think might be a more <em>useful</em> and <em>explanatory</em> representation of the Cloud ecosystem that is inspired by this graphic of <em>The Dis-Integration of the Computer Industry</em> featured in both <em>Skate to Where the Money Will Be</em> and <em>The Innovator’s Solution ...</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cb535eaaf5758713e5d925188feb0e6098351ff0e591ba469fea7d53bdbed967.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... that:</p><ol><li><p>decomposes the value chain of the public cloud [<em>inspired by Christensen</em>]</p></li><li><p>lays out a simplified chessboard for the cloud ecosystem [<em>inspired by Porter’s five forces</em>]</p></li></ol><h3 id="h-public-cloud-value-matrix" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Public Cloud Value Matrix</h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8a1d939753cd852a41d5b88d7d33b99fdc7805c77396d18006b8bf0698b5de62.png" alt="See Notion or the embedded link below to access the associated Figma file." 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">See Notion or the embedded link below to access the associated Figma file.</figcaption></figure><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.figma.com/file/5DD8fQpdSF1YTLFcpeqh8u/public-cloud-value-matrix?node-id=0%3A1">https://www.figma.com/file/5DD8fQpdSF1YTLFcpeqh8u/public-cloud-value-matrix?node-id=0%3A1</a></p><p>The diagram on the right uses the hyperscale triopoly as the central frame of reference in order to illustrate the various fronts of competition that are occurring — from the perspective of the hyperscalers, they are being challenged ...</p><ol><li><p>[from <strong><em>inside</em></strong>] by the <strong>other hyperscale cloud service providers</strong></p></li><li><p>[from <strong><em>above</em></strong>] by <strong>cloud-agnostic SaaS vendors</strong> looking to leverage their status as digital Switzerlands and modularize cloud infrastructure; Snowflake is my stand-in example for both large ISVs looking to modularize IaaS+PaaS as well as companies that are selling multi-cloud software (HashiCorp, Upbound, RackN, etc.)</p></li><li><p>[from the <strong><em>edge</em></strong>] by <strong>Cloudflare</strong>, which recently stated its intention to be the “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.protocol.com/enterprise/cloudflare-r2-storage-aws">fourth major public cloud</a>” and is actively trying to modularize basic centralized compute and storage and shift the basis of competition to the edge where latency, governance, serverless start-up times, etc. 
matter (and where it has advantages)</p></li><li><p>[from <strong><em>below</em></strong>] by <strong>Nvidia</strong>, which wants to treat entire data centers as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nextplatform.com/2020/08/04/datacenter-is-the-new-unit-of-compute-open-networking-is-how-to-automate-it/">the new units of compute</a> (modularization) in order to sell programmable fabrics, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nextplatform.com/2022/01/24/meta-buys-rather-than-builds-and-opens-its-massive-ai-supercomputer/">HPC solutions</a>, and more GPUs — differentiated solutions in a world where the entire data center is modularized</p></li><li><p>[from <strong><em>outside</em></strong>] by <strong>companies who want to pay less for cloud services</strong>, which is <em>every</em> company but <em>particularly</em> tech-first companies (aka companies w/ big % of COGS dedicated to cloud) who have mature knowledge and expertise about the Cloud (aka companies whose initiatives re: Cloud increasingly become cloud cost management) and for whom a reallocation of engineering resources towards redesigning their cloud architecture is marginally beneficial vs the project opportunity set [<em>i.e., top line growth is slowing, emphasis on bottom up is rising</em>]</p></li></ol><p>This diagram is essentially a kind of Porter’s Five Forces <em>[buyers, suppliers, substitutes, new entrants ⇒ S&amp;P500, NVDA, SNOW, NET, respectively]</em> that the value chain decomposition diagram elaborates upon by identifying exactly <em>where</em> along the value chain the different players have competing interests with respect to modularization/commoditization vs interdependence/integration.</p><p>Here’s how to read the value chain diagram:</p><ul><li><p>the x and y-axis are similar; towards the origin (left, down) indicates going backwards in the value chain and going 
away from the origin (right, up) indicates going forwards in the value chain; backwards is the land of witchcraft magic with impenetrable moats (TSMC, ASML); forwards is where the average individual user sits (you, me)</p></li><li><p>the unidirectional arrows between companies on the x-axis indicate the flow of goods and services <em>[Nvidia sells chips to AWS which sells EC2 and S3 instances to Snowflake which sells data analytic tools to Disney which sells streaming services to You]</em></p></li><li><p>the value-add processes on the y-axis are grouped in the following categories: “witchcraft” [<em>aka stuff there’s no chance anyone in the Cloud will ever touch</em>], “IaaS”, “PaaS”, “SaaS”, and “interface &amp; edge” [<em>a provisional category with admittedly poor conceptual borders but fuck it we ball</em>]</p></li><li><p>the dotted green boxes that span left-to-right indicate that the company engages in the process; overlapping solid red or blue blocks indicates that the company is seeking to treat that particular process as a modular (blue) or interdependent (red) component with respect to its own offerings</p></li><li><p>stars indicate where along the value chain that company is particularly focused, and possibly views as a source of existing or potential strategic advantage and/or differentiation; question marks indicate areas that I’m currently unsure about and/or expecting to learn more about [<em>e.g., I’m personally unsure how Cloudflare views their own data centers and edge locations wrt modularity because I simply haven’t done enough research on them yet</em>]</p></li><li><p>the purple boxes linked to the header “areas of competing interests” are those areas along the cloud value chain where the different players are either competing ([A], [B], and [C]) or have diametric positions with respect to modularity ([D], [E]); the easiest example of this diametric opposition is between Snowflake, whose business model is centered around 
modularizing cloud infrastructure, and AWS, who would prefer to stave off commoditization of their infrastructure for as long as possible through an integrated offering that modularizes infrastructure only insofar as they are the primary beneficiaries of that modularization</p></li><li><p>the numbers accompanying [A] through [E] indicate which parties are competing within the specified segment — e.g., “[E][one][four][five]” indicates that Nvidia and the hyperscalers are competing within chip design and also that independent companies would prefer to treat new chip designs as modular, even though the chip design companies will insist that they are differentiated</p></li></ul><p>Let me elaborate on what I think might be the most confusing point here, which is that the red or blue blocks for each player are an indication of what they’re <em>hoping</em> to make interdependent or modular. “XYZ Co.’s” [<em>representative of Cloud customers</em>] blue block is an indication that they would <em>prefer</em> for the Cloud to be modularized and commoditized, although things won’t necessarily turn out that way. Same story with AWS and Azure’s solid red block — both these players would <em>ideally</em> be the beneficiaries of an interdependent Cloud value chain for which they are the primary suppliers for a fully integrated solution, but the realities of competition will be the ultimate determinant.</p><p>While the value chain diagram can certainly be improved and iterated upon and you could easily argue that I’ve mischaracterized at least a couple of things, I think much of the utility comes from the framing and making explicit and visible the divergent interests within the cloud ecosystem. 
Mapping all of this out makes it easier for me to now <strong><em>finally</em></strong> write out my views on the Cloud ecosystem without having to provide qualifications for every point for fear of not providing sufficient context:</p><p>[<strong><em>Sidenote</em></strong>: <em>These points don’t map perfectly (i.e., one-to-one) onto the “areas of competing interest”</em>]</p><ul><li><p>[A &amp; C] <strong>data as differentiator</strong> ||| Everyone wants more data. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.wired.com/story/no-data-is-not-the-new-oil/">Data is not the new oil</a>. Oil is a finite, non-reusable commodity whereas data is infinite and perpetually reusable — your Google search history data can theoretically be cross-correlated, recombined, re-interpreted, and so on until the heat death of the Universe. The proliferation of primary data from IoT edge devices, health and fitness trackers, VR-based eyetracking, continuous urban surveillance in physical and digital spaces, terrestrially and celestially-focused satellites, etc. and the creation of [<em>an exponentially higher quantity of</em>] derivative, secondary data from the cross-correlation and interrelation of primary data means there will be a lot more data in the future. 
The battle over which parties can exert control (<em>de jure</em> <strong>AND</strong> <em>de facto</em>) over data creation and collection is one that is already being fought on multiple fronts on the largest of scales [<em>i.e., will AI-powered </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arena.org.au/cybernetic-capitalism-with-chinese-characteristics/"><em>Cybernetic Capitalism with Chinese Characteristics</em></a><em>, strengthened by relaxed cultural attitudes concerning privacy and a clear CBDC strategy, prove a long-term strategic </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foreignpolicy.com/2021/12/11/bitcoin-ethereum-cryptocurrency-web3-great-protocol-politics/"><em>techno</em></a><em>/geo-political advantage against the United States and their more privacy-oriented culture + ambiguous </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.federalreserve.gov/publications/files/money-and-payments-20220120.pdf"><em>digital dollar</em></a><em> strategy</em>]. On the relatively smaller scale of the tech industry, the importance of data as <em>the</em> fundamental differentiator can be seen in ...</p><ul><li><p>... the importance that literally <em>every</em> Big Tech company is placing on consumer touchpoints, especially in tech that captures biometric data like integrated software/hardware for healthcare and fitness and AR/VR hand/eye/body-tracking. Also important are IoT consumer touchpoints (Nest, Ring, Alexa, etc.) 
that presage the future of ubiquitous computing and control over the hardware layer of consumer computing interfaces — Facebook = Oculus, Apple = iPhone/Glasses, Amazon = Fire Phone/Kindle, Microsoft = Surface/HoloLens, Google = Chromebook/Pixel/Glass; I would not be surprised if Amazon eventually leveraged their Graviton (and other) chip designs to produce AI-accelerated, Linux-based laptops as an adjacency to their AWS strategy (with less clear but nevertheless present adjacencies to advertising, media, and gaming). For me, what’s most interesting here is the potential for Microsoft to create new, clearly differentiated use cases involving [HoloLens + their Azure IaaS + creator/developer platform] because of the potential of this integrated offering to attack both Facebook (through having better integrations with enterprise data already in Azure and not in Facebook’s datacenters) and AWS (through creation of consumer/enterprise compute demand that is less easy to modularize through multi-cloud given the latency requirements of AR/VR use cases, which allows for a premium for full-stack integration)</p></li><li><p>... the competition between Snowflake (and other non-hyperscaler software players, but particularly Snowflake) and hyperscaler CSPs for the creation of a [i.e. <em>the</em>] data <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.trustradius.com/compare-products/aws-data-exchange-vs-ibm-infosphere-vs-snowflake">exchange/marketplace/platform</a> for enterprises. I’m still structuring my thinking about this but my initial position has been that SNOW wins in this domain relative to AWS because of Snowflake’s credible neutrality wrt data lock-in (hey we’re multi-cloud!) and the fact that they’re not conjoined to Amazon the Industry Killer (hey we’re not Amazon!). 
Also interesting are new crypto-based data platforms like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://oceanprotocol.com/technology/marketplaces">Ocean Protocol</a> and Chainlink (which <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.coindesk.com/business/2021/12/07/ex-google-ceo-eric-schmidt-joins-oracle-provider-chainlink-as-strategic-advisor/">Eric Schmidt serves as a strategic advisor</a> to as of Dec ‘21) and rapidly advancing [<em>in terms of primary research and intra-crypto awareness within certain well-informed niches, but not yet consumer-friendly use cases</em>] cryptographic technologies like zero-knowledge proofs [<em>and other, possibly non-cryptographic, differential privacy technologies</em>] that might allow users to control the use of their personal data for AI/ML model training/inference akin to Gboard’s federated learning-based training on steroids — i.e., I’d be okay with submitting extremely sensitive personal data to be used for model training and inferencing purposes <strong><em>given certain cryptographic guarantees</em></strong> about personal identity, data retention, etc. Privacy guarantees around <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6447070/">genetic information</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/1904.06809.pdf">eyetracking datasets</a> will become really, really, <em>really</em> fucking important in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.coindesk.com/layer2/2022/01/19/meta-leans-in-to-tracking-your-emotions-in-the-metaverse/">near future</a></p></li><li><p>... 
and other areas in which <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf"><em>The Unreasonable Effectiveness of Data</em></a> and the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gwern.net/Scaling-hypothesis#blessings-of-scale">relative importance of data</a> [<em>in quantity but also diversity — i.e., data about what time you drink water every day for your entire life is probably less useful than data about exactly what you ate every day for a year</em>] vs model architecture come into play</p></li></ul></li><li><p>[B &amp; D] <strong>the network is the computer</strong> ||| Software-defined networking, Nvidia’s claim that the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nextplatform.com/2020/08/04/datacenter-is-the-new-unit-of-compute-open-networking-is-how-to-automate-it/">datacenter is the new unit of compute</a>, Cloudflare’s proclamation that <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.cloudflare.com/the-network-is-the-computer/">The Network is the Computer</a>, “data center disaggregation”, and continued penetration of hybrid/multi/distributed cloud deployment models — all of these are convergent movements marching towards this seemingly inevitable gestalt of a single, ubiquitously connected global computer.</p></li><li><p>[D]: <strong>multi-cloud</strong> ||| Adoption of multi-cloud solutions for cloud cost management and semi-automated cloud services provisioning [<em>i.e., Customers asking “Which combination of hyperscale CSP vs colo vs hybrid will provide the best cost-performance given a certain set of constraints?”</em>] will, at some point, inflect and accelerate. 
This will be driven by the natural maturation of cloud customers, more multi-cloud software solutions that make this deployment method easier and easier, and continued erosion of vendor lock-in from similar disruptive tactics like Google’s release of Kubernetes and Cloudflare’s introduction of R2, which has zero egress fees. Some of the implications of this change <em>might</em> include ...</p><ul><li><p>progressive modularization of basic cloud services + minimization of data lock-in and other switching costs that make data more liquid <em>could</em> allow GCP to have structurally higher margins through selling a higher mix of specialized instances (VCU, TPU) that <em>[</em><strong><em>might</em></strong><em>, I’m not sure]</em> have better margin profiles than commoditized CPU instances; that enterprises might split workloads among different CSPs has <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Inside-Wells-Fargo-s-multicloud-strategy-d6036d7140c345bbb2d2608966b36f69">precedent</a></p></li><li><p>actualization of the long-awaited commoditization of basic compute and storage that leads hyperscalers to pass through cost improvements to SaaS companies → SaaS companies, as a whole, eventually have structurally lower COGS <em>[i.e., if AWS operating margins fall from its historical 25-30% to 20%, it would be “cloudless” SaaS companies who are the other side of AWS’s margin “collapse” (oh no, not 20% operating margins!)]</em> but this only means barriers to entry for being a SaaS co are lowered if they can’t fortify their moat; the extent of margin uplift would depend on the nature of the cloud services being utilized, for it may very well be that differentiated hardware instances like AI accelerators can even expand their margins, depending on the market demand for AI [<em>from more AI start-ups, more use of AI applications from enterprises and governments, or, my personal favorite thesis, the rise of more 
citizen data scientists and creators</em>]</p></li></ul></li><li><p>[D] <strong>ISVs vs hyperscalers</strong> ||| As a result of increased modularization through IaaS’s oligopolistic structure [<em>good for customers relative to a monopolistic structure</em>], multi-cloud adoption, and eventual maturity of cloud customers, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://erikbern.com/2021/11/30/storm-in-the-stratosphere-how-the-cloud-will-be-reshuffled.html">the cloud will be reshuffled</a> such that cloud vendors are “<em>basically leasing capacity in their data centers through an API</em>” and independent software vendors, given the advantages of cloud neutrality and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Independent-Software-Providers-and-the-Cloud-Vendors-7753e1795da6406a842a3fe78cbcbf96">product focus</a>, will gain dominance in the software layer where they will inhabit their respective niches as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.platformonomics.com/2021/12/the-cloudless-cloud-company/">cloudless cloud companies</a>. On the cost side, in much the same way many companies have evolved to assume a cloud-first approach towards their tech stack, companies will eventually begin with a multi-cloud approach <em>if</em> the cloud ecosystem evolves such that there are companies focused on providing multi-cloud solutions, which I think it will. 
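The margin pass-through scenario sketched a few bullets up [<em>AWS operating margins compressing from ~25-30% toward 20%, with “cloudless” SaaS companies on the other side</em>] can be made concrete with a toy model — every number below is invented for illustration, and the price = cost / (1 − margin) relation is a deliberate simplification:

```python
# Hypothetical sketch of how compression in a hyperscaler's operating margin
# could pass through to a SaaS customer's COGS. All figures are illustrative.

def saas_gross_margin(revenue, cloud_cost, other_cogs):
    """Gross margin given the cloud bill and other cost of goods sold."""
    return (revenue - cloud_cost - other_cogs) / revenue

revenue = 100.0          # SaaS revenue (arbitrary units)
other_cogs = 10.0        # support, third-party licenses, etc.
cloud_cost_today = 25.0  # cloud bill while the provider earns ~30% margin

# If competition pushes the provider's margin from ~30% to ~20%, and we
# naively model price = underlying_cost / (1 - margin), the price for the
# same workload scales by (1 - 0.30) / (1 - 0.20) = 0.875, i.e. ~12.5% cheaper.
cloud_cost_future = cloud_cost_today * (1 - 0.30) / (1 - 0.20)

gm_today = saas_gross_margin(revenue, cloud_cost_today, other_cogs)
gm_future = saas_gross_margin(revenue, cloud_cost_future, other_cogs)
print(f"cloud bill: {cloud_cost_today:.1f} -> {cloud_cost_future:.2f}")
print(f"SaaS gross margin: {gm_today:.1%} -> {gm_future:.1%}")
```

Under these made-up inputs, a ten-point hyperscaler margin cut only moves the SaaS gross margin from 65% to about 68% — consistent with the bullet’s point that pass-through alone lowers barriers to entry more than it fortifies anyone’s moat.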
One of the strongest arguments against the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/">trillion dollar paradox</a> argument laid out by Sarah Wang and Martin Casado is that the cost of defocusing engineers to pursue cost saving measures is the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/a16z-Infra-6-The-Cost-of-Cloud-vs-Repatriation-ce419d27019f4e1580494034b5a02646">opportunity cost of not pursuing growth initiatives</a> with valuable engineering resources. The idea here is that there is a huge market opportunity to create multi-cloud abstraction layers and there will <em>eventually</em> come a time where those companies who have previously solidified their multi-cloud cost strategy successfully share their experience with the uninitiated, creating demand for solutions (from start-ups, other credibly neutral parties, etc.) that make it easier to architect cloud neutrality from the get go, and more and more engineers internalize the ethos of vendor agnosticism from the start — more on this later</p></li><li><p>[E] <strong>hyperscaler verticalization</strong> <em>[1][2][4][5]</em> ||| As a response to long impending “commoditization” of basic cloud services, hyperscalers are pursuing backwards integration into chip design <em>[and other things in the world of atoms that can only be pursued with the benefit of scale like </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://news.microsoft.com/innovation-stories/datacenter-liquid-cooling/"><em>immersion cooling servers</em></a><em>]</em> to both save on costs (i.e., spread their R&amp;D costs for new designs to achieve the net effect of lower LT costs) and differentiate their infrastructure offerings. 
Hyperscalers are also pursuing forwards integration through cultivating partner ecosystems and tailoring their stack for industries that don’t have sufficient concentrations of Cloud expertise. One interpretation of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/As-re-Invent-kicks-off-AWS-CEO-Adam-Selipsky-charts-key-role-of-partners-in-a-new-cloud-world-a2300b9727b8450ea6fff57bf1a87a5b">all</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Thomas-Kurian-CEO-Google-Cloud-at-the-Deutsche-Bank-2021-Technology-Conference-2676d7bfae274e09aa6da44cac1e5059">this</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Microsoft-Inspire-2021-Keynote-with-Satya-Nadella-9e08963d8d8b4062b9bd05a31313a525">rhetoric</a> <em>[from Selipsky, Kurian, and Nadella, respectively]</em> about “partners” and “partner ecosystems” is the recognition of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.wsj.com/articles/battle-for-the-cloud-once-amazon-vs-microsoft-now-has-many-fronts-11627221600">shift in strategic leverage</a> away from hyperscalers and towards buyers, SaaS/ISVs (frenemies), and “global systems integrators” (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blogs.partner.microsoft.com/mpn/the-key-to-industry-wide-digital-transformation-global-system-integrators/">aka consulting firms</a>) in a world of mix-and-match where AWS isn’t the only option anymore (a Straussian reading of Selipsky saying “<em>Our fundamental strategy around our partners is unchanged</em>” during the latest re:Invent sounds like “Things have changed but let’s act like we’ve been playing nice all along”) — more on this in the next subsection.</p></li></ul><hr><h2 id="h-trillion-dollar-frenemies" class="text-3xl font-header
!mt-8 !mb-4 first:!mt-0 first:!mb-0">Trillion Dollar Frenemies</h2><p><em>Cost of cloud. ISVs vs hyperscalers. Partners.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4b871035864b9a1c1328cc810cabfbebadd55c6247918c24dcca6c924f29b24c.png" alt="“The truth is, you’re the weak ... and I am the tyranny of hyperscale Cloud. But I’m trying Ringo. I’m trying real hard to be an ecosystem partner.”" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">“The truth is, you’re the weak ... and I am the tyranny of hyperscale Cloud. But I’m trying Ringo. I’m trying real hard to be an ecosystem partner.”</figcaption></figure><p>Here’s the “cost of cloud” debate ...</p><ul><li><p>[May ‘21] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/The-Cost-of-Cloud-a-Trillion-Dollar-Paradox-a79a35c291b84fdc986da02be5eaa572">The Cost of Cloud, a Trillion Dollar Paradox</a></p></li><li><p>[May ‘21] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/zackkanter/status/1399013516107948037">https://twitter.com/zackkanter/status/1399013516107948037</a></p></li><li><p>[Jun ‘21] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Cloud-Cost-with-Martin-Casado-6d09f5b30f774ed0a6f8e612c4abb33b">Cloud Cost with Martin Casado</a></p></li><li><p>[Jul ‘21] <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/a16z-Infra-6-The-Cost-of-Cloud-vs-Repatriation-ce419d27019f4e1580494034b5a02646"><strong>a16z Infra #6: The Cost of Cloud vs. Repatriation</strong></a></p></li></ul><p>... 
between [<em>what I’m facetiously dubbing</em>] the <strong>Repatriarchy</strong> and the <strong>Hypershillers</strong> [<em>primarily embodied by </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/martin_casado"><em>Martin Casado</em></a><em> and </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/zackkanter"><em>Zack Kanter</em></a><em>, respectively</em>] in broad strokes:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e7bd16f96bbecc5626e477a152a1f62ce37605da8ab94132c8c859c8aca05446.png" alt="Note: See Notion to untoggle and view the hidden quotes." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Note: See Notion to untoggle and view the hidden quotes.</figcaption></figure><p>... because repatriation (and more specifically, designing in optionality/flexibility through modular, vendor-agnostic design) can actually free up resources to pursue more growth opportunities. I should be explicit at this point for those who haven’t read or have forgotten the original a16z article and say that the “Repatriarchy” isn’t arguing for or against repatriation, rather that “<em>infrastructure spend should be a first-class metric</em>” and that the “trillion dollar paradox” is centered around managing this cloud spending throughout the company lifecycle.
As stated by Sarah Wang and Martin Casado, the paradox is this: <em>You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.</em></p><p>Customers have historically had a frenemy relationship with the major public clouds, especially tech-first independent software vendors (ISVs) who essentially outsource most or all of their infrastructure needs to AWS/Azure/GCP whilst, at the same time, potentially competing with them. The “won’t AWS just copy it?” narrative is well-trodden territory given that it’s been in the minds of investors for over a decade now. It’s clear now, however, that ISVs have particular advantages relative to the hyperscalers ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a0eb8de3257545cb41eb0ab7c87a478a38ee4d21d1eee51782805fd270fe79ca.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>ISVs now inhabit niches in the Cloud ecosystem that are somewhat defensible from the hyperscalers, who have been so engaged in maintaining competitiveness against each other that maintaining dominance over the entire ecosystem has become untenable — this is the nature of an “ecosystem”. 
To Offringa’s point on “the spectrum of differing approaches”, the strategic calculus for hyperscalers has increasingly become one in which GCP thinks to itself “If I can be the best ecosystem partner for OSS/ISVs then I can take share from AWS and Azure” and AWS thinks to itself “I’ll lose out if I’m not a good ecosystem partner to ISVs and Azure and Google Cloud are.” Although many software products compete with the hyperscalers’ offerings, if customers are going to use Snowflake or MongoDB anyways, it might as well be through their infrastructure rather than their competitors’ infrastructure. In a similar way to when a certain European country was too busy fighting on two separate fronts <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=O0MMvvcYhO4">to invade Switzerland</a> and ended up partnering with them instead, the hyperscalers’ <em>détentes</em> with ISVs in order to combat one another have evolved into full-blown <em>partnerships</em>.
As to be expected from the disruptor with the least to lose, Google outlines this relationship in the relatively most straightforward manner:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7cf049334ac207ce7f77d41ecd08ef6f9d9c9fa5e38b6a36481cbd3b85a54ec7.png" alt="From Alphabet Inc at Goldman Sachs Technology &amp; Internet Conference 2020" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Alphabet Inc at Goldman Sachs Technology &amp; Internet Conference 2020</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/216785f262e25589695313deff05cfae156b6e1721b68c289c57620c30745606.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... which, well, seems to have more layers to it. For example, AWS feeling the need to say that “<em>Our fundamental strategy around our partners is unchanged</em>” might be reinterpreted as AWS’s tacit recognition of how “partners” now have a level of strategic leverage that needs to be negotiated with. Selipsky’s answer to “Well, are you going to compete with your ecosystem?” is clearly not an unequivocal “No”. Selipsky instead answers that “<em>there are many, many thousand of explosive opportunities for partners in our ecosystem</em>” which I read as “Yes.
Yes, we are going to compete with those in our ecosystem who are in areas that we consider to be strategic to AWS.” This is not me passing a value judgement on AWS — if you’re an AMZN bull then these interpretations might be reassuring if your long thesis depends on AWS exerting control over the SaaS layer [<em>or they might not if your long thesis depends on AWS becoming a good ecosystem partner, it all depends</em>].</p><p>It should be noted that both of these public explications on partner strategy by Kurian and Selipsky are from late 2021 (September and late November, respectively) which, along with Satya’s heavy emphasis on partner ecosystems throughout 2021, can be interpreted as hyperscaler recognition of the strategic need for more allegiances. Once again, these types of competitive dynamics are only possible because of cloud infrastructure’s oligopolistic structure — AWS would not be talking about a “partner ecosystem” in the same way if they had a sole monopoly over cloud infrastructure, nor would there be as many ISVs to populate the ecosystem in the first place.</p><p>This industry-wide frenemy dynamic has the effect of contributing to the modularization of cloud infrastructure from cloud-based software and, in my estimation, acts as one of the motivators for hyperscalers’ continuing efforts towards vertical integration — this is the focus of the next subsection.</p><hr><h2 id="h-velocity-and-vertical-integration-in-the-hyperscale-cloud" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Velocity and Vertical Integration in the Hyperscale Cloud</h2><p><em>On backwards integration. On forwards integration.
Where everyone is going.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cba31eded7ba11585d2cb5ce183ba512f281f19f785e8466f1c8b53f6162ba19.png" alt="Joshua Citarella, Render and Difference II (2016)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Joshua Citarella, Render and Difference II (2016)</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b2f7fec171d56520c45106a294a499cb8c3dbfc39ccc9387647f0390bc97347b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-feels-like-we-only-integrate-backwards" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Feels Like We Only Integrate Backwards</h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://soundcloud.com/tame-impala/feels-like-we-only-go">https://soundcloud.com/tame-impala/feels-like-we-only-go</a></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ed0530c1c811ef39e7ef486e6c921b363933e803288085676fcdf8d0284b2b96.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>As discussed previously in <em>Primer on the Economics of Cloud Computing</em>, the decision by what are now known as the Cloud’s hyperscale infrastructure 
players to begin selling cloud services represented a decision to integrate backwards in the value chain for Internet-based services [<em>those services being e-commerce for AMZN, search for GOOG, and software + pre-existing Windows-based server management business for MSFT</em>] — <em>from the very start</em> the hyperscale cloud infrastructure business has been an exercise in backwards integration, an exercise that has continued further backwards as the hyperscalers have increased their scale.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/54f45a08d89f4596e46b3cefcbdcd6c9e4532142961fa7dba007d525825d7558.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Hyperscalers [<em>and particularly AWS given their longer operating history</em>] quickly realized that equipment vendors upstream from them were unable to meet their specific needs and that they themselves were beginning to get “large enough to support an in-house supplying unit large enough to reap all the economies of scale in producing the input internally”, and so they did. 
Here’s James Hamilton at a 2016 AWS re:Invent keynote talking about his realization that no switch gear manufacturer could meet their data center needs and that they had to simply do it themselves:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e973dc47b4124055dbaad22229355d0d3e44ce64be0dfc43120829a919fc4f37.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><em>Sidenote: Hamilton is talking about how their old DC switch gears didn’t switch from utility power to their backup generators in the event of certain circumstances that weren’t relevant or important to AWS. [Their proprietary switch gear control system is called AMCOP. The entire keynote is worth watching if you’re interested in cloud infrastructure.]</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2c84860230fe12c4089892b80f6a43debed9a0a2479db8f2e827db6127ba79c0.png" alt="From  AWS re:Invent 2020 - Infrastructure Keynote with Peter DeSantis " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From AWS re:Invent 2020 - Infrastructure Keynote with Peter DeSantis</figcaption></figure><p>In the same keynote, Hamilton talks about the operational benefits of their decision to integrate backwards into their networking equipment ...</p><blockquote><p>[18:35] <strong>Hamilton</strong>: Second thing is, okay, you’ve got a problem, what do you do? Well if a pager goes off we can deal with it right now. It’s our data centers, we can go and figure it out, and we can fix it.
We’ve got the code, we’ve got skilled individuals that work in that space — we can just fix it. If you’re calling another company, it’s going to be a long time. They have to duplicate something that happened at the scale I showed you in their test facilities — how many test facilities look like that? There’s not one on the planet.</p></blockquote><p>... through their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://perspectives.mvdirona.com/2019/02/aws-nitro-system/">Nitro networking ASICs</a>, namely that they don’t have to depend on third party suppliers in the event of problems, problems that these suppliers wouldn’t even necessarily be able to solve [<em>and certainly not as effectively as AWS could internally</em>] given their lack of experience in the scale and operational complexity within AWS’s datacenters. Furthermore, backwards integration into networking equipment through Nitro means that AWS has full control over the <strong><em>direction and speed</em></strong> [<em>i.e., </em><strong><em>velocity</em></strong>] of their design for all the components of their interdependent architecture.
At the AWS re:Invent 2020 Infrastructure Keynote, Peter DeSantis talks about how backwards integration allows them to “innovate more quickly” in networking ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e13a7533747e8e2b3c69b6ecf0b9085d854cf0b81667ec1c7db0fe46772cb71d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Charles Fine’s “Clockspeed” thesis is that this ability to dynamically choose and execute on design and engineering competences across a company’s supply/value chain is “<em>the</em> meta-core competency” and “the only lasting competency”:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9cab8964ae3c66992d5c7bd71330e9c2cca1ca98e3886e6646ecba88784b1cb7.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In line with Christensen’s theory that integration/decommoditization and modularization/commoditization are reciprocal processes within a value chain, the disintegration of suppliers [upstream/backwards/down the stack] from AWS resulted in their modularization and altered the basis of competition in favor of [increasingly concentrated] buyers (i.e., the hyperscalers):</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d428d49263fe8f3416df073ce7ab0043b524f5d7c4be00aa7798ac763e23f15.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" 
nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>If you’re reading this section in conjunction with the other sections in <em>Three Body</em>, then the first quote can be read as “Look! We’ve modularized networking ASICs!” and Hamilton’s second quote should remind you of our discussion around Christensen’s thinking regarding modularity vs interdependence. Hamilton even invokes the same mainframe example, which isn’t a surprise given that, like I’ve said, an industry analysis of cloud computing is the natural extension of pre-cloud computing with respect to Christensen’s modularity theory. Hamilton’s observation that “<em>as soon as you chop up these vertical stacks, you’ve got companies focused on every layer</em>” <em>was</em> relevant in the Mainframe vs PC/Server era, <em>is</em> relevant in AWS’s backwards integration into networking (and other areas of infrastructure), and, if you’re of the opinion that the vertical stack on the proverbial chopping block now is AWS’s stack, <em>continues</em> to be relevant.</p><p>The failure of AWS’s equipment vendors to react to the needs of what would eventually be a significant and highly concentrated buyer segment through uncomfortable modularization is reminiscent of Intel’s failure to meet Apple’s iPhone chip needs over a decade ago ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/88c9890f2998ea3619710559242dfd516ff127f271c9216601199b56999d12c5.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>...
leading to what we can now recognize as a situation of modularization at the processor level (thanks to TSMC’s enabling of fabless chip design) and reintegration at the SoC and SiP level, optimized [<em>in terms of compute AND power consumption</em>] for Apple’s proprietary software. The modularization of manufacturing [<em>enabler = TSMC</em>] and design [<em>enabled = Big Tech, NVDA, AMD, chip start-ups, etc.</em>] has led to the gradual disintegration of Intel into what it is now, which is, in many analysts’ opinions, a company better off separated in the way AMD spun off its manufacturing arm, GlobalFoundries, in 2009 [<em>and the stock performance of AMD relative to INTC over the past decade speaks to the success of AMD’s decision</em>]. Intel’s disintegration is in line with Christensen’s thinking regarding modularity-interdependence reciprocality within value chains — AAPL has found a defensible point of reintegration in software+hardware whereas INTC’s former point of integration in design+manufacturing is no longer defensible. This tangent matters to our discussion about the Cloud because Intel, which has historically had a monopoly in the CPU server chip market ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7784dcbb7a5e8eeb35aba5fd1c2652f6f6ba04c8474deb5f4eb0ef326bb7a3ac.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>...
is now experiencing pressure from TSMC-enabled, fabless designers in the form of AMD and NVDA [<em>NVDA dominates the GPU market for both servers and PCs and announced its </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nextplatform.com/2021/04/12/nvidia-enters-the-arms-race-with-homegrown-grace-cpus/"><em>ARM-based Grace server CPU effort</em></a><em> in 2021</em>] as well as the hyperscalers themselves who are eschewing Intel’s x86 architecture and opting instead for ARM-based architectures which are both cheaper to license and more power efficient, resulting in lower server TCOs overall. Furthermore, since Intel has <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.theverge.com/22597713/intel-7nm-delay-summer-2020-apple-arm-switch-roadmap-gelsinger-ceo">clearly fallen behind</a> in advancing their 7nm (roughly equivalent to TSMC’s 5nm) process node and TSMC is already working on their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_3nm">3nm process</a>, hyperscalers also have a performance [<em>not just price</em>] argument to make in favor of continuing their design efforts as well — backwards integration is the only way to maximally ensure that the velocity of product improvement stays in sync across the stack.</p><p>Therefore the situation for all the hyperscalers is one in which they design, manage, and self-service nearly every component in their infrastructure stack ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b88f10f7b55708fa48c7e06e3db11ba4026a30ed5c49269706997b9643ac8e48.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption
HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... leading to the christening of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.fabricatedknowledge.com/p/the-tech-monopolies-go-vertical"><em>The Tech Monopolies Go Vertical</em></a> narrative which predicts the acceleration of Big Tech’s continued backwards integration into semiconductor design:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fd9c48343aa6b0f3fb5da1c59043ceeb31cd663f27439bea69e00e518ccc16a3.png" alt="From Fabricated Knowledge: The Tech Monopolies Go Vertical" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Fabricated Knowledge: The Tech Monopolies Go Vertical</figcaption></figure><p>This is a prediction which seems to be coming true, particularly for the hyperscale cloud players who have no choice but to seek new points of integration in order to stay competitive within an ecosystem that continues to bifurcate profit pools between the infrastructure and software [<em>and above</em>] — hyperscalers earn 25-30% operating margins +/- [<em>whatever pricing pressure results from </em><strong><em>f(industry capacity vs demand, semi R&amp;D costs, intensity of intra-CSP competition for market share vs margins, residual factors)</em></strong>] on integrated IaaS+PaaS (where PaaS is more or less a commoditized complement) plus margins from full stack opportunities (forward integration, industry focus, value-add AI/ML services) in the steady state; ISVs and SaaS earn whatever margins are justified given lower barriers to entry with industry margins averaging higher than IaaS but with higher variance across participants.</p><p>The nearly complete integration of infrastructure along
the value chain by hyperscalers implies a reciprocal process of modularization of those aspects that the hyperscalers have integrated. When Hamilton talked about “chopp[ing] up these vertical stacks” in reference to the server networking market, he was talking about modularization of networking equipment. This reciprocal process is also present in the CPU market, for in hyperscale data centers what is important is the entirety of the interdependent architecture that is the data center, and not any particular component.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4be9554113980efb663ffdec71b3c24afc5b30c7999057f1bf9574d8f60781cc.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In a market with dispersed buying power, the suppliers can dictate the standards/basis of competition and expect that their customers will have square holes for the suppliers’ square pegs. The rise of hyperscalers has meant that the buying power in the data center market has become concentrated, with the hyperscalers now dictating the terms and standards by which suppliers will compete for their business — AWS does not care about the sticker stats on Intel’s chips, they care about how well the modules perform <em>as an interdependent whole</em>.
How well Intel’s CPUs or Cisco’s switches and routers adhere to the standards and specifications of AWS/Azure/GCP data centers matters more now than when buyers were more fragmented and suppliers could tell them to take it or leave it.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9ae8cd7698a2c29b66e3617d0b4fdcf68e88d1e8d3a322251f0a55a6a93675d6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>It also helps the hyperscalers that, even on the module level, proprietary designs are outperforming designs from suppliers.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/dylan522p/status/1465747464229687298">https://twitter.com/dylan522p/status/1465747464229687298</a></p><p>AWS wants to convince their IaaS customers, through better price/performance, to rearchitect their software to be compatible with their ARM-based chips rather than Intel’s x86-based server chips. In many (most?) cases, customers can’t necessarily just flip a switch to go from x86 to ARM: low-level code needs to be recompiled, software packages need to be rewritten, and things need to be debugged and tested.
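[As a toy illustration of the portability gap — this sketch is mine, not from AWS documentation or the case studies below: interpreted code is generally portable across architectures, but anything that loads native binaries must be rebuilt for aarch64, so a migration checklist usually starts by asking the host what it is running on:]

```python
import platform


def normalized_arch() -> str:
    """Return a normalized CPU architecture name for the running host.

    Pure-Python code runs unchanged on ARM, but native extensions
    (C/C++/Rust wheels, vendored .so files) must be rebuilt or
    re-fetched as aarch64 binaries when moving from Intel-based
    instances to Graviton.
    """
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        return "x86_64"  # Intel/AMD hosts (e.g., EC2 M5)
    if machine in ("aarch64", "arm64"):
        return "arm64"   # Graviton hosts (e.g., EC2 M6g), Apple M1
    return machine       # anything else, reported as-is


print(normalized_arch())
```

[A real migration obviously goes far beyond this — recompilation, dependency audits, performance testing — but architecture detection like this is typically the first gate in a build/deploy pipeline that targets both instance families.]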
This <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.honeycomb.io/blog/observations-on-arm64-awss-amazon-ec2-m6g-instances/">case study</a> and one-year <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.honeycomb.io/blog/graviton2-one-year-retrospective/">retrospective</a> by Honeycomb, an AWS customer, as well as this 2020 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=NLysl0QvqXU">re:Invent deep dive on Graviton2</a> gets into some of the gory details of rearchitecting for AWS chips, details that I have no business writing about since I’m neither a developer nor software engineer.</p><p>The necessity to rearchitect workloads for ARM helps to explain why AWS has promoted their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/about-aws/whats-new/2021/12/amazon-ec2-m1-mac-instances-macos/">EC2 M1 Mac instances</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://techcrunch.com/2021/12/02/aws-brings-m1-mac-minis-to-its-cloud/">so strongly</a> — since Apple’s M1 is ARM-based, any rearchitecting that developers and engineers do to optimize for the M1 instances will translate over [<em>at least to some degree</em>] to AWS’s ARM-based Graviton instances. Of course, whatever developers and engineers rearchitect for AWS’s and Apple’s ARM chips will also be compatible with the ARM-based designs of other players [<em>e.g., Microsoft’s newer, ARM-based Surface PCs</em>] and vice-versa.</p><p>Before we move forward [<em>haha get it? because forward integration?</em>], there’s a point I want to make about the hyperscalers’ backwards integration that I haven’t yet seen articulated, which is that I’m not exactly sure if proprietary chip designs constitute <em>true</em> differentiators. 
Yes, the astronomical amounts of CapEx and R&amp;D spend certainly constitute high barriers to <em>entrance</em> into the Cloud infrastructure industry, but whether or not these investments translate into differentiation <em>within</em> the industry is something I’m not as convinced about. The price/performance and TCO measures for proprietary chip designs indicate a cost strategy rather than a differentiation strategy. That is, if Microsoft hypothetically comes out with ARM-based server chips that perform just as well on a price/performance basis as AWS’s Graviton chips, can AWS really claim that Graviton is a differentiator? I’m not so sure.</p><h3 id="h-shingeki-no-hyperscaler-keep-integrating-forward" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Shingeki no Hyperscaler: Keep Integrating Forward</h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/09baffbd8bc4d565db8211e74896fb1931becc00d04273a53e5b65cabd788e2a.png" alt="it’s actually very hard to find a pop culture reference for the word “forward” but alas" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">it’s actually very hard to find a pop culture reference for the word “forward” but alas</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7b9195e46ee710b53f9a11e8942dd551c22259729792ed52539393b66f01aad5.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The reason it <em>Feels</em> Like They [<em>i.e., the hyperscalers</em>] Only Integrate Backwards
is because shaping general purpose cloud services into tailored solutions for industries/verticals isn’t as conspicuous a case of forwards integration as, say, a material goods manufacturer integrating into distribution might be, even though that’s <em>exactly</em> what the hyperscalers are doing by moving downstream the value chain and up the stack [<em>reminder that moving downstream the value chain is isomorphic to moving up the tech stack</em>]. Hyperscalers offering so-called “industry clouds” can be viewed as a manufacturer [<em>of C/N/S services through their infrastructure networks</em>] integrating forwards to better couple this manufacturing capacity [<strong><em>Reminder</em></strong>: <em>“A datacenter is a factory that transforms and stores bits”</em>] with marketing and distribution.</p><p>If “the Cloud” is a marketing term, then “Industry Cloud” is (marketing)<sup>2</sup> — for many industries, the “Industry Cloud” designation is a purely virtual construct. For example, two of Microsoft’s [<em>currently, this number will inevitably rise</em>] six industry clouds are Microsoft’s Healthcare Cloud and their Financial Services Cloud, but the distinction, from a materialist perspective, between these two “Industry Clouds” is nonexistent — both their Healthcare Cloud and Financial Services Cloud will run on the same pool of servers [<em>granted, the FS cloud will ostensibly utilize a higher proportion of GPU-based instances for ML workloads</em>] and transport data down the same sets of dark fiber, thereby operating on the same exact “Cloud”.
The only significant differences between providing cloud services in the Cloud for Healthcare and the Cloud for Financial Services are how engineers and systems architects design for compliance and security [<em>and even then, it’s not like the encryption algorithm cares if the encrypted string is my blood type or the password to my checking account</em>] and how internal salespeople and external consultants (i.e., ”partners”; “global systems integrators”) with industry expertise market the services to companies in each industry.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/678f082008c5a3f1e339c01655d0ae4573200d8273d560b9e32a29b00cb55ba3.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>But, well, there actually <em>are</em> industry clouds after all, because marketing matters and, even if the engineers and solutions architects at the Big Three know that there aren’t “Healthcare” electrons or “Financial Services” electrons in NAND charge trap cells, the Healthcare VP in charge of allocating her department’s IT budget doesn’t care about any of that nonsense. She cares about providing healthcare services and the Cloud is merely a means to that end.
This is essentially the evolution in internal culture/philosophy that Google Cloud brought Kurian in to catalyze — having the “best” product/service isn’t better than a “good enough” (in the Christensenian sense) product/service that the customer can actually understand.</p><p>If the hyperscale CSPs have been selling picks and shovels to tech-savvy, software-oriented organizations in the form of modular primitives, then they are now increasingly doing the mining and the digging themselves for less tech-savvy, larger enterprises that don’t have enough miners and diggers because the hyperscalers and tech-first companies hired them all out of digging and mining school. And AWS/Azure/GCP have all converged on this realization to the point where it’d be difficult to identify the hyperscaler by their publicly expressed strategy around Industry Clouds. No, seriously, try guessing which hyperscaler said what.</p><p><strong>Let’s Play Guess The Hyperscaler!</strong></p><ol><li><p>“<em>It&apos;s great. I mean, we see our go-to-market, first of all, in three or four different lenses. One, we have shifted as an organization from talking about technology and shifted from technology to products to solutions. Customers want to have us understand their business, the opportunity to transform their business, and then provide solutions to their business problems as opposed to coming in and talking about how great our technology is. So that&apos;s been a big change in selling methodology and approach.</em></p><p><em>A second thing that we&apos;ve done is we&apos;ve organized our sales organization around industries because different industries have different needs. In financial services, for example, the use of data may be for fraud detection, may be for regulatory reporting, may be for financial market simulation. In contrast, in retail it may be about personalization, inventory management, supply-chain optimization, etcetera.
So we&apos;ve segmented our sales force by industry so that we can build greater competency in understanding customer needs.</em>”</p></li><li><p>“<em>And let me also just one more comment, when I say we&apos;re going after verticals, I want to make sure I&apos;m also clear on this. It is partnering, with SIs, ISVs, basically our customer. We&apos;re not competing with our customers. There are other clouds that do that. They actually think, gosh, I should just go capture that margin, looks like a good opportunity. In our case, we&apos;re going to continue to partner. But I think there is a huge opportunity for us to go into that space and help enable that ecosystem.</em>”</p></li><li><p>“<em>At the same time, </em><strong><em>[REDACTED]</em></strong><em> says, </em><strong><em>[REDACTED]</em></strong><em> needs to continue to expand vertically as well, by providing more complete solutions for specific industries such as health care and manufacturing. To do so, the company is bringing its services to the edge of the network, including traditional data centers, the factory floor, and even the field, and establishing an operating model that integrates </em><strong><em>[REDACTED]</em></strong><em> more deeply into businesses in virtually every industry.</em>”</p></li><li><p>“Not surprisingly, companies are also looking for more remote and edge use cases. The pandemic magnified the idea that the cloud must move to where the people and applications are – at the edge. More and more use cases are emerging such as cloud in cars, cloud in factories, cloud in farm equipment and so on. <strong>[REDACTED]</strong> wants its cloud to be everywhere.
“We are going to work aggressively on ‘internet of things’ solutions, on ML solutions and these horizontal use cases and applications as well as bundle it together in ways that are attractive to solving customer problems,” said <strong>[REDACTED]</strong>.”</p></li></ol><p>Check your answers via <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/dcfstate/Velocity-and-the-n-body-problem-7132628659054d3db42dd1a6d2523fac#37b3baaecf88475aa66363400fc86ff4">this toggle Notion block</a>.</p><p>Each of the three hyperscalers has made every point in each of these quotes at some recent (last 2 years-ish) conference/interview or another — they’re all ...</p><ul><li><p>shifting from products and primitives to providing more complete, integrated solutions for industries/verticals</p></li><li><p>beefing up their sales teams and go-to-market capabilities for enterprise customers</p></li><li><p>seeking to cultivate a great [<em>”the best!”</em>] partner ecosystem with ISVs and global SIs [<em>and anyone else between them and enterprises</em>]</p></li><li><p>offering to physically install hardware wherever industries (manufacturing/agriculture/energy/etc.) want it [<em>i.e., “the edge”</em>]</p></li><li><p>using AI/ML capabilities and expertise to enhance their industry solutions</p></li></ul><p>... and they all have explanations for why they’re the best at all of these things ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/862baf5c2064cd05aab8d29487f5032ef9508458b21275ed1444abf070a88e04.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>...
because it is these five factors (integrated solutions, sales focus, partner ecosystem, deployment mode, AI/ML) that are common to all of their Industry Cloud strategies. The importance of AI/ML capabilities as a selling point for these hyperscalers cannot be overstated — provision of AI/ML capabilities is the primary point of differentiation between the hyperscalers and their prospective enterprise customers, not only because the former possess world-class integrated hardware and software for AI/ML, but also because the latter lack the human capital needed to assess and utilize these new tools, with Big Tech companies acting as black holes for talent.</p><p>From the perspective of the interdependence-modularity framework, not only can AI/ML serve as a demand driver for increasingly commoditized basic C/N/S services, but the provisioning of AI/ML services as industry-focused “solutions” also enables hyperscalers to reintegrate commodified C/N/S with differentiated software [<em>reminder that AI/ML is technically software</em>], differentiated hardware [<em>e.g., Google’s TPUs, AWS’s Trainium and Inferentia</em>], standardized development frameworks [<em>Google has Tensorflow, AWS has partnered with Facebook on Pytorch</em>], and 1P/3P consulting that integrates this technology into the systems of enterprises [<em>hence the term “systems integrators”</em>] with the overall effect of creating a differentiated, integrated solution with pricing power.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4e030496b21f89dc64df2d45ca09bc38ab9404d1caaa351f7c8769a40e260344.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>An important point to make here is that the prospect of AI/ML
capabilities (incl. integrated hard/software + frameworks/platforms + 1P/3P “expertise”) being a point of reintegration along the value chain for hyperscalers is contingent upon interactions with counteracting forces that are primarily manifesting as “multi-cloud” but ultimately stem from customers’ desire to avoid vendor lock-in. If, for example, Google Cloud can catalyze modularization of basic compute and storage with their multi-cloud initiatives and offer differentiated AI/ML services, then basic C/N/S won’t be reintegrated into the overall solution. That is, if GCP can incrementally invest more in differentiated hardware like TPUs and VCUs and offload the “commoditized” and functionally modularized basic compute and storage to AWS and Azure, then GCP is a successful disruptor:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/21f4a14737b123bc7b97cfcac8642cc3e3ec924025f1cbab5939000fa101abd8.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>You can see some evidence of Google’s modularity-oriented, multi-cloud strategy playing out in initiatives like BigQuery Omni ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9b62352d6c5dd48aa2622f2ee8a5fc8b09650a5049fbf559590facc7a7c7beca.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... which is powered by Google’s open source (i.e., Anthos is based on Kubernetes and Istio), deployment/cloud-agnostic application management platform, Anthos. 
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cloud.google.com/anthos/docs/concepts/overview">Anthos enables</a> multi-cloud “Infrastructure, container, and cluster management”, “Multicluster management”, “Configuration management”, “Migration”, and “Service management” — that Google released and champions this interoperability-enabling tool is understandable given that GCP has more to gain from modularization along the Cloud’s value chain than AWS or Azure. There’s also public anecdata about the actual realization of Google’s multi-cloud strategy in clients like Wells Fargo ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a4d54ec5c666b18d4dec28ffbc807ab555edd233b47dd8180044327951bfbebc.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... which utilizes Azure as their primary cloud provider but allocates AI/ML workloads to GCP, so it’s clear that this modularity-oriented strategy has basis in reality. Obviously AWS and Azure are not going to sit idly by as Google has their cake and eats it too, so the question of whether or not GCP’s modular strategy takes root throughout the boader market is an open one. Enterprises, especially those in highly competitive industries where data can be maximally utilized, will prioritize making sure that the solution the CSP offers is an effective one before considering secondary vendor lock-in and multicloud concerns. 
If a company in an industry undergoing digital transformation has to choose between a relatively complex, multi-cloud AI/ML solution that requires technical expertise that isn’t internally available and a proven, fully integrated solution, company management will probably opt to stay competitive and clearly have someone to blame if something goes wrong.</p><p>[<strong>Note</strong>: <em>See </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/dcfstate/Velocity-and-the-n-body-problem-7132628659054d3db42dd1a6d2523fac#e75070f13439416cac58f672275e4a50"><em>this Notion block</em></a><em> for a gallery of select McKinsey slides on industry AI/ML</em>]</p><p>Forwards integration by hyperscalers into providing industry solutions means clearly communicating the potential value from applying recurrent neural network techniques to repeat customer purchase data to the management team of a retail company and explaining why <em>your</em> Retail Industry Cloud is better than the other [<em>two</em>] options because you don’t compete in retail like AWS [<em>you’re GCP in this example</em>] and because you can derive unique, applicable customer insights from your search business.
There are lots of complex moving parts in [<em>buzzword warning</em>] Cloud-based, AI/ML-enabled digital transformation that aren’t the core competencies of the C-suite of many companies in non-tech industries/verticals — simplification into a value proposition that makes sense for the companies’ decision makers and resource allocators, via a combination of both technical and industry expertise, is a differentiated service that deserves differentiated margins.</p><p>AWS, Azure, and GCP each have good cases to make for why they’re able to provide superior Industry Cloud offerings compared to their competitors:</p><ul><li><p>AWS’s experience in operating multiplayer gaming experiences like Fortnite and League of Legends is undoubtedly something their sales function brings up when targeting contracts with companies in the Gaming industry</p></li><li><p>Google’s experience in handling video workloads at a global scale via Youtube, along with their specialized hardware for video transcoding in the form of VCUs, gives GCP credibility when targeting companies in the Media industry</p></li><li><p>Azure’s full stack approach that extends all the way to the hardware layer in the form of HoloLens represents a unique value proposition for companies in industries which can derive substantial value from AR augmented use cases like Manufacturing and Construction</p></li></ul><p>The potential for an integrated, full stack Microsoft offering that ties together Azure IaaS/PaaS, GitHub Copilot, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://azure.microsoft.com/en-us/blog/introducing-the-microsoft-azure-modular-datacenter/">modular datacenters</a>, IoT edge devices, and HoloLens [<em>along with myriad other modular elements</em>] is, for me, the most interesting value proposition of the three. 
The idea of an on-site engineer controlling and live programming [<em>ostensibly with the aid of a CTRL-Labs type interface device</em>] factory robots and/or drones with an AR headset on seems pretty cool, if not a little bit scary. Whether Microsoft’s HoloLens represents forwards or backwards integration depends on our perspective of the value chain. At its most abstract, the Cloud’s value chain is really the value chain for decision making and information processing — thinking of the HoloLens as a human-machine interface that tightens the cybernetic loop allows it to be interpreted as being up the stack, despite its being hardware. Or at least that’s how I’m currently thinking about it.</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/5863f402cc724dd81f2bfd0dad26ab7ccdd1d3e8acd7b4bb5ca86a040fe931e9.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Mass and the Law of [Economic] Gravitation]]></title>
            <link>https://paragraph.com/@0x125c/mass-and-the-law-of-economic-gravitation</link>
            <guid>uiGIlR6Mo5uoHXpHa9uE</guid>
            <pubDate>Tue, 01 Mar 2022 17:48:02 GMT</pubDate>
            <description><![CDATA[Part 4.2 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation:...]]></description>
            <content:encoded><![CDATA[<p>Part 4.2 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><em>A Cloudy History: Four Histories of Cloud Computing</em></p></li><li><p><em>Primer on the Economics of Cloud Computing</em></p></li><li><p><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></p><ol><li><p><em>Initial Positions and Laws of [Competitive] Motion</em></p></li><li><p><strong><em>Mass and the Law of [Economic] Gravitation</em></strong></p></li><li><p><em>Velocity and the n-body problem</em></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><p>Table of Contents for <strong><em>Mass and the Law of [Economic] Gravitation</em></strong>:</p><ul><li><p><em>Mass: The Cloud is a variable-mass system</em></p></li><li><p><em>Law of [Economic] Gravity: Supply and Demand</em></p><ul><li><p><em>Compute Demand</em></p></li><li><p><em>Supply: Capex/Capacity Rules Everything Around Me (C.R.E.A.M.)</em></p></li></ul></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2bf28429f38ca0acd1b78b959379b7e85bd949c8d9bc2037857be67dd5511855.jpg" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><hr><h2 id="h-mass-the-cloud-is-a-variable-mass-system" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Mass: The Cloud is a variable-mass system</h2><p><em>On top-down market sizing and the economic mass of the system. 
On TAM, market penetration, and intra-market share capture vs inter-market share capture.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/dd112a7d6b6922033c55259d234030f5e3bda1b10f8f2c5740e0f9a3a35f4f55.jpg" alt="The Rose by NASA/JPL-Caltech/SSI" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">The Rose by NASA/JPL-Caltech/SSI</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9abf2a32c55153764fb31dac42be423fcbbd81f78281e68451db167dcc640ac1.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Whereas the classic three-body problem in physics concerns a <strong>closed system</strong> with only three bodies, the system which we are considering is a <strong>variable-mass system</strong> in which the three hyperscalers constitute the system’s center of mass but are not the system’s sole source of mass. Infrastructure and platforms, by definition, cannot be infrastructure and platforms unless they serve as the <strong><em>foundation</em></strong> for bigger things to be built on top of them. In much the same way that the point of roads and bridges is not the roads and bridges themselves, the point of the tens of thousands of tons of metal and glass that compose the hyperscalers’ “clouds” is not the metal and glass itself.
In both cases, the value of the infrastructural layers is contingent upon what can be enabled <em>by and through</em> them — roads and bridges enable the physical movement of automobiles and the people within them; IaaS and PaaS enable the computation, communication, and storage of electronic information through code.</p><p>From a strictly materialist perspective, the <em>literal</em> center of physical mass within the Cloud industry is the material infrastructure layer constituted by the thousands of tons of hyperscaler-operated servers in the Cloud, with all of the -illions of electrons and atoms that comprise the code and data in the software layer weighing, well, much less than that. However, from an economic perspective, the “mass” of the dematerialized economic flows (i.e., periodic recurring revenue, annual free cash flow, etc.) and stocks of capital (i.e., market capitalization, enterprise value, etc.) of the software layer exceeds the economic “mass” of the infrastructure and platform layers which form its foundation.</p><p>From a [revenue] flow-based perspective, global <strong><em>public</em></strong> SaaS runs at around 2-3x the rate of IaaS+PaaS.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/15fcbb3992628c0fc894829a10db999813d462a0311556e17fc1b53de06ae7ce.png" alt="This stacked bar chart isn’t up to date (2016 to 2018) but it clearly illustrates the relative market sizes of the layers of the public Cloud industry — IaaS+PaaS serve as the foundation for a larger, dematerialized economic structure."
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">This stacked bar chart isn’t up to date (2016 to 2018) but it clearly illustrates the relative market sizes of the layers of the public Cloud industry — IaaS+PaaS serve as the foundation for a larger, dematerialized economic structure.</figcaption></figure><p>More recent <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.idc.com/getdoc.jsp?containerId=prUS47685521">IDC figures for CY2020</a> indicates that global public revenues of SaaS are ~2x that of IaaS+PaaS revenues:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a829e449552fe8b84589c6105f323aed4fd2359de84417ba0e7ccd4057cf03fc.png" alt="[SaaS - System Infrastruture Software] + [SaaS - Applications] = 16.0% + 49.7% = 65.7% of market share; SaaS/[IaaS+PaaS] ⇒ 65.7%/34.3% = 1.91x" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[SaaS - System Infrastruture Software] + [SaaS - Applications] = 16.0% + 49.7% = 65.7% of market share; SaaS/[IaaS+PaaS] ⇒ 65.7%/34.3% = 1.91x</figcaption></figure><p>From a stock-based [<em>”stock” as opposed to “flow”-based, that is; not referring to equity value</em>] perspective, napkin math indicates that the global market value of public and private cloud companies exceeds the value of the hyperscaler cloud business segments by roughly a factor of 2 to 3.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/acf0aa56bcdb07e701a75ef43a33d7a9baccb0a62ab9ac4f60ef7144efb23f03.png" alt="" 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1de08933e10028f6b9263e2aa14e16c313b9ea8f504de6c18ea3de2082063a38.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The existence of a higher software value layer on top of the Cloud’s infrastructure and platform layers is what makes these layers <em>Infrastructure</em>-as-a-Service and <em>Platform</em>-as-a-Service. The market cap of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.spglobal.com/marketintelligence/en/news-insights/trending/1ogxZVKS9lDQlJ7EVXKh-g2">the top 20 public US utilities companies</a> is somewhere within the magnitude of ~$700B [<em>a figure that’s less than half the market cap of Amazon, the smallest of the Big Three cloud’s parent companies</em>] versus the aggregate market cap of the 30 US-based constituents of the S&amp;P Global Luxury Index is ~1.7T, but the <em>flourishing</em> of the latter industry is dependent on the <em>functioning</em> of the former — whereas Thanos-snapping away the 30 US luxury companies off the face of the Earth would immediately destroy 1.7T of market value, doing the same to 20 US utilities companies would destroy 700B of market value <strong><em>as well as</em></strong> the market value of every company in the Luxury Index because bullets are more valuable than Birkins when the world is ending. 
Similarly, while the economic mass of SaaS companies eclipses that of the hyperscale cloud businesses, their value is predicated on the existence of the thousands of tons of servers that compose the Cloud’s infrastructure — this is why, for the sake of my contrived three-body analogy, the hyperscalers are the center of gravity of the system despite being less valuable than SaaS companies in the aggregate.</p><p>That Cloud platforms create systems greater than the underlying platforms themselves is the intended vision that has been implicit from the outset of the modern cloud computing industry and is increasingly explicit as the industry continues to expand.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/eaecdb4d8f4a70b979611a7f01d890db6c44b0e3ca836158d1a665157b34ee73.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>If declining and dying industries might be conceptualized as systems which are losing both mass and energy [<em>in the analogical/economic sense as well as the literal/materialist sense; an industry’s economic value relies on the employment of actual mass and energy in the form of labor and capital</em>], then the system in which the three hyperscalers form the center of gravity is steadily accumulating mass and energy from the broader economic universe. 
<em>The Cloud is a variable-mass system</em>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/caa1d872f0bb928ff167211677f42fcbe5badcf3baf43a214769904b9088cef8.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Q1’20 estimates [<em>for this primer, the primary focus is on framing and not on up-to-date numbers</em>] from GS and Gartner indicate that IaaS+PaaS will reach ~18% penetration of the total enterprise IT market opportunity (est’d @$777B by Gartner) in 2022, presenting over $600B of additional inter-market share capture for hyperscale cloud players. This Q1’20 estimate of ~$140B of IaaS+PaaS revenue is less than the ~$180B (=<em>71,525M+106,800M</em>) implied by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gartner.com/en/newsroom/press-releases/2021-04-21-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-grow-23-percent-in-2021">Gartner’s public cloud spending forecast from Apr ‘21</a>. 
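The penetration math above is a quick multiplication worth sanity-checking (all inputs are the estimates quoted in this paragraph, in nominal USD):

```python
# Sanity-checking the Q1'20 GS/Gartner napkin math quoted above.
total_it_opportunity = 777e9   # Gartner est. of total enterprise IT market
penetration_2022 = 0.18        # est. IaaS+PaaS penetration in 2022

iaas_paas_2022 = total_it_opportunity * penetration_2022
remaining = total_it_opportunity - iaas_paas_2022

print(f"Implied 2022 IaaS+PaaS revenue: ~${iaas_paas_2022 / 1e9:.0f}B")  # ~$140B
print(f"Remaining opportunity: ~${remaining / 1e9:.0f}B")                # >$600B

# The Apr '21 Gartner forecast implies a higher 2022 figure
# (IaaS + PaaS line items, quoted in $M above):
gartner_apr21 = (71_525 + 106_800) * 1e6
print(f"Gartner Apr '21 implied IaaS+PaaS: ~${gartner_apr21 / 1e9:.0f}B")  # ~$180B
```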
More recent <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.idc.com/getdoc.jsp?containerId=prUS48129821">Aug ‘21 forecasts from IDC</a> predict combined IaaS+PaaS revenues of $400B in 2025 at a 28.8% ‘21-’25 CAGR, implying ~$187B of IaaS+PaaS revenue (<em>400B/[1.288]^3=187.2B</em>) in 2022.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/687b76c68f9b4c8a2c53769016d82ce73434ae19a2831750a611934210ed5783.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Framing the cloud market as a pie chart with relative percentage shares between the hyperscale players omits the fact that the pie is continually getting bigger. In growing industries, participants can sublimate the desire to engage in zero/negative-sum, intra-market competition and tacitly cooperate to expand the overall market, effectively engaging in inter-market competition in which competition is reframed as disruptive industry (Public Cloud) vs incumbent industry (Traditional IT).</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1031fd09fe97ca645b1393c3a989bae71bd7b50c32bec23dc17d3a5974eda210.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/284368519d5fa47f1015270720378f4aa732893c557e525f362026f45b2b88ba.png" alt="" 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Overall industry growth relieves the pressure within the variable-mass system known as the Cloud so that the hyperscalers have the option of expending resources on growing the pie rather than fighting over where to cut it. Some people disagree with framing the entirety of IT as an opportunity set for the Cloud, arguing that this framing understates the SAM/TAM penetration of global Cloud IaaS/PaaS/SaaS spend. They may be right but, frankly, it doesn’t really matter for our purposes today — the takeaway that most everyone can agree on is that the Cloud is growing and still has room to grow, regardless of whether or not the “Global IT Spend = TAM” framing is overzealous. It’s questionable whether Porter would consider public cloud infrastructure as an “Emerging Industry” or “Maturing Industry” in the ontology he sets out in <em>Competitive Strategy</em> [<em>certainly not a “Fragmented Industry” or “Declining Industry”</em>], but the quote applies regardless — mutually dependent firms in an industry “inducing substitution and attracting first-time buyers” have common enemies and common problems.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/389fb606dbcc7bfab07a013ca122c6eb5bdad828870325b193257cc8192e5cba.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><hr><h2 id="h-law-of-economic-gravity-supply-and-demand" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Law of [Economic] Gravity: Supply and Demand</h2><p><em>On the 
competitive laws that govern the overall mass of the system and its constituent bodies. Cost vs Price. Margin evolution.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a27a1c966fefedab3e0c2c5877859d8027bcfa8c9f6c159da7e6b82dd905ae8b.jpg" alt="Gravitational Waves (2017) by Tomás Saraceno" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Gravitational Waves (2017) by Tomás Saraceno</figcaption></figure><p>Depending on what we consider as mass bodies, analogizing supply and demand as gravity should be considered anything between an artificial contrivance and conceptual bastardization. For example, if we are considering the dollar value of the three hyperscalers’ hypothetical standalone market capitalizations and/or enterprise value as individual masses, then gravitational interaction between these three bodies would be impossible to interpret in terms of supply/demand dynamics — the primary mode of interaction between AWS and Azure has little to do with supply or demand between the two businesses; the more appropriate interpretation within this mass-gravity framework might be one of attraction [<em>i.e., industry clustering</em>] or collision [<em>i.e., negative-sum competition</em>]. If we, however, consider global aggregate demand for public cloud infrastructure services as a mass and hyperscale capacity to supply these services as another mass, there might be some amenable interpretation that makes this mass-gravity analogy work for supply/demand.</p><p>Of the five pairings in this three-body problem analogy that I assigned a comparable business/economics element to in this primer, relating Newton’s Law of Universal Gravitation to the “law” of supply and demand has been my least favorite. 
It’s not that there’s no merit to the comparison — in fact, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.usitc.gov/data/gravity/gravity-in-diagrams/#spatial-arbitrage">there’s</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://sites.nd.edu/jeffrey-bergstrand/files/2020/04/The-Gravity-Equation-in-International-Trade.pdf">even</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nber.org/system/files/working_papers/w16576/w16576.pdf">precedent</a> for this gravity to supply/demand comparison in international trade economics in the form of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Gravity_model_of_trade">gravity model of trade</a>. That this analogy works <em>somewhat well</em> is the problem, placing it in some sort of uncanny valley for conceptual metaphors that the “Laws of Motion ⇒ Law of Conservation of Attractive Profits” avoids by virtue of obviously not making any sense. I should also mention at this point that the use of Newton’s laws in describing economic phenomena [<em>and therefore criticism of its use</em>] has a long tradition dating back to Marx’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.uri.edu/isiac/files/lawofmot.pdf">concept of an economic law of motion</a> all the way to obscure, heterodox, Marx-Keynes-Schumpeter (MKS) syntheses in the form of H.J. Wagener and J. W. Drukker’s <em>The Economic Law of Motion of Modern Society</em> (1986) and Peter Flaschel’s <em>The Macrodynamics of Capitalism</em> (2009).</p><p>In any case, the “gravity” metaphor works for now so I’ve tried to make the best of it. 
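For concreteness, the gravity model of trade mentioned above borrows Newton’s functional form directly: predicted bilateral trade flow is proportional to the product of two economies’ masses (GDPs) divided by the distance between them. A minimal sketch of the textbook specification, with purely illustrative GDP and distance inputs (not real data):

```python
# Textbook gravity model of trade: flow_ij ∝ (M_i * M_j) / D_ij,
# mirroring Newton's F = G * m1 * m2 / r^2. All inputs are illustrative.

def gravity_trade_flow(gdp_i: float, gdp_j: float, distance_km: float,
                       g: float = 1.0, distance_elasticity: float = 1.0) -> float:
    """Predicted bilateral trade flow under the simplest gravity specification."""
    return g * (gdp_i * gdp_j) / (distance_km ** distance_elasticity)

# Two large, nearby economies are predicted to trade far more than a
# large-and-small, distant pair:
near_large_pair = gravity_trade_flow(20e12, 15e12, distance_km=1_000)
far_small_pair = gravity_trade_flow(20e12, 1e12, distance_km=10_000)
print(near_large_pair > far_small_pair)  # True
```

Empirical versions estimate `g` and the elasticities from trade data; the point here is only the structural resemblance to gravitation.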
If the analytical result of gravitational interactions between two bodies is their path and position, the intention of this Supply/Demand analysis is to build a framing around how margin for cloud infrastructure providers might evolve. The basic idea is this — too much demand relative to supply and industry margins rise because undercapacity gives CSPs pricing power; too much supply relative to demand and industry margins fall because overcapacity takes away CSPs’ pricing power. The industry margin therefore becomes a question of how compute demand will evolve relative to compute supply/capacity and what the subdrivers are for both demand and supply. This is the question we will be exploring.</p><h3 id="h-compute-demand" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Compute Demand</h3><p><em>Metaverse. AI. Jevons paradox.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ba5b2dca381172c21adf2e3bca8cd9e57a1d17bbc07c662f9cbdfde3a87751b2.png" alt="The Last Judgement in Cyberspace (2006) by Miao Xiaochun" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">The Last Judgement in Cyberspace (2006) by Miao Xiaochun</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bf449290dfa0800ce398e60676bd4966dd781429913c7e1da8715046201a5c4d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Broadly speaking, forecasting demand growth for compute is tantamount to predicting the future of humanity’s relationship with technology 
and, thus, humanity’s future. There might be people out there able to say something like “The 10 year CAGR for compute demand will be X%, with factors = [a, b, ... n] driving [a%, b%, ... n%] of incremental growth in the demand forecast” and, in fact, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://venturebeat.com/2021/01/26/why-the-oecd-wants-to-calculate-the-ai-compute-needs-of-national-governments/">the OECD formed a task force in 2021</a> around metricizing and quantifying AI compute demand needs for national governments — this subsection will <em>not</em> <em>even begin to attempt</em> anything of the sort. Quantification of demand and capacity for AI compute will undoubtedly have widespread, global policy implications as the adoption of general purpose AI/ML inevitably accelerates in coming years ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2785c281d86ad89d84f49e75143729ea5a3cf09eb996c4c82a93bc22a2fe45fb.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... so I look forward to reading analyses from better informed, better resourced analysts, but a granular breakdown of global demand for compute [<em>and accompanying demand for data storage and networking</em>] is currently infeasible for me. 
Some useful framings for structuring the demand forecast for basic cloud infrastructure (i.e., compute, networking, storage) include ...</p><ul><li><p>cloud penetration into the broader global IT market [<em>discussed in the previous subsection</em>]</p></li><li><p>Workloads/Data in Public Clouds (today vs planned)</p></li><li><p>Data Volume and Data Stewardship (consumer vs enterprise vs cloud)</p></li><li><p>Technology Adoption Rates</p></li><li><p>Data Generation by Category</p></li></ul><p>[<strong><em>Note</em></strong>: Please see the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/dcfstate/Mass-and-the-Law-of-Economic-Gravitation-9e0a3896fe0d4fcd86670b284292fbb7#52706846c58f42b0839a5f82a739d785">Notion block</a> for the trend gallery]</p><p>... etc., but I think what might better drive this discussion into future compute demand are explorations into specific use cases that are relatable to people on an individual level, <strong><em>because usage of cloud computation is ultimately downstream from people</em></strong>.</p><p>This assertion is most obvious when we think about internet-based services like e-commerce, video streaming (Netflix, Youtube), app-based ridehailing/delivery (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2019/03/01/lyft-plans-to-spend-300-million-on-aws-through-2021.html">Uber, Lyft</a>), consumer cloud storage (Dropbox, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2019/04/22/apple-spends-more-than-30-million-on-amazon-web-services-a-month.html">iCloud</a>), social media (Facebook), multiplayer gaming, etc., but remains valid even when considering less-intuitive areas like manufactured consumer goods, industrial agriculture, oil &amp; gas discovery, and national defense — the provisioning of most goods and services in today’s world, from 
digital media to state-operated physical defense systems, embeds and incorporates some quanta of cloud infrastructure services. The photos in your iPhone gallery that are backed up on iCloud, part of the $30mm+ bill that Apple pays to AWS every month (according to CNBC), can very clearly be attributed to each iPhone user. The cloud services sought by the US Department of Defense in whatever replaces the now cancelled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2021/07/06/pentagon-cancels-10-billion-jedi-cloud-contract.html">$10bn JEDI cloud contract</a> can be conceptualized as a pool of computational/storage resources attributable pro rata to the 320+ million people in the US — i.e., the cost of powering each American’s share of the national defense’s cloud infrastructure demand is ~$30 over 10 years, as implied by the JEDI contract [<em>that is, assuming the DoD was solely responsible for this function, which it’s not</em>].</p><p>In essence, each and every one of the 7.9+ billion people living on Earth today (and those dead people who either didn’t get a chance to shut down their AWS instances or are executing a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Daemon_(novel_series)">Daemon</a>) can be thought of as directly or indirectly consuming some quanta of the world’s computational resources. The in/direct use of computational resources obviously varies by individual, with those swathes of people living without Internet and/or personal compute devices (PCs, smartphones, etc.) in/directly using the least and Always Online people (me, and anyone reading this) in/directly using the most. 
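The JEDI napkin math above is a straight pro-rata division; spelled out, using only the figures quoted in this paragraph:

```python
# Pro-rata attribution of the (now cancelled) $10bn, 10-year JEDI cloud
# contract across the US population, per the napkin math above.
jedi_contract_value = 10e9   # dollars over the contract term
contract_years = 10
us_population = 320e6        # "320+ million people in the US"

per_person_total = jedi_contract_value / us_population
per_person_per_year = per_person_total / contract_years

print(f"~${per_person_total:.0f} per American over {contract_years} years")  # ~$31
print(f"~${per_person_per_year:.1f} per American per year")                  # ~$3.1
```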
Despite the disparity of computational resource consumption [<em>i.e., the </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Digital_divide"><em>digital divide</em></a>] inherent in the unequal global socioeconomic realities that also underlie disparities in the consumption of more fundamental resources like food, water, and energy, I believe that thinking of overall compute demand as [7.9+ billion people] x [<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://whatisprogress.com/2016/01/19/measuring-the-digital-divide/">computation per capita</a>] (and similar analogs like “data storage per capita” and “bandwidth-use per capita”) is an effective and parsimonious framing.</p><p>The distribution of in/direct C/N/S resource consumption among individuals, although highly correlative with measures like income and wealth [<em>source = N/A; common sense, general vibes</em>], is probably much less so than that of other, more material resources. Whereas Jeff Bezos owns multiple homes and cars (therefore in/directly consuming quantities of steel, cement, wood, glass, petroleum-based products, etc.), he probably spends less time streaming Netflix, scrolling Facebook, or shopping on Amazon than you do. Of course that isn’t the final word on his in/direct usage of computational resources because everything from the computation involved in his private security detail to the computation and storage used in the various systems that manage his personal wealth [<em>e.g., If Jeff had a 5% LP stake in a computationally-intensive quant hedge fund then I’d attribute 5% of the fund’s yearly compute usage to him</em>] should be attributed to him as well — make no mistake, he <em>definitely</em> uses more computational resources than you do. 
However, I would posit that the ratio of his compute use to your compute use is smaller than the ratio of his wealth to your wealth — i.e., [Jeff’s in/direct compute consumption]/[Your in/direct CC] &lt; [Jeff’s wealth]/[Your wealth]. The difference in the in/direct consumption of C/N/S resources between Jeff Bezos binging 8 hours of Netflix and you binging 8 hours of Netflix is negligible and, given that people are granted only 24 hours in a day regardless of who they are, you can see why the global distribution of computation per capita is likely more egalitarian than global distributions of things like wealth or <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://data.worldbank.org/indicator/EN.ATM.CO2E.PC?most_recent_value_desc=true">CO2 emissions</a>.</p><p>This is how I justify reframing the question of forecasting changes in aggregate compute demand as a question about predicting how the daily life of average people [<em>and therefore the compute intensity of daily life for average people</em>] will change over time. Essentially what I’m saying is that if we decompose aggregate compute demand into ...</p><ol><li><p>~7.9+ bn individuals</p><ol><li><p>birth rate, death rate, fertility, etc.</p></li><li><p>increased human lifespan through advances in longevity</p></li><li><p>[residual factors]</p></li></ol></li><li><p>variability (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Statistical_dispersion">statistical dispersion</a>) of individual CPC</p><ol><li><p>remaining global penetration of PCs and smartphones</p></li><li><p>continued global penetration of internet access</p></li><li><p>variations in adoption of emerging technologies between income and age cohorts</p></li><li><p>[residual factors]</p></li></ol></li><li><p>average computation per capita (CPC)</p></li></ol><p>... 
then average CPC is the most interesting factor to analyze. Global population trends are extremely predictable and if we accept my provisional, minimally-viable argument that variability of individual CPC is negligible in the steady state [<em>i.e., The digital divide is very real, but what I’m saying is that individual demand for in/direct C/N/S consumption won’t vary tremendously by age/wealth/geography/etc. cohorts (relative to other measures of consumption) in the near future when literally everyone has internet access and connected devices</em>], then average in/direct consumption of computation becomes the leverage point for the entire equation. A [<em>rough, non-rigorous, non-MECE, and ontologically inconsistent</em>] decomposition of average CPC might look like this:</p><p><strong>average CPC</strong> = f( ...</p><ul><li><p>average indirect CPC</p><ul><li><p>compute intensity of manufacturing, utilities, healthcare, national defense, etc. (goods and services that are not easily, directly attributable to individuals)</p></li></ul></li><li><p>average direct CPC [<em>this subsection’s area of focus</em>]</p><ul><li><p>computational overhead common between different use cases [<em>i.e., all aforementioned use cases will require computation for de/encryption and AI/ML training/inferencing</em>]</p><ul><li><p>privacy-oriented cryptography</p></li><li><p>AI/ML</p></li></ul></li><li><p>key use cases</p><ul><li><p>personal finance (portfolio rebalancing via robo-advisors, budgeting, payments, etc.)</p></li><li><p>personal health (biometric tracking, consumer genomics, etc.)</p></li><li><p>“Metaverse” [<em>this will be our main focus for the remainder of this subprimer</em>]</p><ul><li><p>social and leisure (gaming, online dating, chatting, VR-based social media, VR experiences, etc.)</p></li><li><p>education</p><ul><li><p>rise in simulation-based learning [<em>my personal pet theory</em>]</p></li><li><p>persistence of distance-learning 
post-COVID</p></li></ul></li><li><p>work</p><ul><li><p>growth in proportion of global population engaged in knowledge work (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.forrester.com/blogs/the-global-information-worker-population-swells-to-1-25-billion-in-2018/">Forrester estimates</a> = 1.25bn as of 2018) ⇒ primarily driven by [<em>secular, monotonic</em>] automation of repetitive manual labor</p></li><li><p>“Creator Economy”, citizen developers, citizen data scientists, etc. ⇒ primarily driven by [<em>secular, monotonic</em>] automation of repetitive cognitive labor [<em>through RPA, low/no-code, OpenAI Codex, etc.</em>]</p></li></ul></li></ul></li></ul></li></ul><p>... )</p><p>[<strong><em>Sidenote</em></strong>: The distinction between direct and indirect compute consumption is a matter of degree. This is most evident in cases of consumer tech hardware that utilizes AI/ML to deliver services. For example, in the case of my Alexa, a certain quanta of compute can be attributed to ...</p><ol><li><p><em>the design and manufacture of the hardware</em></p></li><li><p><em>cloud-based inferencing (</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/"><em>via AWS’s Inferentia chips</em></a><em>) when I ask Alexa for the weather</em></p></li><li><p><em>training of Alexa’s </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.amazon.science/blog/amazon-scientists-applying-deep-neural-networks-to-custom-skills"><em>deep-learning</em></a><em> (i.e., neural network w/ 3+ layers) ML model</em></p></li></ol><p><em>... 
and it’s clear that the computational cost of inference is directly attributable to me but that the indirect computational cost of continually training the overall Alexa model can be attributed to me on some sort of pro rata basis as well.</em>]</p><p>I should note here that the entirety of the multi-stage decomposition I’ve just presented reflects my imposed framing (i.e., there exist multiple valid ways to decompose average CPC that aren’t by direct vs indirect compute use) and my own personal views on what constitute key compute use cases for individuals. Framing demand growth drivers as being fueled by key use cases is an analytic choice I’ve made because it’s the framing that I believe to be the most informative, but there are other ways to frame the problem space such as by industry as in this slide on interconnection bandwidth capacity ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8552aa5742f9183d2111b777bbc4a1e72d34a0ce7a36dcca71611d43b4068329.png" alt="From Credit Suisse: The Cloud Has Four Walls, 2021 Mid-year Outlook" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Credit Suisse: The Cloud Has Four Walls, 2021 Mid-year Outlook</figcaption></figure><p>... which decomposes interconnection bandwidth growth by existing industries — a similar analysis can be, and has definitely been, done on compute growth for it’s unimaginable that the hyperscale CSPs don’t internally conduct similar kinds of analyses that segment incremental cloud workload growth by industry. 
But, like I’ve said, I won’t be attempting anything of the sort.</p><p>Regarding computational overhead from privacy-oriented de/encryption and AI/ML training/inferencing, compute demand growth from the latter is more obvious and [rightly] expected to be more impactful than the former. My belief is that privacy and security concerns will only continue to rise as the Cloud manages larger and larger quantities of increasingly sensitive personal data and that the computational cost of de/encryption will be non-negligible ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/75d1fd878d928d5b2981e3a411ec585b2a57c181a0b2b5f79ffd7564c200d32f.png" alt="From Security Algorithms in Cloud Computing (2016) by Bhardwaj et al." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Security Algorithms in Cloud Computing (2016) by Bhardwaj et al.</figcaption></figure><p>... at the minimum. The potential for mass adoption of zero-knowledge proofs and time-lock encryption (esp. verifiable delay functions) by individuals and enterprises represents potential computational overhead beyond the minimum tablestakes level, but this is a rabbit hole unto itself. 
With respect to AI/ML-based computational overhead, Mule’s/FabricatedKnowledge’s <em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.fabricatedknowledge.com/p/gpt-3-and-the-writing-on-the-wall">GPT-3 and the Writing on the Wall</a></em> runs the gamut of what I’d cover outside of a deeper dive and this graphic of his ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5ffadb64394ebc52d28464b8cd688343b4210721dac14b6ab5f9294f7fa57b5e.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... is as succinct as it gets.</p><p>Cryptography [<em>once again, for privacy/security-related reasons; computational overhead for maintaining cryptonetworks through PoS, PoW, or other proof-based mechanisms is another issue entirely that I won’t get into here</em>] and AI/ML will serve as sources of computational overhead for nearly every use case. In fact, these two sources of overhead are already ubiquitous in many of the internet services we use today. Look no further than “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.google.com/">https://www.google.com/</a>”, an AI/ML-enabled search engine that you connect through via the HyperText Transfer Protocol Secure (HTTPS) where <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://davidtnaylor.com/CostOfTheS.pdf">the ‘S’ requires the computational cost of de/encryption</a>. 
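To make the computational cost of the ‘S’ concrete, here is a rough, stdlib-only sketch of the kind of per-byte overhead cryptographic work adds. SHA-256 stands in for generic cryptographic work (actual TLS uses symmetric ciphers plus handshake crypto, which this does not measure), and a plain buffer copy serves as the no-crypto baseline:

```python
# Rough illustration of per-byte cryptographic overhead: hashing a buffer
# vs merely copying it. Not a rigorous benchmark; numbers vary by machine.
import hashlib
import time

payload = b"x" * (16 * 1024 * 1024)  # 16 MiB of data

start = time.perf_counter()
_ = bytearray(payload)               # baseline: copy the buffer, no crypto
copy_s = time.perf_counter() - start

start = time.perf_counter()
_ = hashlib.sha256(payload).digest() # one cryptographic pass over the same bytes
hash_s = time.perf_counter() - start

print(f"copy: {copy_s * 1e3:.2f} ms, sha256: {hash_s * 1e3:.2f} ms")
print(hash_s > copy_s)  # typically True: the cryptographic pass costs measurably more
```

The gap between the two timings is the “cost of the S” in miniature; scale it across every request a hyperscaler serves and the overhead stops being a rounding error.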
More pertinent to the Cloud is the fact that <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.aws.amazon.com/whitepapers/latest/logical-separation/encrypting-data-at-rest-and--in-transit.html">each</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.microsoft.com/en-us/azure/security/fundamentals/double-encryption">major</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cloud.google.com/security/encryption-in-transit">CSP</a> offers encryption both “at rest” and “in transit” in accordance with Zero Trust Architecture (ZTA) principles that are increasingly relevant in a world where the surface area for cybersecurity attacks has exponentiated and cybersecurity has become an area of concern on the federal level.</p><p>This recent (Jan 26, 2022) <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf">strategy memorandum</a> from the U.S.’s Office of Management and Budget, in response to Biden’s May ‘21 Executive Order titled ‘<em>Improving the Nation’s Cybersecurity’</em> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity">EO 14028</a>) which “required agencies to develop their own plans for implementing zero trust architecture”, clearly outlines those aspects of ZTA-oriented design that will become standards [<em>de jure for gov’t agencies and therefore de jure for CSPs looking to earn gov’t contracts</em>] sooner rather than later.</p><p>The privacy-based de/encryption and AI/ML-based requirements (and thus, associated computational cost) for personal finance and personal health applications are a function of these two use cases requiring particularly sensitive personal data and also 
being greatly reliant on ML models to be effective. A potential future where my health insurance premiums are partially determined through analyses of my biometric data streams [<em>something already possible at a technical level but still outside of the acceptable range of the Overton window on the societal level</em>] is a future that will necessitate, whether by law or by consumers, lots of redundant privacy measures powered by cryptography (‘<em>Chapter 4: Contactless Love</em>’ in <strong>Kai-Fu Lee’s</strong> <em>AI 2041</em> is a compelling sci-fi illustration of this idea). The promise of health start-ups like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.beondeck.com/case-studies/rootine">Rootine</a> [<em>if you’ve ridden the MTA in the past year you’ve seen their ads</em>] and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://techcrunch.com/2021/05/04/personalized-nutrition-startup-zoe-closes-out-series-b-at-53m/">ZOE</a> is the potential for pooling biometric and genetic data in order to derive insights for better health results. Here are two key excerpts from a highly recommended <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hbr.org/podcast/2022/02/the-future-of-healthcare-personalization-and-ai-with-zoes-jonathan-wolf">recent interview</a> between Azeem Azhar and ZOE’s CEO and co-founder, Jonathan Wolf, that give an idea as to the potential compute and data intensity of the industry:</p><blockquote><p><strong>JONATHAN WOLF</strong>: Today the product is live in the US and we’ll be launching in the UK shortly. What we do is we take this largest nutrition science study in the world and then we allow you to do a very simple at-home test and then we can compare your data using machine learning with the data from all of those studies. 
What that means is that it starts with a box that arrives at home and you unwrap and an app that takes you through a process that very easily explains how you can send us a microbiome sample, so that is a sample of your poop. How we can measure your blood sugar. So there’s a blood sugar sensor called a continuous glucose monitor they can put on your arm with a standardized meal that allows us to understand exactly how you respond to a meal with sugar and fat in it. And we also do a blood test to understand what’s happening to your fat, so your lipids. The fat is just as important as sugar in terms of what’s going on. We take all of that data and we also get you to track what you’re eating for a few days and give us some more context about your health. All of that information, we can then compare with thousands of people who took part in these weeks of clinical study in hospital and everybody else who’s been participating in ZOE since.</p></blockquote><blockquote><p><strong>JONATHAN WOLF</strong>: The challenge to understand is that most science studies, particularly around nutrition, are very small. So whenever you’ve seen something on the front page about a particular food being a superfood or liable to kill you or give your cancer, most of those studies have involved 20 or 30 people. That means that the accuracy of that data is very low because there’s just not enough information, particularly given this huge personal variation that’s going on. Now, the reason why that’s happening is not because those scientists are bad scientists. It’s because the amount of funding that you can get to support a nutrition science study is very small. I won’t go into all the boring details, but the net result is that you end up funding maybe 30 tiny studies rather than one large scale study that can follow this over enough time to really get useful data. 
And so what is exciting, I think, for many of the scientists working with us is that they’ve been able to participate in what is the largest nutrition science study in the world that therefore gives you that depth of data that allows them to answer many, many questions, often questions we hadn’t even thought of at the point that we started the study because it’s got that scale of data.</p></blockquote><p>It should surprise no one that all of Big Tech wants to get into personalized healthcare and has presented explicit strategies targeting this emerging industry (despite the continued reluctance of traditional healthcare companies to dive into public clouds for various reasons).</p><p>Moreover, the recent successes of AlphaFold (extremely computationally intensive to train) in predicting protein structure from amino acid sequences (themselves encoded by the nucleotide alphabets: ATCG for DNA, AUCG for RNA) are catalyzing a wave of exploration into the potential of computational bioinformatics. Large-scale panel (across both # of subjects and time) analyses of combined biometric and genetic data and the resultant health and phenotypic outcomes will require lots of compute from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4623434/">privacy-preservation</a> methods (incl. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.computer.org/csdl/journal/ec/2021/03/08972608/1gXC2Rbk08w">differential privacy methods</a>) and model training/inference. 
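</p><p>For a flavor of what that privacy-preservation compute looks like, here’s a minimal sketch of the Laplace mechanism [<em>one of the canonical differential privacy methods; the clipping bounds, ε, and the glucose-style readings are all hypothetical illustration on my part, not anything from the linked papers</em>]: every aggregate query pays a clipping-and-noise cost so that no individual’s reading is recoverable from the released statistic:</p>

```python
import math
import random

def dp_mean(values, lo, hi, epsilon):
    """epsilon-differentially-private estimate of the mean of `values`.

    Values are clipped to [lo, hi], so the sensitivity of the mean is
    (hi - lo) / n, and Laplace noise of scale sensitivity / epsilon
    suffices for epsilon-DP (the Laplace mechanism).
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / n
    scale = sensitivity / epsilon
    # Draw Laplace(0, scale) noise via inverse-CDF sampling
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return sum(clipped) / n + noise

random.seed(0)  # deterministic for the example
readings = [92, 105, 88, 130, 99, 101, 95, 110]  # hypothetical glucose readings
print(dp_mean(readings, lo=50, hi=200, epsilon=1.0))
```

<p>Smaller ε means stronger privacy but noisier answers, which is exactly why pooled-biometrics platforms want enormous cohorts: the noise needed per query shrinks as n grows, while the compute bill for clipping, noising, and re-running queries does not. </p><p>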
To be clear, I consider myself uninformed in all things bio/health-tech [<em>my expertise begins and ends with running agarose-based gel electrophoresis experiments on DNA in high school</em>] so these bits I’m presenting are primarily invitations to DYDD, but what’s clear is that progress in this space will require lots and lots of computation.</p><p>With respect to personal finance, the crux of my argument here is that “everyone” (in the “everyone” has an iPhone sense of the word) will use some level of automated ML-based [<em>and therefore computationally-intensive</em>] financial asset management software akin to what existing robo-advisors already offer for the six-figure cohort and AM platforms like BlackRock offer for the 7+ figure cohort. What’s interesting about BlackRock is that [<em>based on some limited conversations</em>] they’ve been working on integrating quantified behavior and risk profiles in their wealth management business beyond just “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://static.vgcontent.info/crp/intl/avw/mexico/documents/guide-to-proactive-behavioral-coaching.pdf">behavioral</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.blackrock.com/us/financial-professionals/practice/behavioral-finance">coaching</a>” for clients, indicative of what PWM might evolve towards in the future, but I don’t know enough about ongoing developments to say more. 
What I can say, however, is that things like AI/ML-powered automated portfolio rebalancing sound more complicated than they actually are and are extremely low-hanging fruit, hence why I (someone who doesn’t identify as a “developer” or “coder” and <em>definitely</em> not a quant) was able to create a minimally-viable <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/dcfstate/quantamental/blob/main/rebalancing.ipynb">portfolio rebalancing algorithm</a> using:</p><ol><li><p>an LSTM module</p></li><li><p>an open-source rebalancing algorithm (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/mayabenowitz/Hedgecraft">Hedgecraft</a>)</p></li><li><p>Alpaca’s APIs for financial data and trade execution</p><p>[<strong><em>Note</em></strong>: <em>Alpaca changed their API sometime last year and I haven’t updated the code to reflect their new API endpoints. Regardless, I’m not responsible for anything you do with my code nor does anything I’m presenting anywhere constitute financial advice or a solicitation for a financial product.</em>]</p></li></ol><p>My algorithm is [<em>or rather, was, before Alpaca changed their API endpoint</em>] able to automatically rebalance my personal equities portfolio via an API-first brokerage (Alpaca) using a risk-minimizing rebalancing algorithm plugged into a basic LSTM module I stole and, if I knew how to deal with .yaml files, could have been made into an automated, pre-scheduled serverless task. In any case, it sucked for various reasons, not least because it was just too simplistic (simple ARIMA-based models often do a better job than LSTM-based models when using only historical price information), but primarily because <em>I’m not a quant</em>. 
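</p><p>For the curious, the mechanical core of rebalancing really is that simple. Here’s a minimal sketch [<em>my own toy illustration, not the Hedgecraft/LSTM pipeline from the notebook above; the tickers, prices, and weights are hypothetical, and none of this is financial advice</em>] that computes the share deltas needed to pull a drifted portfolio back to target weights:</p>

```python
def rebalance_orders(positions, prices, targets, cash=0.0):
    """Compute share deltas that move a portfolio back to target weights.

    positions: {ticker: shares held}, prices: {ticker: last price},
    targets:   {ticker: desired weight} (weights must sum to 1).
    Returns {ticker: shares to buy (+) or sell (-)}.
    """
    assert abs(sum(targets.values()) - 1.0) < 1e-9, "target weights must sum to 1"
    # Net asset value: cash plus the market value of every current position
    nav = cash + sum(positions.get(t, 0) * prices[t] for t in prices)
    orders = {}
    for ticker, weight in targets.items():
        target_shares = (nav * weight) / prices[ticker]
        delta = target_shares - positions.get(ticker, 0)
        if abs(delta) > 1e-6:  # skip dust-sized trades
            orders[ticker] = round(delta, 4)
    return orders

# Hypothetical two-fund portfolio that has drifted away from a 60/40 split
positions = {"VTI": 10, "BND": 20}
prices = {"VTI": 220.0, "BND": 75.0}
print(rebalance_orders(positions, prices, {"VTI": 0.6, "BND": 0.4}))
```

<p>A production version layers on forecasting (the LSTM/ARIMA part), lot sizes, transaction costs, and tax awareness, but the rebalancing step itself is exactly this arithmetic, which is why I expect it to get commoditized. </p><p>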
But the point I’m trying to make is that I (not a quant or coder) was able to cobble together something that worked and literally anyone [<em>who can pass KYC for Alpaca’s brokerage</em>] can use this — ostensibly, better renditions from more capable programmers will commoditize automated portfolio rebalancing, making this type of personal finance product as common as Robinhood is now.</p><p>What I really want to talk about, and what I’ve been muddling through these various tangents and preparatory remarks to get to, is the open question of the computational requirements for the emerging digital media landscape that we’ve christened as the “Metaverse.”</p><p><strong>Matthew Ball</strong> opines in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.matthewball.vc/all/computemetaverse"><em>Compute and the Metaverse</em></a> that ...</p><blockquote><p>In totality, the Metaverse will have the greatest ongoing computational requirements in human history. And compute is, and is likely to remain, incredibly scarce.</p></blockquote><p>... and, depending on how you define the M-word, there’s reason to believe this may prove true. There’s the question of what proportion of Metaverse-induced computation demand will, in the steady state, take place in centralized cloud servers versus on-device (i.e., GPU-based processors integrated into the VR headset) versus via decentralized, local edge servers, and the various potential combinations in between. 
While, due to many of the reasons mentioned in Matthew’s article under ‘<em>Where to Locate and Build up Compute</em>’, I can’t imagine a centralized cloud gaming model for VR experiences [<em>in which VR headsets primarily function as an interface and offload the majority of computation for rendering to hyperscale cloud servers</em>], I can imagine use cases for massively multiplayer real-time experiences for people locally proximate to “edge” [<em>that is, “edge” relative to larger, centralized data centers</em>] locations — i.e., MMO cloud gaming is more tenable if the client-side users are all using their interface device in Lower Manhattan and the edge datacenter is there as well instead of, say, Northern Virginia.</p><p>However, the thesis that the Metaverse will require enormous amounts of cloud-based compute isn’t contingent on graphics rendering being offloaded onto cloud/edge servers. The graphics rendering for Facebook’s Oculus standalone models (Quest, Quest 2, and Go) is done on-device and yet the company is planning on doubling the number of buildings that it operates for internal use. From an article titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://datacenterfrontier.com/facebook-has-47-data-centers-under-construction/"><em>Facebook Has 47 Data Centers Under Construction</em></a> from <strong>Data Center Frontier</strong>:</p><blockquote><p>“As I’m writing this, we have 48 active buildings and another 47 buildings under construction,” said Tom Furlong, President of Infrastructure, Data Centers at Meta (formerly Facebook). “So we’re going to have more than 70 buildings in the near future.”</p></blockquote><p>The article’s title is wrong — the “47 buildings under construction” are not all data centers in the same way the “48 active buildings” were not all data centers. 
Facebook’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://datacenters.fb.com/">Data Center Map</a> lists 46 (not 48) buildings, only 18 of which are data centers with the rest comprising infrastructure to manage in/outflows of energy and water. If I had to bet, I’d guess that data centers comprise a higher proportion of the 47 buildings under construction as compared to the 18/46 proportion Facebook currently operates because they’re probably planning on leveraging existing energy/water infrastructure for their new data centers.</p><p>[<strong>Sidenote</strong>: <em>For context, I wrote this subsection in the days leading up to and following FB’s disastrous Q4’21 print.</em>]</p><p>From what I’ve seen, the attention of the analyst community underindexes Facebook’s Metaverse investments into DC infrastructure relative to Oculus-related R&amp;D investments but the trajectory of their CapEx, which is largely driven by additional investments in their &quot;data center capacity, servers, network infrastructure, and office facilities”, indicates an internal expectation that the computational intensity of providing their services will increase. Given that they’ve reached the asymptote of their global user penetration, the only explanation for continued increases in CapEx is that they expect usage on a per user basis to increase, whether via more apps and/or more usage per app [<em>FB’s explanation is more AI/ML workloads but I doubt they’re expecting the training data to only be from their Family of Apps</em>]. 
While their stated position is that CapEx growth has not been driven by expected capacity needs from their VR services ...</p><p>From <strong><em>FB Q4’21 Earnings Call</em></strong>:</p><blockquote><p>While our reality labs products and services may require more infrastructure capacity in the future, they do not require substantial capacity today and, as a result, are not a significant driver of 2022 capital expenditures.</p></blockquote><p>... their entire Metaverse, “next computing platform” strategy depends on mass adoption, which implies mass data collection, which implies lots and lots and lots of data centers due to the drastically higher amount of C/N/S (compute/networking/storage) capacity relative to regular ol’ Facebook/Instagram/Whatsapp on a per user basis. Given the year-long lead times for adding DC capacity from a combination of supply chain constraints and increased demand for all things DC (servers, chips, networking gear, etc.), FB is stuck in the unfortunate position of having to make a bet on expanding capacity before being able to better gauge whether the inflection point of adoption will take hold for their Oculus hardware, and doubly so given the lack of information about consumer uptake of VR hardware and competition from Sony and Apple.</p><p>It makes sense then why <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://press.aboutamazon.com/news-releases/news-release-details/meta-selects-aws-key-long-term-strategic-cloud-provider">Facebook would partner with AWS</a> (announced Dec ’21 during AWS re:Invent) to keep open the potential for offloading their Reality Labs workloads in the case of insufficient internal capacity. 
To be clear, the AWS press release doesn’t say anything about offloading cloud workloads onto AWS from Facebook’s existing lines of business, only that FB will keep AWS-based workloads on AWS in the case of acquisitions already on AWS ...</p><blockquote><p>Meta will run third-party collaborations in AWS and use the cloud to support acquisitions of companies that are already powered by AWS. It will also use AWS’s compute services to accelerate artificial intelligence (AI) research and development for its Meta AI group.</p></blockquote><p>... but my impression, reading between the lines, is that FB is backed into a corner and is ...</p><ol><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bloomberg.com/news/articles/2022-02-02/what-is-the-metaverse-meta-s-fb-zuckerberg-brings-a-virtual-frontier-to-dc">hoping to assuage Washington</a> before pursuing tack-on VR acquisitions in order to ...</p></li><li><p>... spur the adoption of their VR/Metaverse platform ...</p></li><li><p>... while hedging the risk of not having sufficient DC capacity by forming a “long-term strategic partnership” with AWS ...</p></li></ol><p>... which stands as the hyperscale CSP that is <em>least</em> at odds with them [<em>FB &amp; MSFT competing on Metaverse; FB &amp; GOOG competing on ads and AI/ML frameworks; FB &amp; AMZN compete on ads too but AMZN is nonetheless the lesser of three evils for FB</em>]. But enough about Facebook.</p><p>[<strong><em>Note</em></strong>: <em>See this Notion block for “A sidebar rant on Facebook, Metaverse compute infrastructure, and framing potential CapEx requirements” that I can’t collapse/hide in Mirror</em>]</p><p>What AWS has been doing with gaming is quite instructive for how the Metaverse might potentially, <em>actually</em> operate in a Universe in which the M-word is (eventually? inevitably?) 
brought into reality, a prospect that everyone seems to have an opinion on but few take to its logical conclusion with respect to underlying [<em>hardware-based</em>] requirements. Their most recent re:Invent revealed implementation details of how a Metaverse on the Cloud [<em>the only place with sufficient capacity for it to run at scale</em>] might be designed to handle participants at scale, but it should be noted that the company’s competence in multiplayer experiences has been many years in the making — Minecraft, before being <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2020/07/20/microsoft-minecraft-mojang-abandon-aws-for-azure.html">transitioned onto Azure in 2020</a> post-acquisition by Microsoft, ran on AWS from 2014; Roblox has been running on AWS <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://techcrunch.com/2020/10/09/how-roblox-completely-transformed-its-tech-stack/">since 2017</a>; Fortnite has run completely <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/solutions/case-studies/innovators/epic-games/">on AWS</a> since 2018; Figma (and therefore FigJam) <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://static.figma.com/uploads/9a5a711d808ae1157219e59777669aa57182f23f">uses AWS</a>; League of Legends runs <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/gaming/reinvent-2017-slides/">on AWS</a> — so they’ve had time to iterate and learn from operating multiplayer experiences at scale prior to their release of their first, internally produced, MMO game title, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=5kGcrtkWIgM"><em>New World</em></a>.</p><figure float="none" data-type="figure" 
class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8b877b29e67f071189eea94415b738cc66c17abe3351e88d6e2cea9b6eae2245.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>What Werner is talking about when he makes the distinction between the “Old World’s” scale up and the “New World’s” [<em>he’s referring to both the game title and a new philosophy/architecture here</em>] scale out is in reference to the actual, physical servers that mediate multiplayer online experiences — whereas your client (i.e., your PC) had to connect to different physical servers that were dedicated to a particular “town” or “zone” in traditional MMOs (which is why you get loading screens when you teleport from Town A to Town B), <em>New World</em> treats the entire world as a unified space. 
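</p><p>A toy sketch of the scale-out idea [<em>my own simplification; Amazon hasn’t published New World’s actual sharding scheme at this level of detail</em>]: carve the world into fixed-size blocks, statically assign blocks to server instances, and route every player or entity to the instance that owns its block. The hard part, elided here, is that adjacent blocks living on different instances must continuously exchange boundary state so players never see a seam:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    bx: int
    by: int

def owning_block(x: float, y: float, block_size: float) -> Block:
    """Map a world-space coordinate to the block that simulates it."""
    return Block(int(x // block_size), int(y // block_size))

def owning_instance(block: Block, n_instances: int) -> int:
    """Statically shard blocks across a fixed pool of server instances."""
    return (block.bx * 31 + block.by) % n_instances

# A 2 km x 2 km world carved into 500 m blocks, simulated by 4 instances
BLOCK_SIZE = 500.0
pos = (1340.0, 220.0)
blk = owning_block(*pos, BLOCK_SIZE)
srv = owning_instance(blk, 4)
print(f"position {pos} lives in block ({blk.bx}, {blk.by}) on instance {srv}")
```

<p>Walk east past x = 1500 and ownership hands off to a neighboring block, potentially on a different instance, with no loading screen; that silent handoff, rather than the routing arithmetic, is where the engineering (and compute) actually goes. </p><p>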
The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.matthewball.vc/all/fortnitetravisscott">2020 Travis Scott Fortnite concert</a> [<em>hosted on AWS</em>], which claimed an attendance of 12.3 million live viewers, did not have 12.3 million people interacting in the same, synchronous virtual world but rather split up these millions of people into 50-person groups that primarily corresponded to user location [<em>i.e., Epic Games’ player-matching engine prefers to match players within the same geographic area to minimize cross-latency</em>].</p><p>From Wired: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.wired.com/story/epic-games-qa/"><em>It&apos;s a Short Hop From Fortnite to a New AI Best Friend</em></a> (2019)</p><blockquote><p><strong>Tim Sweeney</strong>: It makes me wonder where the future evolutions of these types of games will go that we can&apos;t possibly build today. Our peak is 10.7 million players in <em>Fortnite</em> — but that&apos;s 100,000 hundred-player sessions. Can we eventually put them all together in this shared world? And what would that experience look like? There are whole new genres that cannot even be invented yet because of the ever upward trend of technology.</p></blockquote><p>Absent pesky things like hardware constraints and the laws of physics [<em>ugh, SO annoying. who agrees??</em>], the Platonic ideal of the Metaverse approximates something like 8 billion people [<em>a good proportion of which would normally be sleeping</em>] concurrently in a synchronous VR space just, like, 🌊<em>vibing</em>🌊 out, man. Pretty sure there’s a Buddhist Sutra like this, minus the VR headsets. 
In any case, <em>New World</em>’s accommodation capacity of 2,500 players per “world” is the closest thing we have to this Platonic ideal so far.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3221990165f3abcea02dc1b987a7b410e78aee852b6a9a01dac86587e2a5545d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Here’s a simplified breakdown of <em>New World’s</em> ontology:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.newworld.com/en-us/support/server-status">Five regions</a> for <em>New World</em> (“US WEST”, “US EAST”, “SA EAST”, “EU CENTRAL”, “AP SOUTHEAST”), with each region responsible for several of the 100+ “worlds” [<em>See </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nwdb.info/server-status"><em>this website</em></a><em> for live stats on New World server status and player count</em>]</p></li><li><p>100+ “worlds” [<em>variable, depending on overall player count within a geographic region</em>], representing synchronous, persistent virtual worlds; the number of “worlds” reached ~500 at one point post-launch, but as user count declined after the initial hype, worlds were “merged” together</p></li><li><p>14 “blocks”, corresponding to 7 synchronized EC2 instances (2 blocks per EC2 instance), per “world”</p></li><li><p>~2,500 players, ~7,000 A.I. 
entities, and X00,000s of objects per “world” [<em>note that it looks like per-server player capacity was </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.xfire.com/new-worlds-2000-player-server-cap-has-taken-the-fans-by-surprise/"><em>reduced to 2,000</em></a><em> in Sep ’21</em>]</p></li></ul><p>All the interactions between thousands of players, entities, and objects in each of these hundreds of worlds [<em>to be clear, 2,000 is the current cap but a quick look at the </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nwdb.info/server-status"><em>New World Server Status</em></a><em> page, depending on when you check, shows servers don’t approach that capacity</em>] require a lot of compute and produce a lot of data.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a63c0a18b9398bb7f8bf81093a789bed91336f6f98b8d8d5f2bf8448dc9eef04.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>You can begin to see why Matthew Ball’s prediction that “<em>In totality, the Metaverse will have the greatest ongoing computational requirements in human history</em>” might end up being true. 
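</p><p>Some back-of-envelope arithmetic from the figures above [<em>naively assuming the 2,000-player cap and 7 EC2 instances per world, and deliberately ignoring utilization, instance sizing, and the fact that nobody expects 3 billion concurrent users</em>] makes the scale of the claim tangible:</p>

```python
players_per_world = 2_000    # New World's post-Sep-'21 per-world cap
instances_per_world = 7      # 14 blocks on 7 synchronized EC2 instances
players_per_instance = players_per_world / instances_per_world  # ~286

facebook_maus = 3_000_000_000
# Naive, illustrative extrapolation: if every Facebook MAU were concurrently
# in a New World-style synchronous space (an absurd upper bound):
worlds_needed = facebook_maus / players_per_world       # worlds required
instances_needed = worlds_needed * instances_per_world  # EC2 instances required

print(f"{players_per_instance:.0f} players/instance; "
      f"{worlds_needed:,.0f} worlds and {instances_needed:,.0f} instances "
      f"for {facebook_maus:,} concurrent users")
```

<p>Ten-million-plus always-on instances is an absurd upper bound, but even a few percent of it dwarfs the footprint needed to serve the same users a news feed, which is the CPC gap the next sentence is gesturing at. </p><p>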
Relative to the CPC (computation per capita) that Meta’s ~3 billion Facebook MAUs (monthly active users) require, the CPC that a <em>New World</em> player requires is orders of magnitude larger — how many more billions of dollars of cloud infrastructure would Meta require if those 3 billion Facebook users become Reality Labs [<em>or whatever they’re calling it</em>] users, thereby exponentiating both per user compute demand and per user data generation?</p><p>To be clear, my 2 cents is that Meta/Facebook won’t be able to monopolize the Metaverse. I think that consumer hardware as the point of integration [<em>i.e., hardware + platform + social graph + existing data, etc.</em>] to capture users won’t work for Facebook because, not only has FB lost the trust of, well, literally everybody, but there are credible contenders in the hardware space that will make AR/VR hardware into an oligopolistic industry before FB is able to capture a critical mass of users that catalyzes a sustainable ecosystem. Furthermore, the creation of digital assets, virtual worlds, and curated communities requires a massively decentralized effort but both creators and users [<em>a distinction that may become increasingly blurry over time as the barriers to entry of being a “Creator” are lowered; in my mind, everyone can become a Creator in the same way everyone became a photographer and blogger post-iPhone/IG/Twitter</em>] don’t like Facebook — they will most likely opt for credibly neutral, platform-agnostic, crypto-enabled ownership of digital assets/worlds that ensures fairer distribution of financial upside and stronger privacy protections, not only because that’s the economically rational thing to do, but because <em>people hate Facebook</em>. 
This was the same reason Facebook’s Libra/Diem never took off — the crypto community <em>hated</em> (still hates, but also hated) Facebook, which was obvious if you went to crypto conferences circa 2019 (and that’s on top of FB simply having no idea how to think about regulatory/legal structure, from what I’ve heard).</p><p>I have more thoughts on the value chain of the Metaverse from a modularity-interdependence, profit-pool perspective that warrants its own primer, so I’ll leave the rest of my thoughts about Facebook’s VR ambitions for later. To be clear, it just so happens that Amazon’s <em>New World</em> is currently the most elucidating case study for the ideas around Metaverse computing demand I’m trying to get across — my thesis around Metaverse compute demand isn’t dependent on any particular game or even the category of “video games” in the traditional sense of the word. The reason I said I agree with Ball’s claim on computational requirements “<em>depending on how you define the M-word</em>” is because the concept of the Metaverse holds the promise of catalyzing a convergence between the physical and digital worlds and redefining what constitutes a “game”.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/dc8fd3df70ab99571cef53e4bd029d3257a3c6107483dd25d8068a6963ee39a7.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>As of the time of my writing this sentence [<em>Feb 10, 2022</em>], both the AWS re:Invent breakout session and the interview with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.crunchbase.com/person/bill-vass">Bill Vass</a> (VP Engineering @AWS) are at sub-500 views on YouTube, which is part of the 
reason why I believe the infrastructure requirements of the Metaverse are currently being ignored. The world’s foremost Cloud infrastructure provider has revealed their thinking around one of the hottest ideas of our current zeitgeist and only 500 people are paying attention [<em>Bill Vass also touches upon the idea of NFTs, crypto, and open standards in the interview; you would think more people would be paying attention to Amazon’s opinion on these things</em>].</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/abbb9777d0055ee193678ee365c5859bf071b47898c286441b4bbe6acb0edac8.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The idea of the convergence of games, simulation, and reality has long been explored in science fiction, but AWS’s case study exposition during their Dec ’21 re:Invent “<em>From game worlds to real worlds</em>” session is, to my [<em>once again, limited</em>] knowledge, the first public demonstration of how this convergence would be achieved in real life. 
The key takeaway from this breakout session is that the underlying technology, infrastructure, and architectural design of massively multiplayer games can be applied to large-scale simulations — for a computer [<em>or, rather, a distributed collection of connected computers</em>], a workload utilizing a physics engine for <em>New World</em> is indistinguishable from a workload utilizing the same physics engine for modeling Earth’s environment.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7a78b7ae1095d6adab44d4fc4f22fae294b42a6ad5e1c0b9b810a3af7d2078e9.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>And, in fact, the distributed architecture that Werner referred to when presenting <em>New World</em> mirrors the distributed architecture that Wesley and Maurizio present as solutions to their respective simulation problems:</p><ul><li><p><strong>for Wesley</strong>: recovery and national resource allocation in a California earthquake scenario ⇒ [AWS + <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.here.com/">HERE</a>’s geospatial data + Unity game engine]</p></li><li><p><strong>for Maurizio</strong>: 1 million independent A.I. 
agents pathing through Melbourne and Denver ⇒ [AWS + <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cesium.com/">Cesium</a>’s photogrammetry data + Unreal Engine]</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5424cb8fba363016b1def580303c8ea9be0e1618b333816b85a050a52bb66509.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6d7316ef891eab5da9b5ba9d4e84d33332d4ed17a2800fe03688360838849bc9.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In both of these cases, as in the case of <em>New World</em>, the “world” is partitioned into blocks that are allocated across a distributed set of compute instances [<em>scale out</em>] with the effect of simulating a unified digital space. If/when the Metaverse becomes ubiquitous in 10-30(?) years, it will be this type of cloud-based scale-out architecture, most likely in conjunction with non-centralized edge computing devices, that enables expansive, near-synchronous AR/VR experiences for millions/billions of people — short of advances in silicon photonics or quantum computing [<em>at mass scale</em>], <em>this is the only feasible way for the Metaverse to manifest in reality</em>. 
To be clear, there will be interconnections <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/SubstrataVr/status/1490657885973782529">between different virtual worlds</a> and some of these worlds might even exhibit <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=kEB11PQ9Eo8">non-Euclidean logic</a>, [<em>see also: </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=ztsi0CLxmjw"><em>a non-Euclidean VR example</em></a><em>; more likely, 3D worlds that require meshing together blocks along the z-axis will become the norm before people start messing with non-Euclidean worlds</em>] but the point is that massively multi-player/agent [<em>agent can be human or A.I.</em>] shared digital spaces will require this kind of distributed, scale out architecture for reliable functioning.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d3599793ff3a18e0f69020bd990c38620d57d1c7cf97d93dad7880861b0f616f.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The near-term, logical conclusion of this computationally-enabled merging of virtual and physical realities is manifest in convergent visions of a digital twin of Earth. 
Microsoft’s “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://planetarycomputer.microsoft.com/">Planetary Computer</a>” initiative continues their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=5ljZlJX3Eq4">pre-existing efforts</a> to increasingly model and parameterize the Earth’s environment for developers using global environmental data and accessible APIs. “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://digital-strategy.ec.europa.eu/en/policies/destination-earth">Destination Earth</a>” is the name that the European Commission has given to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://techmonitor.ai/leadership/sustainability/destination-earth-eu-project-to-build-digital-twin-of-planet">its effort to build a digital twin of the planet</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.science.org/content/article/europe-building-digital-twin-earth-revolutionize-climate-forecasts">in order to run climate simulations</a>. 
Nvidia has a nearly identical, supercomputer-based simulation project [<em>at the level of press releases, that is</em>] which they’ve dubbed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blogs.nvidia.com/blog/2021/11/12/earth-2-supercomputer/">Earth-2</a>, announced the same month as the EU’s Destination Earth project.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f123cb2858d5720a3c4623c20925c7bdc66b9287f2b625ead0da62d5d06e8d12.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In what seems like a direct contradiction to the claim that I just made about shared digital spaces <em>requiring</em> distributed, scale out architecture, Jensen posits hosting millions of people inside a scaled up, cloud-native supercomputer. If Jensen is talking about multiplayer experiences with synchronous, mutually dependent interactivity [<em>i.e., If I kill you in Fortnite before you kill me, you can’t keep shooting me</em>], then synchronizing between millions of people is impossible given latency constraints. However, I think Jensen may be alluding to multiplayer experiences where <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.figma.com/blog/how-figmas-multiplayer-technology-works/">eventual consistency</a>, rather than synchronous mutual dependence, takes precedence — i.e., Millions of people each editing an Omniverse file that corresponds to millions of separate “plots” on the central Earth-2 simulation, with limited or batch-processed interactions between players and player created/edited entities.</p><p>Who knows? I certainly don’t. 
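</p><p>For a sense of what eventual consistency means in practice, here’s a minimal last-writer-wins merge per property, in the spirit of — but not identical to — what Figma describes [<em>the property names, timestamps, and tie-breaking rule are invented for illustration</em>]:</p>

```python
# Toy last-writer-wins merge for eventually consistent multiplayer
# editing (illustrative only; not Figma's actual algorithm). Each
# property maps to (value, timestamp, replica_id).

def merge(local: dict, remote: dict) -> dict:
    """For each property, the edit with the newer timestamp wins;
    ties break on replica id so every replica converges identically."""
    merged = dict(local)
    for key, (value, ts, replica) in remote.items():
        if key not in merged or (ts, replica) > (merged[key][1], merged[key][2]):
            merged[key] = (value, ts, replica)
    return merged

a = {"color": ("red", 1, "A")}
b = {"color": ("blue", 2, "B"), "size": (10, 1, "B")}
# Replicas converge to the same state regardless of merge order —
# no per-edit synchronization between players is required.
assert merge(a, b) == merge(b, a)
```
<p>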
What’s clear, however, is that computational demand begets more computational demand. Jensen’s Earth-2 is not just one model, but a multiplicity of models achieved through “<em>millions</em> of overlays and <em>millions</em> of alternative universes” built by both AI and humans. These millions of alternate universes will ostensibly be accessed by AR/VR headsets which themselves create <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.immersivecomputinglab.org/publication/instant-reality-gaze-contingent-perceptual-optimization-for-3d-virtual-reality-streaming/">new</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dl.acm.org/doi/10.1145/3332165.3347887">use cases</a> for compute workloads beyond [<em>hopefully encrypted, privacy-preserving, unattributable, and anonymized</em>] Cloud-based AI/ML pattern analysis of biometric/eye-tracking/facial data. The rise of Web2 and social media throughout the last decade was fueled by network effects, which means that compute demand begot more compute demand after a critical mass of users. Positive explanations for <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Jevons_paradox">Jevons paradox</a>, which has seen previous mentions in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.somic.org/2011/02/15/on-nuances-of-jevons-paradox-in-cloud-computing/">context of cloud computing</a>, have been generally unsatisfactory. I posit [<em>I’m sure this is not a new theory by any means. It’s just that, in my ignorance, I haven’t seen it clearly stated before</em>] that the overall rise in resource usage [<em>and therefore resource demand</em>] is driven by game theoretic, competitive logics. 
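</p><p>The rebound logic behind Jevons paradox can be put in back-of-envelope terms [<em>all numbers invented; a stylized elasticity model, not an empirical claim</em>]:</p>

```python
# Back-of-envelope Jevons paradox arithmetic (illustrative only).
# An efficiency gain g cuts the price of useful work by 1/g; if demand
# for that work has price elasticity e, demanded work scales by g**e,
# so resource consumption scales by g**e / g = g**(e - 1).

def consumption_multiplier(g: float, e: float) -> float:
    """How total resource use changes after an efficiency gain g."""
    return g ** (e - 1)

# Elasticity above 1 -> efficiency gains *increase* total consumption.
assert consumption_multiplier(2.0, 1.5) > 1.0   # rebound exceeds savings
assert consumption_multiplier(2.0, 0.5) < 1.0   # savings dominate
```
<p>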
My interpretation of the original coal use case that inspired Jevons’ observation is that the technological improvements which made coal use more efficient increased system-wide coal consumption across a range of industries, because exploiting those efficiency gains became a competitive imperative. Applied to cloud computing, its efficiency gains have ignited competitive pressures among companies to engage in cloud-based, compute-intensive digital transformation. Applied to the conception of the Metaverse as outlined here, the increasing digitalization of our lives may create positive feedback loops that catalyze step function increases in average computation per capita akin to what we’ve already experienced in the past two decades.</p><hr><h3 id="h-supply-capexcapacity-rules-everything-around-me-cream" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Supply: Capex/Capacity Rules Everything Around Me (C.R.E.A.M.)</h3><p><em>On capital commitment. Capacity expansion.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/abcaa998a047489fadcf4b1d43b53bc176a03a62d388142e4e2305eee2b4c540.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d8430d3b8c51f01fa0234e57fb1eb3574c8823a1d30f024c916c51df693ab5cc.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: 
null;"><img src="https://storage.googleapis.com/papyrus_images/c0e2f7571efd921483c5821ca1edc4b82eaa7043145efd1277d8fd6ee29b4c91.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ebd93bc42eedde2cbe0f8d33025ecb58ef102ff5d225a5a6c7970031cce1b95e.png" alt="From Credit Suisse: The Cloud Has Four Walls: 2021 Mid-year Outlook " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Credit Suisse: The Cloud Has Four Walls: 2021 Mid-year Outlook</figcaption></figure><p>There was a brief period towards the beginning of the last decade when the business community (after gifting AWS <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/f3NBQcAqyu4?t=2111">a healthy seven year head start</a>) finally realized the success of AWS’s business model and the list of potential entrants for the nascent cloud infrastructure industry included now-ignored names like Verizon, AT&amp;T, CenturyLink (now Lumen), HP, Rackspace, IBM, and Oracle, as well as Google and Microsoft — arguably the only successful market entrants. Rackspace was the first to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.networkworld.com/article/2461361/rackspace-bows-out-of-iaas-market.html">bow out</a> of the market in 2014, focusing instead on providing cloud management services for businesses making the transition to cloud. 
Verizon, AT&amp;T, and CenturyLink, despite their attempts, were clearly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.investors.com/news/technology/how-amazon-microsoft-crushed-verizon-att-in-the-cloud/">out of the race by 2016</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.sdxcentral.com/articles/news/verizon-and-att-exit-from-cloud-business-applauded-by-analysts/2017/05/">to the applause of many analysts</a>. HP <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://venturebeat.com/2015/10/21/hp-is-officially-shutting-down-its-helion-public-cloud-in-january-2016/">discontinued</a> their public cloud business around the same time. And it is well understood that the last two holdouts, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.protocol.com/enterprise/ibm-lost-public-cloud">IBM</a> and Oracle, despite kicking and screaming on their way out of the cloud infrastructure business (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.crn.com/news/cloud/larry-ellison-knocks-aws-over-outage-oracle-s-cloud-never-ever-goes-down-">mostly Oracle</a>), will never be competitive in the public cloud business at the hyperscale that the Big Three are.</p><p>Cloud infrastructure is a highly capital intensive industry meaning that “capacity decisions have long lead times and involve commitments of resources which may be large in relation to firms&apos; total capitalization” for companies seeking to be viable competitors in the industry. As Google has repeatedly demonstrated through its continued lossmaking in GCP, credible communication of intent to remain in the industry costs billions of dollars in CapEx and requires that investors be able to stomach prolonged periods of profit losses. 
Google’s opposite is Oracle [<em>a point highlighted by </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2018/google-cloud-changes-ceos-layers-of-surprise-or-not-the-vmware-analogy/"><em>Ben Thompson in 2018</em></a><em> after Thomas Kurian (an ex-Oracle exec) took the helm as Google Cloud’s CEO</em>], a company which has repeatedly exaggerated their dominance in the cloud but clearly lacks the requisite costly infrastructural capacity to back up their claims — Charles Fitzgerald puts it best in a 2016 blog post titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.platformonomics.com/2016/09/sorry-oracle-clouds-are-built-with-capex-not-hyperbole/"><em>Sorry Oracle, Clouds are Built with CAPEX, Not Hyperbole</em></a>.</p><p>Fitzgerald’s longstanding coverage of the Cloud through the lens of CapEx, especially his <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.platformonomics.com/tag/capex/"><em>Follow the Capex</em></a> series, has [<em>to my knowledge</em>] been the most consistent source for reminders on importance of “putting your money where your mouth is” in the cloud via simply spending billions on servers and datacenters.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c40057154947e1e8e2612c249f0803744bd81a48983429cd125b9a6f55301713.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>This chart on annual CapEx spend from a Jul ‘20 article by Charles is a good illustration of what Porter identifies as the “single most important concept in planning and executing offensive or defensive moves” — 
“<em>commitment</em>”:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/28d5e476fbdcd5449e55da95102a34b760d532b78b68cfec07bb6d46e42ca596.png" alt="From Platformonomics: Follow the CAPEX: Clown Watch" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Platformonomics: Follow the CAPEX: Clown Watch</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/903c58356b6814e6de907fc65a78b614a6660ec11fb48d536cc55f4b84379337.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Google’s ad-driven money printing machine has given the company the cash flow to plough back into GCP infrastructure and make credible commitments towards market entry that can be interpreted by competitors (mainly AWS) as “Give me some space or you’re going to hurt more from a price war than we are” and by existing/potential customers as “Look, we’re here to stay. Don’t believe me? 
Look at this pile of cash that we’ve been burning for years!” Relative to Google’s “Other Bets”, more conservative shareholders are comparatively ecstatic to see Google investing in their cloud segment given that, at the scale of their existing media and advertising businesses, many innovations end up getting passed down to their internal segments regardless of their ability to sell them as services — e.g., even if they never sell their internally developed VCU instances (hardware-based encoding for uploaded video data, primarily Youtube) to external customers, they operate at a scale that justifies the R&amp;D costs of the chip design. In other words, the cost of process/design/engineering improvements that Google invests in to provide their internal hyperscaled business lines can be amortized over a broader base if they’re able to capture market share in the public cloud and improve the operating leverage on their investments. The question of whether or not GCP has the ability to capture market share and achieve sufficient operating leverage to reach profit margins that start converging towards AWS’s 25-30% operating margins has remained the overarching concern for investors trying to underwrite the business.</p><p>Guidance from management ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/557f04c142856eb8bc2505cdd36db29cd7b4ef4700f1a07161e9d35d7353cedd.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... indicates that, despite negative but positively-trending operating margins, Google is continuing to optimize more for top line growth than for profit. 
From what I understand, the general street consensus is that there’s no reason (at least from a technological standpoint) why GCP shouldn’t be able to converge towards AWS/Azure level margins eventually; the only question is how long Google intends to keep spending to expand their footprint versus dialing back on growth CapEx to position their asset base towards profitable levels of utilization [<em>i.e., Continued growth-oriented CapEx spend ahead of expected infrastructure utilization rates continually depresses realized utilization rates</em>] and therefore profitability.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/modestproposal1/status/1489620276186947588">https://twitter.com/modestproposal1/status/1489620276186947588</a></p><p>Google’s continued willingness to operate this segment at a loss while still growing their capacity/infrastructure-oriented investment spend is a reflection of how large the company expects the market for IaaS to become by the back half of the growth curve for the market — when Google says they think we’re still in the “early innings” for cloud adoption/penetration, they’re really putting their money where their mouth is. 
Google’s ability to put chips on the table (and chips into data centers) has cemented their status as one of the three hyperscalers in the public cloud infrastructure industry’s triopoly (ex-China), an industry whose barriers to entry can be summarized in one graph:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/761918106c954c49039cf444d1859d4e3d067b9c8c7e4c428d0c031628fbc712.png" alt="From Platformonomics: Follow the CAPEX: Cloud Table Stakes 2020 Retrospective" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From Platformonomics: Follow the CAPEX: Cloud Table Stakes 2020 Retrospective</figcaption></figure><p>It is with this backdrop that we can begin a discussion about risks of industry-wide overcapacity, as outlined by Porter in <em>Competitive Strategy</em> and <em>Capacity Expansion in a Growing Oligopoly</em>, in this mutually dependent oligopoly — <strong><em>What’s to prevent an overcapacity situation in which “too many competitors add capacity” and “no firm is likely to escape the adverse consequences” among the hyperscaler CSPs</em></strong>?</p><p>It’s likely my research simply hasn’t been extensive enough, but I have yet to encounter anyone else investigating this question [<em>please do direct me if you’ve encountered something along these lines</em>] and it’s probably because:</p><ol><li><p>Case studies of industry capacity through a game theoretic lens aren’t in the public domain [<em>i.e., These types of case studies are either academically-oriented (rather than practitioner-oriented) or they exist behind sell-side research portals</em>]</p></li><li><p>Demand for Cloud infrastructure services [<em>and therefore those inputs that enable the construction of data centers</em>] has remained perennially 
strong for the last decade.</p></li></ol><p>The latter point only partly answers my question. In situations where an industry’s firms expect predictably high demand [<em>which has certainly been the case for cloud computing services</em>], economically rational decisions by individual firms to expand capacity and meet expected demand can risk a situation of broader industry overcapacity. However, in the case of the public cloud industry, the ability of firms to overbuild capacity in anticipation of demand has been limited by constraints in the supply of components, especially semiconductors, which have emerged as a particularly conspicuous bottleneck during the pandemic. The key thing to note is that hyperscalers have preferential treatment with chip suppliers ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b156540e7b6860d26e7300ddce7481b2e2d26883e9244a390129a752353b3465.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... so AWS/Azure/GCP will be the last to suffer from semiconductor-related supply constraints. Furthermore, hyperscalers won’t significantly over order (“double book”) beyond what can be consumed by customers because, yes, fabs are able to tell the difference between “real” orders vs double booked orders, but more importantly because the rate of improvement for semis meant a three to four year [<em>now five, after AWS’s recent server depreciation change</em>] depreciation schedule for servers. 
In other words, even if TSMC gave the green light and permitted Microsoft or Google to buy four years’ worth of chip supply in one year in a bid to catch up with AWS, Azure/GCP would find that the servers would remain underutilized if they’re unable to find sufficient customer demand, and the problem for them would be that they’d be unable to wait 10 years for demand to catch up to their capacity because the chips in their servers would be obsolete by then.</p><p>I also want to make a [<em>not so</em>] quick point here about server depreciation given the recent changes ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7c5c91b42df679047b20898ef3c6e3eba2cfa1f3105151717e16268a3b9961f3.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... in estimations of useful life for servers and networking equipment by AWS. The common sense explanation for continued extensions in estimated useful life of hardware is that the rate of innovation present during the early stages of cloud infrastructure expansion has moderated over time — this is not to say there isn’t still innovation [<em>there is, especially given the re-architecting necessary for proprietary chip designs, software-defined networking, data center disaggregation (i.e., DC as unit of compute), and mix shift towards HPC and GPU hardware for AI workloads</em>], but merely stating the obvious that AWS/Azure/GCP had more to figure out on a foundational level 5-10 years ago than they do now. 
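</p><p>For a sense of why these useful-life estimates matter financially, here’s the straight-line arithmetic [<em>the fleet cost below is a made-up figure for illustration, not a disclosed number</em>]:</p>

```python
# Straight-line depreciation: the effect of stretching server useful
# life from four to five years (illustrative figures only).

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Annual expense under straight-line depreciation."""
    return cost / useful_life_years

fleet_cost = 10_000_000_000            # hypothetical installed server base
old = annual_depreciation(fleet_cost, 4)   # expensed over 4 years
new = annual_depreciation(fleet_cost, 5)   # expensed over 5 years
# Extending useful life from 4 to 5 years lowers the annual expense
# (and lifts reported operating income) by $0.5B on this fleet.
assert old - new == 500_000_000
```
<p>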
A decade ago, the hyperscalers were still figuring out the optimal designs and configurations for their data center infrastructure and Moore’s Law, as it is traditionally known, was still humming along [<em>See this </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.fabricatedknowledge.com/p/heterogeneous-compute-the-paradigm"><em>great FabricatedKnowledge post</em></a><em> on “the spirit of Moore’s lives on”</em>] — as all the low hanging Pareto fruit got picked and chip node cycles lengthened as the fabs approached 5nm, it only makes sense for equipment useful life to extend.</p><p>But why do they replace old servers in the first place? Surely the servers don’t break down and become completely unusable after three years of use? This was the line of questioning that bothered me to no end, not least because search terms like “server depreciation why physics reason” or “why do servers depreciate” yielded links to accounting standards and financial minutiae that didn’t address the physics/engineering-based realities underlying the derivative accounting concerns. Why didn’t hyperscalers just continue using and operating hardware that was ostensibly perfectly functional, even after three to four years, and shift lower priority workloads to older, less performant servers? 
James Hamilton saves the day again.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/40aacbe966765d2e913e6954d9ccbf48d4588fdc185a46859ea744e3d3a6a7a8.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Not only does replacing servers increase speed, but new servers cost less in terms of power/cooling-related overhead because of efficiency gains from higher logic densities between transistors on server chips — if transistors are packed closer together then, all other things being equal, it takes less energy to move electrons on the chip, producing less excess heat for the same amount of computational work. As Hamilton demonstrates, there comes a point at which bringing in new, more efficient [<em>and performant</em>] servers is more economical than relying on old servers.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b1dea893cc883b6cd4eca797829434945f5bac8cd60d16330f4c7d3980285aa2.png" alt="[From TSMC slidedeck by way of ExtremeTech]: Note the “Speed Improvement at Same Power” and “Power Reduction at Same Speed” improvement categories." 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From TSMC slidedeck by way of ExtremeTech]: Note the “Speed Improvement at Same Power” and “Power Reduction at Same Speed” improvement categories.</figcaption></figure><p>What the continued increase in server/equipment useful life indicates is that the point at which it becomes economical to replace old servers with new ones is arriving less and less frequently, coincident with the diminishing of power efficiency gains from process node improvements [<em>i.e., second derivative of power efficiency improvements from 28nm→22nm→ ... →7nm→5nm is negative</em>]. Furthermore, continued moderation of DC-related depreciation expenses has room to run as hyperscalers pursue <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ieeexplore.ieee.org/document/7842314">disaggregated data center</a> architectures [<em>“Comparing to the monolithic server approach that data centres are being built now, in a disaggregated data centre, CPU, memory and storage are separate resource blades and they are interconnected via a network fabric”</em>], thereby enabling more granularity in equipment refreshment:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8f022026cfcf405e608efa008b6b3a98f83edb40de45adf516c0a90f300d0c38.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>That is, instead of having to replace an entire server blade (CPU + memory + fans + chillers + etc.) 
every four years, DC operators can, for example, choose to replace only the CPU after four years while keeping the fan for another three years of use.</p><p>One more tangent on what I originally intended to be a “quick point” on server depreciation, which is that while the hyperscalers don’t publicly break out what proportion of their DC CapEx is spent on Refreshment vs Expansion, I think it’s a useful frame for thinking about hyperscaler CapEx spend.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4646a719926f229f62d5db0aebfed5225ddb10ab16530b13f605b14c9855b74f.png" alt="From McKinsey: How high-tech suppliers are responding to the hyperscaler opportunity " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From McKinsey: How high-tech suppliers are responding to the hyperscaler opportunity</figcaption></figure><p>But back to capacity, what analysts have been concerned about recently [<em>and what I personally need to do more research on</em>] is the question of whether or not hyperscalers are subject to capacity constraints from global supply chain issues. 
Although buyers of server chips, especially those who end up supplying hyperscalers, get preferential treatment from fabs ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b539c3f084ceb416ba3ef2be2d365c9a34970ea6061ec00783f50a8cd67be162.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1fa2268e38264904b1daadebef837a2c21e3df16d1de552298e32df98553a003.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... there are still supply constraints that may even impact the hyperscalers.</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/7520495b0558cc96d06df169ea0392fe22d890596bfeb18e2d5d52434154f3b3.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Three Body: Competitive Dynamics in the Hyperscale Oligopoly]]></title>
            <link>https://paragraph.com/@0x125c/three-body-competitive-dynamics-in-the-hyperscale-oligopoly</link>
            <guid>UJV51qrmQTPbQLhhhHhm</guid>
            <pubDate>Tue, 01 Mar 2022 17:47:45 GMT</pubDate>
            <description><![CDATA[Part 4 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation: O...]]></description>
            <content:encoded><![CDATA[<p>Part 4 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><em>A Cloudy History: Four Histories of Cloud Computing</em></p></li><li><p><em>Primer on the Economics of Cloud Computing</em></p></li><li><p><strong><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></strong></p><ol><li><p><em>Initial Positions and Laws of [Competitive] Motion</em></p></li><li><p><em>Mass and the Law of [Economic] Gravitation</em></p></li><li><p><em>Velocity and the n-body problem</em></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><hr><h2 id="h-three-body-competitive-dynamics-in-the-hyperscale-oligopoly" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Three Body: Competitive Dynamics in the Hyperscale Oligopoly</h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/101b249dc601d2048fc4d54f6a3add7e764db81c1ddf55cb4fa104eea88b5c3a.png" alt="Algo-r(h)i(y)thms, 2018. Installation view at ON AIR, carte blanche exhibition to Tomás Saraceno, Palais de Tokyo, Paris, 2018." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Algo-r(h)i(y)thms, 2018. 
Installation view at ON AIR, carte blanche exhibition to Tomás Saraceno, Palais de Tokyo, Paris, 2018.</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9774988672f3b5220b5cfc3e5527daad4911c340bbd70f65005d91f6e12f98aa.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><blockquote><p>Civilization Number 184 was destroyed by the stacked gravitational attractions of a tri-solar syzygy. This civilization had advanced to the Scientific Revolution and the Industrial Revolution.</p></blockquote><p>In this civilization, Newton established nonrelativistic classical mechanics. At the same time, due to the invention of calculus and the Von Neumann architecture computer, the foundation was set for the quantitative mathematical analysis of the motion of three bodies.</p><p>After a long time, life and civilization will begin once more, and progress through the unpredictable world of <em>Three Body</em>.</p><p>We invite you to log on again.</p><p>— <strong>Cixin Liu</strong>, <em>The Three Body Problem</em></p><blockquote><p>“Three may keep a secret, if two of them are dead.” ― <strong>Benjamin Franklin</strong>, <em>Poor Richard&apos;s Almanack</em></p></blockquote><hr><h2 id="h-complete-information-and-stable-configurations-in-the-three-body-problem" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Complete Information and Stable Configurations in the Three Body Problem</h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cloud.anylogic.com/model/f1999d97-8de2-4804-9940-5ae261d7ad86">https://cloud.anylogic.com/model/f1999d97-8de2-4804-9940-5ae261d7ad86</a></p><figure float="none" data-type="figure" 
class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d7ea45e96444f6d916d99fa9379333d178369e0cad929d3610d7a805c7256645.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The basic premise of Cixin Liu’s <em>The Three Body Problem</em> is that an intelligent and technologically superior alien civilization has discovered that Earth’s solar system is habitable for their species through radio-based communication with a lone, disillusioned astronomer at a Chinese astronomical observatory. The aliens are called Trisolarans because they live on a planet within a chaotic trinary star system called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Alpha_Centauri">Alpha Centauri</a> that is based on the <em>actual</em> but <em>non</em>-chaotic trinary star system by the same name. 
Trisolaris, the planet upon which the Trisolarans have lived for around two hundred “civilizations” worth of eras, is subject to violent fluctuations between extreme hot and cold climates due to the chaotic nature of the triple star system — the co-opting of Earth’s single star system from humans represents the Trisolarans’ only feasible chance of civilizational salvation through cosmic stability.</p><p>The three-body problem is a <em>problem</em> in that there’s no closed-form solution [<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Three-body_problem#General_solution">Wiki</a>: “meaning there is no general solution that can be expressed in terms of a finite number of standard mathematical operations”] and requires numerical computation in order to solve for the system’s state at time <em>T</em> — while <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://physics.stackexchange.com/questions/456808/how-do-computers-solve-the-three-body-problem">analytic approximations are possible sans sequential computation, errors compound</a> and a true solution requires sequential computation. Back on Earth, another three body system is entering a stable, non-chaotic configuration by virtue of each of these three bodies being able to respond to the other two bodies’ positioning in a way that dumb star mass is unable to.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f556f00ed588a1065b7212dc797ca753aa21b513b5d7f0f18326074558cb4c7c.png" alt="[From Goldman Sachs: Cloud Quarterly 1Q20]: This data isn’t up to date (Q1’20) and primarily presented to convey the idea of AWS/Azure/GCP as masses with velocity. 
Although revenue is more “flow” than “stock”, we can imagine a similar bubble chart being made with business enterprise value (or segment market cap through [segment revenue or earnings] x [comped multiples]) as “mass”." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From Goldman Sachs: Cloud Quarterly 1Q20]: This data isn’t up to date (Q1’20) and primarily presented to convey the idea of AWS/Azure/GCP as masses with velocity. Although revenue is more “flow” than “stock”, we can imagine a similar bubble chart being made with business enterprise value (or segment market cap through [segment revenue or earnings] x [comped multiples]) as “mass”.</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b62e4f8ca6113e6538961e412dc23cb18d22920bcb26cfe2697462a15ff37309.png" alt="[From Synergy]: A somewhat more recent (Q1’21) size v growth chart for reference. 
" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From Synergy]: A somewhat more recent (Q1’21) size v growth chart for reference.</figcaption></figure><p>While it’ll be a long, <em>long</em> time before the hyperscalers’ R&amp;D investments into chip design and TSMC’s investments into chip manufacturing can produce anything like a pair of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.quora.com/Is-the-sophon-or-an-unfolded-photon-possible">sophons</a> [<em>in The Three Body Problem sophons are a fictional pair of quantumly-entangled supercomputers made from dimensionally unfolding, etching circuits, and dimensionally refolding single protons</em>], the cloud hyperscalers have the next best thing — millions upon millions of globally-networked servers. Although the hyperscalers are unable to achieve the perfect information-gathering abilities of the Trisolarans’ sophons, for the purposes of analyzing their competitors’ strategic moves, each of the Big Three hyperscalers is more than capable.</p><p>The cloud computing industry is as close to a complete-information game as an industry can get in the real economy. As a result of the industry’s oligopolistic structure, common hiring practices, and the very nature of the industry’s business, <strong><em>there has never been an industry whose participants know as much about each other’s competitive positioning as in the hyperscale cloud industry</em></strong>. 
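As a quick aside on the earlier point that the three-body problem has no closed-form solution and must be solved by sequential computation, here is a minimal sketch of what that stepping actually looks like (G, masses, step size, and initial conditions are arbitrary toy values, not any particular star system):

```python
import numpy as np

# Toy numerical integration of the gravitational three-body problem.
# No closed-form solution exists, so the state at time T is reached by
# stepping the equations of motion forward sequentially.
G = 1.0
mass = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]])

def accelerations(pos):
    # Pairwise Newtonian gravity: a_i = sum_j G * m_j * r_ij / |r_ij|^3
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 1e-3
for _ in range(10_000):  # integrate to T = 10, one small step at a time
    vel = vel + accelerations(pos) * dt  # semi-implicit Euler step
    pos = pos + vel * dt
```

Per-step errors compound over the horizon, which is why long-dated predictions of a chaotic system degrade no matter how small the step.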
The industry’s oligopolistic structure and common hiring practices [<em>in addition to </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.hbs.edu/ris/Publication%20Files/20-063_97e5ef89-c027-4e95-a462-21238104e0c8.pdf"><em>tech clustering</em></a><em>, nearly two decades of competitive history, common suppliers for millions of servers, shared pool of customers looking to get the lowest price in negotiations, etc.</em>] mean that <strong><em>there are no true secrets among the Big Three hyperscalers</em></strong>.</p><p>Any entry-level FAANGM engineer is capable of finding ways to scrape information from LinkedIn [<em>alternative data providers offer this kind of data for hedge funds looking to gain insights, e.g., company growth and new strategic directions via novel role listings</em>] and systematically learn about their competitors’ hiring practices. I would bet that Microsoft had an internal dashboard of tech industry hiring stats up within a month of closing their 2016 acquisition of LinkedIn. 
This is unconfirmed conjecture, but <strong><em>you can virtually guarantee that each of the hyperscalers has entire teams dedicated to analyzing and interpreting the competitive moves of the other two</em></strong> — it doesn’t help that these three cloud giants all also compete in the search business, although in that industry Google is the dominant player.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b119c805d99fad5a2168777bde11188125972255ee3513f1c2da6a18adc9b3f6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Although Porter’s 1982 case study on the corn wet milling industry is nearly four decades old and concerns a highly dissimilar industry, stark parallels (oligopolistic industry structure, vendor lock-in concerns, commodified product, analysis of capacity buildout) between it and the modern cloud computing industry make the case extremely instructive for a competitive analysis of the hyperscale cloud oligopoly. If Porter’s expectation that “intelligent rivals will converge in their expectations about each others’ behavior” is true and we believe that the management teams of Amazon, Microsoft, and Google constitute “intelligent rivals”, then it stands to reason that there exists at least <em>some</em> convergence in expectations of agent behavior among the hyperscale players. 
Put more plainly, it is <em>unimaginable</em> that the management teams of the hyperscalers aren’t <em>constantly</em> conducting game-theoretic strategic analysis on their co-oligopolists (and the broader competitive landscape) using the same tools (big data, analysis tools, AI/ML, hyperscale computation) that they’ve made billions selling to their customers — what else would they be doing?</p><p>This industry analysis emphasizes interpreting competitive actions and market signals within the Cloud industry through the internal perspective of the hyperscalers and analyzing potential industry shifts by recognizing the Big Three hyperscalers’ positions as the competitive landscape’s centers of gravity. The trio constitute three of the world’s five largest companies by market cap and operate in a near-complete information condition with respect to their cloud businesses.</p><p>A partial list of competitive information that each of the Big Three might or might not be collecting, on a scale from “table stakes information that is 100% being collected by all three” to “alternative information that would be trivial to collect”, includes the following:</p><ul><li><p><strong>price changes in competitors’ cloud offerings</strong>: You can be <strong><em>100% certain</em></strong> that each hyperscaler is continuously monitoring the pricing of service offerings of the other two.</p></li><li><p><strong>implied demand curves and implied price elasticity for cloud services</strong>: You can be <strong><em>100% certain</em></strong> that each hyperscaler has an algorithm to calculate demand curves and price elasticity for their cloud services, probably segmented by user type (enterprise vs SMB, geography, past usage, etc.).</p><ul><li><p>Microsoft’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/The-Economics-of-the-Cloud-da03f6f90c9e4e2da697c934656f4508"><em>The Economics of the Cloud</em></a> (2010), a decade+ old 
paper, mentions “price elasticity” twice; there’s no doubt that they’ve since refined and systematized their thinking, and no doubt that Amazon and Google have done the same.</p></li><li><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Electrocloud-e5425548e1ac4c98870b07022197ed34"><em>Electrocloud!</em></a> by Byrne Hobart:</p><blockquote><p>Take all the different mixes of general computing, specialized processes, storage at various latencies, memory, and the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/products/">more or less endless long tail of AWS products</a>, consider that every one of them has a demand curve and that Amazon sees all these demand curves, and then consider that Amazon has a better view into the <em>cost</em> of providing some mix of services than anyone else, and it&apos;s easy to see where AWS&apos;s economics come from.</p></blockquote></li></ul></li><li><p><strong>developer activity</strong>: That Microsoft is analyzing data from Github activity goes without saying. That Amazon and Google are monitoring developer activity, one way or another, should also go without saying. Alternative data platforms also <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/AznWeng/status/1386676592064307200">look at LinkedIn</a> for indications of demand for competing software/frameworks/etc. in job descriptions and employee CVs.</p></li><li><p><strong>patent and trademark filings</strong>: Over a year ago my non-programmer friend built a Python-based webscraper to continuously monitor patent and trademark filings from USPTO, which is now probably obsolete since <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://developer.uspto.gov/api-catalog">USPTO launched an API</a>. 
If this was low-hanging fruit back then, it’s practically on the floor now.</p></li><li><p><strong>developments within standards-setting bodies</strong>: Contributors to and participants of regional and international standards-setting bodies include employees of Big Tech companies.</p></li><li><p><strong>potential supply constraints</strong>: Amazon has the best visibility of any organization in the world regarding supply chain issues because of their sprawling retail and logistics business, but each of the three hyperscalers has a transparent view of potential supply constraints regarding data center equipment (semiconductors, HVAC, building materials, etc.) because their scale of operation provides them leverage and a large surface area for collecting information.</p></li><li><p><strong>planned and ongoing capacity buildout (datacenters, satellites, subsea cables, etc.) by competitors</strong>: All three players own and operate satellites and you can’t really hide a datacenter buildout. Even without satellite imagery, the list of ideal large-scale datacenter locations (proximity to populations of data demand, cool climate close to water sources, favorable government, access to existing cable infrastructure) is a primary focus, which means each player already has ongoing conversations with local governments and real estate developers in geographies of interest.</p></li><li><p><strong>planned product/service launches</strong>: Look-through varies depending on the type of product or service a competitor is looking to release. 
Some services require specialized expertise or hardware, and news about new key hires or orders for newly-designed hardware eventually spreads.</p></li><li><p><strong>information on competitors’ factories and subcontractors</strong>: Given the increasing emphasis on net-zero carbon pledges and ESG by both socially-conscious employees and investors, information about competitors’ material/hardware value chains is becoming increasingly important. Bad press about ESG-related issues means a higher cost of capital (as seen in financing difficulties for O&amp;G projects in 2021) but, more importantly, more reasons for prospective hires to work at your competitors instead of your company.</p></li><li><p><strong>employee sentiment</strong>: This information would involve sourcing employment information (e.g., LinkedIn, Twitter bios, Facebook data, etc.), linking identities to social media profiles, and aggregating profiles by company. If those social media profiles aren’t made private — e.g., if an AWS employee has a public Twitter account — the text data from, let’s say, “all Tweets from last month” can be analyzed for sentiment and keywords. Something like this is marginally useful but is trivial enough to construct using tools like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.maltego.com/">Maltego</a> that I’d be surprised if it <em>wasn’t</em> being done.</p></li><li><p><strong>live location of key executives</strong>: There’s at least one Discord server dedicated to tracking the private jet activity of notable tech entrepreneurs. Aircraft activity is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.osintessentials.com/aviation">essentially public information</a> that can be tracked by using trivially available means — once a specific aircraft has been identified to belong to a specific person or organization, that person or organization’s flight activity can be tracked. 
This type of information is less salient but <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/BezosJets">nonetheless real</a>. [<strong><em>Note</em></strong> <em>(02/11/22): I originally wrote this subsection in early January ‘22 prior to </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nbcnews.com/news/us-news/teen-rejects-elon-musks-5000-offer-shut-jet-tracker-rcna14256"><em>media coverage</em></a><em> about Elon Musk offering $5,000 to a teen tracking his PJ. Sans military craft and Air Force One, aircraft locations are </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.kaspersky.com/blog/tracking-airplanes-how-flightradar24-works/8389/"><em>essentially public information</em></a><em>. There have been dedicated, niche channels for tracking VIP aircraft for a while now, and this doesn’t even include the more sophisticated alt data services that hedge funds buy/build for tracking key figures.</em>]</p></li></ul><p>If hedge funds operating with <em>billions</em> in AUM can find financial justification for using <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=wUgqJTTmVZI">satellite imagery to estimate global oil tanker volumes using the size of shadows</a>, then imagine what kinds of information gathering <em>trillions</em> of dollars of market capitalization justify.</p><p>AWS, Azure, and Google Cloud are not three bodies of dumb mass whose trajectories devolve into a deterministically chaotic system defined by initial conditions. The three bodies of the hyperscale cloud oligopoly, though still subject to the capitalist analogue of Newton’s laws in the form of market competition, are masses that have the ability to engage in co-opetition and tacitly coordinate in order to minimize collisions. 
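As a sketch of the implied-elasticity bullet in the list above: given a history of price points and the resulting usage, a first-pass elasticity estimate is just the slope of a log-log fit. The constant-elasticity model is a standard simplification, and every number below is invented for illustration (real pipelines would segment by customer type as described above):

```python
import numpy as np

# First-pass implied price elasticity from observed (price, usage) pairs,
# e.g. across a history of price cuts for one instance type.
# Every number here is invented for illustration.
prices = np.array([0.10, 0.09, 0.08, 0.07, 0.06])      # $ per instance-hour
quantities = np.array([1.00, 1.18, 1.42, 1.75, 2.20])  # normalized demand

# Constant-elasticity model: Q = A * P**e, so log Q = log A + e * log P.
# The slope of the log-log least-squares fit is the elasticity e.
elasticity, _ = np.polyfit(np.log(prices), np.log(quantities), 1)
print(f"implied elasticity: {elasticity:.2f}")  # negative: demand rises as price falls
```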
As much as these three warring states in the so-called “Cloud Wars” are competing with each other to capture market share, they are also all co-oligopolists in an <em>expanding</em> industry in its “early innings” that is the beneficiary of multiple secular tailwinds.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/98ff1aa13a82f2a8f2dc50858147e058eb7fcb812d63b17bc8c17bdb01751f0c.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2aead5264d9b6e0755cf8ad90bab57f3a5896a0ab536eab55e0eb4b0fb6b9b41.gif" alt="[From VisualCapitalist]: A depiction of our solar system that’s similar to another so-called “helical” model of our solar system that has been contested (Slate: No Our Solar System is NOT a “Vortex”) — in any case, this model conveniently illustrates the idea of a system (three-body cloud oligopoly) within a system (broader tech ecosystem, broader economy, etc.) that I’m trying to convey." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From VisualCapitalist]: A depiction of our solar system that’s similar to another so-called “helical” model of our solar system that has been contested (Slate: No Our Solar System is NOT a “Vortex”) — in any case, this model conveniently illustrates the idea of a system (three-body cloud oligopoly) within a system (broader tech ecosystem, broader economy, etc.) 
that I’m trying to convey.</figcaption></figure><p>The hyperscale triopoly can therefore be conceptualized as a three body system that has achieved a stable configuration through tacit coordination, and the remainder of <em>Three Body</em> expands upon this conceptual metaphor in order to explore the Cloud’s competitive landscape. I map the five parameters necessary for initializing a three body system in the classical mechanics context onto five sets of economics and business/finance concepts ...</p><ul><li><p><strong>initial positions</strong> ⇒ <em>current strategic positioning of the hyperscalers</em></p></li><li><p><strong>Newton’s laws of motion</strong> ⇒ <em>a competitive “law of motion” in the form of Christensen’s law of conservation of attractive profits</em></p></li><li><p><strong>mass</strong> ⇒ <em>TAM and market cap</em></p></li><li><p><strong>Newton’s law of universal gravitation</strong> ⇒ <em>an economic “law of gravity” in the form of the law of supply and demand</em></p></li><li><p><strong>velocity</strong> ⇒ <em>continuing efforts by hyperscalers to integrate both forwards and backwards along the value chain</em></p></li></ul><p>... with the goal of solving for the subsequent motion, not of actual mass bodies, but of the various players within the Cloud’s ecosystem, with special attention paid to the triopoly at the center of it all.</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/6c721cef1735ba6bc769afbf3b6186429561619576d7b8c5f43c3d132716dd1c.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Primer on the Economics of Cloud Computing]]></title>
            <link>https://paragraph.com/@0x125c/primer-on-the-economics-of-cloud-computing</link>
            <guid>czR7A4qtKL5ZuwOg0Gy9</guid>
            <pubDate>Tue, 01 Mar 2022 17:47:19 GMT</pubDate>
            <description><![CDATA[Part 3 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation: O...]]></description>
            <content:encoded><![CDATA[<p>Part 3 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><em>A Cloudy History: Four Histories of Cloud Computing</em></p></li><li><p><strong><em>Primer on the Economics of Cloud Computing</em></strong></p></li><li><p><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></p><ol><li><p><em>Initial Positions and Laws of [Competitive] Motion</em></p></li><li><p><em>Mass and the Law of [Economic] Gravitation</em></p></li><li><p><em>Velocity and the n-body problem</em></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><p>Table of Contents for <strong><em>Primer on the Economics of Cloud Computing</em></strong>:</p><ul><li><p><em>The perfect business model doesn’t</em> <em>exi—</em></p></li><li><p><em>Techne: Cloud economics in theory</em></p><ul><li><p><em>Why firms buy cloud services: Individual supply-demand</em></p></li><li><p><em>Why firms sell cloud services: Aggregate supply-demand</em></p></li></ul></li><li><p><em>Metis: Cloud economics in practice</em></p><ul><li><p><em>The Cost of a Cloud</em></p></li><li><p><em>Cloudy Revenue Streams; Cloudy Profit Pools</em></p></li></ul></li></ul><hr><h2 id="h-primer-on-the-economics-of-cloud-computing" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Primer on the Economics of Cloud Computing</h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8006fecfc1ddba24421f56b8aecaa06e8d7a454578a1d78bdb76ba9f159b249e.jpg" alt="Zonal Harmonic (2017) by Tomás Saraceno" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption 
HTMLAttributes="[object Object]" class="">Zonal Harmonic (2017) by Tomás Saraceno</figcaption></figure><hr><h2 id="h-the-perfect-business-model-doesnt-exi" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The perfect business model doesn’t exi—</h2><p><em>Techne vs Metis.</em></p><p>The disciplines of economics and finance are typically taught by oscillating between:</p><ol><li><p>Teaching students idealized mental models to provide higher-order contexts for disparate anecdata</p></li><li><p>Having students refine these models through analysis and discussion on more realistic and multifaceted edge cases (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Overfitting">ideally without overfitting</a>)</p></li><li><p>And having them reintegrate these previously non-conforming elements into [<em>what are hopefully</em>] more flexible and dynamic mental models</p></li></ol><p>This process is typical for the teaching of <em>any</em> discipline but [I believe that] economics and finance are a special case in that the delta between the disciplines’ Platonic ideals and the reality of the disciplines’ practices (aka <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mesopotamianmarine.wordpress.com/2013/04/01/democracy-metis-or-techne-3/">techne vs metis</a>, respectively) is the largest of any discipline and <em>remains large</em> even as one learns more about them. Whereas students of fields like theoretical physics, mathematics, philosophy, etc. tend to gravitate towards techne as they further the development of their theories, and students of agriculture, engineering, medicine, marketing, accounting, etc. 
tend to gravitate towards metis as they begin practicing their disciplines, neither techne nor metis seems to be an exclusive, stable attractor for students of economics and finance.</p><p>In finance, metis without techne results in the “I use technical analysis exclusively. What do you mean automation? Python?” Davey daytrader archetype who underfits reality, while techne without metis can contribute to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.wired.com/2009/02/wp-quant/">global financial collapse</a> through model overfit. The ability to properly synthesize practice and theory can lead to profitable opportunities, as was the case for oil traders in 2020 who were quick to switch from Black/Black-Scholes-based options pricing models to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.jpmcc-gcard.com/digest-uploads/2020-winter/issue-pages/Page%2060_70%20GCARD%20Winter%202020%20Sterijevski.pdf">Bachelier model when oil futures went negative for the first time in history</a>.</p><p>That this gap between the ideal and the real exists makes sense given that the practice of business and commerce exists to solve <em>real</em> problems that the disciplines of business/economics/accounting/finance only attempt to systematize the analysis and practice of <em>after the fact</em>. The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://jtbd.info/2-what-is-jobs-to-be-done-jtbd-796b82081cca">job to be done (JTBD)</a> for the healthcare industry is to improve people’s health; the JTBD for the airline industry is to get people where they want to go; the JTBD for a bakery is to bake bread. 
While principal-agent problems and regulatory capture can eventually lead to market distortions that pervert the industry’s JTBD, these distortions are usually born after the fact — the metis of the business of breadmaking <em>eventually</em> leads to the techne of managing the business and finances of the bakery (metis → techne). Not so with the cloud computing industry (techne → metis).</p><p>The modern cloud computing industry (as well as its computer timesharing predecessor) was initially born of an attempt to capitalize on what was an internal <em>financial</em> problem rather than an already existing, external demand.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9cdea89d3a59d66abd0aa5590d240e368b3d34b74533c6d98490d9507f54c0fd.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In a sentence, the basic idea behind cloud computing is to aggregate demand for computing resources at scale in order to diversify the timing of resource use, thereby maximizing asset (i.e., computer/server) utilization and exploiting various economies of scale. Cloud computing was initially a <em>financial</em> innovation that only <em>subsequently</em> enabled and benefited from (and still enables and benefits from, present tense) <em>technological</em> innovation. 
That Jeff Bezos was a financial analyst (at a quant firm that heavily utilized computational resources, no less) prior to founding Amazon should come as no surprise.</p><p>The grounding of the modern cloud computing industry in economic and financial theory means that the economic and financial concepts governing the industry are as salient today as they were over a decade ago, with no rationale for a fundamental paradigm shift in cloud economics until we’re able to achieve feats like the quantum entanglement of qubits or actual clouds of solar-powered smart dust. The sale of computing resources is quoted in on-demand vs reserved vs spot prices (analogous to spot vs futures prices in hard/soft commodities markets), and there are even “Cloud Economist” roles inside and outside of hyperscaler cloud companies. The business model’s “immaculate conception” (with respect to economic/financial grounding) is what makes it “perfect” — the sale of cloud-based infrastructure services is as perfect as it gets for real-world business models (maybe we’ll find a more perfect business model in the Metaverse, but even that will be running on the Cloud) in terms of minimizing the delta between economic theory and business practice. 
It’s for this reason that the primary documents I’ll be referencing to outline the economics of cloud computing (everything after this subsection is largely just me refactoring concepts from these older documents) can be close to a decade old without diminishment in relevance.</p><hr><h2 id="h-techne-cloud-economics-in-theory" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Techne: Cloud economics in theory</h2><h3 id="h-why-firms-buy-cloud-services-individual-supply-demand" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Why firms buy cloud services: Individual supply-demand</h3><p>Prior to the advent of cloud-based infrastructure, businesses would usually buy and manage their own servers for internal (employee-facing) and external (customer/client-facing) use. This necessitated an upfront expenditure of capital (i.e., CapEx) both in order to procure the space (rackspace, rooms, buildings) and equipment (servers, cooling, racks, networking, etc.) as well as continuous operating expenditure (i.e., OpEx) in order to run the whole thing (sysadmins, networking engineers, and other flavors of “IT guys”). 
The main problem with this approach was that upfront prediction of computing needs necessarily resulted in either:</p><ol><li><p>Investing <em>too much</em> upfront (“overprovisioning”)</p></li><li><p>Investing <em>too little</em> upfront (“underprovisioning”)</p></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2e80ea6784edf7020bfd1fd45d71b0290c26ea35092e97919d0614f3ddae211f.png" alt="Source: The Open Group" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Source: The Open Group</figcaption></figure><p>Overprovisioning meant that your servers were underutilized and you allocated company capital that could have been put to better use elsewhere (e.g., you spent $10 million on servers that could have been spent on sales and marketing). Underprovisioning meant that your servers were overutilized and you didn’t buy <em>enough</em> IT equipment, potentially causing a loss in sales and/or reputation (e.g., you spent $10 million on a marketing campaign but your campaign was <em>too</em> successful and now your servers can’t handle all the traffic requests). The chunkiness of traditional IT meant that a single firm either had too little capacity to meet sudden bursts in demand for storage and compute or had too much server capacity relative to what could effectively be utilized. It was this dual problem that the first wave of Cloud customers sought to avoid.</p><p>The lumpy, discrete nature of IT CapEx [<em>as opposed to fluid/continuous; companies can’t, like, buy 2 extra servers on Thursday because they’re expecting 1% more traffic next Tuesday</em>] meant that companies were <em>always</em> either under- or overutilizing their servers’ memory and compute resources. 
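</p><p>To make the lumpiness concrete, here’s a minimal sketch (all numbers hypothetical) of a firm that can only buy whole servers against hour-to-hour demand; any fixed purchase either strands capital or drops requests:</p>

```python
# Hypothetical illustration of lumpy IT CapEx: servers come in discrete
# units, so any fixed fleet size over- or under-shoots variable demand.
SERVER_CAPACITY = 100  # req/s per server (made-up figure)

hourly_demand = [120, 340, 560, 480, 900, 260]  # req/s, made-up traffic

def provision(num_servers, demand):
    """Return (idle capacity, unmet demand) summed across all hours."""
    capacity = num_servers * SERVER_CAPACITY
    idle = sum(max(capacity - d, 0) for d in demand)
    unmet = sum(max(d - capacity, 0) for d in demand)
    return idle, unmet

# Provision for the peak: nothing is dropped, but capital sits idle.
print(provision(9, hourly_demand))  # (2740, 0)
# Provision near the average: cheaper, but peak traffic gets rejected.
print(provision(5, hourly_demand))  # (800, 460)
```

<p>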
Furthermore, the increasingly “viral” nature of the Internet meant that requests for any particular website’s services might spike out of nowhere, but businesses without enough capacity would have to reject potential new customers (i.e., “You just lost customers” and “Unfulfilled Demand”).</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c2cf3f74e2daa7d1f3bd709d5ecefc7ef09e35596b0794301a3d5b1524a0a81e.png" alt="Source: The Open Group" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Source: The Open Group</figcaption></figure><p>This was the position that Amazon initially found itself in due to the cyclical nature of their e-commerce business nearly two decades ago and is the same position that <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alibabacloud.com/blog/how-does-alibaba-cloud-power-the-biggest-online-shopping-festival_231673">Alibaba found itself</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Alibaba_Cloud">about a decade ago</a> — either these companies had to find a way to outsource (the “buy” in buy vs build) extra IT capacity during high-demand days/months/seasons or they could integrate backwards, internalize the cost and sell the excess, thereby transforming a cost line into a revenue line. While this primer focuses on hyperscalers ex-China, the reason both the Big 3 hyperscalers and the Chinese hyperscalers invested as heavily as they did in cloud CapEx was that they had a strong incentive to — they were receiving demand at a sufficient scale to justify investments in CapEx and the idea to sell the excess came naturally. 
Attempted entrants like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Alibaba_Cloud">Oracle, IBM</a>, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=1aUV_PRuTC4">HP</a> were unsuccessful because, among other reasons, they didn’t <em>already</em> have an existing consumer-facing business that necessitated traffic at scale, and so they never had organic internal mandates to begin an “incremental cloud CapEx → internally developed pools of engineering and sales expertise → more cloud CapEx → more expertise” flywheel.</p><h3 id="h-why-firms-sell-cloud-services-aggregate-supply-demand" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Why firms sell cloud services: Aggregate supply-demand</h3><p>Cloud service providers (CSPs) like AWS essentially take on the role of capitalizing servers in the aggregate and recouping the investment through selling the use of their equipment. Users pay for “compute time” (i.e., CSP X charges you for Y seconds of instance Z, where “instance Z” is the particular processor being utilized) and storage (i.e., CSP X charges you for Y minutes/days/months of memory use of Z-th level accessibility, where accessibility determines the retrieval time of your data). CSPs have abstracted the market for computers/servers into a marketplace for compute time and memory. 
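</p><p>As a sketch, the unit of sale looks something like this [<em>the prices and tier names below are made up for illustration, not any CSP’s actual rate card</em>]:</p>

```python
# Hypothetical metered billing: pay per second of compute and per GB-month
# of storage, with slower-retrieval ("colder") storage tiers priced lower.
COMPUTE_PRICE_PER_SEC = {"small.instance": 0.00001, "large.instance": 0.00008}
STORAGE_PRICE_PER_GB_MONTH = {"hot": 0.023, "cold": 0.004}

def monthly_bill(compute_secs, instance_type, gb_stored, storage_tier):
    compute = compute_secs * COMPUTE_PRICE_PER_SEC[instance_type]
    storage = gb_stored * STORAGE_PRICE_PER_GB_MONTH[storage_tier]
    return round(compute + storage, 2)

# One always-on small instance for a 30-day month plus 500 GB of hot storage:
print(monthly_bill(30 * 24 * 3600, "small.instance", 500, "hot"))  # 37.42
```

<p>The customer never sees a server; they see a metered bill for compute-seconds and GB-months.</p><p>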
For businesses, a server is useful only insofar as it serves as a provider of compute-time and storage capacity — CSPs subsume the need to acquire and manage IT equipment by directly offering the compute and storage that businesses actually care about.</p><p>The firms that ended up becoming natural suppliers of cloud infrastructure services (i.e., Amazon, Microsoft, and Google) benefited from multiple economies of scale and expertise flywheels that continue spinning to this day.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4c4b758a43f5452a6eff89bd1dda7c074997f49d54f4840e2bcf9b300077e96b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Put simply, the hyperscalers <em>first</em> began selling cloud services because they were the natural sellers of underutilized resources (compute, storage) that they already had reason to procure en masse and <em>continued</em> to sell cloud services because the business had exhibited multiple economies of scale. On the cost side, having scale meant lower unit costs through better negotiation leverage when buying hardware and electricity as well as having a larger base over which to amortize semi-fixed costs like labor, land, and facilities (the DC’s “shell”). 
Furthermore, aggregating compute demand lets scaled players diversify away variability in order to maximize asset utilization:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/464c6a4d104805e8dc20e1ce7d4677459ab9ba35e7550498bf62e7089a4211ef.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>And for a more recent articulation of this idea from a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/9NEQbFLtDmg?t=714">2021 AWS re:Invent keynote by Peter DeSantis</a> [<em>11:55 to 12:20 — these 25 seconds are worth watching for the intuitive visualization of workload demand aggregation that Peter shows</em>]:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/709dc73b6338fc7749fe6070bab075d34e46621ab687b0178fa2fbd5a5dbff1a.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Scale economies and the large amounts of capital expenditure and expertise required to manage cloud infrastructure meant that scaled players quickly grew moats, with in-house expertise and steady process improvements continually raising barriers for would-be entrants. Furthermore, in the process of scaling up their cloud offerings the hyperscalers were able to build comprehensive profiles of their customers’ demand curves and discover the price elasticities of their suite of cloud services. 
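</p><p>The aggregation effect in the figures above can be reproduced in a few lines: the peak of a pooled workload can never exceed, and in practice sits well below, the sum of the tenants’ individual peaks (a sketch with randomly generated, hypothetical tenant demand):</p>

```python
import random

random.seed(42)

# Ten hypothetical tenants with uncorrelated, spiky hourly demand (req/s)
# over one week (168 hours).
tenants = [[random.randint(10, 100) for _hour in range(168)] for _ in range(10)]

# Capacity needed if every tenant provisions for its own peak:
solo_capacity = sum(max(t) for t in tenants)

# Capacity needed if a single provider serves the pooled demand:
pooled = [sum(hour) for hour in zip(*tenants)]
pooled_capacity = max(pooled)

print(solo_capacity, pooled_capacity)
assert pooled_capacity <= solo_capacity  # holds for any set of workloads
```

<p>The gap widens as tenants’ peaks become less correlated, which is exactly why aggregating many heterogeneous workloads lets a CSP run its fleet at a higher average utilization than any single tenant could.</p><p>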
The nature of selling IaaS means that all the consumption information is logged without uncertainty (relative to, say, General Mills selling cereal wholesale and relying on various distributors for sales vs price info within various, disparate geographies) and feedback delay between price-setting of on-demand/spot instances and customer demand is non-existent.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c677cfd7ffae199196996bda294f1257c13b6a4e39bd4f5a45c8b0ccbe1f0178.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><hr><h2 id="h-metis-cloud-economics-in-practice" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Metis: Cloud economics in practice</h2><h3 id="h-the-cost-of-a-cloud" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">The Cost of a Cloud</h3><p><em>Discretizing the cost of a cloud</em>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c0a207a1dd78c117319ca4c7ca4d18db0c2b19c1b3b042fc6b124f22858b1b04.jpg" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Cloud economics in practice requires that we oscillate away from an implicitly virtual, dematerialized view of the cloud and rematerialize these “instances”, “workloads”, and “endless long tail of AWS products” into the assemblages of atoms that actually comprise these abstractions. 
“Cloud economics” [<em>that is, from the POV of the hyperscale CSPs; the term has an entirely different meaning if considered from the POV of cloud customers</em>] are really just “networked data center economics” and “networked data center economics” are the interdependent economics of concerns including but not limited to ...</p><ul><li><p>justifying the design of custom hardware (from chips to chassis) for internal DC use through amortization of design/engineering/R&amp;D spend across high volumes of production</p></li><li><p>DC site selection through an analysis of relative cooling and energy costs (on-site temperature and climate, access to water, access to renewable energy, etc.), favorability of regulatory environment (potential tax credits, geopolitical environment, environmental idiosyncrasies requiring special hardware [<em>i.e., fiber cables in Australia require a special design due to indigenous termite species that eat through typical fiber cables</em>], etc.), and access to on-site employees (from on-site engineers to security guards hired through temp agencies)</p></li><li><p>procurement of energy contracts and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.datacenterdynamics.com/en/opinions/an-industry-in-transition-2-water-risks-to-data-centers/">water access</a> with utilities and local governments; carbon offset purchase agreements (as well as the cost of 3P auditing of the organizations selling these carbon offsets [<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bloomberg.com/features/2020-nature-conservancy-carbon-offsets-trees/">or bearing the cost of eventual bad press</a>])</p></li><li><p>managing the risks and costs of material and component procurement associated with global supply chain management that is increasingly subject to disruption</p></li><li><p>millions of miles of subsea cables and dark fiber and the wavelength division 
multiplexer equipment that enables high throughput through lower-density fiber (relative to intra-DC/AZ fiber)</p></li><li><p>leasing DC space/facilities from 3P players like Equinix to balance time-to-market (TTM) needs within geographies and the time required to plan for regional infrastructure buildout</p></li><li><p>hiring salespeople to sell these cloud services to people in charge of IT budgets in other organizations</p></li></ul><p>... etc, etc, etc.</p><p>In other words, we don’t really have full access to the metis of cloud economics because the economics and return profiles of these projects (and their interdependencies [<em>i.e., product cannibalization, revenue/cost synergies, strategic tradeoffs</em>]) are internal information that no one expects to be made transparent for either investors or the general public. That being said, it’s obviously still valuable for us to map out the contours of whatever is made available to us from this complex, planetary-scale system. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Let-s-Get-Physical-Cyber-Physical-Flows-of-Atoms-Flows-of-Electrons-6296006ba6db462ab0aed37c57587188"><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></a> tries to reconcile the cloud’s virtuality and materiality (i.e., its cyberphysicality) through an extensive exploration of the Cloud through the perspective of the electrons and atoms that flow through it. 
However, here, we’ll be limiting our analysis of the Cloud to the level of the datacenter, a level of analysis that is complex enough in and of itself.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/582af94ef234f663e0e4e0186c5cbeac7311e948a42ada8b59d6cfc5a290a279.png" alt="From An Insider’s Look: Google’s Data Centers (Cloud Next ‘19)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From An Insider’s Look: Google’s Data Centers (Cloud Next ‘19)</figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6e79d0b994e8d4d5360bdf4a227ce49b32810ffa6854857b49a814ab4ae8b074.png" alt="[From Eaton: Redefining the economics of running the modern data center]" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From Eaton: Redefining the economics of running the modern data center]</figcaption></figure><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/What-Goes-Into-a-Data-Center-49b01a43d6324d1bab95d3be33fc23ff">A data center is a factory that transforms and stores bits</a>. The Cloud is the name we give to the network(s) of these bit-transforming-and-storage factories. 
The economics of a data center are wholly concerned with the physics of this bit transformation and storage process (<em>whereas the economics of the Cloud [defined here as a network of datacenters] concerns itself with bit transformation, storage, </em><strong><em>AND</em></strong><em> distribution across networks</em>) — all of a datacenter’s costs and revenues have to do with how efficiently and effectively it’s able to securely process and store information. If fifth-dimensional beings gifted Earthlings a shiny, four-dimensional hypercube that violated the laws of physics and exhibited the capability for infinite compute/storage capacity, instant data transmission via distance-independent quantum entanglement, and all at zero energy cost, then there wouldn’t be a need for any data centers and the cloud infrastructure industry wouldn’t need to exist.</p><p>Since that day has yet to come, the laws of physics and material realities remain the primary governors of data center economics. The speed of light is what makes multiple cloud “Regions” spread across the globe a competitive necessity (versus, say, a single GIANT data center in Antarctica). The potential for earthquakes, tornadoes, floods, bombings, fires, and other un/natural incidents is why datacenters exhibit diseconomies of scale after a certain size (tradeoff between intra-DC latency vs disaster risk provides the imperative for geographic distribution) and why hyperscalers introduce redundancy to their fiber optic routes. The first law of thermodynamics is what necessitates cooling and heat exchange equipment within datacenters and what makes cooler climates near water relatively attractive sites for placing datacenters. 
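</p><p>The speed-of-light point above is worth quantifying with a back-of-the-envelope sketch (light propagates through fiber at roughly two-thirds of c, and real routes run longer than great-circle distances, so these are best-case figures):</p>

```python
# Best-case round-trip time over an optical fiber route of a given length.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3      # propagation speed in fiber is roughly (2/3)c

def fiber_rtt_ms(route_km):
    one_way_s = route_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    return round(2 * one_way_s * 1000, 1)

# Serving a user from an intercontinental distance vs. an in-region DC:
print(fiber_rtt_ms(16_000))  # ~160.1 ms RTT before any processing happens
print(fiber_rtt_ms(100))     # ~1.0 ms RTT
```

<p>No amount of CapEx buys that latency back, hence Regions.</p><p>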
Energy is used in bit transformation and storage, with excess heat energy itself requiring energy to remove from the bit factory to limit accelerated equipment depreciation and prevent cooking alive the meat-based employees on-site.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6f483ea4ec854fb6ac3ed94d6c08155efd3408ebc8fecd1707470a5cf4477451.png" alt="[From Microsoft Research: What Goes Into a Data Center? (2009)] An illustrative (i.e., not necessarily accurate, especially given the publishing date) diagram of how electricity flows through the data center. " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">[From Microsoft Research: What Goes Into a Data Center? (2009)] An illustrative (i.e., not necessarily accurate, especially given the publishing date) diagram of how electricity flows through the data center.</figcaption></figure><p>The best place to start on how to think about data center economics on the cost side is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mvdirona.com/jrh/work/">James Hamilton’s</a> canonical research blog posts on large-scale data center infrastructure. 
Although many of these posts (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://perspectives.mvdirona.com/2010/09/overall-data-center-costs/"><em>Overall Data Center Costs</em></a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://perspectives.mvdirona.com/2008/11/cost-of-power-in-large-scale-data-centers/"><em>Cost of Power in Large-Scale Data Centers</em></a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://perspectives.mvdirona.com/2008/12/annual-fully-burdened-cost-of-power/"><em>Annual Fully Burdened Cost of Power</em></a>) are over a decade old at this point, and Hamilton himself has attested to the obsolescence of many of the input assumptions [<em>due to technological advancement and shifts in hyperscaler buy vs build decisions, among other things</em>] in these posts, the utility of the underlying frameworks remains evergreen.</p><p>From a Microsoft Research paper co-authored by Hamilton titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/The-Cost-of-a-Cloud-Research-Problems-in-Data-Center-Networks-310e7c7bfffe4da9970ca359c095b467"><em>The Cost of a Cloud: Research Problems in Data Center Networks</em></a> (2009):</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/17558faf082f040d3a416298734091e377a11e2aafe6e2c73ffbb8b1deea1f2c.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>This line of analysis is elaborated upon and decomposed in both <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://perspectives.mvdirona.com/2010/09/overall-data-center-costs/"><em>Overall Data Center Costs</em></a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://perspectives.mvdirona.com/2008/11/cost-of-power-in-large-scale-data-centers/"><em>Cost of Power in Large-Scale Data Centers</em></a>, where Hamilton provides us with Excel files of the dependent variables and his working assumptions. From Hamilton’s open-sourced model in <em>Overall Data Center Costs</em> [<em>I’ve re-colored assumptions to be blue</em>]:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e0d11497e81b0a7c757f1c3ddde8a23a20ffe55a7bcf10b8a4685bdba5cf9ecf.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>For clarity, the assumptions in Hamilton’s model can be [<em>imperfectly and provisionally</em>] grouped into three categories — infrastructure (server and non-server) assumptions, power cost/efficiency assumptions, and amortization assumptions:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e2777a2eef4b8ff078c72b57ce3527eb9b5e62d62728cbce3d3d73eb069fc956.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>While the appropriate inputs for hyperscale DCs have changed with over a decade’s worth of technological innovation and embedded experience, Hamilton’s breakdown remains relevant — upfront infrastructure and equipment costs are amortized depending on their estimated useful 
life and variable energy costs are calculated after factoring in efficiency (PUE) and slack (avg critical load usage). As for the evolution of the input assumptions, Hamilton gives us some hints in a response to a question on his blog:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/49920dd250b33c28b4367a5a3481942f72a7c6a1f38b67716ba7424d6d534e68.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The most important area of change in DCs between 2009 and now [<em>and what will continue to be the most important as DCs adapt for higher mixes of AI/ML-based workloads</em>] is the shift away from low-cost, commodity servers (presumably designed and manufactured by 3P providers for the compute workloads that were overindexed in 2009) towards hardware suited to the more diverse set of workloads that exists now. 
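</p><p>Hamilton’s framework boils down to two moves: amortize the lumpy upfront CapEx into a monthly figure, then add fully burdened power. A toy version (every input below is an illustrative placeholder, not a current industry number):</p>

```python
# Toy monthly-cost model in the spirit of Hamilton's "Overall Data Center
# Costs" breakdown. All inputs are illustrative placeholders.
servers_capex    = 40_000_000  # $, amortized over 3 years
facility_capex   = 30_000_000  # $, shell/power/cooling gear, over 12 years
critical_load_kw = 8_000       # kW of IT load the facility can deliver
avg_utilization  = 0.80        # average draw vs. critical load ("slack")
pue              = 1.2         # total facility power / IT power
power_price_kwh  = 0.07        # $ per kWh
hours_per_month  = 730

monthly_servers  = servers_capex / (3 * 12)
monthly_facility = facility_capex / (12 * 12)
monthly_power    = (critical_load_kw * avg_utilization * pue
                    * hours_per_month * power_price_kwh)

total = monthly_servers + monthly_facility + monthly_power
for name, cost in [("servers", monthly_servers),
                   ("facility", monthly_facility),
                   ("power", monthly_power)]:
    print(f"{name}: ${cost:,.0f}/mo ({cost / total:.0%})")
```

<p>Even with placeholder inputs the shape matches Hamilton’s finding: servers dominate, which is why server cost and utilization receive so much attention.</p><p>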
Educated guesses about the evolution of DC cost breakdowns are possible for the motivated and diligent analyst but, for the time being, that analyst is not me.</p><hr><h3 id="h-cloudy-revenue-streams-cloudy-profit-pools" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Cloudy Revenue Streams; Cloudy Profit Pools</h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d35c2884ef687e39682c8dd062dbb6e78b3aab1192efc3aca1cee504d94498f.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In keeping with the theme of providing you a framework without specific numbers [<em>anyone who truly </em><strong><em>needs</em></strong><em> access to quantified cost/revenue/margin breakdown estimations of AWS/Azure/GCP probably already has access through their own channels</em>], we can square our skeleton datacenter cost breakdown with the simple observation that CSPs sell different cloud-based services and each of these services contributes some percentage to their top and bottom lines. The rule of thumb used to be that the higher up the stack a service sits, the higher its margins: IaaS therefore provides the lowest margins and SaaS the highest — this logic changed into one of bifurcation in which PaaS has increasingly served as a low-margin, commoditized complement to IaaS and SaaS, of which the latter earns higher margins than the former. 
That being said, the “low” [gross] margins of cloud infrastructure are only low by high growth software standards [<em>and are considerably higher than software companies when we consider operating margin, given the negative margins of many high growth software companies</em>] — Bernstein estimates from 2013 [<em>from the Bernstein Blackbook on AWS that has proven itself to be evergreen, if only not optimistic </em><strong><em>enough</em></strong><em> given AWS’s extraordinary growth</em>] pegged EC2 and S3 gross margins at around 50%, margin profiles which analysts estimate to have stayed consistent, or even expanded, over nearly a decade of price cuts through offsetting cost savings from both industry-wide (e.g., Moore’s Law) and hyperscaler-specific (e.g., custom architectures and hardware) tech and process improvements.</p><p>While AMZN doesn’t give product/service-level breakdowns for AWS (GOOG only started breaking out GCP revenues in 2018 whereas MSFT gives you three numbers for Azure/Cloud to intentionally frustrate analysts [<em>okay, probably not, but it feels that way sometimes; up until maybe 2021 you could only download .docx files from their IR portal so you had to convert to .pdf yourself, like ??? 
why ???</em>]), various estimates about revenue and margin breakdowns exist in the public domain and every research house publishes its own estimates.</p><p>Timothy Prickett Morgan of <em>TheNextPlatform</em> published an illustration of his revenue breakdown estimate for AWS in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Navigating-the-Revenue-Streams-and-Profit-Pools-of-AWS-f50a9f5b7aa24d8ea5bd96609e73bcf0"><em>Navigating the Revenue Streams and Profit Pools of AWS</em></a> (2018) that pegged the revenue breakdown of Compute, Storage, Networking, and Software at 20-30% of AWS overall each for Q4’17:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/86e31fcd9661d8175450c3d3bcacec01d26f421299d7cb07b0a408795b11f4bd.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>While Morgan doesn’t specify his methodology in his breakdown, what’s clear is that it doesn’t jibe well with other breakdown estimates. Bernstein estimates from 2013 pegged EC2 at 70% of revenues, and Corey Quinn, per <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2021/09/05/how-amazon-web-services-makes-money-estimated-margins-by-service.html">this 2021 CNBC article</a>, estimates that over 50% of AWS revenue comes from EC2. A shift in revenue mix [<em>and certainly margin contribution</em>] away from increasingly “commoditized” EC2 (and S3) over time makes intuitive sense. 
The dream for hyperscalers (certainly AWS and Azure, less so for GCP) is that they commoditize the complements (i.e., their basic compute and storage offerings) to higher-margin cloud offerings (analytics, AI/ML, software in general) and lock in customers to service them from both ends. A persistent, fundamental question of the cloud computing industry is whether or not non-hyperscale players (i.e., ISVs) can modularize the cloud infrastructure of hyperscalers and sell higher-margin software offerings on top of “commoditized” infrastructure — answering this question will be one of the focuses of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Three-Body-Competitive-Dynamics-in-the-Hyperscale-Oligopoly-d25c3b44d7d84a028430d731d45c28cf"><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></a>.</p><p>While the evolution of the industry is an open question [<em>that we explore thoroughly later</em>], CNBC’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2021/09/05/how-amazon-web-services-makes-money-estimated-margins-by-service.html"><em>How Amazon’s cloud business generates billions in profit</em></a> provides us with some useful benchmarks to help fill out the revenue and margin profiles of our skeleton (emphasis mine):</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7826d17960561a597cf4e61b096e6491dc545d25735319784564aa33622579f7.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The article provides more details and estimated margins but the general idea is this — basic cloud computing services can be sold at gross margins around 
50%, services higher up the stack sell at gross margins above 50%, and blended IaaS gross margins sit at around 60%, with OpEx bringing that down to 25-30% operating margins. OpEx is split between SG&amp;A and R&amp;D in what has historically been estimated to be an equal split but [<em>I’m assuming</em>] has shifted towards R&amp;D given the semi-fixed nature of SG&amp;A and the higher intensity of R&amp;D for proprietary hardware design. Beyond the energy and the D&amp;A allocations previously discussed, other contributors to the bottom line of cloud compute are server utilization rates (higher = better, up to the point where overutilization risks breaching SLAs), embedded costs of using x86-based chip architectures in servers (vs lower-cost ARM, or open source RISC-V; hyperscalers have been using ARM in their proprietary chip designs), and costs of licensing if your server instances’ OS isn’t open source [<em>AWS has to pay MSFT when their VMs utilize Windows-based instances, which helps explain why </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.makeuseof.com/tag/linux-market-share/"><em>90+% of AWS EC2 instances run Linux</em></a><em>; AWS has recently been promoting </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/about-aws/whats-new/2021/12/amazon-ec2-m1-mac-instances-macos/"><em>their new Apple M1-based EC2 instances</em></a><em> which they ostensibly pay Apple licensing fees to use</em>].</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/c7784c3300ff714fbffa5e7fee234cd1cc99a91f8538a49a576f4379f7c14977.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Initial Positions and Laws of [Competitive] Motion]]></title>
            <link>https://paragraph.com/@0x125c/initial-positions-and-laws-of-competitive-motion</link>
            <guid>PvZ5yOU0vrTqpt5TkohI</guid>
            <pubDate>Tue, 01 Mar 2022 17:46:24 GMT</pubDate>
            <description><![CDATA[Part 4.1 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation:...]]></description>
            <content:encoded><![CDATA[<p>Part 4.1 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><em>A Cloudy History: Four Histories of Cloud Computing</em></p></li><li><p><em>Primer on the Economics of Cloud Computing</em></p></li><li><p><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></p><ol><li><p><strong><em>Initial Positions and Laws of [Competitive] Motion</em></strong></p></li><li><p><em>Mass and the Law of [Economic] Gravitation</em></p></li><li><p><em>Velocity and the n-body problem</em></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><p>Table of Contents for <strong><em>Initial Positions and Laws of [Competitive] Motion</em></strong>:</p><ul><li><p><em>Initial Positions: Where are the Three Bodies?</em></p></li><li><p><em>Initial Positions and Laws of Motion: Why Three Bodies?</em></p></li><li><p><em>Laws of [Competitive] Motion: The Law of Conservation of Attractive Profits</em></p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/705930edc25c466063faec8a6498b1465f010414ecbf19d9a80027b966c02295.png" alt="Morpho Double Helix (2015) by Rafael Araujo" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Morpho Double Helix (2015) by Rafael Araujo</figcaption></figure><hr><h2 id="h-initial-positions-where-are-the-three-bodies" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Initial Positions: Where are the Three Bodies?</h2><p><em>Let t(0) = Jan ’22. 
What are the initial positions of the three bodies?</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7082ac72a2f7d06e5bf4ac7754b1c5cb037fcff1c761a4f317aed3e917803832.png" alt="Breaking orbit. by @lyssamarielowe" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Breaking orbit. by @lyssamarielowe</figcaption></figure><blockquote><p>As Covid-19 impacts every aspect of our work and life, we have seen two years&apos; worth of digital transformation in two months.</p><p>— <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.microsoft.com/en-us/microsoft-365/blog/2020/04/30/2-years-digital-transformation-2-months/"><strong>Satya Nadella</strong></a></p></blockquote><p>An oversimplified, high-level overview of the relative strategic positioning of the Big Three cloud hyperscalers from early 2020 (say, pre-NYC lockdown) would have gone something like this: AWS is Goliath; Azure might catch up to AWS because incremental market share gains in cloud penetration are likelier to come from larger, legacy (non-“tech-first”) enterprises where Microsoft has an advantage due to near ubiquitous penetration of software products (Windows, Office, VSCode, Github, etc.) and a clearly articulated hybrid cloud strategy; and GCP’s new CEO has to prove that they’ve sufficiently internalized the need to build out their enterprise sales muscle so that they can quickly get to sufficient scale to reach the promised land of mid-20% operating margins.</p><p>In other words, intra-industry competitive positioning hasn’t <em>dramatically</em> changed between then and now — plenty of analysts could have (and did) pretty much give the same synopsis circa 2019. 
At the risk of being only slightly less reductionist than I’ve already been, the even higher-level story goes along the lines of:</p><ul><li><p>Docker and Kubernetes enabled widespread developer adoption of containerization and container orchestration, a development that was driven by and, in turn, accelerated the mainstreaming of microservice architectures to replace traditional, monolithic architectures</p></li><li><p>An ethos of vendor-agnostic modularity, driven by concerns of vendor lock-in and empowered by the adoption of containerization, normalized the rhetorical and actual adoption of hybrid cloud and multi-cloud architectures, possibly allowing for the long forewarned phenomenon of cloud commoditization to grow deeper roots</p></li><li><p>The decade-long process of commoditization of basic compute, network, and storage services from cloud providers accelerated hyperscaler focus on higher margin, differentiated services like industry/vertical-specific solutions (effectively going up the stack) and AI/ML, as well as efforts to cement their infrastructure dominance by integrating backwards into chip design</p></li></ul><p>In relative terms, GCP had/has more to gain from positive trends in containerization adoption and resultant commoditization of lower-level offerings because of their third place position, Azure had/has more to gain from hybrid cloud adoption where they have a clearly articulated strategy [<em>ctrl+F “hybrid” in each hyperscaler’s respective 10-K’s — you hit 0 results for AMZN</em>], AWS has more to lose from multi-cloud adoption [<em>a category that Google considers to subsume hybrid cloud and claims as a trend it is positioned to gain from</em>], both GCP and Azure claim horses in the race for dominance in AI with respect to both developer mindshare [<em>where Google’s TensorFlow platform competes with Facebook’s/Meta’s PyTorch platform that AWS partners with them on to combat TensorFlow</em>] and exploration of industry use cases [<em>where Google’s 
DeepMind battles with the Azure x OpenAI partnership</em>], and Azure has a relative advantage in providing industry-specific solutions due to strong enterprise relationships and simply by virtue of not being Amazon.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0a020dde528ed23f21b9f094a68e6a19db654d69bccf414b1637207266e5504b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Despite macro uncertainties justifying IT budget cuts, reduced spend on non-critical cloud workloads throughout 2020, and lockdown-induced delays in executing hybrid cloud implementations, Cloud companies were clear beneficiaries of an unprecedented catalyst for accelerated digital transformation in the form of COVID-19.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/239deff3a5ed4fb0c057720c23827402602e7a9edae7788a3b6f0d516be4bcb6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>That cloud companies were beneficiaries of what Satya Nadella dubbed “two years’ worth of digital transformation in two months” and the broader narratives continuing to develop around remote working/learning is clearly reflected in IaaS run rate numbers and, despite numerous to-be-expected sector rotations out of tech since 2020, the aggregate market capitalization of the cloud sector.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/c80c43cbc24411012410440909bd2e0812cdb6c1cb4c19b65cba92c160bcf0e4.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>While the <em>acceleration</em> of digital transformation forced on companies by a global pandemic led to strategic shifts for most industries, the accelerated digital future brought about by COVID was more a change in <em>speed</em> than <em>direction</em> for an industry that already sought to abstract away materiality, physicality, and distance as part of its core value proposition. In other words, the relative competitive positioning of the hyperscale cloud players hasn’t dramatically changed from what it had been before the global pandemic.</p><p>Even this synopsis from 2017 about the Cloud industry written by ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/730bfa3c5322ceda0c92c7ffb4f85bad797b9af50f666cd1729b221d774e0031.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... is still relevant.</p><p>Ongoing developments have, for the most part, been anticipated by those closely following the industry.</p><hr><h2 id="h-initial-positions-and-laws-of-motion-why-three-bodies" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Initial Positions and Laws of Motion: Why Three Bodies?</h2><p><em>Why three bodies? Why not one? Why not one hundred? 
How do they interact with each other?</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cfd47d18f1b55077ab1eb292e3d26126938ff83e44b4028e4ec239abccce348e.jpg" alt="Behold, Kentacohut." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Behold, Kentacohut.</figcaption></figure><p>That the public cloud infrastructure market consolidated into a three-party oligopoly (ex-China) is usually taken for granted in discussions about the industry. While economies of scale logics provide easy explanations for why cloud infrastructure isn’t a fragmented market with <em>dozens</em> of similarly-sized cloud service providers, why cloud infrastructure didn’t become a monopoly is less well articulated. One explanation is that a select few companies, out of a limited number of potential entrants, chose to capitalize on the market opportunity while barriers to entry were still surmountable. A more overlooked explanation is that the market for public cloud infrastructure services <em>demanded</em> more entrants. 
With respect to the former explanation I covered how the economics of leasing excess internal capacity favored consumer-facing tech companies in another section, but the latter explanation for why the industry tended away from a monopoly structure can be found in Porter’s case study of the corn wet milling oligopoly.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7dd0f38c083493e7330a0db6036a0eaf734a98d19e3fc6aadee23d85312f3ae0.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>This bottom-up explanation (assuming [“bottom-up” = emergent, market-based demand] and [”top-down” = hierarchical, calculation-based decision making]) provides a complement to existing narratives about big tech incumbents engaging in opportunistic backwards integration into providing cloud services — the market opportunity had to be <em>created</em> through the interaction of multiple players such that there was sufficient capacity and choice for enterprise customers (this argument is less relevant to start-ups who weren’t starting with legacy IT tech debt) to consider re-architecting their technology stack. Porter’s idea is that supply and demand are actually <strong><em>interdependent</em></strong> phenomena which bootstrap each other in a market.</p><p>How big the Cloud sector would be today if Microsoft and Google did not exist and AWS had a monopoly on cloud infrastructure is a moot question because other eligible entrants would have eventually recognized excess industry returns and potential customers would have been incentivized to help these competitors develop (which isn’t to say AWS could not have been a monopoly under different path-dependent circumstances). 
Businesses are afraid of vendor lock-in for the same reason that twenty-something single New Yorkers might date more than one person at a time — they want optionality and they’re afraid of being tied down.</p><p>Like the final three contestants in <em>The Bachelor</em> or <em>Flavor of Love</em> [<em>by far the worst show “concept” category in existence, imo</em>], the presence of other competitors creates a favorable competitive dynamic for would-be customers. Leverage and negotiation posture exist along a spectrum: digital Switzerland cloud companies like Snowflake <strong><em>wouldn’t be able to exist without the ability to play infrastructure providers off against each other</em></strong> and even wholly committed, long-time single-CSP customers like Netflix (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://about.netflix.com/en/news/completing-the-netflix-cloud-migration">NFLX completed their cloud migration to AWS in 2016</a> but use OpenConnect, their internal CDN, to stream their content to you) can tell their cloud provider what basically amounts to “You better treat me right, there are two other fish in the sea ...” but in our modern-day equivalent MBA/corpo-speak mixed with legalese. 
Sure I’m talking to other people, <em>aren’t you?</em></p><p>In fact, the empirical evidence on the cloud computing industry supports the existence of this “more competitors is better for customers” dynamic.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/17d0bac66327974419d808fbe1d87758f048cf70dedbf79aa5f622884776e07d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><em>[</em><strong><em>Note</em></strong><em>: For those with access to Goldman’s research portal, Goldman’s Cloud Quarterly series gives detailed summaries of LTM price cuts between the three hyperscalers; for a certain subset of employees at the Big Three, you probably already know where to find the relevant internal dashboard]</em></p><p>The data clearly shows the emergence of competition within the public cloud infrastructure market from 2013 to 2015 in the form of accelerated price cutting from the market incumbent, AWS. 
While, in classic textbook fashion, Microsoft sought to stabilize price cutting in 2013 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://news.microsoft.com/2013/04/16/windows-azure-announces-general-availability-and-promises-to-match-any-aws-price-drop/">[1]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://techcrunch.com/2013/04/16/windows-azure-announces-general-availability-and-promises-to-match-any-aws-price-drop/">[2]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.datacenterknowledge.com/archives/2013/04/16/microsoft-adds-big-piece-to-cloud-puzzle">[3]</a> for what Microsoft referred to as “commodity services”, in equally classic textbook fashion, game theoretic logics [<em>Porter analogizes the competitive situation in an oligopoly to the Prisoner’s Dilemma and specifically references Thomas Schelling’s work on game theory</em>] regarding competition over market share dictated that détente would be unsustainable: in 2014, with the helpful push of Google, price wars resumed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.geekwire.com/2014/cloud-storage-wars-microsofts-azure-battles-amazon-web-services-price/">[4]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.zdnet.com/article/microsoft-chops-azure-prices-to-match-amazons-latest-reductions/">[5]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.cnbc.com/2014/04/01/microsoft-joins-amazon-and-google-in-cloud-price-war.html">[6]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://venturebeat.com/2014/10/01/googles-back-with-more-cloud-price-cuts/">[7]</a>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/eef7982bbd5a170e96241f516518b943848ee3465da457650afed21a9d1024de.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Incumbent initiates price cuts to combat market entry → runner-up commits to retaliation on price cuts to reduce competitive uncertainty → third-place underdog destabilizes the competitive dynamic to gain market share. Variations of this dynamic continue to play themselves out in the cloud oligopoly to this day, although the basis of competition is not necessarily always price. As we’ll explore in the next section, Google’s (<em>”The small firm ... may have much to gain and little to lose by initiating a move ...”</em>) open-sourcing of their container orchestration software, Kubernetes, and their internal AI/ML framework, TensorFlow, both represent examples of the competitive pursuit of corporate self-interest to the detriment of competitors (from an analysis of only first-order effects, that is) and to the benefit of the broader market (WOO, go capitalism!!!!).</p><p>Whereas the simulation of a three body system under Newton’s Laws deterministically tends towards unpredictable chaos, the hyperscale oligopoly’s near complete information condition and the inherent competitive logics of this three body/cloud system tend towards predictably intense competition.</p><p><em>[Although Cloudflare is an Nth-body, it’s still worth mentioning here how their recent elimination of egress fees from their R2 object store immediately pressured </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://venturebeat.com/2021/11/25/amazons-aws-expands-free-egress-data-transfer-limits/"><em>AWS into expanding free data transfer limits</em></a><em>, but more on this later]</em></p><hr><h2 
id="h-laws-of-competitive-motion-the-law-of-conservation-of-attractive-profits" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Laws of [Competitive] Motion: The Law of Conservation of Attractive Profits</h2><p><em>On [modularization vs interdependence], [commoditization vs differentiation], and dynamically shifting bases of competition within the cloud computing value chain. On the competitive laws governing the motion and positioning of bodies within the system.</em></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ce949eb218ce6fccd25ac8e7b847b5bb57b271e6bbeca1bb8e2da61d240f111c.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/24706983db130d2531bc6d4eeb8461d6697f7864d8a5e1fd488f570d6e2b6845.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In addition to the cloud computing industry’s unique grounding in economic and financial theory discussed here in <em>Primer on the Economics of Cloud Computing</em> and its distinct status as an oligopoly with near complete information, the Cloud’s value chain makes it arguably one of the most appropriate industries to analyze through the lens of Clayton Christensen’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.christenseninstitute.org/interdependence-modularity/">interdependence-modularity framework</a> that he develops in <em>The Innovator’s 
Solution</em>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3f3d0dede3ddf827c38d7b4ad953b006f3c829a3853d657b67a48f5e7723e251.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Analyzing the cloud computing industry could be considered to be a natural continuation of Christensen’s own analysis of the mainframe computer and PC industries that he undertakes in his book, with obvious connections that have not been lost on other analysts like scuttleblurb ...</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/07fb2bbc1eac0a05ad925f30ccdf419d621cb0c654861b94e97155b368c9fb44.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>... 
and from <strong>Ben Thompson</strong> indirectly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2018/intel-and-the-danger-of-integration/">here</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2019/amd-launches-7nm-chips-sony-partners-with-microsoft-apple-and-aws/">here</a>, more directly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2019/google-and-ambient-computing/">here</a>, and explicitly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2019/microsoft-ignite-azure-arc-additional-insight-notes/">here</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2021/cloudflares-disruption/">here</a> (and doubtless a bunch of other times as well).</p><p>What’s more is that Christensen’s interdependence-modularity framework explicitly builds on Porter’s value chain concepts from the well-known five forces and differentiation vs low cost strategy frameworks featured in <em>Competitive Strategy</em>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9951cc458fe60814c8556e3f52d3d127e2c49d7b78142cb2e661811f4259cd9b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Applying Porter’s base framework and Christensen’s dynamic overlay to a competitive analysis of the Cloud industry’s value chain provides us with a highly explanatory lens through which the strategic positioning and tactical moves of the players within the industry can be understood. 
Each player within the Cloud ecosystem is aiming to commoditize adjacent subsystems while positioning to be in that area of the value chain where differential profits accrue after disintegrated subsystems [temporarily] recongeal into new, interdependent architectures.</p><p>For Google, the decision to open source Kubernetes can be interpreted as an attempt to commoditize basic compute instances and lower switching costs for customers to try out Google’s cloud offerings, in which Google positions its AI/ML solutions as the newly integrated, interdependent, differentiated solution through optimizing their TPU chips for the TensorFlow framework which they open sourced [<em>a tactic known as </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gwern.net/Complement"><em>commoditizing the complement</em></a>].</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7f0ed28fabf49c7f3f2abe0b09c632a764997b35c6fb2e90dd494138b858d23b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Microsoft has clearly articulated their strategy to be one of providing a full-stack offering that’s greater than the sum of its parts. Instead of a dogmatic insistence on the Windows OS, Microsoft has assented to the operating system becoming a modular consideration in its datacenters (Azure offers Linux-based instances alongside their Windows VM instances) and is, in fact, seeking to commoditize/modularize not just the OS but <em>everything</em>, claiming that it, alone, can provide the best interdependent, full-stack solution for enterprises. 
This is not just my conjecture but Microsoft’s express strategy.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f5e28f339c79e81663eec14c21cab2c6d3e874ae7d271dc4b2cc9072c4c6f0c2.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Satya wants to “commoditize digital tech” while positioning Microsoft to provide a differentiated approach in the form of an “interdependent whole ... in these platforms and clouds”. Recognizing that Satya is explicitly following a Christensen-esque approach provides a coherent explanation of Microsoft’s approach towards integrating VSCode and GitHub and Teams and Dynamics and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.theverge.com/2021/11/2/22758951/microsoft-loop-fluid-components-office-collaboration-app">Loop</a> and HoloLens, etc. into a tidy whole of which Azure is the centerpiece and its cloud services (IaaS, PaaS, <em>and</em> SaaS ... 
and Metaverse(?)) justify recurring subscription revenue.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/edf721720c7024dadcbc11af90e01f5a637e514e1105d392a09dddaa954d2647.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>From the perspective of an Independent Software Vendor (ISV), Snowflake’s strategy is predicated on modularizing (and therefore commoditizing) the infrastructure layer provided by hyperscale cloud providers and capturing value through a superior database service that comes procedurally from pure product focus and structurally from vendor-agnosticism. Their data marketplace initiative is meant to create a platform that constitutes a “performance-defining subsystem” that lies on top of an increasingly commoditized substrate.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e25c2d0bd0a4f9c0e7d66488ba10767809de56e8bca8636535003a73c031812e.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>With respect to the application of this lens to buzzwordy deployment models, the movement towards hybrid and multi-cloud architectures can be interpreted as a customer-driven impetus to modularize lower-level compute/network/storage products and further preempt/unwind vendor lock-in, as well as preserve the negotiating leverage and optionality that comes with an architecture that has minimal switching costs between infrastructure providers — you’ll be nicer to me come contract renewal season 
if switching my workloads to Azure can be done at a literal push of a button. The existence of other capable competitors within the oligopoly makes the threat credible, which motivates all three hyperscalers to accommodate customers lest they lose market share to an even more accommodating competitor. Further out, edge architectures and the hypothetical fully-IoT-connected world hold the promise of reintegrating the subsystems into a decentralized/distributed edge that services latency-sensitive workloads which can’t wait for a response from centralized datacenters.</p><p>The hyperscalers’ attempts at vertical integration can be understood as a strategic response to the continual disintegration and modularization of those subsystems in the middle of the value chain. The middle is being hollowed out, with IaaS and PaaS increasingly seen as an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gartner.com/en/documents/3982143/solution-criteria-for-cloud-integrated-iaas-and-paas">integrated I+PaaS offering</a> in which the infrastructure layer delivers the value and the platform layer is table stakes because there exist open-source alternatives that can be easily plugged in (and if one hyperscaler doesn’t make it easy to plug in, then the other two will, thereby making composability the default strategy for all three). 
This vertical integration by cloud hyperscalers is taking the form of backwards integration through chip design as well as what is effectively forwards integration through an increased industry-specific focus for customers.</p><p><strong><em>Through this lens, the Cloud ecosystem looks like one big game of different players trying to commoditize competitors and adjacent value chains while simultaneously integrating core subsystems along the value chain and making tactical moves to catalyze a reformation of the value chain to their advantage.</em></strong></p><p>[<em>More on hyperscaler strategies around non-public deployment models and vertical integration later.</em>]</p><p>Porter’s observation that products have a tendency to become commoditized as buyers accumulate knowledge about them over time, thereby shifting the basis of competition towards price as the product becomes less differentiated, is borne out in the data.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4a88a32a98ed643eb70a56052b625f5742c071ac5fe6e382a0e191f66db26538.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Not only is the rate of technological improvement of products in growing industries superlinear (and certainly in tech-oriented industries like those within and adjacent to cloud computing), but customers also exhibit diminishing marginal utility (a sublinear function) for more of the same thing — the intersection of superlinear innovation and sublinear utility growth leads to a situation of previously valuable interdependent systems overshooting customer needs.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/b78e373eb64c51a73a3b59d3effa9e9598e843045774bcbee3665d89589a3eae.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0a3ce7f2fc0342bda5b4ee0e4e855e92db39153729827cde215586d2b56c6586.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Companies need to make sure they <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.thediff.co/p/surfing-the-right-s-curve">surf the right S-curve</a> and manage their dis/advantages and competencies along the value chain as previously differentiable products and services become “good enough” and modularizable, the industry’s basis of competition shifts, and new areas for reintegration of subsystems appear at adjacent stages of the value chain.</p><p>More on the “reciprocal process of commoditization and decommoditization” within the Cloud’s value chain as we continue to flesh out this three body system.</p><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/c8c5e79415e6149d9c9e63c9bb10c027328bc5bcf867198ef09bc444818e7ccc.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[A Cloudy History: Four Histories of Cloud Computing]]></title>
            <link>https://paragraph.com/@0x125c/a-cloudy-history-four-histories-of-cloud-computing</link>
            <guid>VLnMz6QLLB0BsbbgZ6zq</guid>
            <pubDate>Sat, 15 Jan 2022 00:41:27 GMT</pubDate>
            <description><![CDATA[Part 1 of Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP):Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of ElectronsA Cloudy History: Four Histories of Cloud ComputingPrimer on the Economics of Cloud ComputingThree-Body: Competitive Dynamics in the Hyperscale OligopolyInitial Positions and Laws of [Competitive] MotionMass and the Law of [Economic] GravitationVelocity and the n-body problemThe Telos of Planetary-Scale Computation: O...]]></description>
            <content:encoded><![CDATA[<p>Part 1 of <em>Planetary-Scale Computation: An industry primer on the hyperscale CSP oligopoly (AWS/Azure/GCP)</em>:</p><ol><li><p><em>Let’s Get Physical, (Cyber)Physical!: Flows of Atoms, Flows of Electrons</em></p></li><li><p><strong><em>A Cloudy History: Four Histories of Cloud Computing</em></strong></p></li><li><p><em>Primer on the Economics of Cloud Computing</em></p></li><li><p><em>Three-Body: Competitive Dynamics in the Hyperscale Oligopoly</em></p><ol><li><p><em>Initial Positions and Laws of [Competitive] Motion</em></p></li><li><p><em>Mass and the Law of [Economic] Gravitation</em></p></li><li><p><em>Velocity and the n-body problem</em></p></li></ol></li><li><p><em>The Telos of Planetary-Scale Computation: Ongoing and Future Developments</em></p></li></ol><hr><h2 id="h-a-cloudy-history" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A Cloudy History</h2><ul><li><p><em>A Brief History of Cloud Computing (the marketing term)</em></p></li><li><p><em>A Brief History of Cloud Computing (the idea)</em></p></li><li><p><em>A Brief History of Cloud Computing (the business model)</em></p></li><li><p><em>A Brief History of Cloud Computing (as we know it today)</em></p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/19627c7df48021213c634b9241ee3341d4f68bf39b700a83383e4ab40a1828a4.jpg" alt="Jean Jennings, Marlyn Wescoff, and Ruth Lichterman with the ENIAC computer, 1946." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Jean Jennings, Marlyn Wescoff, and Ruth Lichterman with the ENIAC computer, 1946.</figcaption></figure><blockquote><p>The gap between the physical reality of the cloud, and what we can see of it, between the idea of the cloud and the name that we give it — “cloud” — is a rich site for analysis. 
While consumers typically imagine “the cloud” as a new digital technology that arrived in 2010–2011, with the introduction of products such as iCloud or Amazon Cloud Player, perhaps the most surprising thing about the cloud is how old it is. Seb Franklin has identified a 1922 design for predicting weather using a grid of “computers” (i.e., human mathematicians) connected by telegraphs. AT&amp;T launched the “electronic ‘skyway’” — a series of microwave relay stations — in 1951, in conjunction with the first cross-country television network. And engineers at least as early as 1970 used the symbol of a cloud to represent any unspecifiable or unpredictable network, whether telephone network or Internet.</p><p>— <strong>Tung-Hui Hu</strong>, <em>A Prehistory of the Cloud</em></p></blockquote><hr><h3 id="h-a-brief-history-of-cloud-computing-the-marketing-term" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">A Brief History of Cloud Computing (the marketing term)</h3><p>The history of the Cloud is an unclear one ... <em>cloudy</em>, even (hahaha ok sorry). An article published by <em>MIT Technology Review</em> in 2011, titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.technologyreview.com/2011/10/31/257406/who-coined-cloud-computing/"><em>Who Coined ‘Cloud Computing’?</em></a>, traced the coinage of the term “cloud computing” back to a May 1997 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://tarr.uspto.gov/servlet/tarr?regser=serial&amp;entry=75291765">US PTO trademark application</a> and, by contacting the founder of the now-defunct startup that applied for the trademark (NetCentric), unearthed the story of the first known mention of the now ubiquitous phrase. 
The startup founder, Sean O’Sullivan, was in negotiations with Compaq regarding a potential $5 million investment into O’Sullivan’s business plan to have NetCentric’s software platform enable ISPs to “implement and bill for dozens, and ultimately thousands, of ‘cloud computing-enabled applications,’” according to the plan.</p><blockquote><p>In their plans, the duo predicted technology trends that would take more than a decade to unfold. Copies of NetCentric’s business plan contain an imaginary bill for “the total e-purchases” of one “George Favaloro,” including $18.50 for 37 minutes of video conferencing and $4.95 for 253 megabytes of Internet storage (as well as $3.95 to view a Mike Tyson fight).</p></blockquote><p>George Favaloro was a Compaq marketing executive who “had recently been chosen to lead a new Internet services group” at Compaq. Favaloro’s internal memo at Compaq, titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/1ec2bb72-14fa-4d04-8fbe-2eed88823024/compaq_cst_1996_0.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20220115%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20220115T001437Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=51b1403c893955842dce0870edf9450dc9ccf0d3477df33db4cc8c721afa9c0e&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%20%3D%22compaq_cst_1996_0.pdf%22&amp;x-id=GetObject"><em>Internet Solutions Division Strategy for Cloud Computing</em></a>, is dated November 14, 1996 and is ostensibly the earliest known mention of the phrase “cloud computing.”</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/986bdbaacfa27a78e91dc53ed2241d252463c27a51748e9c20eb2b3be22ae8e3.png" alt="“The emergence of the Internet is driving the migration of 
communication and collaboration applications into the Internet “cloud” (e.g., telephony, fax).”" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">“The emergence of the Internet is driving the migration of communication and collaboration applications into the Internet “cloud” (e.g., telephony, fax).”</figcaption></figure><p>While there’s some uncertainty about which of the two men actually originated the phrase, “Both agree that ‘cloud computing’ was born as a marketing term.”</p><p>Ten years after O’Sullivan and Favaloro’s meetings in Compaq’s Houston office in 1996, Eric Schmidt (then CEO of Google) would make the first public mention of “cloud” and “cloud computing” in a modern, still-relevant context (as in, not “telephony, fax”) at a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.google.com/press/podium/ses2006.html">2006 industry conference</a>:</p><blockquote><p><strong>Eric</strong>: What&apos;s interesting [now] is that there is an emergent new model, and you all are here because you are part of that new model. I don&apos;t think people have really understood how big this opportunity really is. It starts with the premise that the data services and architecture should be on servers. We call it <strong>cloud computing</strong> – they should be in a &quot;<strong>cloud</strong>&quot; somewhere. And that if you have the right kind of browser or the right kind of access, it doesn&apos;t matter whether you have a PC or a Mac or a mobile phone or a BlackBerry or what have you – or new devices still to be developed – you can get access to the <strong>cloud</strong>. There are a number of companies that have benefited from that. Obviously, Google, Yahoo!, eBay, Amazon come to mind. The computation and the data and so forth are in the servers. ... 
<strong>Eric</strong>: And so what&apos;s interesting is that the two – <strong>cloud computing</strong> and advertising – go hand-in-hand. There is a new business model that&apos;s funding all of the software innovation to allow people to have platform choice, client choice, data architectures that are interesting, solutions that are new – and that&apos;s being driven by advertising.</p><p>...</p><p><strong>Eric</strong>: I think, if you think about it, all of the companies in the search space are benefiting from this conversion I was talking about earlier, to this new <strong>cloud</strong> model where people are living more and more online.</p></blockquote><p>Despite these comments being the first well-known, modern uses of the term, it should be noted that Amazon Web Services, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.archive.org/web/20151015165250/http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;p=irol-newsArticle&amp;ID=830816">“Launched in July 2002”</a>, had been in existence for a little more than four years prior to Schmidt’s interview, although it was only in 2006 that AWS launched S3 and EC2 (in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.archive.org/web/20151015165250/http://phx.corporate-ir.net/phoenix.zhtml?c=176060&amp;p=irol-newsArticle&amp;ID=830816">July</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/about-aws/whats-new/2006/08/24/announcing-amazon-elastic-compute-cloud-amazon-ec2---beta/">August</a>, respectively). 
A fuller account of seminal product releases in Cloud Computing can be found on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Cloud_computing#History">Wikipedia</a>.</p><hr><h3 id="h-a-brief-history-of-cloud-computing-the-idea" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">A Brief History of Cloud Computing (the idea)</h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c3f822c7c912bafd4c62f85d2d19f5ccc7123cac8859f70f75735a5ee3fb4076.png" alt="Pictured here: Two faithful attendants of a proto-Multivac. (Silicon Valley)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Pictured here: Two faithful attendants of a proto-Multivac. (Silicon Valley)</figcaption></figure><blockquote><p>Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.</p><p>Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. 
Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac&apos;s.</p><p>— <strong>Isaac Asimov</strong>, <em>The Last Question</em> (1956)</p></blockquote><p>While the marketing phrase “cloud computing” ostensibly originated in Compaq’s offices in 1996, the <em>idea</em> of what we would recognize to be modern-day cloud computing goes back much further to at least 1956. Isaac Asimov’s prophetic articulation of a lineage of supercomputers in his short stories (most notably in his personal favorite story, <em>The Last Question</em>, first published in 1956) includes <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Multivac">Multivac</a>, a fictional supercomputer inspired by an actual general-purpose computer called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/UNIVAC">UNIVAC</a> and one of the earliest fictional conceptions (if not <em>the</em> earliest) of what can be recognized as a contemporary data center. While Asimov’s description of Multivac changes throughout the various stories that the fictional supercomputer is featured in, his description of a vast and inscrutable “self-adjusting and self-correcting” “giant computer” spanning “miles and miles” that “had long since grown past the point where any single human could possibly have a firm grasp of the whole” perfectly describes the modern data center. 
Asimov’s descriptions of Multivac in his other short stories flesh out the cloud-like nature of his fictional computer.</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/f40d7a83-2a1b-4c02-90ca-a3baf1f9e1f5/Franchise-Asimov-1955.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20211229%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20211229T232429Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=1969f28ac24c1315ca1b75abbc43358a08236763bb6ea6af7f631b9bd3998459&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%20%3D%22Franchise-Asimov-1955.pdf%22&amp;x-id=GetObject"><em>Franchise</em></a> (1955):</p><blockquote><p>However, we are plugged into Multivac right here by beam transmission. What Multivac says can be interpreted here and what we say is beamed directly to Multivac, so in a sense we’re in its presence.</p></blockquote><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/9cb1d90c-7f6a-4ead-87bc-901f7b206cfe/isaac_asimov-all_the_troubles_of_the_world.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20211229%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20211229T233125Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=aa1813de5591e12e6daa4708a1d5c5b5f32030b3c68b8cd9c010e0dbb995c76a&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%20%3D%22isaac_asimov-all_the_troubles_of_the_world.pdf%22&amp;x-id=GetObject"><em>All the Troubles of the World</em></a> (1958):</p><blockquote><p>Within reach of every human being was a Multivac station with circuits into which he could freely enter his own problems and questions without control or hindrance, and from which, in a matter of minutes, 
he could receive answers.</p></blockquote><p>Of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/List_of_fictional_computers">early conceptions of fictional computers in literature</a>, Asimov’s Multivac seems to be the clearest case of science fiction manifesting into reality, a tendency that shows no sign of stopping in our age of technological acceleration.</p><p>The idea of a centralized computing resource accessed at a distance by multiple parties via terminal (i.e., “typewriters”, “remote console”) was finding grounding in reality around the same time as Asimov’s Multivac stories were being published. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)">John McCarthy</a>, a co-author on the document that coined the term “artificial intelligence” and dubbed one of the founding fathers of the field, was perhaps the first to suggest publicly the idea of utility computing in a speech given to celebrate MIT&apos;s centennial: “that computer time-sharing technology might result in a future in which computing power and even specific applications could be sold through the utility business model (like water or electricity)” (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)#Contributions_in_computer_science">Wikipedia</a>).</p><p><strong>McCarthy’s</strong> 1961 centennial lecture for MIT was titled <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://archive.org/details/managementcomput00gree/page/220/mode/2up"><em>Time-Sharing Computer Systems</em></a> and was published in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://archive.org/details/managementcomput00gree/"><em>Management and the Computer of the 
Future</em></a> (1962) (p. 220–248):</p><blockquote><p><strong>McCarthy</strong>: I am going to discuss the important trend in computer design toward time-sharing computer systems. By a time-sharing computer I shall mean one that interacts with many simultaneous users through a number of remote consoles. Such a system will look to each user like a large private computer.</p></blockquote><blockquote><p><em>Time Sharing</em></p><p><strong>McCarthy</strong>: I should like to go on now to consider how the private computer can be achieved. It is done by time sharing a large computer. Each user has a console that is connected to the computer by a wired channel such as a telephone line. The consoles are of two kinds, one cheap and the other better but more expensive. The cheap console is simply an electric typewriter that is used for both input and output.</p></blockquote><p>McCarthy’s contributions to the development of the concepts of utility computing and time-sharing computer systems <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.stanford.edu/~learnest/nets/timesharing.htm">helped give rise</a> to the computer time-share industry that is the much neglected precursor to the modern cloud computing industry.</p><hr><h3 id="h-a-brief-history-of-cloud-computing-the-business-model" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">A Brief History of Cloud Computing (the business model)</h3><blockquote><p>By focusing on the time-shared user as an economic subject, we can understand many of the attitudes that structure present-day digital culture. For the irony is that though the word “time-sharing” went out of fashion with the advent of mini- and personal computers in the 1980s, the very same ideas have morphed into what seems to be the most modern of computing concepts: cloud computing. 
In cloud computing, time on expensive servers (whether storage space, computational power, software applications, and so on) can be rented as a service or utility, rather than paid for up front.</p><p>— <strong>Tung-Hui Hu</strong>, <em>A Prehistory of the Cloud</em></p></blockquote><p>Computer <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Time-sharing">time-sharing</a> refers to “the sharing of a computing resource among many users at the same time by means of multiprogramming and multi-tasking.” Prior to the rise of personal computing (called “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Home_computer">home computers</a>” at the time) in the 1980s, time-sharing was the prominent computing model because it spread the cost of expensive mainframe computers across multiple terminal users who typically interacted with the system on an intermittent basis (i.e., sit at a terminal, compute some stuff, think and write for a few minutes, and compute some more stuff). Users literally shared the compute-time of a central CPU which would allocate computing resources across active users on the network (the network was usually a university campus or a corporate office).</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.geeksforgeeks.org/time-sharing-operating-system/"><em>geeksforgeeks</em></a>:</p><blockquote><p>A time shared operating system uses CPU scheduling and multi-programming to provide each user with a small portion of a shared computer at once. Each user has at least one separate program in memory. A program loaded into memory and executes, it performs a short period of time either before completion or to complete I/O. This short period of time during which user gets attention of CPU is known as time slice, time slot or quantum. 
It is typically of the order of 10 to 100 milliseconds.</p></blockquote><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://muse.jhu.edu/article/235244/pdf"><em>Economic Perspectives on the History of the Computer Time-Sharing Industry, 1965–1985</em></a> by <strong>Martin Campbell-Kelly, Daniel D. Garcia-Swartz</strong>:</p><blockquote><p>Time-sharing developed in the mainframe era. A time-sharing system consisted of a large central computer to which many terminals were connected. One terminal served one user, providing a computing experience comparable to an early personal computer, at least 15 years before PCs were routinely available. At the heart of time-sharing was an operating system that divided the computer’s resources among users, so that each user had the illusion that he or she was the sole person on the machine. The market for time-sharing existed because it was the only means at that time of providing a personal computing experience at a reasonable cost.</p></blockquote><blockquote><p>The first, experimental time-sharing system—the Compatible Time Sharing System— was demonstrated at the Massachusetts Institute of Technology in November 1961.</p></blockquote><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System"><em>Compatible Time-Sharing System</em></a> (CTSS) was the first functioning time-sharing system and was demonstrated the same year as MIT’s centennial in 1961 — CTSS was borne of a project that McCarthy himself initiated at MIT in 1959. 
He predicted the rise of the time-share industry based on his idea of computing as a public utility in the way that the telephone system was (and still is) a public utility.</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://archive.org/details/managementcomput00gree/page/236/mode/2up"><em>Computing as a Public Utility</em></a>:</p><blockquote><p><strong>McCarthy</strong>: In concluding I should like to say a word on management and the computer of the future. At present, computers are bought by individual companies or other institutions and are used only by the owning institution. If computers of the kind I have advocated become the computers of the future, then computation may someday be organized as a public utility, just as the telephone system is a public utility. We can envisage computing service companies whose subscribers are connected to them by telephone lines. Each subscriber needs to pay only for the capacity that he actually uses, but he has access to all programming languages characteristic of a very large system. ... <strong>McCarthy</strong>: The computing utility could become the basis for a new and important industry.</p></blockquote><p>For two decades from the 60s through the 80s, the commercial computer timesharing industry rapidly rose as the time-share architecture model became popular, before rapidly falling into obscure obsolescence as Moore’s Law improved both the cost and performance of semiconductors, enabling the emergence of smaller, more convenient, and more affordable PCs. While the existence of the computer time-share industry isn’t well-known and its connection to modern-day cloud computing (itself a nebulous ... even <em>cloudy</em> ... 
topic for most people) is seldom referenced, there’s a strong argument that this industry was the original Cloud industry.</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-145.pdf"><em>The NIST Definition of Cloud Computing</em></a> (Sep 2011):</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/850f74336a3c1d0b2b946599c97b174f700dae385a612dd61964ef3c50625ef2.png" alt=" " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Commercial time-sharing services enabled “ubiquitous, convenient, on-demand network access of configurable computing resources” to the extent that what was made available by the timesharing industry was considered ubiquitous, convenient, and configurable at the time — it was, and still is, a matter of degree. People were accessing big computers miles and miles away and paying for provisioned resources. Sounds like Cloud to me.</p><p>Again, from <em>Economic Perspectives on the History of the Computer Time-Sharing Industry, 1965–1985</em>:</p><blockquote><p>Commercial timesharing services developed as part of a larger phenomenon, the so-called data processing service industry. This industry had several components. First, there was the industry’s so-called batch data processing component. Batch data processing services had been around roughly since 1955—companies received raw data from customers via mail or messenger, processed the data according to the customers’ requests, and then delivered the processed data through the same channels. Second, there was the industry’s online component. 
It developed rapidly in the 1960s in parallel with the progress of computer and communication technologies—here customers achieved access to computing power via communication lines and terminals rather than via mail and messenger. The remaining components of the data processing services industry included software (both programming services and products) and facilities management. Here, we are primarily concerned with the time-sharing component of the industry’s online services sector.</p></blockquote><p>Cloud-based SaaS, anyone? Although Cloud Computing (the marketing term) didn’t exist until 1996, Cloud Computing (the business model) clearly existed as far back as the 1960s, even if it wasn’t known as such.</p><hr><h3 id="h-a-brief-history-of-cloud-computing-as-we-know-it-today" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">A Brief History of Cloud Computing (as we know it today)</h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2727985502c5e9c5a0239ddb0cf1cc743d52a0e9dd2ae0ec100dd101c88527ca.png" alt="From History of the cloud by Blesson Varghese" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">From History of the cloud by Blesson Varghese</figcaption></figure><p>From <strong>Stanford Engineering</strong>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.stanford.edu/class/ee204/Publications/Amazon-EE353-2008-1.pdf"><em>Amazon Enters the Cloud Computing Business</em></a> (2008):</p><blockquote><p>Launched in July 2002, Amazon Web Services (AWS) allowed developers to outsource their online and application infrastructure needs at commodity prices.
AWS included the following services:</p><ul><li><p>Alexa Web Information Service: web information service (acquired in 1999)</p></li><li><p>Mechanical Turk: dividing work into many tasks for humans (2005)</p></li><li><p>Elastic Compute Cloud: computing platform (2006)</p></li><li><p>Simple Storage Service: storage platform (2006)</p></li><li><p>Simple Queue Service: web service for storing and queuing messages across the Internet (2007)</p></li><li><p>Flexible Payments Service: online payment platform (2007)</p></li><li><p>Simple DB: web service for running queries on structured data in real time (2007)</p></li><li><p>Persistent Storage: allows developers to earmark a storage volume online for people to save files in different file systems (2008)</p></li></ul></blockquote><p>In contrast to the previous three brief “histories” of cloud computing, the history of cloud computing as we know it today is much more well-trodden territory by virtue of its relative recency and because the industry’s emergence enabled the rise of an Internet that accelerated the pace of the creation, collection, and organization of data. The most concise history of modern cloud computing goes something like this: Amazon started selling compute (EC2) and storage (S3) services in late 2006 to turn an underutilized balance sheet item (servers) into a revenue line item, Microsoft and Google launched competitors in 2008, and they’ve been competing ever since.
Stanford Engineering’s 2008 AWS case study gives an account of Amazon’s entry into the cloud computing business from a perspective that is useful given the uncertainty of AWS’s success at the time of case publication, and both Blesson Varghese’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bcs.org/articles-opinion-and-research/history-of-the-cloud/">History of the Cloud</a> and Wikipedia’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Cloud_computing#History">history section for Cloud Computing</a> give comprehensive and detailed timelines that I would add no value to by simply refactoring and repeating.</p><p>Writing about the “history” of modern cloud computing in 2022 would be like writing about the history of oil in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Standard_Oil#Sherman_Antitrust_Act">1890</a> — some pieces are in place, sure, but <em>a lot</em> is about to happen (this is not to imply that any of the hyperscalers are under serious threat of antitrust action, despite <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bloomberg.com/news/articles/2021-12-22/amazon-cloud-unit-draws-fresh-antitrust-scrutiny-from-khan-s-ftc">recent rumblings at the FTC</a>). Like oil, cloud computing has the potential to fundamentally transform the world as we know it. However, unlike oil, an energy source whose obsolescence is a question not of <em>if</em> but <em>when</em>, there is <em>no conceivable future</em> in which cloud computing ceases to exist barring humanity-wide catastrophic crises (nuclear war, big ass meteor, etc.).
Ask most anyone on Earth to articulate their conception of humanity in one hundred, one thousand, or one hundred thousand years and in nearly all non-extinction-based formulations of the future there will be better, faster, and more networked computers. Regardless of whether or not Amazon, Microsoft, or Google exist as corporate entities in 1,000 years or if even corporations themselves cease to exist as an institutional form by that point, if humanity is a technologically advanced society in 3022 then cloud computing will exist in some [ostensibly more advanced] form.</p><p>Cloud computing’s seemingly inexorable nature is reminiscent of Mark Fisher’s concept of “cybernetic realism”, the older (Fisher coined cybernetic realism in his 1999 PhD thesis, “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://wrap.warwick.ac.uk/110900/1/WRAP_Theses_Fisher_1999.pdf">Flatline Constructs: Gothic Materialism and Cybernetic Theory-Fiction</a>”) and lesser-known cousin of Fisher’s concept of “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://libcom.org/files/Capitalist%20Realism_%20Is%20There%20No%20Alternat%20-%20Mark%20Fisher.pdf">capitalist realism</a>.” Fisher described capitalist realism as “<em>the widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it</em>” — I submit that “cloud computing realism” might be described as the widespread sense that not only is cloud computing the only viable infrastructural technology for organizing any future political and economic systems (from American democratic capitalism to Chinese market communism), but also that it is now impossible even to imagine a coherent alternative to it.
Even Ursula Le Guin’s novel articulation of an anti-capitalist, anarchic society through her exposition of Anarres in <em>The Dispossessed</em> (1974) featured “computers that coordinated the administration of things, the division of labor, and the distribution of goods, and the central federatives of most of the work syndicates” (reminiscent of Chile’s actual <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=RJLA2_Ho7X0">Cybersyn project</a>, undertaken from 1971 to 1973) in what is essentially an unnamed cloud computing system — computer realism subsumes capitalist realism. [read Fisher’s thoughts on Le Guin’s novel in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://k-punk.abstractdynamics.org/archives/011295.html">an archived k-punk blog post</a> (the blog itself is hosted through an obscure cloud services company called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bloomberg.com/profile/company/0344871D:US">New Dream Network, LLC</a>)]</p><p>While my embryonic articulation of “cloud computing realism” is wholly distinct from Fisher’s “cybernetic realism”, it will in fact be cloud computing (which itself has origins in science-fiction, as we’ve explored) that brings Fisher’s cybernetic realism into reality.</p><p>From Fisher’s <em>Flatline Constructs</em> (1999):</p><blockquote><p>If Baudrillard’s <em>theory-fictions</em> of the three orders of simulacra must be taken seriously, which means: as realism about the hyperreal, or “<em>cybernetic realism</em>”, it is because they have <em>realised</em> that, in capitalism, fiction is no longer merely representational but has invaded the Real to the point of constituting it.</p></blockquote><p>And in a quote from Nvidia CEO Jensen Huang that would’ve made even Jean Baudrillard blush — from Nvidia’s <a target="_blank" rel="noopener noreferrer nofollow ugc" 
class="dont-break-out" href="https://www.youtube.com/watch?v=jhDiaUL_RaM">GTC November 2021 Keynote</a>:</p><blockquote><p>[51:45] <strong>Jensen</strong>: Companies can build virtual factories and operate them with virtual robots in Omniverse. The virtual factories and robots are the digital twins of their physical replica. The physical version is the replica of the digital, since they&apos;re produced from the digital original.</p></blockquote><p>As of their latest quarter, Nvidia generated a little more than 40% of their revenues from selling products and solutions to data centers, with around a 50/50 split between hyperscale customers and enterprise customers. Fisher’s cybernetic realism therefore goes hand-in-hand with cloud computing realism, MC Escher style.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/dc05079ad5969664f78a5ef5cf349b7c4e09531e7332f5f87595e9a667a0f0fa.png" alt="M.C. Escher, Drawing Hands (1948)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">M.C. Escher, Drawing Hands (1948)</figcaption></figure><p>While the bulk of the remainder of this five-part primer will be grounded in the materiality, economics, and competitive dynamics of the cloud computing industry, touching upon history only insofar as it provides context for a business- and strategy-oriented perspective, my hope is that the reader keeps these four histories in his/her mind with the recognition that the fourth history is currently being written and developed. 
The history of cloud computing, as we know it today, is just beginning.</p><hr><h3 id="h-resources" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Resources</h3><ul>
<li><p>[N/A] <strong>Wikipedia</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Cloud_computing#History">Cloud Computing [History]</a></p></li>
<li><p>[1962] <strong>John McCarthy</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://archive.org/details/managementcomput00gree/page/220/mode/2up">Time-Sharing Computer Systems</a></p></li>
<li><p>[1967] <strong>Paul Baran</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nationalaffairs.com/public_interest/detail/the-future-computer-utility">The future computer utility</a></p></li>
<li><p>[Aug ‘06] <strong>Search Engine Strategies Conference</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.google.com/press/podium/ses2006.html">Conversation with Eric Schmidt hosted by Danny Sullivan</a></p></li>
<li><p>[Mar ‘08] <strong>IEEE</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://muse.jhu.edu/article/235244/pdf">Economic Perspectives on the History of the Computer Time-Sharing Industry, 1965–1985</a></p></li>
<li><p>[Apr ‘08] <strong>Wired</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.wired.com/2008/04/mf-amazon/">Cloud Computing. Available at Amazon.com Today</a></p></li>
<li><p>[May ‘08] <strong>Stanford Engineering</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://web.stanford.edu/class/ee204/Publications/Amazon-EE353-2008-1.pdf">Amazon Enters the Cloud Computing Business</a></p></li>
<li><p>[Apr ‘09] <strong>Edge</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.edge.org/conversation/david_gelernter-john_markoff-clay_shirky-lord-of-the-cloud">Lord of the Cloud</a></p></li>
<li><p>[Jul ‘11] <strong>Dr. Rao Nemani</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://ijcset.net/docs/Volumes/volume1issue6/ijcset2011010602.pdf">The Journey from Computer Time-Sharing to Cloud Computing: A Literature Review</a></p></li>
<li><p>[Oct ‘11] <strong>MIT Technology Review</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.technologyreview.com/2011/10/31/257406/who-coined-cloud-computing/">Who Coined ‘Cloud Computing’?</a></p></li>
<li><p>[Aug ‘15] <strong>Tung-Hui Hu</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mitpress.mit.edu/books/prehistory-cloud">A Prehistory of the Cloud</a></p></li>
<li><p>[Mar ‘19] <strong>Blesson Varghese</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bcs.org/articles-opinion-and-research/history-of-the-cloud/">History of the Cloud</a></p></li>
<li><p>[Nov ‘19] <strong>ispsystem</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ispsystem.com/news/brief-history-of-virtualization">A brief history of virtualization, or why do we divide something at all</a></p></li>
<li><p>[Nov ‘20] <strong>Jerry Chen</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://greylock.com/greymatter/jerry-chen-the-evolution-of-cloud/">The Evolution of Cloud</a></p></li>
</ul><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/7f57db2be147859fa091cd0867cdfc9641879d8f8316db202db5aad44ae374dc.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Towards a blueprint for planetary-scale cryptomedia]]></title>
            <link>https://paragraph.com/@0x125c/towards-a-blueprint-for-planetary-scale-cryptomedia</link>
            <guid>z2LiRg7Re2CqEc50CTIc</guid>
            <pubDate>Tue, 30 Nov 2021 22:36:18 GMT</pubDate>
            <description><![CDATA[Illustration by Merijn Hos.You are viewing “Towards a blueprint for planetary-scale cryptomedia” on Mirror. This is the minimalist, mobile-friendly version without in-line comments and embedded hypermedia. View on Notion if you’d like to view the accompanying hypermedia gallery and interact with dynamic content. A browser that supports WebGL is recommended.synopsis² aka "is this worth my time?" (bc attn is increasingly the scarcest resource):[1] intro → [2] framework → [3.0] combinatorial cry...]]></description>
            <content:encoded><![CDATA[<figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/12284959eba9ba1096b9da0faa3dac931157eaba3b0636e3892e05b5fd33a163.jpg" alt="Illustration by Merijn Hos." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Illustration by Merijn Hos.</figcaption></figure><p>You are viewing “<strong><em>Towards a blueprint for planetary-scale cryptomedia</em></strong>” on Mirror. This is the minimalist, mobile-friendly version without in-line comments and embedded hypermedia.</p><p>View on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dcfstate.notion.site/Towards-a-blueprint-for-planetary-scale-cryptomedia-944a5ae77fd94209b4d59594a4c9e704">Notion</a> if you’d like to view the accompanying hypermedia gallery and interact with dynamic content. A browser that supports WebGL is recommended.</p><hr><h3 id="h-synopsis-aka-is-this-worth-my-time-bc-attn-is-increasingly-the-scarcest-resource" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">synopsis² aka &quot;is this worth my time?&quot; (bc attn is increasingly the scarcest resource):</h3><p><strong>[1] intro → [2] framework → [3.0] combinatorial cryptomedia → [3.1] primordial buzzword soup → [3.2] humanity’s mood ring → [3.3] computational market democracy</strong></p><p><strong>[1]:</strong> humanity is a collective, planetary-scale force. we need planetary-scale media. this media needs to be decentralized → planetary-scale cryptomedia</p><p><strong>[2]:</strong> scalability, decentralization, and agency/interactivity (s/d/a) are key attributes of digital media. modern media is attempting to simulmax all three attributes. 
the platonic ideal of an open metaverse is the simulmax of s/d/a.</p><p><strong>[3.0]:</strong> s/d/a simulmax for media requires ai/ml and crypto. s/d/a simulmax = open metaverse = digital representation of Jung&apos;s concept of &quot;collective unconscious.&quot; refik anadol is currently the best example of an artist using ai/ml techniques to scale collective representation within media.</p><p><strong>[3.1]:</strong> toys become big things. ppl are currently combining ai x nft x crypto into toys like iNFTs and AI-generated cryptomedia. these toys are considered within &quot;gaming&quot; and &quot;art&quot; but will evolve to be taken more seriously later.</p><p><strong>[3.2]:</strong> imagine a real-time representation of humanity&apos;s collective mood, accomplished through some combination of homomorphic encryption + federated learning + NLP + RTS. we already have precedents for this, and HMR is just an obvious evolution along the s/d/a framework.</p><p><strong>[3.3]:</strong> iNFTs (or other mechanisms for self-sovereign personal digital agents) might eventually be used for digital democracy. interfacing digital agents with markets (fiat &amp; crypto) is <em>already</em> possible. digital agent interop w/in the State’s govt interfaces will require time for widespread discussion about the legitimacy of digital democracy.</p><h3 id="h-synopsis" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Synopsis:</h3><p><strong>[1] Introduction</strong></p><p>The story of the 20th and 21st centuries is a story about humanity&apos;s realization and continued reconciliation of its status as an interconnected, planetary-scale force. 
Benjamin Bratton posits that &quot;<strong>Planetary-Scale Computation should be understood as the means of and for the liberation and articulation of public reason, collective intelligence and technical abstraction as collective self-composition.</strong>&quot; This goal provides an imperative for a collective media (i.e., &quot;any extension of ourselves&quot;, in the words of Marshall McLuhan) that uses planetary-scale computation to help us better know (sense, feel, understand, etc) ourselves — this media must be decentralized in order to scale to 7.9 billion people.</p><p><strong>[2] Framework</strong></p><p>Media and media platforms can be placed within a framework of scalability, decentralization, and agency that is analogous to that of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.ca/general/2021/04/07/sharding.html">Scalability Trilemma</a> for blockchains. The evolution of media in the digital age of the 21st century has historically tended towards further scalability and further user agency/interactivity but only recently has media evolution along further decentralization taken place. The platonic ideal of the &quot;Open Metaverse&quot; exists as the simultaneous maximization of scalability, decentralization, and agency and therefore requires crypto (i.e., cryptoeconomic mechanisms, cryptographic protocols) to serve as the economic membrane for both decentralization and a scale that cannot be achieved in centralized models, which become user-extractive over time. 
AI and crypto can solve the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html">social scalability</a> problems of our global media ecosystem — Web3&apos;s promise is that of social scalability, that is, to increase &quot;the number of people who can beneficially participate in the institution&quot; of our public digital space.</p><p><strong>[3.0] Combinatorial Cryptomedia</strong></p><p>The simultaneous maximization of scalability, decentralization, and agency for media and media platforms requires further development and adoption of crypto and AI/ML-based innovations in social scalability. AI/ML techniques are already being applied to represent collective psychic processes by artists like Refik Anadol, who is actualizing the concepts of the &quot;collective unconscious&quot; (Carl Jung) and the &quot;collective memory&quot; (Maurice Halbwachs) within his artworks.</p><p><strong>[3.1] Primordial Buzzword Soup: Crypto, NFTs, AI, and the Metaverse</strong></p><p>It turns out that the actualization of the Open Metaverse and the representation of our collective psychic processes within digital media are one and the same act. The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cdixon.org/2010/01/03/the-next-big-thing-will-start-out-looking-like-a-toy">toys that will become the next big things</a> are being created through combinatorial experimentation of primitives within our zeitgeist’s primordial buzzword soup (crypto! NFTs! AI! metaverse!). We will be focusing on the toys emerging at the intersection of cryptomedia (of which I consider NFTs a subcategory) and AI/ML, namely iNFTs and AI-generated cryptomedia. 
This particular track of media evolution corresponds to a rightward movement in the upper half of my framework, in which already decentralized cryptomedia are seeking to increase scalability and agency <strong>(NFTs → iNFTs and dNFTs → multisig i/dNFTs → fully decentralized cryptomedia)</strong>. The synthesis of crypto and AI challenges <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?t=1735&amp;v=J2klGJRrjqw&amp;feature=youtu.be">characterizations</a> of these technologies that map them along opposite ends of the centralization-decentralization spectrum. Some continually advancing tech primitives (both crypto and non-crypto) that hold promise in helping achieve S/D/A simulmax include zkSTARKs/zkSNARKs, timelock encryption and VDFs, GPT+, Merkle-CRDTs, further adoption of real-time streaming (RTS) architectures, dynamic bonding curves, and crypto-native serverless.</p><p><strong>[3.2] Humanity&apos;s Mood Ring</strong></p><blockquote><p><strong>Towards a Blueprint</strong>: [<em>real-time streaming (RTS) architecture</em>] + [<em>VQGAN+CLIP</em> OR <em>NLP Sentiment Visualizer</em>] + [<em>federated learning</em>] + [<em>homomorphic encryption</em>] + [<em>DAO</em>] + [<em>Web3 social media/messaging</em>] + [<em>ZKPs</em>] + [<em>anti-Sybil mechanisms</em>: some combination of token-curated registries, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.kleros.io/proof-of-humanity-an-explainer/">Proof-of-Humanity</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/CirclesUBI/whitepaper">decentralized trust graphs</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/6LbRtvdRVBw?t=2040">&quot;DAO for verifying humans&quot;</a>]</p></blockquote><p>Imagine wearing a mood ring that reflects the current collective mood of humanity — this is 
Humanity&apos;s Mood Ring (HMR). HMR has multiple existing precedents and is close to being, if not already, possible from a technical standpoint (though social adoption will lag as concepts like federated learning, differential privacy, and crypto permeate the Overton Window).</p><p><strong>[3.3] Computational Market Democracy</strong></p><blockquote><p><strong>Towards a Blueprint</strong>: [<em>self-sovereign digital agents</em>: iNFTs] + [<em>interoperability with interfaces for markets &amp; governance</em>] + [Distributed/Decentralized/Mesh C/N/S] + [<em>continued advances in AI/ML</em>]</p></blockquote><p>A preliminary exploration of the intersection of direct, digital [market] democracy and personal digital agents, as well as speculative visions for how CMD <em>could</em> manifest, ultimately benchmarked to how this prospective future is <em>already</em> being manifested.</p><hr><h2 id="h-1-introduction" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[1] Introduction</h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5f84b024385ac60ff7205feff862f51a1b764d75f53fc0e442779ddb2ef7887c.png" alt=" from The Red Book by Carl G. Jung" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">from The Red Book by Carl G. Jung</figcaption></figure><blockquote><p>&quot;The decisive question for man is: Is he related to something infinite or not? That is the telling question of his life. Only if we know that the thing which truly matters is the infinite can we avoid fixing our interests upon futilities, and upon all kinds of goals which are not of real importance.&quot; — <strong>Carl G. 
Jung</strong>, <em>Memories, Dreams, Reflections</em></p></blockquote><blockquote><p>“ … in the last analysis what is the fate of great nations but a summation of the psychic changes in individuals?” — <strong>Carl G. Jung</strong>, <em>The Archetypes and the Collective Unconscious</em></p></blockquote><p>The story of the 20th and 21st centuries is a story about humanity&apos;s realization and continued reconciliation of its status as an interconnected, planetary-scale force. The formalization of this relatively recent shared reality is a continuing process; for example, the Anthropocene Working Group of the International Commission on Stratigraphy is currently seeking to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nature.com/articles/d41586-019-01641-5">identify a definitive geological marker based on radionuclides from atomic detonations from 1945 to 1963</a> in order to formalize the beginning of the geologic epoch of the Anthropocene, defined &quot;<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.merriam-webster.com/dictionary/Anthropocene">as the period of time during which human activities have had an environmental impact on the Earth</a>.&quot; Although the specific, defining &quot;moment&quot; of this collective realization will be a continued topic for retrospective narrativization, the mass articulation of the concept of mutually assured destruction via full-scale nuclear war throughout the Cold War era is clearly a prime candidate. 
Since the end of the Cold War, our global &quot;<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/History_by_period#Contemporary_Period_(1945%E2%80%93present)">Contemporary Period</a>&quot; has tautologically been defined by those happenings that highlight the unmistakably interconnected nature of humanity — global contagion and virality take on financial, ecological, memetic/informatic, and, as has been made evident to us most recently in the form of COVID-19, biomaterialistic forms. As I put the finishing touches on this piece on November 26th, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ft.com/content/8df04124-c8e7-4c30-b4da-082ea635f308">a new coronavirus variant</a> reminds us, yet again, of the inescapably interconnected nature of our modern existence.</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=y8LzY24ChtQ"><em>What is Planetary-Scale Computation For?</em></a> by <strong>Benjamin Bratton</strong>:</p><blockquote><p><em>[8:00]</em> <strong>Benjamin</strong>: The very concept of climate change itself, not the chemical and ecological phenomena, but the <em>concept</em> of climate change (the model of the statistical regularity that we refer to as &quot;climate change&quot;) is itself an epistemological accomplishment of planetary-scale computation. Without that sensing, modeling, calculation, simulation apparatus, from satellites to temperature [sensing] to the supercomputing simulations, the very idea of climate change itself would not have been provided. 
[It] would not have been possible.</p></blockquote><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1441cf825376cdee81a1f0586e69b7e8b3bbc3696885ae8adf7a0cfa661f97cb.png" alt="What is Planetary-Scale Computation For? — Benjamin Bratton (22:15)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">What is Planetary-Scale Computation For? — Benjamin Bratton (22:15)</figcaption></figure><p>The goal of &quot;liberation and articulation of public reason, collective intelligence and technical abstraction&quot; provides an imperative for media (i.e. &quot;any extension of ourselves&quot;, in the words of Marshall McLuhan) that helps the collective better know (<em>sense</em> and <em>understand</em>) itself. <strong><em>Towards a blueprint for planetary-scale cryptomedia</em></strong> is a first swipe at developing a framework for thinking about how the ongoing development of decentralized media through crypto (i.e., cryptoeconomic mechanisms and cryptographic protocols) might enable humanity to better know itself as this class of technologies (i.e., cryptomedia) continues to be developed and adopted. A breakdown of this work is as follows:</p><ul><li><p>&quot;<strong>Towards ...&quot;</strong>: I recognize that the scope of this work is extremely ambitious (some might argue hubristic), so I&apos;ll be happy to have been <em>directionally</em> correct.</p></li><li><p><strong>&quot;... a blueprint for ...&quot;</strong>: &quot;Blueprint&quot; because I will point out the <em>actual</em> or near existence of modules that are <em>already</em> making planetary-scale cryptomedia a reality.</p></li><li><p><strong>&quot; ... 
planetary-scale ...&quot;</strong>: Borrowed from Benjamin Bratton&apos;s concept of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://strelkamag.com/en/article/new-world-order-for-planetary-governance">planetary-scale computation</a>. For our purposes, that cryptomedia is &quot;planetary-scale&quot; means that it is technically <em>and</em> socially scalable across the 7.9+ billion people living on Earth.</p></li><li><p><strong>&quot; ... cryptomedia&quot;</strong>: Jacob Horne (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://zora.co/">ZORA</a> co-founder and from whom I co-opt the term &quot;cryptomedia&quot;) has defined cryptomedia as a &quot;medium for anyone on the internet to create <em>universally accessible</em> and <em>individually ownable</em> hypermedia.&quot; NFTs are a subset of cryptomedia.</p></li></ul><hr><h2 id="h-2-the-framework" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[2] The Framework</h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/20b02b122f54c44327c84ae47cc907785fc108fea0392e64e9593521f339d100.png" alt="The three dimensions of media in this framework are analogous to the three properties of blockchains implied by the Scalability Trilemma." 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">The three dimensions of media in this framework are analogous to the three properties of blockchains implied by the Scalability Trilemma.</figcaption></figure><p><em>View on </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.figma.com/file/funHMjtAf1Pnaq7pbq5iDg/planetary-scale-cryptomedia?node-id=0%3A1"><em>Figjam</em></a><em> for better resolution.</em></p><p>This framework places media and media platforms along three dimensions:</p><ul><li><p><strong>x-axis</strong>: Scalability</p></li><li><p><strong>y-axis</strong>: Decentralization</p></li><li><p><strong>node shape</strong>: Agency / Interactivity / Degrees of Freedom</p></li></ul><p>Broadly speaking, Web 2.0 media companies sought to maximize some combination of the scalability of their media assets and the level of agency offered to the user/consumer/player. While jpeg/mp3/mp4 files are nearly infinitely scalable in that the same mp4 file can be replicated and distributed at <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/aggregation-theory/">zero marginal cost</a>, the affordances for interaction possible in these mediums are scant. On the other hand, while video games like Minecraft and Fortnite are highly interactive and afford higher degrees of freedom, the scalability of multiplayer video games across large numbers of players is limited by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://forums.eveonline.com/t/super-lag-during-large-fleet-battles-node-crashes-etc/90824">technical constraints in synchronizing interactions and dependencies</a> between distributed clients (players). 
It was only relatively recently that the idea of another dimension of consideration for media, namely decentralization, became widespread with the cultural rise of NFTs and &quot;Web3&quot;.</p><p>The platonic ideal of the &quot;Open Metaverse&quot; exists as a circle in the top right corner of my proposed framework, thereby representing media that simultaneously maximizes scalability, decentralization, and agency. That <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://naavik.co/business-breakdowns/into-the-void">the establishment of a Metaverse requires crypto to serve as a robust economic membrane</a> can be explained through this framework by interpreting Web3 as a reaction to the limitations of scalability achieved through centralization — the centralized nature of Web 2.0 platforms limits both the extensity and intensity of participation in spaces that disproportionately extract from users. Web3&apos;s promise is <em>social scalability</em>, that is, to increase &quot;the number of people who can beneficially participate in the institution&quot; of our public digital space.</p><p>from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html">Money, blockchains, and social scalability</a> by <strong>Nick Szabo</strong>:</p><blockquote><p>Social scalability is the ability of an institution –- a relationship or shared endeavor, in which multiple people repeatedly participate, and featuring customs, rules, or other features which constrain or motivate participants’ behaviors -- to overcome shortcomings in human minds and in the motivating or constraining aspects of said institution that limit who or how many can successfully participate. 
Social scalability is about the ways and extents to which participants can think about and respond to institutions and fellow participants as the variety and numbers of participants in those institutions or relationships grow. It&apos;s about human limitations, not about technological limitations or physical resource constraints.</p></blockquote><blockquote><p>Even though social scalability is about the cognitive limitations and behavior tendencies of minds, not about the physical resource limitations of machines, it makes eminent sense, and indeed is often crucial, to think and talk about the social scalability of a technology that facilitates an institution. The social scalability of an institutional technology depends on how that technology constrains or motivates participation in that institution, including protection of participants and the institution itself from harmful participation or attack. One way to estimate the social scalability of an institutional technology is by the number of people who can beneficially participate in the institution. Another way to estimate social scalability is by the extra benefits and harms an institution bestows or imposes on participants, before, for cognitive or behavioral reasons, the expected costs and other harms of participating in an institution grow faster than its benefits.</p></blockquote><p>The crux of the current iteration of this Web3/crypto/Metaverse buzzword zeitgeist can be boiled down to one simple question: What does a socially scalable planetary-scale media that trustlessly affords individuals agency and expressivity look like? 
<strong>Or, put simply, how do you get 7.9 billion people in the same room together?</strong></p><p>If we were to define that &quot;room&quot; as a very large physical building, then Tim Urban has shown us that <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://waitbutwhy.com/2015/03/7-3-billion-people-one-building.html">cramming 7+ billion people together</a> is possible, but &quot;beneficial participation&quot; would probably be impossible under such conditions. We could dematerialize our definition of &quot;room&quot; to a single voicecall between all 7.9 billion of us but, as anyone who has ever played COD on Xbox Live knows, beneficial participation would still be impossible. A single chatroom, even if we suspended the laws of physics and assumed perfect synchronization with no latency across 7.9 billion connected devices, wouldn&apos;t be socially scalable for similar reasons. These are instances in which affordances for individuals&apos; agency in shared spaces result in chaos.</p><p>If we dial back the individual affordances for agency/DOF/interactivity to allowing each person the bare minimum of communicating a &quot;0&quot; or &quot;1&quot; by flipping a single, pre-assigned pixel on a single 88,882 x 88,882 (= ~7.9 billion) grid, then, even assuming that we&apos;ve solved the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.kleros.io/proof-of-humanity-an-explainer/">Proof of Humanity</a> problem and that there&apos;s sufficient trust in the backend host(s), we still haven&apos;t scaled <em>beneficial</em> participation.</p><p>We can dial up individual affordances for expression/agency in digital space to their present maximum, in the form of a hypothetically perfectly decentralized and trustless Minecraft server instance with zero latency across 7.9 billion distributed devices, but the experience would probably devolve into a virtual rave at the <a target="_blank" 
rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Focal_point_(game_theory)">Schelling point</a> of (x=0, y=0, z=0). Needless to say, neither the addition of a global chatroom nor voicechat (whether proximity-based or global) would enable scalable beneficial participation.</p><hr><h2 id="h-3-combinatorial-cryptomedia" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[3] Combinatorial Cryptomedia</h2><blockquote><p>Innovations in social scalability involve institutional and technological improvements that move function from mind to paper or mind to machine, lowering cognitive costs while increasing the value of information flowing between minds, reducing vulnerability, and/or searching for and discovering new and mutually beneficial participants. — <strong>Nick Szabo</strong>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html">Money, blockchains, and social scalability</a></p></blockquote><p>The manifestation of the &quot;Open Metaverse&quot;, abstractly conceptualized as the simultaneous maximization of scalability, decentralization, and agency under my proposed framework, requires further development and adoption of crypto and AI/ML-based <em>&quot;Innovations in social scalability&quot;</em> that <em>&quot;move function from ... 
mind to machine&quot;,</em> thereby <em>&quot;lowering cognitive costs while increasing the value of information flowing between minds, reducing vulnerability&quot;</em> while also <em>&quot;searching for and discovering new and mutually beneficial participants&quot;</em>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0cec59a87f8c78d09357a9d4108c2b489746c726b696168ba104b590b9c0f97a.png" alt=" " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><div data-type="youtube" videoId="MbT4809Jupg">
      <div class="youtube-player" data-id="MbT4809Jupg" style="background-image: url('https://i.ytimg.com/vi/MbT4809Jupg/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=MbT4809Jupg">
          <img src="{{DOMAIN}}/editor/youtube/play.png" class="play"/>
        </a>
      </div></div><p><em>&quot;Unsupervised&quot; alludes to the use of </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Unsupervised_learning"><em>unsupervised machine learning</em></a><em> algorithms. Refik is known to use the </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://umap-learn.readthedocs.io/en/latest/"><em>UMAP dimension reduction technique</em></a><em> to create works such as these. UMAP belongs to the class of unsupervised ML techniques known as dimensionality reduction techniques, to which </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://setosa.io/ev/principal-component-analysis/"><em>PCA</em></a><em> and </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://distill.pub/2016/misread-tsne/"><em>t-SNE</em></a><em> also belong.</em></p><p>The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/New_media_art">new media art</a> works of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://refikanadol.com/about/">Refik Anadol</a> represent the current cutting edge of the application of AI/ML methods to vast amounts of data in order to create art. While no human has the capability to meaningfully process the output of a hypothetical chatroom with 7.9 billion concurrent participants, machine learning algorithms <em>can</em>. 
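</p><p><em>For the curious, this class of unsupervised dimensionality reduction can be sketched minimally. The sketch below uses a plain-NumPy PCA (the simplest member of the family, not the UMAP pipeline Refik actually uses), and a random matrix as a stand-in for a real dataset:</em></p>

```python
import numpy as np

def pca_embed(records: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Project high-dimensional records onto their top principal axes,
    yielding low-dimensional coordinates (e.g. an (x, y, z) position
    per record)."""
    centered = records - records.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Stand-in for a metadata matrix: 1,000 records, 50 features each.
rng = np.random.default_rng(0)
records = rng.normal(size=(1000, 50))

coords = pca_embed(records)  # one (x, y, z) point per record
```

<p><em>Swapping in UMAP (via the umap-learn library linked above) keeps the same shape of pipeline but trades linear projections for a nonlinear layout.</em></p><p>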
Refik&apos;s &quot;<em>Data Universe</em>&quot; uses UMAP to reduce the 138,151 records in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/MuseumofModernArt/collection">MoMA research dataset</a> down to 7 dimensions (x, y, z, r, g, b, t), where (x, y, z) represent coordinates in a 3-dimensional space, (r, g, b) represent red/green/blue, and (t) represents time. The scale at which he&apos;s thinking about applying his techniques is made evident by his body of work and by statements such as those made in a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.moma.org/magazine/articles/658">recent interview with the MoMA and Feral File</a>, in which they speak about Refik&apos;s ongoing MoMA exhibition and corresponding NFT sales.</p><blockquote><p><strong>Refik</strong>: The first month of my residency at AMI, I found a wonderful open-source cultural archive in Istanbul, called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://saltresearch.org/primo_library/libweb/static_htmls/salt/info_about_en_US.jsp">SALT</a>, with 1.7 million documents. Seeing these documents inspired me to think about how I could use my training in both AI and visual arts to creatively engage with vast archives of human experience. Could we apply AI algorithms to a library that is open to everyone?</p></blockquote><blockquote><p><strong>Refik</strong>: For me, art reflects humanity’s capacity for imagination. 
And if I push my compass to the edge of imagination, I find myself well connected with the machines, with the archives, with knowledge, and the collective memories of humanity.</p></blockquote><p>While Refik&apos;s use of advanced AI/ML methods in his ambition to artistically represent &quot;the collective memories of humanity&quot; is cutting edge, neither the idea of a collective psychic process nor the application of technology to actually represent said process is new.</p><p>The underlying idea goes at least as far back as Carl G. Jung&apos;s conception of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Collective_unconscious">collective unconscious</a> in a (now lost) 1916 essay, &quot;<em>The Structure of the Unconscious</em>.&quot; More notable, however, was Jung&apos;s &quot;<em>The Archetypes and the Collective Unconscious</em>&quot; (1959), published two years prior to Jung&apos;s death in 1961. Though published decades before the rise of the Internet, his commentary resonates more than ever in our digital age:</p><blockquote><p>“The mirror does not flatter, it faithfully shows whatever looks into it; namely, the face we never show to the world because we cover it with the persona, the mask of the actor.” ― <strong>C.G. 
Jung</strong>, <em>The Archetypes and the Collective Unconscious</em> (1959)</p></blockquote><p>The practice of applying technology to represent collective psychic processes goes as far back as Maurizio Bolognini&apos;s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bolognini.org/foto/cims.htm"><em>Collective Intelligence Machines</em></a> series, which began in the wake of the peak of the Dot-com era in 2000.</p><p>From <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.generativeart.com/on/cic/papersGA2004/b9.htm"><em>Programmed Machines: Infinity and Identity</em></a> (Dec 2004) by Maurizio Bolognini:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/236088fca84a4f3399ff8c7d742ea544d15e047cfc325a7e3f2c264880a4f5f2.png" alt=" " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><blockquote><p>I would like to clarify this aspect by pointing out the ways in which results that are out of my control (in most cases images) are generated in my works. Delegating this process to a device is possible by adopting two different approaches:</p><ol><li><p>the use of algorithms capable of making random choices (randomisation): any computer can generate pseudo-random numbers starting from a given numerical series which can be activated from various points each defined by a random event (for example, time measured in milliseconds);</p></li><li><p>the introduction of an evolutionary principle which transfers intelligence to the system; this can be done in two further ways: through <strong>the application of artificial intelligence or collective intelligence</strong>. 
In the former case, programming techniques (genetic algorithms, neural networks etc.) are used to develop different possible solutions according to their fitness to given objectives. In the latter case, procedures are applied which enable the public to interact and become part of the device.</p></li></ol></blockquote><p>So while neither the idea nor the practice is new, from 1916 to 2000 to 2021 the progress of a suite of technologies that can be composed to simultaneously solve for scalability, decentralization, and agency is making the full manifestation of media that captures humanity&apos;s collective psychic processes a real, imminent possibility.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b2c82a582190bf776490a911564d27896d634de82f15d32ca601de4439895124.gif" alt="72 hour timelapse of Place (Reddit) (2017); Source: When Pixels Collide" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">72 hour timelapse of Place (Reddit) (2017); Source: When Pixels Collide</figcaption></figure><hr><h2 id="h-31-primordial-buzzword-soup-crypto-nfts-ai-and-the-metaverse" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[3.1] Primordial Buzzword Soup: Crypto, NFTs, AI, and the Metaverse</h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/28d515c5fb5c340869745d7248e9a564d5eaf13a54d041cdf3814a7a971e119d.png" alt="from Artist in the Cloud (Jul 2019) by Gene Kogan" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">from Artist in the Cloud (Jul 2019) by Gene 
Kogan</figcaption></figure><p>Actualizing the Open Metaverse and representing our collective psychic processes within digital media turn out to be the same act. The underlying technological medium is agnostic to the labels that contemporary society assigns to the message conveyed by the medium, so while the Metaverse (including the less articulated crypto-dependent instantiation of the Metaverse) is currently predominantly associated with gaming and art, we shouldn&apos;t forget that <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cdixon.org/2010/01/03/the-next-big-thing-will-start-out-looking-like-a-toy">big things start out looking like toys</a> as evidenced by the likes of <strong>Facebook</strong> (originally a website Mark Zuckerberg built to rate women&apos;s looks → now world&apos;s 8th largest public co) and <strong>Nvidia</strong> (initially focused on rendering 3D graphics for video games before the ML explosion induced GPU demand → now world&apos;s 9th largest public co).</p><p>The [toy → big thing] dynamic is currently unfolding in our zeitgeist of primordial buzzword soup (Crypto! NFTs! AI! Metaverse!) in which people are exploring combinations and conjugations of technological primitives that are presently manifesting as digital toys (aka media). 
These toys, for anyone paying attention, hold the latent potential for revolutionary change on a planetary scale and, in much the same way that a college kid&apos;s dumb website and advancements in video gaming hardware led to revolutions in human connection and AI/ML, at least a few of the toys borne of the <em>artificialmetacryptoverseintelligence</em> buzzword soup will lead to effects of similar magnitudes.</p><p>This accelerating movement of combinatorial experimentation between these buzzword primitives is illustrated in the top half of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Scalability-Decentralization-and-Interactivity-8fa41092919649d6908ba1edcdfcbe7c"><strong>Scalability, Decentralization, and Interactivity</strong></a> framework, in which decentralized cryptomedia are seeking to increase scalability and agency <strong>(NFTs → iNFTs and dNFTs → multisig i/dNFTs → fully decentralized cryptomedia)</strong> and those mediums that are already sufficiently decentralized <em>and</em> scaled are tending towards increasing agency and interactivity (in addition to continued scale and decentralization). The former movement (NFTs → ... 
→ fully decentralized cryptomedia) is what we&apos;ll be largely focusing on for the remainder of this work.</p><p><em>[Not explicitly represented within my framework but still worth mentioning is the less-pronounced movement of scaled media entities/platforms towards decentralization as in the case of Twitter&apos;s </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blueskyweb.org/"><em>BlueSky</em></a><em> initiative and Google&apos;s push for </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Federated_Learning_of_Cohorts"><em>FLoCs</em></a><em>, though this movement is a topic for another time.]</em></p><p>With respect to media, the intersection of crypto (of which I consider NFTs a subcategory) and AI/ML is currently manifesting as either:</p><ol><li><p>Intelligent NFTs (iNFTs), or NFTs with embedded intelligence. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alethea.ai/">Alethea AI</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alteredstatemachine.xyz/">Altered State Machine</a> are two projects exploring this space.</p></li><li><p>AI-generated cryptomedia, in which cryptoeconomic protocols govern the creation process and the resultant media may or may not be issued as an NFT. 
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://botto.com/">Botto</a> (issues creations as NFTs) and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://abraham.ai/">Abraham</a> (does not currently issue creations as NFTs) are two projects exploring this space.</p></li></ol><p>These particular syntheses of crypto and AI challenge the characterization (see <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/J2klGJRrjqw?t=1735">here</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://emerging.substack.com/p/technology-governance-fit">here</a>, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://stratechery.com/2021/technological-revolutions-follow-up-crypto-and-cheap-energy-the-stratechery-2-schedule/">here</a>) of crypto and AI as generally mapping on opposite ends of the spectrum of centralization vs decentralization:</p><blockquote><p>&quot;Two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. And even though I think these things are underdetermined, I do think these two map in a way, politically, very tightly on this centralization-decentralization thing. <strong>Crypto is decentralizing, AI is centralizing</strong>. If you want to frame it more ideologically, you could say that <strong>crypto is Libertarian and AI is Communist</strong>. ... AI is Communist in the sense that it&apos;s about Big Data, it&apos;s about Big Governments controlling all the data, knowing more about you than you know about yourself ... 
I think there probably <em>are</em> ways that AI could be Libertarian and there are ways that crypto could be Communist, but I think that&apos;s harder to do.&quot;</p><p>— <strong>Peter Thiel</strong> in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/J2klGJRrjqw?t=1735"><em>Cardinal Conversations: Reid Hoffman and Peter Thiel on &quot;Technology and Politics&quot;</em></a> (Feb 2018)</p></blockquote><p>It turns out, however, that the application of crypto (i.e., cryptoeconomic mechanisms and cryptographic protocols) has the ability to transmute AI into a decentralizing force rather than a centralizing one. Gene Kogan highlights how the application of a combination of &quot;homomorphic encryption + smart contract + oracle&quot; to federated learning (i.e., the &quot;FL&quot; in the &quot;FLoC&quot; algorithm being pushed by Google) can eliminate the negatives of privacy loss and unfair economic extraction typically associated with centralized AI/ML methods:</p><p>from <em>Lecture on Decentralized AI</em> (Dec 2017) by Gene Kogan:</p><ul><li><p>(<em>1:32:58</em> to <em>1:50:40</em>) of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/AcKMSLn_9Yg?t=5579">lecture</a>.</p></li><li><p>Slides 136-143 and 186-213 of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.google.com/presentation/d/1RI6BnBsJtTBg3djZbD1hDjuhQagPBAUtAqcqoYWQTxU/edit#slide=id.g2c43a7a0eb_6_0">accompanying slidedeck</a>.</p></li></ul><p><strong>[1] Centralized machine learning:</strong></p><div data-type="youtube" videoId="B1lf36QqKL4">
      <div class="youtube-player" data-id="B1lf36QqKL4" style="background-image: url('https://i.ytimg.com/vi/B1lf36QqKL4/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=B1lf36QqKL4">
          <img src="{{DOMAIN}}/editor/youtube/play.png" class="play"/>
        </a>
      </div></div><p><strong>[2] Decentralized machine learning via (Federated learning + homomorphic encryption + smart contract + oracle):</strong></p><div data-type="youtube" videoId="H08pBegFJtY">
      <div class="youtube-player" data-id="H08pBegFJtY" style="background-image: url('https://i.ytimg.com/vi/H08pBegFJtY/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=H08pBegFJtY">
          <img src="{{DOMAIN}}/editor/youtube/play.png" class="play"/>
        </a>
      </div></div><p><em>Note: Title is supposed to say &quot;Federated learning + homomorphic encryption + smart contract + oracle&quot; but &quot;oracle&quot; is cut off in this graphic.</em></p><p>I highlight the specific primitives proposed by Gene in his 2017 lecture, not to claim that the specific combination of technological primitives that he outlined is <em>the</em> requisite architecture for decentralizing AI, but to show the concreteness of how elements of the buzzword soup can be tangibly recombined to desirable effect. There&apos;s every reason to believe that further development of tech primitives, both crypto and non-crypto, will lead to further <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.strangeloopcanon.com/p/combinatorial-theory-of-progress">combinatorial</a> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mattsclancy.substack.com/p/combinatorial-innovation-and-technological">innovation</a> towards increasing the social scalability of cryptomedia.</p><p>A non-exhaustive list of the tech primitives relevant to scaling cryptomedia where progress has been made in the four years since Gene&apos;s 2017 lecture is as follows:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://consensys.net/blog/blockchain-explained/zero-knowledge-proofs-starks-vs-snarks/">zkSNARKs and zkSTARKs</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gwern.net/Self-decrypting-files">timelock encryption</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.trailofbits.com/2018/10/12/introduction-to-verifiable-delay-functions-vdfs/">verifiable delay functions</a> (VDFs)</p></li><li><p>GPT-3 (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://beta.openai.com/">Open 
Beta</a> as of Nov 2021)</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.figma.com/blog/how-figmas-multiplayer-technology-works/">CRDTs</a> + <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.ipfs.io/concepts/merkle-dag/">Merkle-DAGs</a> → <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2004.00107.pdf">Merkle-CRDTs</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aws.amazon.com/streaming-data/">real-time streaming</a> (RTS)</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://tokeneconomy.co/dynamic-token-bonding-curves-41d36e43befa">dynamic bonding curves</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=GOrD8kt2h1M">crypto-native serverless</a></p></li></ul><p>All of the pieces are now in place for us to conclude by examining two speculative examples of how combinations of primitives could eventually be used <strong>towards a blueprint for planetary-scale cryptomedia</strong> so that we might finally be able to get 7.9 billion people in the same proverbial room together.</p><hr><h2 id="h-32-humanitys-mood-ring" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[3.2] Humanity’s Mood Ring</h2><p><strong>Towards a Blueprint</strong>:</p><blockquote><p>[<em>real-time streaming (RTS) architecture</em>] + [<em>VQGAN+CLIP</em> OR <em>NLP Sentiment Visualizer</em>] + [<em>federated learning</em>] + [<em>homomorphic encryption</em>] + [<em>DAO</em>] + [<em>Web3 social media/messaging</em>] + [<em>ZKPs</em>] + [<em>anti-Sybil mechanisms</em>: some combination of token-curated registries, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://blog.kleros.io/proof-of-humanity-an-explainer/">Proof-of-Humanity</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/CirclesUBI/whitepaper">decentralized trust graphs</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/6LbRtvdRVBw?t=2040">&quot;DAO for verifying humans&quot;</a>]</p></blockquote><p>&quot;Humanity&apos;s Mood Ring&quot; (HMR) would allow for a real-time representation of our collective psyche via a cryptomedia object that reflects the collective, passive output of each DAO member that owns part of the cryptomedia. &quot;Passive&quot; because the data for the AI/ML ingest could be sourced from connected Web3 social media and messaging apps that the DAO member gives the &quot;Mood Ring&quot; permission to access in the background — homomorphic encryption and federated learning would allow contributing members to obtain mathematical guarantees against privacy leaks. You and your friend could have a serious, private conversation on a Web3 messaging platform while connected to an HMR program that&apos;s running sentiment analysis on the convo, with mathematical guarantees against data leakage from the background process.</p><p>RTS architecture applied to an AI-powered image creation algorithm (be it a modified VQGAN+CLIP or a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Sentiment_analysis">sentiment analysis</a> visualizer) would allow the cryptomedia object to shift continuously with the mood of all of its members, as proxied by an analysis of their media output on Web3 social apps. 
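</p><p><em>A toy sketch of the sentiment-to-image step: map each member&apos;s sentiment score to a color, then blend the colors into one collective hue. The score range and color mapping here are invented purely for illustration:</em></p>

```python
def sentiment_to_rgb(score: float) -> tuple[float, float, float]:
    """Map a sentiment score in [-1, 1] to an RGB color:
    negative -> blue, neutral -> grey, positive -> warm red."""
    t = (score + 1) / 2          # rescale to [0, 1]
    return (t, 0.5, 1 - t)       # red rises and blue falls with mood

def collective_mood(scores: list[float]) -> tuple[float, float, float]:
    """Blend every member's color into one 'mood ring' color by simple
    averaging -- only the aggregate is ever rendered."""
    colors = [sentiment_to_rgb(s) for s in scores]
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

# Toy per-member scores, e.g. output of background sentiment analysis.
ring = collective_mood([0.9, -0.2, 0.4, 0.1])
```

<p><em>A real HMR would stream such scores continuously rather than average a fixed list, but the shape of the computation is the same.</em></p><p>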
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=IlliqYiRhMU">Liquid neural network</a> (aka continuous-time neural network) architecture could eventually be applied to these image output programs to approach true &quot;real-time&quot; HMR. A sentiment analysis-based visualizer could assign colors to certain emotions and continuously output a multi-colored <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Bouba/kiki_effect">Bouba/kiki</a> shape. A VQGAN+CLIP-based architecture might continuously output something like:</p><p>from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/@genekogan/artist-in-the-cloud-8384824a75c7"><em>Artist in the Cloud</em></a> by <strong>Gene Kogan</strong>:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ee1d9f7e369a72167a9a246f929cc538928fa62ab1adbe75fe1ea6e142b8c592.gif" alt="Baseline autonomous artificial artist: a trainable generative model whose data, code, curation, and governance are crowd-sourced from a decentralized network of actors. The behavior of the program emerges from its cumulative interactions with these actors." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Baseline autonomous artificial artist: a trainable generative model whose data, code, curation, and governance are crowd-sourced from a decentralized network of actors. 
The behavior of the program emerges from its cumulative interactions with these actors.</figcaption></figure><p>If everyone in the HMR DAO started talking about &quot;Vitalik Buterin riding a unicorn to slay the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nickbostrom.com/fable/dragon.html">Dragon-Tyrant</a> with an Ethereum-tipped sword&quot; then the image output would autonomously reflect those elements. HMR&apos;s cryptomedia &quot;object&quot; could eventually become a 3D entity that interacts with elements in virtual environments by inheriting interoperability/capability with the various physics engines that govern those environments.</p><p>Such an architecture could theoretically encompass every human (through some ensemble of various anti-Sybil mechanisms) or be used to create collective mood rings for specific affinity groups (e.g., people who live in NYC, people who own this class of NFTs, etc.) by integrating zero-knowledge proofs (ZKPs) to confirm those relevant identifying aspects without sacrificing personal privacy. Manipulation of HMRs could <em>literally</em> kill everyone&apos;s vibe, so anti-Sybil mechanisms and ZKPs would be required to scalably prove that contributors were individual humans. Cryptographic guarantees of privacy-preservation and decentralized, collective ownership take the sting out of dystopian, surveillance capitalist critiques of such a system. 
Assuming you trusted the recording hardware, you could literally contribute data to the HMR via a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ouraring.com/">biometric mood ring</a> without concern that your data was being abused.</p><p>If this idea sounds implausible, consider this small slice of what has <em>already</em> been done and how conceptually close these projects are to HMR:</p><ul><li><p><strong>[May 2010]</strong> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mislove.org/twittermood/">Pulse of the Nation: U.S. Mood Throughout the Day inferred from Twitter</a> was made &quot;using over 300 million tweets (Sep 2006 - Aug 2009)&quot; and &quot;represented as density-preserving cartograms.&quot;</p><ul><li><p><em>Pulse of the Nation</em> + scale beyond US + use decentralized and privacy-preserving platforms <strong>⇒</strong> <strong>HMR</strong></p></li></ul></li><li><p><strong>[Summer 2013]</strong> MIMMI <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vice.com/en/article/d74g4z/public-art-installation-uses-twitter-analysis-to-create-a-minneapolis-mood-ring">[1]</a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://walkerart.org/magazine/be-nice-mimmi-is-listening">[2]</a> was an &quot;&apos;emotional gateway&apos; to Minneapolis&quot; by &quot;sourcing all tweets from people within 15 miles of MIMMI&quot; and using &quot;open source textual analysis technology to gauge whether tweets were positive or negative&quot; to &quot;[analyze] that text in real time.&quot;</p><ul><li><p><em>MIMMI</em> + scale up # of users + use decentralized and privacy-preserving platforms <strong>⇒</strong> <strong>HMR</strong></p></li></ul></li><li><p><strong>[Apr 2017 — Now]</strong> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://ai.googleblog.com/2017/04/federated-learning-collaborative.html">Gboard on Android</a> is a technology deployed &quot;to millions of heterogeneous phones&quot; within a system that &quot;needs to communicate and aggregate the model updates in a secure, efficient, scalable, and fault-tolerant way&quot;.</p><ul><li><p><em>Gboard</em> + decentralized, user-owned ML model (instead of owned by GOOG shareholders) + NLP-based sentiment analysis visualizer <strong>⇒ HMR</strong></p></li></ul></li><li><p><strong>[March 2020 — Now]</strong> Matt Kane&apos;s <em>&quot;Right Place &amp; Right Time&quot;</em> is &quot;<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cointelegraph.com/news/algorithmic-crypto-art-changes-appearance-to-reflect-bitcoin-volatility">an artwork that evolves dynamically in response to BTC price action.</a>&quot;</p><ul><li><p><em>Right Place &amp; Right Time</em> + make output continuous + replace Bitcoin price data ingest with crowdsourced sentiment data + AI/ML-based image output instead of pre-parameterized phase space of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://async.art/art/master/0x6c424c25e9f1fff9642cb5b7750b0db7312c29ad-245">rotation, scale, and position of correlating layers</a> <strong>⇒</strong> <strong>HMR</strong></p></li></ul></li><li><p><strong>[October 2021 — Now]</strong> &quot;<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://botto.com/about">Botto is a decentralized artist that generates art based on community feedback&quot; and &quot;Anybody can join and govern Botto, the decentralized autonomous artist.</a>&quot;</p><ul><li><p><em>Botto</em> + passive contribution via homomorphic encryption on Web3 social activity instead of active voting + continuous output instead of discrete output <strong>⇒</strong> <strong>HMR</strong></p></li></ul></li></ul><p>We&apos;re already all 
connected to each other; technology just helps us stop pretending that we&apos;re not.</p><hr><h2 id="h-33-computational-market-democracy" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">[3.3] Computational Market Democracy</h2><p><strong>Towards a Blueprint</strong>:</p><blockquote><p>[<em>self-sovereign digital agents</em>: iNFTs] + [<em>interoperability with interfaces for markets &amp; governance</em>] + [<em>Distributed/Decentralized/Mesh C/N/S + Telecoms</em>] + [<em>continued advances in AI/ML</em>]</p></blockquote><blockquote><p>Nothing is more dangerous than the influence of private interests in public affairs, and the abuse of the laws by the government is a less evil than the corruption of the legislator, which is the inevitable sequel to a particular standpoint. In such a case, the State being altered in substance, all reformation becomes impossible. A people that would never misuse governmental powers would never misuse independence; a people that would always govern well would not need to be governed.</p><p>If we take the term in the strict sense, there never has been a real democracy, and there never will be. It is against the natural order for the many to govern and the few to be governed. 
<strong>It is unimaginable that the people should remain continually assembled to devote their time to public affairs</strong>, and it is clear that they cannot set up commissions for that purpose without the form of administration being changed.</p><p>In fact, I can confidently lay down as a principle that, <strong>when the functions of government are shared by several tribunals, the less numerous sooner or later acquire the greatest authority</strong>, if only because they are in a position to expedite affairs, and power thus naturally comes into their hands.</p><p>— <strong>Jean-Jacques Rousseau</strong>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/The-Social-Contract-538efb29c4cd418bba23176b04d2edaa"><em>The Social Contract</em></a> (1762)</p></blockquote><blockquote><p><strong>Electric technology is directly related to our central nervous systems, so it is ridiculous to talk of &quot;what the public wants&quot; played over its own nerves</strong>. This question would be like asking people what sort of sights and sounds they would prefer around them in an urban metropolis! Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don&apos;t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth&apos;s atmosphere to a company as a monopoly. Something like this has already happened with outer space, for the same reasons that we have leased our central nervous systems to various corporations. 
As long as we adopt the Narcissus attitude of regarding the extensions of our own bodies as really out there and really independent of us, we will meet all technological challenges with the same sort of banana-skin pirouette and collapse.</p><p>— <strong>Marshall McLuhan</strong>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Understanding-Media-The-Extensions-of-Man-8bf05642a1bd42e38bfc2c5faa1f4e9b"><em>Understanding Media: The Extensions of Man</em></a> (1964)</p></blockquote><p>The number of components in this &quot;blueprint&quot; is smaller than that of HMR&apos;s blueprint because I cheated — the only technological primitive in this blueprint is &quot;iNFTs&quot;. &quot;Distributed/Decentralized/Mesh C/N/S (Compute, Network, Storage) + Telecoms&quot; isn&apos;t meant to represent a &quot;primitive&quot; so much as the material infrastructure (atoms) that facilitates modern communication (bits). The emphasis on &quot;Distributed/Decentralized/Mesh&quot; infrastructure is because the adoption of Computational Market Democracy (CMD) necessitates non-centralized architectures for the simple reason that no concentrated (monopolistic, oligopolistic) group of private interests (companies) could ever be fully entrusted with facilitating the [digital] representation of the body politic. The potential conflicts of interest write themselves. 
Imagine in the year 2040 that the facilitation of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Electronic_voting_in_Estonia">electronic vote</a> for America&apos;s Presidential election was contracted to AWS and that the vote was between two candidates with vastly differing views on regulating Amazon — <em>even with cryptographic guarantees</em>, could the American public ever fully trust AWS to facilitate this service?</p><p>As Mike Summers outlines in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://spectrum.ieee.org/online-voting-isnt-as-flawed-as-you-thinkjust-ask-estonia">Online Voting Isn’t as Flawed as You Think—Just Ask Estonia</a>, while you&apos;d pay only &quot;no more than US $200 for mixing and decrypting votes within an hour of the close of polls&quot; running on &quot;the equivalent of 40 Amazon Web Services&apos; m4.10xlarge virtual servers to run a U.S. presidential election&quot;, even &quot;assuming a 100% turnout of every U.S. citizen of voting age&quot;, &quot;civic leaders will probably be reticent to jump straight into using cloud computing for online voting&quot; and it would be more likely that &quot;the jurisdictions in each state, and in some instances individual counties, would want to purchase their own servers and infrastructure for online voting&quot;.</p><p>A discussion around whether even the State or, more precisely, particular governments (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/State_(polity)">Wikipedia</a>: &quot;<em>The state is the organization while the government is the particular group of people, the administrative bureaucracy that controls the state apparatus at a given time</em>&quot;) can be entrusted with the function of facilitation is a logical extension of my hypothetical AWS example. 
Imagine that the US had an online referendum in 2040 on whether the country should allow &quot;the jurisdictions in each state&quot; to host the computing infrastructure for online voting versus a public blockchain on a decentralized mesh network — could this referendum be facilitated by the &quot;jurisdictions in each state&quot; which themselves are the subject of the referendum? To quote McLuhan, &quot;... it is ridiculous to talk of &apos;what the public wants&apos; played over its own nerves&quot; — the imperative for decentralized communication networks for facilitating democratic processes is self-evident.</p><p>By now it should be clear that the bottleneck in the blueprint for CMD is the problem of socially legitimizing the establishment of &quot;interoperability with interfaces for markets &amp; governance&quot; as it pertains to existing State institutions (and that, paradoxically, the process of legitimizing CMD requires facilitation by those very institutions that CMD seeks to disrupt and obviate). However, outside of State institutions, interfaces with markets and governance that interoperate with digital agents <em>already exist</em>. With respect to traditional public equity markets, the combination of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://alpaca.markets/docs/">API-first brokerages</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.serverless.com/blog/cron-jobs-on-aws">automated serverless deployment via cloud compute services</a> <em>already</em> enables anybody with Internet access and the requisite knowledge to provision &quot;digital agents&quot; to allocate their capital across markets. 
With respect to crypto markets, tools like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=0HEudgDEVWs">Furucombo</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=YQGRIT8aV5c">Gelato</a> are introducing low code abstraction solutions to the already existing abstraction layers of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://defiprime.com/yield-aggregators">yield aggregators</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://thedefiant.io/dex-aggregators-the-search-engines-of-defi-trading/">DEX aggregators</a> (wen DEX aggregator-aggregator??), and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.indexcoop.com/#Producs">crypto indices</a> — digital agents can be (i.e., they already are) natively integrated into these interfaces because of the open nature of decentralized blockchains. With respect to crypto-native governance within protocols and DAOs, I&apos;d imagine that delegation, voting, proposals, etc. 
are already possible (though I have yet to see examples at this stage of ecosystem development).</p><p>These are developments that would have been impossible for Rousseau to foresee when he wrote &quot;It is unimaginable that the people should remain continually assembled to devote their time to public affairs ...&quot; in 1762, a time at which ~60% of France&apos;s labor force was still working in agriculture:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/35491bedb8b0423fec5e60917c4c6883bab71ba0dd734d19b07cfb154c3bf6d4.png" alt="from Our World in Data" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">from Our World in Data</figcaption></figure><p>Would the Rousseau of 2021 rethink his position about the amount of time the people have to devote to public affairs after discovering that less than 3% of his countrymen are engaged in working the land? Would a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://beta.openai.com/playground/p/default-chat">chat with GPT-3</a> and the knowledge that access to AI agents is rapidly tending towards democratization make him reconsider his stance on the unimaginability of continuous assembly?</p><p>At this point I should note that the intention of this section isn&apos;t to discuss the normative question of whether or not CMD is good/bad, should/shouldn&apos;t be, etc. Widespread, democratic participation is <em>the</em> sine qua non for a widespread, democratic discussion around how widespread, democratic participation should be facilitated. CMD&apos;s legitimacy can only be established within the broader, public sphere as better-formed articulations of the idea take shape and disseminate. 
Therefore, the remainder of this section will provide a sample of existing articulations of how CMD <em>could</em> manifest.</p><p><strong>Existing (NOW OR SOON: 1 to 10 years) → Intelligent NFTs</strong></p><p>from <em>Altered State Machine is Enabling AI Ownership via NFTs, Hereditary AI and Minting Brains</em> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alteredstatemachine.xyz/">Altered State Machine</a>):</p><blockquote><p><em>[02:24]</em> <strong>Aaron McDonald</strong>: Altered State Machine is two new primitives for the metaverse and the blockchain space. The first primitive is a way to prove you own an artificial intelligence agent through an NFT. What that does, is it enables connectivity of agents to processes in the blockchain and crypto space. That could be things like the tradeability of an AI. It could be something related to connecting an AI to DAO governance. It could be a way to embed AIs in protocols or use AIs as Oracles. All these different kinds of things you can do when you can connect an agent to the same ownership mechanics as an NFT.</p></blockquote><blockquote><p><em>[10:39]</em> <strong>Aaron McDonald</strong>: People watching the AIs as they learn, it’s like watching a kid learn how to walk. It’s quite an engaging piece of content in its own right but not only that, it lends itself really well to this emerging play to earn space because what you are seeing in that space, is you’ve got two classes now. You’ve got the asset owner class and you’ve got the player class, right. This whole two tier system where people might not be able to afford access but they’re renting humans to play it for them because they don’t have the time to play the game, right. We can flip that model a little bit because AIs can play autonomously. And so you can own the asset without having to rent a human to play the game. 
And so this new type of play to earn mechanic can emerge out of that.</p></blockquote><blockquote><p><em>[29:33]</em> <strong>Aaron McDonald</strong>: If we build out from that metaverse gaming sphere into the other two spheres, which are DeFi and the third being the notion of digital humans. In the DeFi space, what you have now is there are a lot of bots in that space already but they exist outside of the framework of protocol governance or outside of the framework of transparency that blockchain brings to protocols. And so there’s almost these two worlds that exist. There’s the apparent world, which everyone can see. And then there is this external world, which is murky. And what we can do with the agents now, is bring them into the framework of transparency. Now, a DAO can own the liquidation bot or you can have a quant bot that is owned by a DeFi fund on chain. And people can invest in these agents and train these agents, become good at a task. And then other people can invest through the NFT and make that process of investing in distribution an on chain thing, as opposed to something that happens outside of the blockchain environment.</p></blockquote><p>from Arif Khan - The Rise of Intelligent NFTs (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alethea.ai/">Alethea AI</a>):</p><blockquote><p><em>[21:30]</em> <strong>Arif Khan</strong>: We believe fundamentally that the Metaverse with billions of these interactive characters and the intelligence that will power these characters will be driven by AI.</p></blockquote><blockquote><p><em>[47:00]</em> <strong>Arif Khan</strong>: Now a quick example here would be you have a waifu character, or a Cryptopunk, or a Hashmask and now you want it to have a personality and you want to interact with it and you want to have a conversation with it or you want to learn from it. You can easily add that in and create a new class of characters — your Cryptopunk can talk to you, your waifu can talk to you, it can give its own personality, it can be a virtual assistant for you, it can set up appointments for you, it can be a part of your life and it will extend into the design spaces that exist today.</p></blockquote><p><strong>Speculative (NEAR TERM: 10 to 100 years) → Augmented Democracy</strong></p><p>from A bold idea to replace politicians by <strong>César Hidalgo</strong>:</p><blockquote><p><em>[8:15]</em> <strong>César Hidalgo</strong>: Politicians these days are packages and they&apos;re full of compromises. But you might have someone who can represent only you if you are willing to give up the idea that that representative is a human. If that representative is a software agent, we could have a senate that has as many senators as we have citizens. 
And those senators are going to be able to read every bill and they&apos;re going to be able to vote on each one of them.</p></blockquote><blockquote><p><em>[9:10]</em> <strong>César Hidalgo</strong>: So it would be a very simple system. Imagine a system where you log in, you create your avatar, and then you want to start training your avatar. So you can provide your avatar with your reading habits, or connect it to your social media, or you can connect it to other data, for example by taking a psychological test. And the nice thing about this is that there&apos;s no deception ... you are providing data to a system that is <em>designed</em> to be used to make political decisions on your behalf. Then you take that data and you choose a training algorithm. It&apos;d be an open marketplace in which different people can submit different algorithms to predict how you would vote based on the data that you&apos;ve provided and this system is open, so nobody controls the algorithms ... and eventually you can audit the system — you can see how your avatar is working and if you like it you can leave it on auto-pilot, if you want more control you can choose that they ask you everytime it makes a decision, or you can choose anywhere in between.</p></blockquote><p>from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.peopledemocracy.com/">Augmented Democracy</a> by <strong>César Hidalgo</strong>:</p><blockquote><p><strong>WHAT IS AUGMENTED DEMOCRACY?</strong> Augmented Democracy (AD) is the idea of using digital twins to expand the ability of people to participate directly in a large volume of democratic decisions. A digital twin, software agent, or avatar is loosely defined as a personalized virtual representation of a human. It can be used to augment the ability of a person to make decisions by either providing information to support a decision or making decisions on behalf of that person. 
Many of us interact with simple versions of digital twins every day. For instance, movie and music sites, such as Netflix, Hulu, Pandora, or Spotify, have virtual representations of their users that they use to choose the next song they will listen to or watch the movies they are recommended. The idea of Augmented Democracy is the idea of empowering citizens with the ability to create personalized AI representatives to augment their ability to participate directly in many democratic decisions.</p></blockquote><blockquote><p><strong>HOW WOULD ONE OF THESE EXPERIMENTAL &quot;AUGMENTED DEMOCRACY SYSTEMS&quot; WORK?</strong></p><p>What an Augmented Democracy system would need to do is predict how each of its users would vote on a bill that is being discussed in that country’s congress or parliament. These predictions would provide an estimate of the support that the bill would have received if it had voted directly by the population of users of an AD system instead of their elected representatives.</p><p>To provide these predictions, an AD system would need information from both users and bills. People participating in the system would provide information on a voluntary basis and have the ability to withdraw the information at any moment. This information could include active and passive forms of data. Active data includes surveys, questionnaires, and, more importantly, the feedback that people would provide directly to its digital twin (for instance, by correcting their twin when it made the wrong prediction). Passive data would include data that users already generate for other purposes, such as their online reading habits (e.g., New York Times vs Wall Street Journal), purchasing patterns, or social media behavior.</p><p>The digital twin would then combine a user’s data with information about a bill to predict how that user would vote on that bill. 
The algorithm used to make a prediction would be chosen by the user from an open marketplace of algorithms, and the user would be able to change it at any time. These algorithms could also make predictions on more nuanced issues, such as which specific parts of the bill the user is more inclined to agree or disagree with, or what pieces of data about a user prompted the AI to suggest a decision.</p></blockquote><blockquote><p>My recommendation for an AD system is not to design it based on the platform paradigm that dominates today’s web but instead use the protocol paradigm that dominated the early days of the web. Platforms, such as Facebook, are almost natural monopolies, whereas protocols, such as email, allow the creation of distributed systems (more similar to markets than monopolies).</p><p>In a protocol-based AD system each user can store their data in their own “personal data store,” or “data pod” (like the pods proposed by Tim Berners Lee in his Solid Project). The data can be in any of the many cloud providers of such a service or in a home computer. This is similar to email. Unlike social media, email services are provided by a large number of universities, companies, and other organizations. In a platform world (e.g., Facebook), some people have control over the entire platform. In a protocol system, like email, nobody has access to all of the email servers. It is a deeply fragmented and federated system by default.</p><p>The algorithms used to make predictions are also distributed and not unique. They exist as part of a marketplace that is open for people wanting to contribute algorithms. 
Users can select which algorithms they allow to interact with their data, based on how accurate they think the algorithm predictions are, how much they trust the algorithm’s creators, and other criteria.</p><p>These decentralized architectures can help revert the data concentration problem, by bringing questions to the data, instead of centralizing all the data in a few places. These questions are answered in a decentralized manner. And yes, this could be an application of blockchain technology (although that may also not be the only alternative).</p><p>There are, of course, advantages and disadvantages of using protocols instead of platforms. The big advantage of protocols is their distributed nature. This allows protocol-based systems to avoid centralization of data and mitigates many of the privacy and monopoly concerns that are natural in platforms. The disadvantage is that, because of their decentralized nature, protocols are much more difficult to update and improve than platforms.</p></blockquote><blockquote><p>As I explain in my TED talk, the level of participation in democracy is relatively low, even though the number of participatory instances is relatively small. If we were to expand democracy to more instances of participation (like participatory budgets or having direct democracy for parliamentary decisions), the empirical data suggests that the participation of people would be minimal and decrease with additional instances. So the technical problem of direct democracy is not one of limited communication but one of the limited time and cognitive bandwidth of people. 
To participate in hundreds of decisions, we don’t need additional communication technologies but technologies that augment the number of different things a person can pay attention to.</p></blockquote><p><strong>Sci-fi (LONG TERM: 100 to 1,000 years) → Metacortex</strong></p><p>from <em>Accelerando</em> (2005) by Charles Stross:</p><blockquote><p>The metacortex – a distributed cloud of software agents that surrounds him in netspace, borrowing CPU cycles from convenient processors (such as his robot pet) – is as much a part of Manfred as the society of mind that occupies his skull; his thoughts migrate into it, spawning new agents to research new experiences, and at night, they return to roost and share their knowledge.</p></blockquote><blockquote><p>&quot;The president of agalmic.holdings.root.184.97.AB5 is agalmic.holdings.root.184.97.201. The secretary is agalmic.holdings.root.184.D5, and the chair is agalmic.holdings.root.184.E8.FF. All the shares are owned by those companies in equal measure, and I can tell you that their regulations are written in Python. Have a nice day, now!&quot;</p><p>He thumps the bedside phone control and sits up, yawning, then pushes the do-not-disturb button before it can interrupt again. After a moment he stands up and stretches, then heads to the bathroom to brush his teeth, comb his hair, and figure out where the lawsuit originated and how a human being managed to get far enough through his web of robot companies to bug him.</p></blockquote><blockquote><p>Radical new economic theories are focusing around bandwidth, speed-of-light transmission time, and the implications of CETI, communication with extraterrestrial intelligence. Cosmologists and quants collaborate on bizarre relativistically telescoped financial instruments. Space (which lets you store information) and structure (which lets you process it) acquire value while dumb mass – like gold – loses it. 
The degenerate cores of the traditional stock markets are in free fall, the old smokestack microprocessor and biotech/nanotech industries crumbling before the onslaught of matter replicators and self-modifying ideas. The inheritors look set to be a new wave of barbarian communicators, who mortgage their future for a millennium against the chance of a gift from a visiting alien intelligence. Microsoft, once the US Steel of the silicon age, quietly fades into liquidation.</p></blockquote><blockquote><p>About ten billion humans are alive in the solar system, each mind surrounded by an exocortex of distributed agents, threads of personality spun right out of their heads to run on the clouds of utility fog – infinitely flexible computing resources as thin as aerogel – in which they live. The foggy depths are alive with high-bandwidth sparkles; most of Earth&apos;s biosphere has been wrapped in cotton wool and preserved for future examination. For every living human, a thousand million software agents carry information into the farthest corners of the consciousness address space.</p></blockquote><blockquote><p>The pre-election campaign takes approximately three minutes and consumes more bandwidth than the sum of all terrestrial communications channels from prehistory to 2008. Approximately six million ghosts of Amber, individually tailored to fit the profile of the targeted audience, fork across the dark fiber meshwork underpinning of the lily-pad colonies, then out through ultrawideband mesh networks, instantiated in implants and floating dust motes to buttonhole the voters. 
Many of them fail to reach their audience, and many more hold fruitless discussions; about six actually decide they&apos;ve diverged so far from their original that they constitute separate people and register for independent citizenship, two defect to the other side, and one elopes with a swarm of highly empathic modified African honeybees.</p></blockquote><p>May you live in interesting times.</p><hr><h2 id="h-further-reading" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Further Reading</h2><p><strong>Collective Psyche</strong></p><ul><li><p><strong>Wikipedia</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Virtual_collective_consciousness#Theoretical_underpinnings_of_VCC">Virtual collective consciousness</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.jstor.org/stable/10.1525/jung.2012.6.2.103"><em>The Internet as a Tool for Studying the Collective Unconscious</em></a> (2012) — <strong>Shaikat Hossain</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.researchgate.net/publication/313038738_Revolutionizing_Revolutions_Virtual_Collective_Consciousness_and_the_Arab_Spring"><em>Revolutionizing Revolutions: Virtual Collective Consciousness and the Arab Spring</em></a> (2012) — <strong>Yousri Marzouki</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.mdpi.com/2409-9287/1/3/220"><em>Mind as Medium: Jung, McLuhan and the Archetype</em></a> (2016) — <strong>Adriana Braga</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.tankebanen.no/inscriptions/index.php/inscriptions/article/view/28"><em>The digital conscious: the becoming of the Jungian collective unconscious</em></a> (2019) — 
<strong>Sharif Abdunnur and Krystle Houiess</strong></p></li></ul><p><strong>Collective Media</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bolognini.org/lectures/GA04.htm"><em>Programmed Machines: Infinity and Identity</em></a> (2004) — <strong>Maurizio Bolognini</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://digitalculturist.com/when-pixels-collide-reddit-place-and-the-creation-of-art-3f9c15cc3d82"><em>When Pixels Collide</em></a> (2017) — <strong>sudoscript</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.matthewball.vc/all/cloudmiles"><em>Cloud Gaming: Why It Matters And The Games It Will Create</em></a> (2020) — <strong>Matthew Ball</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://naavik.co/business-breakdowns/cloud-gaming-the-longest-mile"><em>Cloud Gaming: The Longest MILE</em></a> (2021) — <strong>Matt Dion</strong></p></li></ul><p><strong>Buzzword Soup</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/AI-DAOs-Part-I-II-5c40f92c597f429ea2fbc7c2814d3875"><em>AI DAOs (Part I, II)</em></a> (2016) — <strong>Trent McConaghy, Simon de la Rouviere</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Lecture-on-Decentralized-AI-e45533b43ecd4230bdd6494ec4c0b46a"><em>Lecture on Decentralized AI</em></a> (2019) — <strong>Gene Kogan</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/AI-art-and-autonomy-an-introduction-to-the-Abraham-project-76f820b3bb3342128fb166bca295003f"><em>AI, art, and autonomy: an introduction to the Abraham 
project</em></a> (2019) — <strong>Gene Kogan</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Artist-in-the-Cloud-ebd1350d340543939a23e409fad530b7"><em>Artist in the Cloud</em></a> (2019) — <strong>Gene Kogan</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://metaversed.net/into-the-void"><em>Into The Void: Where Crypto Meets The Metaverse</em></a> (2021) — <strong>Piers Kicks</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://andrewsteinwold.substack.com/p/ai-nfts-what-is-an-inft-"><em>AI + NFTs: What is an iNFT?</em></a> (2021) — <strong>Arif Khan</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.artnews.com/list/art-in-america/features/gans-and-nfts-1234594335/robbie-barrat-ai-generated-nude-portrait/"><em>GANs and NFTs: AI Artists in the Crypto Space</em></a> (2021) — <strong>Brian Droitcour</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.coindesk.com/business/2021/10/25/the-coming-convergence-of-nfts-and-artificial-intelligence/"><em>The Coming Convergence of NFTs and Artificial Intelligence</em></a> (2021) — <strong>Jesus Rodriguez</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Decentralized-Autonomous-Artists-080d983796744f0b8440997d8123cfab"><em>Decentralized Autonomous Artists</em></a> (2021) — <strong>Simon de la Rouviere</strong></p></li></ul><p><strong>Digital Democracy &amp; the Social Contract</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.peopledemocracy.com/"><strong>Augmented Democracy</strong></a></p></li><li><p><strong>Wikipedia</strong>: <a target="_blank" 
rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Electronic_voting_in_Estonia">Electronic voting in Estonia</a></p></li><li><p><strong>The Computational Democracy Project</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://compdemocracy.org/Case-studies">Featured Case Studies</a></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/The-Social-Contract-538efb29c4cd418bba23176b04d2edaa"><em>The Social Contract</em></a> (1762) — <strong>Jean-Jacques Rousseau</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.astro.sunysb.edu/fwalter/HON301/franchise.pdf"><em>Franchise</em></a> (1955) — <strong>Isaac Asimov</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.notion.so/Anatomy-of-the-State-828279d08a0f4cdbb1421b43883f5130"><em>Anatomy of the State</em></a> (1974) — <strong>Murray Rothbard</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://spectrum.ieee.org/online-voting-isnt-as-flawed-as-you-thinkjust-ask-estonia"><em>Online Voting Isn’t as Flawed as You Think—Just Ask Estonia</em></a> (2016) — <strong>Mike Summers</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.noemamag.com/the-frontiers-of-digital-democracy/"><em>The Frontiers of Digital Democracy</em></a> (2021) — <strong>Nathan Gardels, Audrey Tang</strong></p></li></ul><p><strong>Misc</strong></p><ul><li><p><strong>Wikipedia</strong>: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Virtuality_(philosophy)">Virtuality (philosophy)</a></p></li><li><p><a target="_blank" rel="noopener noreferrer 
nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/1902.01046.pdf"><em>Towards Federated Learning At Scale: System Design</em></a> (2019) — <strong>Bonawitz et al</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://numinousxperience.xyz/2020/12/21/federated-learning/"><em>The Future is Federated (Learning)</em></a> (2020) — <strong>Nicole Ruiz</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=y8LzY24ChtQ"><em>Lecture: What is Planetary Scale Computation For?</em></a> (2021) — <strong>Benjamin Bratton</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.strangeloopcanon.com/p/combinatorial-theory-of-progress"><em>A Theory of Progress: Standing On The Shoulders Of Giants</em></a> (2021) — <strong>Rohit Krishnan</strong></p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mattsclancy.substack.com/p/combinatorial-innovation-and-technological"><em>Combinatorial innovation and technological progress in the very long run</em></a> (2021) — <strong>Matt Clancy</strong></p></li></ul><hr>]]></content:encoded>
            <author>0x125c@newsletter.paragraph.com (0x125c)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/fe86efe70207562684a7cf3abced5b0b4c96540413ec60c73e661d8da1818e31.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>