<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Herndon Dryhurst Studio</title>
        <link>https://paragraph.com/@herndondryhurst</link>
        <description>Notes from the studio of Holly Herndon &amp; Mathew Dryhurst 🦾</description>
        <lastBuildDate>Sun, 12 Apr 2026 18:42:05 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Herndon Dryhurst Studio</title>
            <url>https://storage.googleapis.com/papyrus_images/b40465df0c23194209ddd649928338b5e242cae338095463bb3e9af4876a2dc4.png</url>
            <link>https://paragraph.com/@herndondryhurst</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[Whitney Biennial 2024: xhairymutantx]]></title>
            <link>https://paragraph.com/@herndondryhurst/whitney-biennial-2024-xhairymutantx</link>
            <guid>KjIjJIkG3q0m7C7L5qvE</guid>
            <pubDate>Wed, 13 Mar 2024 14:06:18 GMT</pubDate>
            <description><![CDATA[Contribute to this project at xhairymutantx.whitney.org. xhairymutantx began by asking the question: do we get to choose how we are represented on the AI substrate? One of the original promises of the internet was self-determination of identity online. As the famous 1993 New Yorker cartoon put it: “On the Internet, nobody knows you’re a dog”. The promise of a digital existence brought with it the promise of reinvention, of mutable, often multiple, identities and existences. There is a com...]]></description>
            <content:encoded><![CDATA[<p>Contribute to this project at <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://xhairymutantx.whitney.org">xhairymutantx.whitney.org</a></p><p>xhairymutantx began by asking the question: do we get to choose how we are represented on the AI substrate?</p><p>One of the original promises of the internet was self-determination of identity online. As the famous 1993 New Yorker cartoon put it: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog">“On the Internet, nobody knows you’re a dog”</a>. The promise of a digital existence brought with it the promise of reinvention, of mutable, often multiple, identities and existences.</p><p>There is a common concern that with the proliferation of powerful and opaque AI models, we are entering a new era of untruth. As such, companies training AI models make great efforts to align AI outputs with what amounts to a kind of consensus truth. As ChatGPT trains, accounts of historical events from more reputable sources are favored, and text-to-image models attempt to conjure as close to an objective result as possible.</p><p>This imperative to treat all subject matter as objective has utility; when most people prompt “chair”, it is reasonable to expect a generated image that resembles what most people believe a chair to be. For more subjective areas, such as “beauty” or “art”, claims to truth become much more complicated, and consequential. The concept of “beauty” understood by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openai.com/research/clip">CLIP</a> (the text–image model that underpins text-to-image systems) reveals a certain kind of truth, albeit a kind of aggregate sampling of whatever images were captioned as “beautiful” in its gargantuan training data. An aggregate reflection of everything posted online, an egregore, an <em>aggregore</em>.</p><p>“Art” is weighted heavily towards whatever art appears most in the data, which perhaps explains the prevalence of more contemporary fantasy illustration or concept art in generic text-to-image outputs. This attempt to determine objective ground truth makes sense in a journalistic or commercial context; in realms like art, culture and identity, however, it risks tethering us to populist, gameable or impenetrably reductive representations of areas that are crucially fluid. When it comes to an individual, and their representation within AI models, what is considered objective truth about them? We have been fortunate to be able to explore this in depth, with Holly appearing to cross the threshold of notoriety sufficient to have an embedding, or concept of her, present in most AI models. If you prompt “Holly Herndon” in a text-to-image system, it will return an image that resembles Holly. 
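</p><p>(A minimal sketch of this, for readers who want to reproduce the effect with open tools; the diffusers library and the model named below are illustrative stand-ins, not the systems discussed here:)</p><pre><code># Illustrative sketch: prompting a name that a model holds an embedding for.
# Assumes the open diffusers library; not the system used in this project.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# If the model holds a concept of a person, the name alone is prompt enough.
image = pipe("Holly Herndon").images[0]
image.save("holly.png")
</code></pre><p>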
Quite what of Holly is known is something we explored in our AI portraiture project <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foundation.app/blog/notes-on-holly-herndons-debut-collection-classified">CLASSIFIED</a> (2021), using a technique (later referred to as “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2208.01618">Textual Inversion</a>”) created by our collaborator Jordan Meyer to deeply excavate Holly’s embedding within public AI models.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/dac516a0dd22d32c1dedd4e76ea67f6571e27eb31f6536b7fb86890e70620742.png" alt="Beacon Embedding Study (2021) / Classified x|o 9 (2021)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Beacon Embedding Study (2021) / Classified x|o 9 (2021)</figcaption></figure><p>It appears that, as with many living artists for whom many photographs of their likeness are available online, Holly’s concept within models is defined by her hair. Not her artwork, as would be the case for historical artists for whom we have few documented images other than the work they left us. Barely her unique facial features, as it appears that only the most notorious of us have enough pictures of their face online to be convincingly rendered as a portrait. Rather, the concept of Holly compressed into models has zeroed in on her hair as her quintessential essence. There is a funny truth to this. To stand out in a busy feed we all find ourselves increasingly compressing our arguments and appearances, or having our arguments and appearances compressed, for maximal loudness. If one were to ask a human for a shorthand account of Holly, they might reliably refer to her distinctive hair.</p><p>As such, when attempting to reform Holly’s concept within AI models, we concluded that it makes most sense to build upon the conceptual foundations already established, out of fear that any training data we produce that strays too far from what models understand Holly to be would be filtered out as inaccurate. The prospect of “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2302.10149">data poisoning</a>” has recently captured the imagination as an effort to assert agency over wanton training practices; however, in our own research (with projects such as Spawning’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://kudurru.ai/">Kudurru network</a>), we have found that for most well-established concepts, poisoning may be a less effective tactic than <em>amplifying</em> the cliche. Reductio ad absurdum. Playing the concept compression game. 
Cliche amplification, in this sense, means identifying the purported essence of a concept and amplifying it to ridiculous proportions, such that the model is primed to accept this new mutant amplification as truth.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/40057baf885d526cbf2040dbbf9756f1ba7079be41c1f48a874cf9df2a9e8989.png" alt="xhairymutantx Training Costume (2023)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>xhairymutantx Training Costume (2023)</figcaption></figure><p>To this end, we began with the goal of amplifying Holly’s cliche, and constructed a costume (tailored by Franziska Muller and Lenna Stam) in which Holly was overrun by her hair. She takes on mutant, promethean proportions, and her hair, like kudzu, begins to invade and envelop her. We used images of Holly wearing this costume to fine-tune an image model, and that model was recursively refined to produce a consistent character that can be spawned by anyone using the provided interface. This model can produce infinite images of this new character. The images produced by this model will almost all, in some way, be infected by the hairy mutant.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/c10198b0e53ae620f1f964bea3970f0daa788e6d727af94543e50f16e4c654eb.png" alt="xhairymutantx raw embedding captures / xhairymutantx embedding studies (2024)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>xhairymutantx raw embedding captures / xhairymutantx embedding studies (2024)</figcaption></figure><p>Offering this model to the public serves two functions. First, it allows anyone to explore this model, the artwork, as a medium. Which is quite fun. Second, it allows for an abundance of new material to be produced, hosted on the reputable Whitney.org site, to be ingested for new AI training. Each image will be captioned and tagged “Holly Herndon”. We hope that what is produced by this model may play a hand in defining the concept of Holly in future models. There is little reason why it would not.</p><p>This dance, of asserting some self-determination over identity, and also ceding control of the production of new identity, is consistent with principles we established with our <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.documentjournal.com/2022/01/who-does-your-voice-belong-to-for-musician-holly-herndon-the-answer-is-everyone/">Holly+ project</a>. Better that whatever Holly becomes in future be something active, rather than passive, subjectively deliberated, rather than objectively assigned. We can propose, others will define.</p><p>Given the new territory of offering a text-to-image model as an artwork, we have taken some measures to constrain the output of the model. We anticipate many will attempt to produce incendiary imagery that has nothing to do with the objectives of our project. In that case, kittens will be produced. If Holly ends up metamorphosing into a kitten, it will reflect malicious attempts to weaponize this project for other means. 
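</p><p>(A toy sketch of how a guardrail of this kind can work; the blocklist below is an illustrative stand-in, and the project’s actual filter is more involved than a word list:)</p><pre><code># Toy sketch of the kitten guardrail: prompts that trip the filter are
# swapped for a kitten prompt before generation. The blocklist here is
# an illustrative stand-in, not the project's actual filter.
BLOCKED_TERMS = {"gore", "swastika"}  # stand-in blocklist

def sanitize(prompt: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "a fluffy kitten"  # incendiary requests become kittens
    return prompt
</code></pre><p>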
We have also taken measures to omit the use of living artists’ styles in the prompts, for similar reasons.</p><p>Thanks to the Whitney Biennial curatorial team, Meg Onli, Chrissie Iles, Christiane Paul and Colin Brooks for your contributions, guidance and good humor. Thanks to the Whitney Museum for being generous with your good standing in this experiment.</p>]]></content:encoded>
            <author>herndondryhurst@newsletter.paragraph.com (Herndon Dryhurst Studio)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/94d8a84d906384fe234cc76f0dc65a859ec1e7b8bf21709aa837f97d31871f34.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Readyweights, minting concepts]]></title>
            <link>https://paragraph.com/@herndondryhurst/readyweights-minting-concepts</link>
            <guid>eBIcGI8b2nYCpH0uQvkw</guid>
            <pubDate>Mon, 08 Jan 2024 19:52:38 GMT</pubDate>
            <description><![CDATA[We thought to experiment with a new method for sharing an AI-assisted artwork and minting the concepts underlying it. In identifying our original work within the latent potential of an AI model, we introduce the idea of the readyweight (a pun on a readymade. We will explain). https://opensea.io/assets/0x1698E9789d7CB10C90408963F473d59Fc303CCB7/1 On the surface this NFT involves a 3D model of an original sculpture we made, a mutant horse that appears in a larger sculpted relief we were commiss...]]></description>
            <content:encoded><![CDATA[<p>We thought to experiment with a new method for sharing an AI-assisted artwork and minting the concepts underlying it. In identifying our original work within the latent potential of an AI model, we introduce the idea of the readyweight (a pun on a readymade. We will explain).</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://opensea.io/assets/0x1698E9789d7CB10C90408963F473d59Fc303CCB7/1">https://opensea.io/assets/0x1698E9789d7CB10C90408963F473d59Fc303CCB7/1</a></p><p>On the surface this NFT involves a 3D model of an original sculpture we made, a mutant horse that appears in a larger sculpted relief we were commissioned to create for OpenAI back in 2022.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/b026e7ac1f3d5426354fb68bcf508a88e9c2465b9ab785c46f803e6edba37353.png" alt="Original sketch for the relief, inspired by neural network architecture (2022)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Original sketch for the relief, inspired by neural network architecture (2022)</figcaption></figure><p>If you unarchive the minted 3D model (using 7zip or another tool), it contains within it some AI tools allowing anyone to reproduce its form in infinite contexts:</p><ul><li><p>Embeddings (concepts) compatible with multiple Stable Diffusion models</p></li><li><p>A LoRA and embedding for use with Stable Diffusion XL</p></li><li><p>Training data so that you might train your own models on the work</p></li><li><p>A 3D model of the sculpture</p></li></ul><p>These files are also available to play with at the following link (a short sketch of loading them appears further below): <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mega.nz/folder/ThRQAYKD#zy9hBW5EeMCt3xHXXx51UA">https://mega.nz/folder/ThRQAYKD#zy9hBW5EeMCt3xHXXx51UA</a></p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/560af41f88f7ceb7ef23ec6c695bda8a8318c10a7444c8556d3d5d84918f5904.png" alt="Training images for readyweight" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Training images for readyweight</figcaption></figure><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/ee0361424cc9ab51bdd7f93d5b29d8ba795b86cfa5d7ea3943d461a0b36aa70e.png" alt="AI images spawned from the concept of the readyweight object" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>AI images spawned from the concept of the readyweight object</figcaption></figure><p><strong>Why?</strong></p><p>In our 2022 essay <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mirror.xyz/herndondryhurst.eth/eZG6mucl9fqU897XvJs0vUUMnm5OITpSWN8S-6KWamY">&quot;Infinite Images and the latent camera&quot;</a> we discussed the idea of infinity as it relates to media in this new context. 
Central to our argument is that any artwork now, whether unintentionally or intentionally, can serve as the seed for infinite new works.</p><p>From our perspective this new dimension of artworks, currently referred to as &quot;training data&quot;, is something to lean into and explore deliberately. Once artists learned that distributed 2D photographs of their art were viewed more times than the 3D original, some began to modulate their works to photograph better, implicitly understanding that media recorded of the work was, for better or worse, a part of the work.</p><p>It follows that artists now might take interest in modulating their practices to consciously offer training data of original works they create, or surface and claim these concepts in latent space (more on that later). It is not our first time playing with these themes, but it is perhaps our first time explicitly exploring an embedding as an artwork. We expect that in the coming years it will feel increasingly familiar to see artists minting concepts in this way.</p><p>The title &quot;<strong>Readyweight θ</strong>&quot; refers to theta, a symbol associated with the weights of an AI model, and is a cheeky nod to CC0, the license attached to all files related to this work.</p><p><strong>So what is a Readyweight?</strong></p><p>We have done a lot of work with embeddings over the years. For simplicity’s sake, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture">embeddings</a> are numerical representations of concepts (people, things) that exist within AI models.</p><div data-type="embedly" src="https://opensea.io/assets/ethereum/0x91Fba69Ce5071Cf9e828999a0F6006A7F7E2a959/2" data="{&quot;url&quot;:&quot;https://opensea.io/item/ethereum/0x91Fba69Ce5071Cf9e828999a0F6006A7F7E2a959/2&quot;,&quot;provider_url&quot;:&quot;https://opensea.io&quot;,&quot;provider_name&quot;:&quot;Opensea&quot;,&quot;version&quot;:&quot;1.0&quot;,&quot;type&quot;:&quot;link&quot;}" format="small"></div><p>For our collection <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foundation.app/collection/clsfd">Classified (Foundation, 2021)</a> we attempted to produce portraiture of the embedding &quot;Holly Herndon&quot; within OpenAI&apos;s CLIP model. Self-portraiture of what this model understood Holly to be.</p><p>In order to explore this concept rigorously, we enlisted the help of (future <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://spawning.ai/">Spawning</a> cofounders) Jordan Meyer and Patrick Hoepner. Jordan devised a technique, which at the time he referred to as &quot;Beacons&quot;, that allowed us to train an embedding to home in on what the model understood Holly to be. This was conceptually interesting: rather than adding additional data to the model, it was better understood as an adventure digging deeper into the model, an excavation of sorts. 
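</p><p>(For those who download the bundle linked above: a minimal sketch of loading such a concept embedding with the open diffusers library. The file name and trigger token below are illustrative, check the archive for the actual ones; the SDXL LoRA loads analogously via load_lora_weights:)</p><pre><code># Minimal sketch: loading a concept embedding with the diffusers library.
# File name and trigger token are illustrative; see the archive itself.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the learned embedding under a trigger token, then prompt with it.
pipe.load_textual_inversion("readyweight_embedding.pt", token="&lt;readyweight&gt;")
image = pipe("a photo of &lt;readyweight&gt; on a plinth").images[0]
image.save("readyweight.png")
</code></pre><p>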
It was thrilling to see generations appear that progressively uncovered what the AI model understood of Holly.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/14cff2d5687122b76687d1eb652e35f5dad962d6564439b5e620114e15401566.png" alt="&quot;Holly Herndon&quot; beacon mid-process (2021) / &quot;Readyweight&quot; embedding mid-process (2024)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>&quot;Holly Herndon&quot; beacon mid-process (2021) / &quot;Readyweight&quot; embedding mid-process (2024)</figcaption></figure><p>That technique was later popularized by a paper released the following year, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2208.01618">&quot;An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion&quot;</a> by Rinon Gal et al.</p><p>What is conceptually interesting about Beacons/Textual Inversion is the idea that any original artwork made today may have <em>already existed</em> within the embedding space of an AI model.</p><p>It is worth sitting with that thought for a second as it is rather confounding. If I create an original sculpture or painting today, and train an embedding to look for that work within the latent space of a previously existing AI model, there is a strong chance it can be located as having already existed, at least conceptually, somewhere in the near-infinite combinations of vectors inside these models. This raises some interesting questions. We view this observation as less of a weapon to bring to existing debates over AI and copyright, and more as testimony to how peculiar and fascinating this new technological substrate is for our conceptions of creativity and originality.</p><p>So, as we have established, it is possible that a new idea that we as artists have created, like an original sculpture, can be identified as having previously existed in the latent potential of an AI model. It just takes us, as artists, to excavate the model to find it, and share it, so that others might use that concept to make more works like it.</p><p>This process of an artist locating something that already exists, and minting it as an artwork, reminded us of the history of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Readymades_of_Marcel_Duchamp">Duchamp’s readymades</a>. As such we chose to describe these embedding artworks as readyweights.</p><p>This new sculpture of ours already existed inside embedding space, but we decided to surface and share those weights as an artwork. It is fitting to use the medium of NFTs to do so, establishing the provenance of an artwork or concept with no concern about the media associated with it being used liberally by others.</p><p><strong>Credits &amp; Thanks</strong></p><p>Annkathrin Kluss, Andy Rolfes, Hypopo, Jordan Meyer, Patrick Hoepner</p><p>Obviously everything to do with Readyweight θ is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://creativecommons.org/publicdomain/zero/1.0/">CC0</a>. Do what you will.</p>]]></content:encoded>
            <author>herndondryhurst@newsletter.paragraph.com (Herndon Dryhurst Studio)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/43f9b9b8ea9ae67a30f5f73a074d6a762236c7a2a0a33b8a10d24f08ed847b65.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[D̶i̶g̶i̶t̶a̶l̶ ̶S̶c̶a̶r̶c̶i̶t̶y̶ Feasible Abundance and the Shock of the Nude]]></title>
            <link>https://paragraph.com/@herndondryhurst/d-i-g-i-t-a-l-s-c-a-r-c-i-t-y-feasible-abundance-and-the-shock-of-the-nude</link>
            <guid>NxvXNKhUvHddnu8CvsCT</guid>
            <pubDate>Fri, 03 Jun 2022 13:47:19 GMT</pubDate>
            <description><![CDATA[Robert Doisneau: Woman Registering Shock at a Painting of a Nude in Paris Shop Window, 1948. 7:17PM CET update: People requested to collect this piece of writing so I made it collectible. That is very nice and complements the piece. <3 The "Web 3 arbitrarily introduces scarcity to digital media" critique never seems to go away, and I think it is fundamentally flawed. Digital media has always been scarce, because there is a rights holder conferred by law unless that rights holder renounces those r...]]></description>
            <content:encoded><![CDATA[<figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/b80ab096f539b32ecc4cfa1756323d428c479e6e2e0527ec4d1a4195eaaada23.png" alt="Robert Doisneau: Woman Registering Shock at a Painting of a Nude in Paris Shop Window, 1948" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Robert Doisneau: Woman Registering Shock at a Painting of a Nude in Paris Shop Window, 1948</figcaption></figure><p><strong>7:17PM CET update:</strong> People requested to collect this piece of writing so I made it collectible. That is very nice and complements the piece. &lt;3</p><hr><p>The &quot;Web 3 arbitrarily introduces scarcity to digital media&quot; critique never seems to go away, and I think it is fundamentally flawed.</p><p>Digital media has always been scarce, because there is a rights holder conferred by law unless that rights holder renounces those rights with a license.</p><p>In plain terms, this means that at least in the jurisdictions where I am able to comprehend twitter arguments, when you take a digital photograph or create a piece of digital art, the copyright of that art is automatically assigned to the creator. It is scarce. 1 of 1. In the case of infringement of your rights as copyright holder, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.artbusiness.com/register_and_copyright_art_for_artists.html">it can be advantageous to have formally registered the work</a>; however, the fact remains that digital media is scarce at the point of creation.</p><p>That these critiques often come from corners who mistakenly characterize crypto as a fraudulent fiction is ironic, as the premise that the digital media we enjoy on previous internets is not scarce, or owned by anyone, is itself fictitious. No scarcity is being <em>imposed</em> on media; existing scarcity is being <em>surfaced</em> and experimented with.</p><p>This is the brilliance of the NFT experiment. NFTs represent an elegant proposal to attempt what I would describe as <strong>feasible abundance</strong>. I.e. how can we make free media fair for the people creating it?</p><p>Decades deep into the internet experiment we had not yet found a way to reconcile our desire to have as many people as possible enjoy media, while also making life financially feasible for the creators of that work. NFTs create games and value exchange around ownership and collection abstracted above access to the content itself, which ought to be available to everyone. As <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://cryptomedia.wtf/">Jacob.eth articulates</a>, the experiment proposes that the more a work is freely available, shared and memed, the more valuable ownership of that work becomes. As such, it is almost impossible to find a piece of NFT art locked behind a token-gated wall. 
In real terms, <em>meaningful scarcity</em> would mean withholding that access.</p><p>This premise also explains why many of the most celebrated early NFT auctions from within the Ethereum community were held to retroactively compensate the creators of popular memes, from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foundation.app/@NyanCat/foundation/219">Nyan Cat</a> to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foundation.app/@DisasterGirl/foundation/25046">Disaster Girl</a>. The values put forward by many corners of the NFT community are that media ought to be freely available, and that free media ought to be valuable. Information is now free <em>and</em> expensive. To achieve both states to the satisfaction of all parties at this immature stage is a minor miracle and to be celebrated.</p><p>This accomplishment is made possible through experimentation with previously hidden dynamics, and foregrounding them can be startling to those who were not previously aware of their existence. I call this phenomenon <strong>the shock of the nude</strong>.</p><p>Many critics of crypto and NFTs mistakenly suggest that crypto is introducing financialization, inequity or scarcity to our digital lives. This is false. Public blockchains, through making visible latent forces such as financing, unequal returns, or scarce and valuable ownership, are bringing long-existing dynamics to the surface to be scrutinized. <strong>These forces are not <em>new</em>, they are <em>nude</em>.</strong></p><p>Our previous internet lives are financialized daily on interfaces most of us do not have access to. Web 2 has concentrated wealth and attention in a handful of players at the top (who often most dutifully serve the objectives and accept the universal terms of a handful of platforms), but few other than professional insiders <em>have seen</em> the full extent to which this is true.</p><p>What often emerges on twitter is people with a casual, idealistic, fan-side relationship with media condescending to those with a longstanding commitment to, and understanding of, their respective industries. This gulf in knowledge becomes apparent when critics are invited to propose their own alternatives to the current state of the web for creators, with highly RTable but practically infeasible ideas ranging from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.mollywhite.net/digital-artists-post-bubble-hopes-for-nfts-dont-need-a-blockchain/">laborious illegible key exchanges on a private database</a> (i.e. like a blockchain, but way worse) to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://reallifemag.com/reconnected/">vague allusions to reviving failed state-based internets from 50 years ago</a>. These ideas sound nice to an equally inexperienced crowd, but are unserious and do not stand to help anyone but their authors. If only things were so easy!</p><p>An unsurprising byproduct of a web modeled to facilitate advertising is that it is often quite difficult to distinguish solid information from promotional misdirection on Web 2. 
There is a significant discrepancy between the statistics the public has access to and the hard financial and industrial realities obscured by them.</p><p>In the arts, the followers and stream numbers we see are a very distorted indicator of how well the person in question is doing financially, and say little about how people actually make a living. Due to tired dynamics of self-promotion and brand construction encouraged by Web 2 platforms, there is little incentive for anyone to disclose that they are struggling or in a precarious position. Crypto’s public ledgers shatter illusions and hold the potential to unveil sobering truths. <strong>Web 2 is the grift.</strong> <strong>Web 3 is a gift</strong>. It is not morally sound to accept that many people are committing their 20s and 30s to participating in a fiction while silently struggling, cut adrift from access to good information.</p><p>Only once the warts-and-all truth is made visible do we have any hope of addressing how the world actually works. The shock of the nude may be a cold shower, but a dose of reality is far more useful to practicing artists or internet denizens than a fictitious understanding of the web and creative industries. Access to more accurate information outside of the promotional mirage of likes and follows is an uncontroversial good for anyone aspiring to make a living.</p><p>That said, while public ledgers make it quite difficult for many to pretend that things are going better than they actually are, there is no shortage of illusion creation on public blockchains too. For a minority who have the means, it is possible to distort perceptions by illicitly pumping bags, making anonymous acquisitions and the like. <strong>Public ledgers are no panacea</strong>; however, I find it far preferable that such coordinated distortions be legible and open to scrutiny. As discussed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://interdependence.fm/episodes/making-web3-human-legible-constructive-speculation-and-the-shock-of-the-nude-with-context">in our podcast with Context.app</a>, more tools are being built to make on-chain activity human legible.</p><p>It will become harder and harder over time to get away with such distortions, as new forms of journalism are made possible by the ability to track foul play, even retroactively. As <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/tayvano_/status/1532085029857284101?s=20&amp;t=UeymocOPtyFVEUOwwmhR8A">Taylor Monahan recently suggested on Twitter</a>, so far the best policing of foul play in crypto has often come from within crypto itself, and that accountability will spread as more learn how to read on-chain activity.</p><p>You can follow me <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://twitter.com/matdryhurst">@matdryhurst</a> on twitter</p><p>We discuss matters like this with wonderful guests weekly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://interdependence.fm/">on the Interdependence podcast</a></p>]]></content:encoded>
            <author>herndondryhurst@newsletter.paragraph.com (Herndon Dryhurst Studio)</author>
        </item>
        <item>
            <title><![CDATA[Infinite Images and the latent camera]]></title>
            <link>https://paragraph.com/@herndondryhurst/infinite-images-and-the-latent-camera</link>
            <guid>6mzvVm8KLDu0JCrZEBcl</guid>
            <pubDate>Thu, 05 May 2022 22:07:08 GMT</pubDate>
            <description><![CDATA[We have had the great pleasure of working with OpenAI InPaint and DALL·E 1 for many months, and thought it would be constructive to document some techniques, ideas and reflections that have been raised through exploring these remarkable tools in honor of the announcement and warranted public excitement around DALL·E 2. Holly Herndon & Mathew Dryhurst (DALL·E 1 test image). Backstory: If you are new to this area, it is worth establishing a brief timeline of developments from the past year. In Ja...]]></description>
            <content:encoded><![CDATA[<p>We have had the great pleasure of working with OpenAI InPaint and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/DALL-E">DALL·E 1</a> for many months, and thought it would be constructive to document some techniques, ideas and reflections that have been raised through exploring these remarkable tools in honor of the announcement and warranted public excitement around <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openai.com/dall-e-2/">DALL·E 2</a>.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/e24391cab66f29deb1d541ddde16639641a8dac96a31ac3a8ca50a773f498495.png" alt="Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)</figcaption></figure><h3 id="h-backstory" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Backstory</h3><p>If you are new to this area, it is worth establishing a brief timeline of developments from the past year. In January last year OpenAI announced <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openai.com/blog/dall-e/">DALL·E</a>, a transformer model capable of generating convincing artworks from textual descriptions. This was shortly followed by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openai.com/blog/clip/">OpenAI CLIP</a> (Contrastive Language-Image Pre-training), a neural network trained on image/description pairs and released for public testing.</p><p>What followed was an explosion of experimentation beginning in Spring 2021, with artists such as (but not limited to) <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/advadnoun">Ryan Murdock</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/RiversHaveWings">Rivers Have Wings</a> releasing free and open tools for the public to play with generating artworks by connecting CLIP’s capacity to discern textual prompts with open image generation techniques such as Taming Transformers’ <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://compvis.github.io/taming-transformers/">VQGAN</a>, and OpenAI’s own novel <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/openai/guided-diffusion">Guided Diffusion</a> method.</p><p>In lay terms, to create artwork one types what one wants to see, and this combination of generator (Diffusion, VQGAN) and discriminator (CLIP) produces an image that satisfies a desired prompt. 
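</p><p>(In code, the principle looks something like the sketch below: a toy pixel-space version using the public CLIP weights via the transformers library. Real tools steer a generator such as VQGAN or a diffusion model, and CLIP&apos;s image preprocessing is omitted for brevity:)</p><pre><code># Toy sketch of CLIP-guided generation: optimize an image to maximize
# CLIP similarity with a text prompt. Real systems optimize a generator's
# latent rather than raw pixels; preprocessing is omitted for brevity.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

text = processor(text=["a watercolor of a lighthouse"], return_tensors="pt")
with torch.no_grad():
    target = model.get_text_features(**text)
    target = target / target.norm(dim=-1, keepdim=True)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    feats = model.get_image_features(pixel_values=image.clamp(0, 1))
    feats = feats / feats.norm(dim=-1, keepdim=True)
    loss = -(feats * target).sum()  # negative cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
</code></pre><p>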
In Ryan Murdock’s Latent Visions discord group, the artist <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/_johannezz">Johannez</a> (who has been publishing artworks and tutorials around machine learning imagery and music online for years) coined the term “Promptism” for this astounding new capacity to conjure artworks from telling a neural network what one would like to see.</p><p>While this development is but the latest advancement in a legacy of machine learning in art too long to do justice to in a blog post, this act of conjuring artworks from language <em>feels very very new</em>. Feeling is an important dimension of the act of creating an artwork, as while we have for some years had the capacity to generate art from the laborious process of training GANs, often waiting overnight for results that invite the observer to squint and imagine a future of abundant possibility, a perfect storm in the past 18 months has led to a present in which the promise of co-creation with a machine is realized. It <em>feels</em> like jamming, giving and receiving feedback while refining an idea with an inhuman collaborator, seamlessly <em>art-ing</em>. It intuitively <em>feels</em> like an art-making tool.</p><p>Analogies are imperfect, but this resembles the leap from the early electronic music period, when compositions were collaged by manually stitching together pieces of tape, to the introduction of the wired synthesizer studio, or the Digital Audio Workstation. In the 20th century this progress in musical tools-at-hand took place over half a century. In contrast, this leap in machine learning generated imaging tools-at-hand has gathered steam over a 3-5 year period.</p><p>This dizzying pace of development encourages reflection!</p><h3 id="h-reflections" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Reflections</h3><p>Since Alexander Mordvintsev released the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/DeepDream">Deep Dream</a> project in 2015, machine learning imagery has often been associated with surrealism. It is no doubt surreal to find oneself co-creating art by probing a disembodied cognitive system trained on billions of datapoints, and the images produced can be confounding and psychedelic. However, these are characteristics often ascribed to new experience. New experiences are weird, until they are no longer.</p><p>At the advent of DALL·E 2 we find it quite useful to think instead of the history of the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Pictorialism#Overview">Pictorialism</a> movement.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/8f078081a5b1562a67c849778394627c2d146240b996226a35bced9ea254e2b7.jpg" alt="Edward Steichen: Rodin, Le Penseur (1902)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Edward Steichen: Rodin, Le Penseur (1902)</figcaption></figure><p>In the early to mid 19th century, photography was the domain of a select group of engineers and enthusiasts, as it required a great deal of technical knowledge to realize photographs. 
As such, the focus of these efforts, and the gauge of the medium’s success, was to create images that most accurately reflected the reality they intended to capture.</p><p>The Pictorialists emerged in the late 19th century as a movement intent not on using photographic techniques to depict reality as accurately as possible, but on using photography as a medium for communicating subjective beauty. This movement progressed photography from a scientific to an expressive medium in parallel with increasing access to cameras that required no technical expertise (the first amateur camera, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://americanhistory.si.edu/collections/search/object/nmah_760118">the Kodak, was released in 1888</a>).</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/bb32fa810a938c97c5ac0f348a677b29c623801d75c585fdff7e9a667701ee90.jpg" alt="Léonard Misonne: By The Mill (1910s)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Léonard Misonne: By The Mill (1910s)</figcaption></figure><p>Debates raged about whether or not photography could in fact be <em>art</em>, a conversation inevitably dismissed once artists were able to affordably integrate cameras into their practices. The Pictorialists extolled the artistic potential of the camera by perverting its purported function of objectively capturing the world, staging scenes and experimenting with practical effects to create subjective and expressive works. Painters soon began integrating the camera into their workflow, and now most art forms make use of sensor-based imaging in one form or another.</p><p>Larry Tesler’s pithy description of AI, “Artificial intelligence is whatever hasn’t been done yet,” is equally applicable looking back to the birth of photography. Photography had also <em>not been done yet</em>, and was soon integrated into most artistic and industrial practices (a phenomenon described as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/AI_effect">the AI effect</a>). Sensor-based imaging remains a discrete focus of research pursuing things that have not been done yet.</p><p>The parallels with this time period are clear. In machine learning imagery, the laborious task to this point has been to legitimize the medium by attempting to accurately reflect the reality of training material. Great efforts have been made to realize a convincing and novel <em>dog</em> picture from training neural networks on <em>dogs</em>. This problem has been solved, and sets the foundation upon which we can begin to be expressive.</p><p>As happened with the Pictorialists, prompt-based systems like DALL·E are democratizing the means by which anyone can create AI-facilitated subjective art with next to no technical expertise necessary, and we assume that in the coming decades these techniques will be integrated into artistic and industrial practices to a great degree. 
AI will remain a discrete field of research exploring what has not been done yet, and in 2 years we will think nothing of painters using tools like DALL·E to audition concepts, or companies using tools like DALL·E to audition furniture for their office.</p><p>DALL·E represents a shift from attempts to reflect objective reality to subjective play. Language is the lens by which we reveal the objective reality known to the neural network being explored. It is a latent (hidden) camera, uncovering snapshots of a vast and complex latent space.</p><p>The same debates will rage about whether or not prompt-based AI imagery can be considered <em>Art</em>, and will just as inevitably be relegated to history once everyone makes use of these tools to better share what is on their mind.</p><p>The ever-evolving pursuit of art benefits greatly from reducing the friction of sharing what is on your mind. The observer is the ultimate discriminator, and as with any technological development that makes achieving a particular outcome more frictionless, creating great art that speaks to people in the time that it is made remains an elusive and magical odyssey.</p><p>The easier it is to generate artworks, the more challenging it will be to generate distinction and meaning, as it ever was. Great Art, like AI, is very often what hasn’t been explored yet.</p><h3 id="h-dalle-reflections-and-infinite-images" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">DALL·E, reflections and infinite images</h3><p>The first time we really got excited about DALL·E was in discovering its capacity to produce internally coherent images. Internal coherence can best be described as the ability to create convincing relationships between objects generated within an image.</p><p>To test this capacity, we began to use InPaint to extend images that we uploaded to the system without contributing a linguistic prompt. To achieve this, we would manually shift the viewfinder of InPaint in either direction to extend the scene.</p><p>One particularly successful test involved extending the scene of symbolist painter Charles Guilloux’s work, <em>L&apos;allée d&apos;eau</em> (1895).</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/fd1af719f4677740e40718f1d32ae7f7e310641886f7590f543beca2aee90d5c.jpg" alt="Charles Guilloux - L&apos;allée d&apos;eau (1895)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Charles Guilloux - L&apos;allée d&apos;eau (1895)</figcaption></figure><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/72f573fbd392ff8a973444ce6981791c54a0d83b88a18236594c406e8a588df4.png" alt="Charles Guilloux - L&apos;allée d&apos;eau (Extended with DALL·E by Holly Herndon &amp; Mathew Dryhurst)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Charles Guilloux - L&apos;allée d&apos;eau (Extended with DALL·E by Holly Herndon &amp; Mathew Dryhurst)</figcaption></figure><p>As you can see, DALL·E is capable of comprehending the style and subject of the scene, and extending it coherently in all directions. 
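</p><p>(For readers without InPaint access, the shifting-viewfinder idea can be sketched with an open inpainting model. A sketch assuming the diffusers library, a 512-pixel-tall seed image and an illustrative file name:)</p><pre><code># Sketch of the shifting-viewfinder technique, using an open Stable
# Diffusion inpainting model in place of DALL·E's InPaint. Assumes a
# 512-pixel-tall canvas; the seed image file name is illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

canvas = Image.open("guilloux.png").convert("RGB")  # the seed painting
window, shift = 512, 256  # viewfinder size and step per extension

def extend_right(canvas, prompt=""):
    # Slide the viewfinder right: keep the known strip on the left and
    # let the model invent the newly exposed strip on the right.
    tile = canvas.crop((canvas.width - window, 0, canvas.width, window))
    frame = Image.new("RGB", (window, window))
    frame.paste(tile.crop((shift, 0, window, window)), (0, 0))
    mask = Image.new("L", (window, window), 0)
    mask.paste(255, (window - shift, 0, window, window))  # area to invent
    out = pipe(prompt=prompt, image=frame, mask_image=mask).images[0]
    grown = Image.new("RGB", (canvas.width + shift, canvas.height))
    grown.paste(canvas, (0, 0))
    grown.paste(out, (canvas.width - window + shift, 0))
    return grown

for _ in range(4):  # each pass grows the scene by `shift` pixels
    canvas = extend_right(canvas)  # no linguistic prompt, as in our tests
canvas.save("guilloux_extended.png")
</code></pre><p>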
Perhaps most notable is its capacity to produce reflections consistent with objects present within the image. The curvature of the trees and river is successfully reflected in the new water being generated.</p><p>This capacity for internal coherence could only happen within the (then) 512x512 pixel viewfinder of InPaint; however, it successfully demonstrated a capacity to produce images of potentially infinite scale by producing incremental, coherent <em>patches</em> of an image.</p><p>We then extended this technique to produce very large pieces under the same reflective motif. The wall-sized artworks below (produced with DALL·E 1 and InPaint) are, we believe, the largest compositions produced with machine learning at their time of creation.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/851d78518809d5e2bdc29ab44be09f1ea093bd983b784dac8b5548ff5b1d09af.png" alt="Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)</figcaption></figure><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/1b8c952c4ba3d34153fa556282eea8dd3d3be355ad3478461bb5d75a55d9955b.png" alt=" Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption> Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test image)</figcaption></figure><p>This process of creating a patchwork of internally coherent images to form a larger composition was very challenging, quite like attempting to paint a wall-sized work from the vantage point of a magnifying glass, and with no master guide to follow. As such, we used techniques like horizon lines to retain coherence.</p><p>There is something poetic to composing in this way. Attempting to extrapolate a bigger picture with only access to a small piece of it at a time feels appropriate in the broader context of AI.</p><p>We have not yet been able to experiment with DALL·E 2 and InPaint together in the same way; however, we assume that even more coherent and vivid images can be produced using a similar patchwork technique.</p><p>One can imagine efforts being made to increase the coherence of a larger composition by analyzing elements contained within an image outside of the scope of the InPaint viewfinder. With these developments in mind, we expect that these tools will soon contain all the elements necessary to produce limitless-resolution compositions guided by language and stylistic prompts.</p><h3 id="h-co-authoring-narrative-worlds" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Co-authoring narrative worlds</h3><p>We began to make a series of infinite images extending horizontally, which allude to the potential for these tools to tell stories in the tradition of tapestries or graphic novels. 
The sequence below is large enough in resolution to print well, and is a narrative that could be extended to infinity.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/f6ee3da69a6d5c2fce3f6dcf20764beea57539d2326eaa0d0fdfcc37b78dbd53.png" alt="Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test narrative)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test narrative)</figcaption></figure><p>To return to the original idea of extending a painting to reveal more of the scene, what might it mean to be able to produce infinite worlds from a single painting or photograph? This significantly augments what we understand generative art to be capable of, when a coherent world, or narrative, can be spawned from a single stylistic or linguistic prompt.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/49d47db0e9b9a1b72397a7edd861596f4000645618f1754c117f475d2d7c7d30.png" alt="Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test narrative)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (DALL·E 1 test narrative)</figcaption></figure><p>One can imagine narrative art forms like graphic novels or cinema being impacted by such a development. This is a particularly exciting prospect when considering the more recent development of prompt art bots being used in active Discord communities. The first of these we encountered was developed by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://wolfbearstudio.com/">Wolf Bear Studio</a> for their Halloween-themed art project <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ghoulsngans.io/">Ghouls ‘n GANs</a>, where Discord users were invited to generate artworks via an in-thread bot, a concept more recently being experimented with by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/midjourney">Midjourney</a> and the artist Glassface’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://idreamer.xyz/">iDreamer</a> project (also in collaboration with Wolf Bear).</p><p>Bots such as these, in combination with ideas such as Simon De La Rouviere’s <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.untitledfrontier.studio/">Untitled Frontier</a> experiment in narrative co-authorship, augur a future of co-creation not only with artificial intelligence systems, but also more frictionlessly with one another.</p><p>This speaks to the crux of what DALL·E <em>feels like</em>, a tool for jamming, rapid iteration and potentially co-authored social experiences. 
One can imagine group storytelling sessions online and IRL that produce vivid narrative art to be replayed later or viewed from afar.</p><p>Once the friction to share what is on your mind has been eliminated, the ability to co-create social narrative art experiences at the dinner table or the theatre seems conceivable and exciting!</p><h3 id="h-dalle-2-and-spawning" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">DALL·E 2 and Spawning</h3><p>We have only been working with DALL·E 2 for a short time; however, what is clear is that the system has exponentially improved in generating convincing and internally coherent images guided by language and image prompts.</p><p>We plan to publish more later on what we discover; however, our initial experiments have involved further experimentation with the “Holly Herndon” embedding present within OpenAI CLIP. In lay terms, Holly meets the threshold of notoriety online to have characteristic elements of her image be understood by the CLIP language/image pairing network, something we explored last year with our <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://foundation.app/collection/clsfd">CLASSIFIED</a> series of self-portraits created to reveal exactly what/who CLIP understands “Holly Herndon” to be.</p><p>We propose a term for this process, <strong>Spawning</strong>, a 21st century corollary to the 20th century process of sampling. If sampling afforded artists the ability to manipulate the artwork of others to collage together something new, spawning affords artists the ability to create entirely new artworks in the style of other people from AI systems trained on their work or likeness. As Holly recently communicated at the TED conference, this opens up the possibility for a new and mind-bending IP era of <em>Identity Play</em>, the ability to create works <em>as other people</em> in a responsible, fair and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.thejaymo.net/2020/11/19/permissive-ips/">permissive IP</a> environment, something that we are exploring with the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://holly.mirror.xyz/54ds2IiOnvthjGFkokFCoaI4EabytH9xjAYy1irHy94">Holly+</a> project.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/b90535fa1fe74b3467de2e0e4cec898d79eddaa77fc7623cd558e8e0fb713440.png" alt="Holly Herndon &amp; Mathew Dryhurst (Holly Herndon Clones in a 3D animation style produced with DALL·E 2) " blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (Holly Herndon Clones in a 3D animation style produced with DALL·E 2)</figcaption></figure><p>Tools like DALL·E 2 and InPaint undoubtedly propel us closer to this eventuality, evidenced by these “Holly Herndon”-style memes we were able to generate in early tests.</p><p>Memes feel like an appropriate medium for experimentation in this context, as any single meme maintains its vitality through its ability to be personalized and perpetually built upon. 
Memes are cultural embeddings, not dissimilar to the embeddings present within the latent space of a neural network.</p><p>Memes are a distillation of a consensual/archetypical feeling or vibe, in much the way that the “Holly Herndon” embedding within CLIP is a distillation of her characteristic properties (ginger braid and bangs, blue eyes, often photographed with a laptop), or the “Salvador Dali” embedding is a distillation of his unique artistic style.</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/8ebc52dd24e5bae794d6f213b8d4e759477cf2d01327ef43387b21466d22578a.png" alt="Holly Herndon &amp; Mathew Dryhurst (Personalized Memes with DALL·E 2)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (Personalized Memes with DALL·E 2)</figcaption></figure><p>We find personalized applications like this pretty exciting, as while DALL·E 2 (and its stunning variations feature, which we will cover at a later time) unlocks the ability to produce convincing images in art historical styles familiar from its training set, we feel that there could be a misconception that its utility is limited solely to recreating artistic expressions of the past.</p><p>Like the introduction of the personal camera, it is easy to imagine a near-future scenario in which all amateur and professional workflows across creative industries are augmented by these tools, helping people to more clearly depict what is in their mind through common language and an ever-expanding training corpus.</p><p>The 21st century is going to be <em>wild</em>. We will share more as we learn more! 🦾</p><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/dc69f2eb69fa7810e7a2464f8b91a5c5df0dd4f8b04580415d3ec3c47cf3d835.png" alt="Holly Herndon &amp; Mathew Dryhurst (&quot;A building that looks like Holly Herndon&quot;, made with DALL·E 2)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption>Holly Herndon &amp; Mathew Dryhurst (&quot;A building that looks like Holly Herndon&quot;, made with DALL·E 2)</figcaption></figure>]]></content:encoded>
            <author>herndondryhurst@newsletter.paragraph.com (Herndon Dryhurst Studio)</author>
        </item>
    </channel>
</rss>