"Being around other artists stimulates idea-generation," says Susan Cain, author of Quiet: The Power of Introverts in a World That Can't Stop Talking. "It's part of the reason Andy Warhol created his Factory."
There’s no doubt that collaboration between artists often results in creations that are more than the sum of their parts. The feedback, the free sharing of ideas, even what would otherwise be throw-away thoughts can lead to sparks of magic that might never have seen the light of day.
But that’s if the relationship between collaborators is a positive one. Sometimes even the most well-intentioned talents simply don’t mesh: they move to the beat of different drums and miss every thought the other puts down. And that’s if they can even understand one another at all.
This is all to say that the fundamental truth at the heart of any collaboration is conversation, one that is often non-verbal. The best collaborations can spring from a wordless action just as easily as the worst can be stifled by the wrong word at the wrong time.
What happens when one of those collaborators doesn’t speak the same language? What happens when one of those collaborators isn’t even human?

This has been explored before, though with varying levels of success. Despite many human attempts, animals do not make successful artistic collaborators, nor really artists themselves; that leaves us with artificial intelligence, or more specifically, machine learning algorithms.

Plainly, we aren’t at a stage where AI is eclipsing humans at menial tasks, let alone at conceptual, higher-brain ones. What we are able to do, however, is have rudimentary collaborative conversations with algorithms. For example: designing new parts for aircraft is a slow and difficult process; using machine learning, researchers can ask questions about optimal designs and have an algorithm answer by testing millions of variations. It’s simple, but it is a call-and-response interaction.
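To make that call-and-response concrete, here’s a toy sketch of the pattern: the human poses the question as a hand-written scoring function, and the algorithm “answers” by blindly testing a flood of variations. The objective and numbers below are entirely made up for illustration, not anything from actual aerospace work.

```python
# Toy call-and-response: a human-authored question (the score function)
# answered by brute-force variation. Everything here is hypothetical.
import random

def score(length_m: float, width_m: float) -> float:
    """The human's question: how good is a part with these dimensions?"""
    weight = length_m * width_m          # heavier is worse (toy model)
    stiffness = width_m ** 2 / length_m  # stiffer is better (toy model)
    return stiffness - 0.5 * weight

best, best_score = None, float("-inf")
for _ in range(100_000):  # the algorithm's answer: test many variations
    candidate = (random.uniform(0.1, 5.0), random.uniform(0.1, 5.0))
    s = score(*candidate)
    if s > best_score:
        best, best_score = candidate, s

print(f"best length/width found: {best[0]:.2f} m x {best[1]:.2f} m")
```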
Most of these applications involve a single question and answer: I have parameters x, y, and z; what’s the best way of using them? The beauty of collaboration, however, is that it’s ongoing: the party that just answered turns around and asks a follow-up question of the original questioner. What would it look like if the algorithm were asking questions of its human counterpart, and the human had to provide answers the algorithm could never arrive at on its own?

Since 2014, a large body of my artistic work has been in the medium of acrylic ink in water. Using a large (maybe 200 litres?) fish tank in my studio, and a variety of hot lights and backdrops, I’ll drop acrylics of various types and viscosities into the tank and record the alien, otherworldly results. Through this, I’ve exhibited in art shows, had my work featured in print, sold work for brands, and performed in live AV shows in Canada and England.

I’d be lying if I said I had “control” over how the acrylics behave in water, but over time I can say that I’ve developed an artist-level “feel” for how the ink disperses, and for how to manipulate that through volume, force, temperature, etc.
One thing that has always bothered me, however, the longer I’ve stuck with this medium, is waste: specifically, water. Even before upgrading to my current tank, the amount of water needed to fill a tank for a 4-5 minute shoot, only to be drained right back out, always felt, well, wrong. I live in a very water-rich part of the world, British Columbia, but I’ve always felt insensitive knowing the drought issues other regions face today, and that my own will likely face in the near future.
That’s where machine learning and StyleGAN2 come in.

The initial goal of this project is not to wholly stop my IRL practice of acrylic in water, but rather to augment it and vastly reduce how often I have to dump 200 litres of water down the drain. With the recent advent of consumer-accessible, no-code machine learning tools like RunwayML and Artbreeder, we’ve seen an explosion in this realm of generative art. It was previously unknown to me, and the immediate question I asked was: if I can teach a machine how to create in my medium, can I reduce my water consumption as a result?
That question was quickly followed by another: what if the machine can do it better?

Ultimately, the suite of imagery that I submit will be my language of communication with the algorithm, just as a playwright submits scripts to their actors.
RunwayML says that to train your own model, you should start with an image set of at least 300 images; most ML academic papers say a minimum of 1000. Naturally, I went with the latter, knowing full well that as this project develops, the dataset will grow in kind.
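For the curious, the prep work on those thousand-odd stills is mundane: everything gets cropped square and scaled to a single resolution before upload. Here’s a minimal sketch of that step; the folder names and the 1024px target are placeholders, not anything Runway prescribes.

```python
# Center-crop each still to a square and resize to one uniform resolution.
# Paths and target size are hypothetical.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("shoots_raw"), Path("dataset"), 1024
DST.mkdir(exist_ok=True)

for i, path in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    side = min(img.size)  # largest square that fits
    left, top = (img.width - side) // 2, (img.height - side) // 2
    square = img.crop((left, top, left + side, top + side))
    square.resize((SIZE, SIZE), Image.LANCZOS).save(DST / f"ink_{i:04d}.png")
```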
A limitation of this initial phase is that while I had roughly 1000 images, they came from a total of 10 or so separate photo shoots, meaning the variation is quite a bit smaller than, say, 1000 fully unique images would offer. More on this later.
When you don’t have control over the code, as I currently do not with Runway, your “brush” as an artist (aka your dataset) becomes your main means of expression. I started with a handful of shoots I had done over the years in the color palette I chose, then shot more, and shot more again, to fill out the dataset, with one caveat: I wasn’t being intentional with the ink, or even looking at the image results after shooting. This was intended to start the conversation blank, to allow the algorithm ample space to react to my intuitions.
I selected a limited color palette in order to restrict and simplify the initial model training. It seemed to me that before collaboration could happen, I’d need to teach the model to reproduce the basic cloud forms in a lighting setup characteristic of my work; a wider gamut of colors could come later.
I started with purple, red, white, and black, as this was the palette in which I had the widest amount of work already finished. Eventually, when I’m confident the algorithm can accurately create new forms reminiscent of my own analog work, I’ll start to introduce new colors. You need to walk before you can run.
Lighting is the great intangible, the fourth dimension, in all of this. The entire dataset was batch-edited to maintain a relatively similar color profile (purple, red, white, and black, as mentioned above) so the algorithm had a consistent value to learn from. What is not constant whatsoever is the lighting itself: placement, temperature, intensity. When lighting anything I shot for this initial run, I purposefully tried to keep the color temperature at a similarly warm off-white, but the placement and intensity of light will change over the course of the collaboration, and I think that’s OK.
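A quick way to keep that batch edit honest is to flag any frame whose overall color drifts far from the dataset mean. The sketch below does exactly that; the drift threshold and paths are placeholders.

```python
# Flag dataset images whose mean color drifts from the dataset-wide mean.
# Threshold and paths are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image

means = {}
for path in sorted(Path("dataset").glob("*.png")):
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    means[path.name] = arr.reshape(-1, 3).mean(axis=0)  # per-image mean RGB

overall = np.mean(list(means.values()), axis=0)
for name, m in means.items():
    if np.linalg.norm(m - overall) > 40:  # arbitrary drift threshold
        print(f"{name}: mean RGB {m.round(1)} vs dataset mean {overall.round(1)}")
```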

The results from the preliminary run were, frankly, astounding to me. For one, they’re not perfect! This was maybe the most pleasant surprise: are they somehow better than perfect reproductions? What’s going on here?
To be real, this was where the idea of an ongoing conversation came into being. I was admittedly expecting poor recreations from the first run; instead, the algorithm spoke back to me in a way that I wasn’t expecting.
The source material, my initial expression, is clear in these images: the forms are there, the dramatic lighting is present, and the colors are (for the most part) following an established pattern. There are also the stylized wisps of StyleGAN2 processing: the hard edges mixed with soft gradients, for example.
But there is something more, almost as though the algorithm is telling me that acrylic ink in water is too real, too tangible, to work as a vehicle. It is showing me what I’m imagining when I speak, but am too reliant on my existing tools to say.

Over the course of creating a number of NFT loops for this initial step, I started to see the limits of the algorithm’s current vocabulary as well. The array above shows the bounds of what the algorithm is able to produce; though not at all comprehensive, I’m sure you can spot some general themes.
Another interesting output is the background. The dataset was shot against an entirely neutral background, yet the algorithm is outputting colored backgrounds without the definition of the ink clouds. Is this the result of a couple of small outliers having an outsized influence? Is it from images where colored ink takes up the entire frame, leading the algorithm to conclude that those colors are the background? I’m left with more questions than answers, which is the hallmark of a budding conversation.
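One way to chase that second question is to look only at the border pixels of each training frame: any frame whose edges aren’t near-neutral is a candidate culprit. A rough diagnostic sketch, with a guessed border width and threshold:

```python
# Find frames where ink reaches the edges, i.e. the background is not neutral.
# Border width and threshold are guesses.
from pathlib import Path

import numpy as np
from PIL import Image

BORDER = 16  # pixels sampled from each edge
for path in sorted(Path("dataset").glob("*.png")):
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    edges = np.concatenate([
        arr[:BORDER].reshape(-1, 3), arr[-BORDER:].reshape(-1, 3),
        arr[:, :BORDER].reshape(-1, 3), arr[:, -BORDER:].reshape(-1, 3),
    ])
    # neutral pixels (white/grey/black) have nearly equal R, G, B values
    spread = (edges.max(axis=1) - edges.min(axis=1)).mean()
    if spread > 30:
        print(f"{path.name}: colored border (mean channel spread {spread:.1f})")
```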
The next step for me, of course, is to create more analog work to add to the dataset and update the algorithm. Time to formulate my rebuttal. This is me admitting that I’m going to make different choices as a direct result of the visual output of a machine. Who’s controlling whom?
While I’m obviously (and I hope you are too) interested in seeing other colors in the mix, I’m not quite ready to take the training wheels off. The uniformity of the algorithm’s output, from an arrangement perspective, tells me that I don’t have a diverse enough dataset; these next shoots will therefore be me “filling in the blanks”: black versions of clouds where there are white ones, ink-cloud flourishes in areas that have been mostly negative space in the output frames, generally a conceptual photonegative of the exchanges so far.
After all, we know how the algorithm feels when it’s first talked to; how will it respond to being responded to?

There’s no question that part of the algorithm’s artistic output comes in the form of video-based latent space walks. For those unfamiliar: given two key frames (two points in its latent space), a StyleGAN model can generate an effectively unlimited number of frames in between, acting as a transition from one to the other. Putting these walks into a longer-form video piece, it became obvious that it needed a soundtrack.
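As an aside, the mechanics of a walk are simple enough to sketch. The snippet below only produces the in-between latent vectors, using spherical interpolation; rendering each vector into a frame is the trained model’s job, marked by the comment. The vector size and frame count are arbitrary.

```python
# Spherical interpolation ("slerp") between two latent vectors: the skeleton
# of a latent space walk. Feeding each vector to a trained generator is left
# as a comment, since that part depends on the model/platform.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Interpolate along the arc between two latent vectors."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(seed=0)
z_start, z_end = rng.standard_normal(512), rng.standard_normal(512)  # two "key frames"

# 120 in-between latents, roughly a 5-second walk at 24 fps
walk = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 120)]
# frames = [generator(z) for z in walk]  # hypothetical rendering step
```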
Thankfully, during editing I had stumbled upon Jimmy Whitaker’s Ambient Music Generator, which he created via the Magenta project. Using his model, I was able to output a number of MIDI files, which I then arranged in Logic Pro to produce an original piece of music.
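For the curious, the hand-off at the end of that pipeline is just a standard MIDI file. This isn’t Jimmy’s model, only the generic shape of the final step, with placeholder notes standing in for whatever the generator emits:

```python
# Write generated (pitch, timing) events to a MIDI file a DAW such as
# Logic Pro can import. The notes here are placeholders, not model output.
import pretty_midi

midi = pretty_midi.PrettyMIDI()
pad = pretty_midi.Instrument(program=89)  # a General MIDI synth-pad patch

for beat, pitch in enumerate([60, 64, 67, 71]):  # stand-in for generated notes
    pad.notes.append(pretty_midi.Note(velocity=70, pitch=pitch,
                                      start=beat * 2.0, end=beat * 2.0 + 4.0))

midi.instruments.append(pad)
midi.write("ambient_sketch.mid")
```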
To be clear, neither the algorithm nor the dataset is of my own creation; what I did with the MIDI file(s) it output was. That being said, at this early stage the soundtrack felt like an on-theme band-aid: thematically correct, but certainly not doing anything to advance the conversation between myself and the audience, nor that between myself and the machine.
So for now, consider this a bookmark for potential development in the future. Ideally, I’d love to see this two-way conversation between myself and a StyleGAN2 algorithm turn into a full-fledged band.
To Be Continued.
Admittedly, before any response occurs, I may just port the conversation to a different platform.
RunwayML is an absolutely fantastic tool for anyone looking to try out a variety of machine learning algorithms and have something look good right out of the box. That said, because it is a consumer product promising “easy machine learning,” its shotgun blast of effectiveness doesn’t allow for acuity in any one area: it’s akin to listening to my conversation with the algorithm through a muffled wall.
Porting and (hopefully not) retraining the StyleGAN2 model on a more open platform like Google Colab may delay things, but I’m hoping it will allow for better visual fidelity and a wider vocabulary between the algo and myself.
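If the port happens, kicking off training on Colab could look something like the sketch below, assuming NVIDIA’s stylegan2-ada-pytorch repo is cloned into the runtime. The file names are hypothetical, and whether a Runway-trained checkpoint can be resumed from directly is an open question.

```python
# Hypothetical Colab training kickoff with NVIDIA's stylegan2-ada-pytorch.
# Assumes the repo is cloned and the dataset zip has been uploaded.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--outdir", "training-runs",       # where snapshots land
        "--data", "ink_dataset_1024.zip",  # prepared with the repo's dataset_tool.py
        "--gpus", "1",                     # Colab typically offers a single GPU
        "--resume", "runway_export.pkl",   # hypothetical warm start
    ],
    check=True,
)
```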
Ultimately, long-term, I plan to create more work to add to the dataset, submit my salvo to the algo, listen to its point of view, journal my thoughts here, and repeat, in a call-and-response format. When applicable, I’ll try to introduce a third party in the form of music, for a ménage à trois of cross-discipline, cross-intelligence collaboration. Hopefully, over time, we all learn a little about how humans can collaborate with machines, and the algorithm and I create some cool work to boot.
Buckle up! I hope you’ll come along for this ride :)
I can be found online @cdammr, pretty much everywhere.