Jarrel Biscocho
The concept of “aura” has been swirling in the cultural mindshare lately. Whether it's auramaxxing, earning aura points, or cultivating one's aura, people are hungry to learn about the energy they give off. We wanted to tap into this trend by sharing a whimsical in-app quiz that lets users discover their aura – and ultimately learn more about themselves.
But why stop at aura analysis? It'd be even cooler to generate a visual representation of what their aura actually looks like, mapped to a vibrant color palette.
In addition to the in-app quiz, we went on a side quest to integrate our aura quiz with Farcaster, an open and decentralized social network. Farcaster recently launched Farcaster Frames, which lets developers bring more custom experiences to a user's feed, and we thought it was a perfect fit for what we built.
With the open and permissionless nature of the Farcaster social graph, we were able to analyze personalities through a user's historical casts and match them with others in the network with compatible (or incompatible) auras. We ranked results by relevancy score, provided by the Neynar API.
Traditional aura photography typically takes place in a professional studio with the aid of sensors measuring the electricity conducted by your body. A specialized camera then translates the reading into a colorful image or “aura.” This obviously isn’t possible with a phone camera, so the challenge was to represent this physical phenomenon digitally and to generate compelling auras unique to each individual.
This blog post will focus on how we generated the aura effect itself, so if you’re interested in a brief introduction to shaders and image processing, continue reading below!
Users are most likely going to submit selfies in a variety of environments, so we’ll want to isolate the subject from the background.
We initially tried AI-based approaches, leveraging segmentation models trained on facial data, with varying degrees of success. These solutions worked alright, but their effectiveness varied widely from photo to photo (segmentation was bad at isolating people wearing hats, for example) or just didn't scale well (we could tune for one person, but that tuning wouldn't carry over to everyone else).
We decided to pay for quality, since it's crucial to have an accurate mask to really sell the aura effect. We went with a service aptly named remove.bg. We didn't need a high-resolution version of the foreground mask, so we were able to cut costs by requesting a low-res output.
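For reference, here's a minimal sketch of what a call to the remove.bg HTTP API can look like. The endpoint and parameter names below are from memory of their public docs, so double-check them before using this; the "preview" size is what keeps the output (and the cost) small.
import requests

# Hypothetical sketch of a remove.bg request; verify parameter names against their docs
with open("selfie.jpg", "rb") as selfie:
    response = requests.post(
        "https://api.remove.bg/v1.0/removebg",
        headers={"X-Api-Key": "YOUR_API_KEY"},
        files={"image_file": selfie},
        data={"size": "preview", "format": "png"},  # low-res output is enough for our mask
        timeout=30,
    )
response.raise_for_status()
with open("subject_cutout.png", "wb") as out:
    out.write(response.content)  # RGBA cutout with the background removed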
The foreground mask lets us blend the aura around the subject more naturally. To further improve the effect, we also provided a Signed Distance Field (SDF) texture. This stores the distance of each pixel to the nearest edge of some shape, with positive values meaning that the pixel is outside the shape and negative values meaning that the pixel is in the interior of the shape. In our case, a negative distance means that a pixel is part of the foreground, i.e. the person instead of the background.
For example, the SDF for a circle is the magnitude or length of a pixel's coordinates (relative to the circle's center) minus the radius of the circle.
// SDF of a circle, evaluated at pixel position p relative to the circle's center
float circleSDF(vec2 p, float radius) {
    return length(p) - radius; // length(p) = sqrt(x^2 + y^2)
}
This is well defined for simple shapes, but how can we create an SDF for irregular shapes we won't know ahead of time, like a person's silhouette? One way to do this is with the Jump Flooding Algorithm (JFA). The JFA works by spreading distance information across an image in big jumps at first, then smaller and smaller jumps to refine the result. One benefit of using the JFA is its efficiency, which makes it well suited for our use case. A rough sketch of the idea is shown below.
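Here's a toy CPU-side version of the jump flooding idea in NumPy, just to make the "big jumps, then smaller jumps" description concrete. It's an approximation and not the code we shipped.
import numpy as np

def jump_flood_distance(foreground: np.ndarray) -> np.ndarray:
    """Approximate distance from every pixel to the nearest foreground pixel
    using the Jump Flooding Algorithm. `foreground` is a boolean mask."""
    h, w = foreground.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel tracks the closest seed (foreground pixel) found so far
    seed_y = np.where(foreground, ys, -1)
    seed_x = np.where(foreground, xs, -1)
    best = np.where(foreground, 0.0, np.inf)
    step = max(h, w) // 2
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                if dy == 0 and dx == 0:
                    continue
                # Borrow the seed known by the neighbour `step` pixels away...
                ny = np.clip(ys + dy, 0, h - 1)
                nx = np.clip(xs + dx, 0, w - 1)
                cy, cx = seed_y[ny, nx], seed_x[ny, nx]
                # ...and keep it if it is closer than what we already have
                cand = np.where(cy >= 0, np.hypot(ys - cy, xs - cx), np.inf)
                closer = cand < best
                best = np.where(closer, cand, best)
                seed_y = np.where(closer, cy, seed_y)
                seed_x = np.where(closer, cx, seed_x)
        step //= 2
    return best
In practice, though, Python's Pillow and SciPy libraries get us a distance mask in just a couple of lines via an exact Euclidean distance transform: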
import numpy as np
import scipy.ndimage as ndi
from PIL import Image, ImageChops

def compute_distance_mask(mask: Image.Image, size: tuple[int, int]) -> Image.Image:
    # Resize to the working resolution, grab the alpha channel, and invert it
    # so the foreground is black and the background is white
    alpha = ImageChops.invert(mask.resize(size).split()[3])
    # Distance from each background pixel to the nearest foreground (zero) pixel
    df = ndi.distance_transform_edt(np.array(alpha))
    # Normalize and convert to 8-bit grayscale
    df_norm = (df / np.max(df) * 255).astype(np.uint8)
    return Image.fromarray(df_norm)
And here is the resulting distance field.
One limitation of the JFA is that it tends to produce artifacts when a background pixel is surrounded by many neighboring foreground pixels. You can see an example of this in the result above where there are diagonal artifacts to the left and right of the silhouette. This pinching usually happens where the person’s head meets their shoulders and upper torso. An easy fix is to blur the resulting mask a bit.
# Soften the pinching artifacts along the silhouette; BLUR_SIGMA is tuned by eye
df_blur = ndi.gaussian_filter(df_norm, sigma=BLUR_SIGMA)
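Putting the pieces together, a typical run over one photo looks roughly like this (the file names and blur constant are illustrative, and compute_distance_mask is the function defined above):
import numpy as np
import scipy.ndimage as ndi
from PIL import Image

BLUR_SIGMA = 8  # illustrative; tune against your output resolution

# RGBA cutout produced by the background-removal step
cutout = Image.open("subject_cutout.png").convert("RGBA")
distance_mask = compute_distance_mask(cutout, size=(256, 256))
df_blur = ndi.gaussian_filter(np.array(distance_mask), sigma=BLUR_SIGMA)
Image.fromarray(df_blur).save("distance_texture.png")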
Now let's build the actual aura effect. We decided to use shaders to give us fine-grained control over the final output. But let's take a step back: what is a shader in the first place? Shaders are programs that generally run on the GPU, which makes them incredibly efficient for image processing and a great fit for our use case. At their core, shaders determine how each pixel on the screen should be colored. While the concept is straightforward, shaders can be used to create stunningly complex visual effects.
There are primarily two types of shaders we care about: vertex shaders, which handle the position of points in 3D space, and fragment shaders, which determine the color of each pixel. We'll focus on the latter since we're interested in 2D color manipulation.
Auras aren't just simple gradients; they have interesting variation and depth to them. We need a way to introduce natural randomness to achieve this effect, and that's where noise comes in. There are many different types of noise, each with its own characteristics. In our case, we'll use a type of noise known as Fractal Brownian Motion, or FBM for short.
One of the great things about shaders is that even though the math behind them can get pretty complex, you usually don’t need to write everything from scratch. This is a good moment to bring up LYGIA, an open source shader library with useful utilities and functions that we’ll be using in our examples.
Here’s the basic implementation of the fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
#include "lygia/generative/fbm.glsl"
void main() {
    vec2 pixel = 1.0 / u_resolution.xy;
    vec2 st = gl_FragCoord.xy * pixel;
    // fbm returns values in roughly [-1, 1]; remap to [0, 1]
    float noise = fbm(st) * 0.5 + 0.5;
    vec3 color = vec3(noise);
    gl_FragColor = vec4(color, 1.);
}
First we get the relative size of a pixel by taking the inverse of the screen resolution, then multiply it by the fragment's position to get its normalized position on the screen. We use that as input to fbm to get our noise value for the pixel. Since fbm returns values in the range [-1, 1], we remap it to [0, 1] using this transform:
noise = noise * 0.5 + 0.5
Finally, we create a grayscale color based on the noise value and assign it to gl_FragColor to determine the color of the current pixel. One interesting characteristic of how fbm works is that we can change the complexity of the noise by combining multiple octaves of simpler noise functions.
// Defining FBM_OCTAVES before the include overrides the library's default octave count
#define FBM_OCTAVES 4
#include "lygia/generative/fbm.glsl"
This is already looking quite aura-like. In fact, if we add some color by multiplying the noise by red, we have our basic aura effect.
vec3 aura_color = vec3(noise) * vec3(1., 0., 0.); // color red
Since we’re only mixing the noise with a single color the result looks somewhat flat and uninteresting. We can add more variation by mixing in additional colors. LYGIA includes a handy palette helper that makes it easy to generate color palettes. By combining noise with a few randomized parameters that influence how the palette is built, we can create some cool aura-like effects. You can think of these parameters as controls for tweaking each color channel.
#include "lygia/color/palette.glsl"
vec3 aura_color = palette(
noise,
vec3(0.575, 0.413, 0.407),
vec3(0.199, 0.560, 0.539),
vec3(1.258, 1.516, 1.475),
vec3(0.399, 0.850, 0.665)
);
vec3 color = aura_color;
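LYGIA's palette helper appears to follow the familiar cosine-palette recipe popularized by Íñigo Quílez, a + b·cos(2π(c·t + d)), with the four vec3 arguments acting as offset, amplitude, frequency, and phase; treat that as an assumption about the helper rather than gospel. Here's the same idea in NumPy, handy for previewing palettes outside the shader:
import numpy as np

def cosine_palette(t, a, b, c, d):
    """IQ-style cosine palette: maps t in [0, 1] to an RGB triple."""
    t = np.asarray(t)[..., None]  # broadcast t against the per-channel vectors
    return a + b * np.cos(2.0 * np.pi * (c * t + d))

# The same parameters as the shader snippet above
ramp = cosine_palette(
    np.linspace(0.0, 1.0, 256),
    a=np.array([0.575, 0.413, 0.407]),
    b=np.array([0.199, 0.560, 0.539]),
    c=np.array([1.258, 1.516, 1.475]),
    d=np.array([0.399, 0.850, 0.665]),
)
print(ramp.shape)  # (256, 3): a smooth ramp of palette colors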
Auras typically feature a dominant color along with a few accents, so let's use the distance texture we created earlier to introduce more color variation. We sample u_distance_texture and use its value to interpolate between base_color and aura_color. The result is that the farther a pixel is from the subject, the more its color shifts.
uniform sampler2D u_distance_texture;
// The farther a pixel is from the subject, the more it shifts from base_color toward aura_color
float distance_to_subject = texture(u_distance_texture, st).r;
aura_color = mix(base_color, aura_color, distance_to_subject);
Now let's bring in the original photo. We pass in tex_coord, which gives us the texture position for each fragment. This extra input is important because the photo's resolution doesn't match the output resolution, and we want the aura to better fit the composition by surrounding the subject with more aura for a dramatic effect. Finally, we blend the subject with the aura using the texture's alpha channel, so anything outside the foreground gets replaced by the aura.
uniform sampler2D u_texture;
in vec2 tex_coord;
float distance_to_subject = texture(u_distance_texture, st).r;
aura_color = mix(base_color, aura_color, distance_to_subject);
// Keep the photo where the mask is opaque; show the aura everywhere else
vec4 tex = texture(u_texture, tex_coord);
vec3 color = mix(aura_color, tex.rgb, tex.a);
The photo is overlaid with the aura, but the two aren’t blending together very naturally. This is where blend modes come into play. Traditional aura photography often uses a technique called double exposure, where two separate images are layered into one. We can recreate this digitally by blending the subject and the aura using a method beyond just the texture’s transparency.
We'll use screen blend, which has the look of projecting multiple photographic slides on top of each other. In the example above, screen blend brightens areas where the bottom layer is light in color, which helps create the misty, fog-like effect that we're after. Again, LYGIA makes this easy.
#include "lygia/color/blend/screen.glsl"
vec3 color = blendScreen(tex, aura_color);
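For intuition, screen blending is just the complement of multiplying the complements, 1 - (1 - a)(1 - b) per channel, which is why it can only ever brighten. A quick NumPy sanity check:
import numpy as np

def blend_screen(base, blend):
    """Screen blend: invert, multiply, invert again. Inputs in [0, 1]."""
    base, blend = np.asarray(base), np.asarray(blend)
    return 1.0 - (1.0 - base) * (1.0 - blend)

print(blend_screen(0.2, 0.5))  # ~0.6: brighter than either input
print(blend_screen(0.0, 0.7))  # ~0.7: black leaves the other layer untouched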
The screen blend mode has the added benefit of making darker colors more transparent, creating a natural-looking fade where the aura meets the subject. In the final filter, we also darken the edges of the photo to enhance the blend even further—but it’s already looking pretty good, so we’ll leave it as is. Finally, a touch of subtle grain, a soft vignette, and wrapping the image in a clean polaroid frame help complete the film-inspired look we’re going for.
LYGIA offers some utilities for generating grain, but it’s also fun to experiment with your own techniques. For the vignette, we used different blend modes—specifically color burn and color dodge—which help emulate the look of traditional cameras, where less light reaches the edges of the film.
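As a rough CPU-side illustration of those two finishing touches (this is not our shader code, and it uses a simple multiplicative vignette rather than the burn/dodge blends mentioned above):
import numpy as np

def grain_and_vignette(img, grain_strength=0.04, vignette_power=1.5, seed=0):
    """img: float RGB array in [0, 1] with shape (h, w, 3)."""
    h, w, _ = img.shape
    rng = np.random.default_rng(seed)
    # Film grain: a small random offset per pixel
    grain = rng.normal(0.0, grain_strength, size=(h, w, 1))
    # Vignette: darken pixels based on their distance from the image center
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot((ys - h / 2) / (h / 2), (xs - w / 2) / (w / 2)) / np.sqrt(2)
    vignette = (1.0 - r ** vignette_power)[..., None]
    return np.clip((img + grain) * vignette, 0.0, 1.0)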
This covers the basic idea behind the shader, but there’s plenty of room for creative iteration. For instance, adjusting the photo’s color values—like midtones, black point, and white point—can push the vintage film aesthetic even further. You can also explore ways for the aura to interact with features of the subject’s face, like making it more intense in darker areas to create a more dramatic effect.
Using shaders—despite the initial learning curve—helped us iterate quickly and arrive at a visually pleasing result that served as a fun and satisfying payoff for users who completed the quiz.
We wanted each user to have a unique aura that represented them, and shaders were a natural fit for this generative concept. Creating a new aura was as simple as sampling a different area of noise and tweaking a few shader parameters, such as the number of colors, how quickly the colors shift, the complexity of the aura, how much it blends with the subject, and more. With these creative levers, we were able to generate a wide range of auras that felt unique yet cohesive within the overall collection.
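To give a flavor of what those levers looked like, here's a hypothetical sketch of deriving a stable, per-user set of shader parameters from a seed; the uniform names below are made up for illustration and are not our actual parameters.
import numpy as np

def aura_params(user_seed: int) -> dict:
    """Hypothetical example: derive deterministic shader parameters from a seed
    so the same user always gets the same aura."""
    rng = np.random.default_rng(user_seed)
    return {
        "u_noise_offset": rng.uniform(0.0, 100.0, size=2).tolist(),  # sample a different area of noise
        "u_num_colors": int(rng.integers(2, 5)),                     # how many accent colors
        "u_color_shift": float(rng.uniform(0.5, 2.0)),               # how quickly the colors shift
        "u_octaves": int(rng.integers(3, 6)),                        # complexity of the noise
        "u_subject_blend": float(rng.uniform(0.3, 0.8)),             # how much the aura blends with the subject
    }

print(aura_params(42))  # the same seed always produces the same parameter set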
That said, the project wasn't without its challenges. One major limitation was that the quality of the final effect depended heavily on the quality of the input photo. Users uploaded all sorts of images: some poorly lit, some too large or too small. We frequently encountered photos where the top of the head was cut off, or the face was zoomed in too closely. Often, fixing one case would make another look worse, leading to a lot of whack-a-mole-style back-and-forth adjustments.
There’s still more work to be done in handling these cases, but we chose to focus primarily on the “ideal” scenario and making sure the final result looked great in that context. Overall, we’re pretty happy with how it turned out.
We hope this post was helpful and maybe even inspired you to explore shaders in your own projects. If you’re curious to see the effect in action, head over to moshi.cam and get your aura read!