# Yem Jam (Vol. 2)

By [Test pub - SS Import 2](https://paragraph.com/@test-ss-import-2) · 2021-06-04

---

Hello friends,

Welcome to volume 2 of Yem Jam - a weekly summary of progress, learnings, and ideas as we build Yem. We hope everyone is enjoying MDW. 🇺🇸 🌭 🏖

**The Only Thing That Matters**

We're now working with 5 newsletters. Even with a small group, we're getting great feedback and learnings.

**The only thing that matters at this point is creating something customers want.** We're searching for the "killer use case": a point at which customers would pay for Yem, or maybe even miss us if we weren't around.

A dashboard-only product doesn't seem to be that killer use case. It's helpful, as are the benchmarking and insights, and we've received encouraging feedback on the forecast model. But it doesn't seem to be compelling enough...

Analytics can be helpful in identifying areas of opportunity. For creators and "solopreneurs," though, this can almost be painful: it leaves them more aware they could be growing faster, but with limited bandwidth to see it through.

So it seems Yem must go further. We need to help creators test into areas of opportunity and realize tangible growth. A challenge we're excited to work on.

**Snowflakes & Shared Challenges**

Another finding - it's been surprising to see the _uniqueness_ of each newsletter.

But while each is unique, there are shared challenges faced by all. One area that's especially exciting is **data-driven user communication**. This was a huge part of what we did at Hulu & Crunchyroll, relying heavily on email. For newsletters, where email is the core product, this could have a material impact. Some areas we're excited to test into:

*   New user onboarding
*   Re-engaging dormant subscribers
*   Upselling free subscribers to paid subscriptions
*   Preventing at-risk paid subs from cancelling

To help with the above, we may have found a way to access user-level **_event_** data for Substack newsletters, where "events" are common behaviors like opening an email, clicking a link, or viewing a web page. This helps us understand which users have read a particular article, and when they did it. User-level event data opens up a world of opportunity in **personalization** and **discovery**.

For example, let's imagine a food-related newsletter called Gjelina's Truffle. When someone signs up, Gjelina surfaces three top articles from the past month. Each article comes from a different category or theme: one on how to make pasta, one on cocktails, and the last on cupcakes. If the user chooses to read the cocktails post, we could use that info to guide their onboarding series. On day 3, they receive our best post on how to make bourbon drinks. Day 6, martinis. And so on. This removes a ton of friction for the user as they don't have to scour the archives for relevant content.
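To make that concrete, here's a rough sketch of how the onboarding logic above might work. Everything here is hypothetical (the event shapes, post titles, and function names are ours for illustration, not Yem's actual implementation): count which category a new subscriber clicks most, then schedule that category's best archive posts for day 3, day 6, and so on.

```python
from collections import Counter

# Hypothetical event records: (subscriber_id, event_type, article_category)
events = [
    ("sub_1", "email_open", None),
    ("sub_1", "link_click", "cocktails"),
    ("sub_1", "link_click", "cocktails"),
    ("sub_1", "link_click", "pasta"),
]

# Hypothetical archive of top posts per category, ordered best-first.
top_posts = {
    "cocktails": ["Best Bourbon Drinks", "Classic Martinis", "Summer Spritzes"],
    "pasta": ["Handmade Tagliatelle", "Weeknight Carbonara"],
    "cupcakes": ["Brown Butter Frosting"],
}

def onboarding_series(subscriber_id, events, top_posts, length=3):
    """Pick the subscriber's most-clicked category and schedule that
    category's best archive posts for days 3, 6, 9, ..."""
    clicks = Counter(
        category
        for sub, event_type, category in events
        if sub == subscriber_id and event_type == "link_click" and category
    )
    if not clicks:
        return []  # no signal yet: fall back to a default series
    favorite = clicks.most_common(1)[0][0]
    posts = top_posts.get(favorite, [])[:length]
    return [(day, post) for day, post in zip(range(3, 3 + 3 * length, 3), posts)]

print(onboarding_series("sub_1", events, top_posts))
# → [(3, 'Best Bourbon Drinks'), (6, 'Classic Martinis'), (9, 'Summer Spritzes')]
```

In practice the click counts would come from the event stream rather than a hardcoded list, but the core idea is the same: a few clicks' worth of signal is enough to pick a lane for the onboarding series.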

And how great is this for Gjelina! It takes the Gjelina team a ton of effort to produce high-quality content. Often, most of the value (engagement, new subs, etc.) happens within a few days of publishing. But with capabilities like the above, they're able to resurface older content at the right _time_, for the right _person_.

That's it for this week - let us know what you think of any of the above!

---

*Originally published on [Test pub - SS Import 2](https://paragraph.com/@test-ss-import-2/yem-jam-vol-2)*
