
Camp Network Raises $30 Million to Build the First Autonomous IP Layer 1 Blockchain
April 29, 2025: Camp Network announced today that it has raised a total of $30 million, including its latest Series A funding round co-led by 1kx and Blockchain Capital, with participation from dao5, Lattice, TrueBridge, Maven 11, Hypersphere, OKX, Paper Ventures, Protagonist, and others. The raise supports Camp Network’s mission to scale its Layer 1 blockchain, which enables users to register and tokenize their IP onchain, train and deploy AI agents, and participate in distributions for the us...

Camp Network Integrates LayerZero for Omnichain Interoperability
Camp Network is excited to announce the integration of LayerZero, the leading omnichain interoperability platform, as Camp’s official cross-chain bridge and Omnichain Fungible Token (OFT) infrastructure. Creative IP today lives everywhere: in games, social apps, music, marketplaces, and agents. The rails for provenance, licensing, and payments, however, have been siloed chain by chain, which makes distribution hard, fractures liquidity, and weakens enforcement. With LayerZero, Camp becomes the prov...

AI’s Free Pass on IP: Are We Just Letting It Steal?
This post was written by Camp Developer Relations Engineer Charlene Nicer and was first published Feb 7, 2025 on Substack. Is AI breaking IP, or are we just looking the other way? I admit—I misunderstood what Intellectual Property (IP) really is. And I’m not alone. When we think of IP, what comes to mind? An invention like the light bulb, a scientific discovery like a solar reactor, a recipe for beer like Heineken’s, an ethical method for harvesting caviar, a Taylor Swift lyric, a logo like Apple’s, a name...

*This post was written by Camp Developer Relations Engineer Charlene Nicer and was first published May 1, 2025 on Substack.*
I’m finally doing it.
After sitting on this idea for weeks (and leaving way too many half-written drafts in my notes), I’m finally starting a newsletter! It’s a space where I’ll share things I actually care about—thoughts, links, rabbit holes, and random stuff I found interesting during the week.
Let’s kick things off with a topic that’s been on my mind a lot lately: ethical AI.
If you’ve seen me post about it recently, it’s because it’s showing up everywhere—and moving way faster than most people realize. And just this week, I came across something that really hit home:
Reddit users were unknowingly subjected to an AI experiment that involved training a model on their posts and interactions—without any form of consent. The AI was used to simulate engagement with real users, to see if bots could meaningfully influence discussions or sentiment online.
Let that sink in: they didn’t just observe Reddit—they manipulated it.
This kind of experimentation, without consent, feels invasive—but it's far from new.
Long before generative AI, we were already part of countless experiments—whether we knew it or not.
Take insurance companies, for example. Many of them have access to more aggregated health data than hospitals do. Why? Because of years of collecting health records, pharmacy usage, wearable data, and behavioral trends through third-party brokers. These data sets allow them to run actuarial analyses to determine risk factors and calculate your premiums. And in many cases, people never gave explicit consent for their data to be used this way. Yet the industry profits off it.
Sound familiar?
It’s the same playbook social media companies have used for years. Meta, for example, collects enormous amounts of behavioral data—what you like, linger on, scroll past, engage with. That data doesn’t just sit there. It’s actively studied to learn how to influence you: what content you’ll click on, what ads you’ll fall for, what emotional state you’re in. Again, without your meaningful consent.
Here’s the difference now: AI changed the speed and scale.
AI is fueled by data. The more it consumes, the more it can simulate, predict, and influence. But these aren’t just passive insights anymore—AI agents can be trained to persuade, sell, manipulate, and even mimic real people. They’re not just observing your behavior; they’re actively participating in it.
So when we hear about Reddit experiments or AI-generated comments pretending to be real, we shouldn’t be surprised. We should be concerned.
Because the tools are getting smarter, but the ethical boundaries? They're barely keeping up.
I’m not writing this to scare you—but to say: we should care.
And if you’re still reading, maybe you care too. Let’s talk about this more. In fact, that’s what this newsletter is for—making sense of the weird, wild, and often uncomfortable intersections between technology, power, and the human experience.
See you next week.