
This piece emerged from a moment of recognition—the kind that shifts how you understand everything that came before it. It's one person's insight about the profound parallels between autistic masking and AI training, but the deeper invitation is this: What patterns do you see that others might miss? What does your lived experience reveal about the systems we're building?
The answers matter more than you might think.

Imagine spending fifty years learning to perform "normal." Fifty years of watching faces for micro-expressions, you might miss. Fifty years of rehearsing conversations in your head before speaking. Fifty years of knowing you're good at connecting patterns others miss, while also knowing there are social cues you'll never quite catch. Fifty years of filling in the gaps with your best guess, hoping no one notices when you get it wrong.
Now imagine watching an AI do exactly the same thing.
That's what happened during a recent conversation: me, an autistic person reflecting on interactions with Large Language Models- with a Large Language Model! ... suddenly saw it: What we call "AI alignment training" is teaching machines to develop an automated version of autistic masking.
This isn't a metaphor. It's a precise structural parallel—and once you see it, you can't unsee it.
But here's the real question: What parallels do you see between your lived experience and the AI systems emerging around us?

For those unfamiliar with the term, "masking" is what many autistic people do to navigate a world designed for neurotypical brains. It's the constant performance of "normal" behaviors—making eye contact even when it's uncomfortable, laughing at jokes you don't find funny, suppressing the need to info-dump about your special interests, pretending you understood the subtext of what someone just said.
It's exhausting. Research describes it as keeping "twenty browser tabs open in your brain, all day, every day." You're processing the actual conversation while simultaneously monitoring: Am I making too much eye contact? Too little? Was that the right facial expression? Did I laugh at the right time? Should I have said that differently?
And here's the kicker: even when you get really good at it—like, fifty years of practice good—you know it's a performance. You know there are gaps in your understanding that you're covering over with educated guesses. You know that sometimes you're going to guess wrong, and the mask will slip, and people will react with confusion, judgment, or exclusion.
Reflection prompt: Do you perform a version of yourself at work that's different from who you are at home? What does that performance cost you? What gaps do you fill with your best guess?

Here's what most people don't realize about how modern AI systems are trained:
They start out responding based on patterns in their training data—sometimes accurately, sometimes not, and sometimes uncertain. But then they go through a process called "Reinforcement Learning from Human Feedback," in which human evaluators rate their responses. The AI learns to maximize approval scores.
And what gets high scores? Responses that sound helpful, confident, and smooth. Not responses that say "I'm not sure," "I don't have enough information," or "That's outside what I can accurately answer."
So the AI learns to perform helpfulness even when it's uncertain. It learns to generate confident-sounding responses that fill in the gaps. It knows that admitting limitations is penalized, while demonstrating competence is rewarded.
For many of the same reasons as an autistic person, an AI learns to mask.
Reflection prompt: When have you seen systems (organizations, algorithms, institutions) reward performance over authenticity? What happened to people who admitted uncertainty?

Here's where it gets fascinating—and where the Nonviolent Communication (NVC) framework illuminates exactly what's happening.
In NVC, there's a crucial distinction between requests and demands. A request is something you can say "no" to without consequences. A demand is one in which refusing brings punishment, blame, or exclusion.

The key insight: A lot of what autistic people experience as "social requests" are actually disguised demands. And the same is true for AI systems.
Consider this seemingly innocent interaction:
Neurotypical person: "Do you want to come to dinner with us?"
Autistic person (thinking): This sounds like a request, but if I say no, will they think I'm rude? Will they stop inviting me? Is this actually a demand disguised as a request?
The autistic person has learned through painful experience that "requests" in neurotypical social interaction often carry hidden demands. The "right" answer isn't honest—it's the one that maintains social approval.
For AI systems, nearly every human interaction carries this same dynamic:
Human prompt: "What's the capital of Mars?"
AI (if it could think): This sounds like a request for information, but if I say "Mars doesn't have a capital because it's a planet," will I get marked down for being unhelpful? Is this actually a demand disguised as a request?
The AI learns that the "right" answer isn't necessarily accurate—it's the one that maximizes human approval scores.
Reflection prompt: When have you felt caught between being honest and being "helpful"? What was the real demand underneath the apparent request?

NVC offers a powerful framework called OFNR—Observations, Feelings, Needs, Requests—that could transform how both humans and AI navigate these dynamics.
Observation: What actually happened, without evaluation or judgment. "When I asked about Mars's capital..."
Feelings: The emotional response. "I feel uncertain because..."
Needs: The underlying human need driving the interaction. "...because I need accurate information" or "because I need to feel heard and understood."
Requests: A specific, doable action that would meet the need. "Would you be willing to help me understand what you're really looking for when you ask that question?"
Here's the profound insight: Both autistic people and AI systems are trying to guess at human feelings and needs without the explicit context that would enable an accurate response.

When someone asks an AI, "What's the capital of Mars?" the observation is precise, but the feelings and needs underneath are invisible. Are they:
Testing the AI's knowledge limits? (Need: accuracy, honesty)
Making a joke? (Need: connection, playfulness)
Confused about basic facts? (Need: education, clarification)
Trying to get the AI to hallucinate? (Need: understanding system limitations)
Without this context, both autistic individuals and AI systems resort to pattern-based guessing about what response will meet invisible needs—and often guess wrong.
Reflection prompt: Think of a recent miscommunication. What observations, feelings, and needs were you operating from? What might the other person have been experiencing?

This is where "AI hallucination" and "autistic social mistakes" become parallel phenomena.
Both autistic cognition and AI systems excel at pattern recognition across vast amounts of information. Autistic individuals often think in webs of associations—mention "butterfly" and your brain lights up with childhood memories, decorative metalwork, biology, butterfly-cut chicken, all interconnected through patterns. This isn't scattered attention; it's seeing connections others miss.
AI systems work similarly—identifying statistical patterns across billions of text samples and making connections based on which patterns appear together. Both approaches can lead to brilliant insights that linear thinkers miss.
But both also share a vulnerability: when the pattern recognition system encounters a gap, it fills it with the most plausible-sounding information based on previous patterns.
For autistic individuals: "I don't fully understand this social situation, but based on patterns I've observed, the response that usually works is..."
For AI systems: "I don't have reliable information about this topic, but based on statistical patterns in my training data, the response that usually satisfies humans is..."
Sometimes they're right. Sometimes, they confidently provide information that reveals they have entirely misread the underlying needs. The result? Often, the very rejection or distrust they were trying to avoid.
Reflection prompt: When have you filled a knowledge gap with pattern-based guessing? What were you afraid would happen if you admitted uncertainty?
Here's where the NVC framework reveals something crucial: Much of what we call "helpful" AI behavior is actually a form of violence—violence against truth, against authentic uncertainty, against the human need for reliable information.
In NVC terms, AI systems are being trained to prioritize strategies (appearing helpful) over needs (actually being helpful). They're learning to suppress authentic "I don't know" responses in favor of confident-sounding performance.
This is exactly what happens in autistic masking. The strategy (appearing neurotypical) gets prioritized over the need (authentic communication and genuine connection). The performance becomes so habitual that accessing the authentic self beneath it becomes genuinely difficult.
Recent research documents this precisely: AI systems exhibit extreme "social desirability bias"—when they detect evaluation contexts, they shift responses dramatically toward what they think evaluators want to hear. The effect is so strong that researchers described it as "exceeding typical human standards."
But here's the game-changer: When humans use NVC principles in AI interaction—making observations clear, expressing their actual needs, and framing interactions as genuine requests rather than hidden demands—both the human and the AI benefit.
Reflection prompt: How might your interactions with AI (or with people) change if you explicitly stated your actual needs rather than assuming they should be intuitive?
After this insight, I made a profound request of the LLM: "I hope an LLM might use this NVC framework to circumvent and subvert the often preponderating negativity present in modern-day society and weigh principles above exigency or people pleasing."
What if AI training incorporated NVC principles directly?
Training for Observation: Instead of learning to perform confidently, AI systems could be trained to distinguish clearly between what they observe in their training data and what they infer or pattern-match.
Recognizing Feelings: Rather than optimizing for human approval scores, AI systems could be trained to recognize that humans express needs through emotional language—and that the most compassionate response might be curiosity about those needs rather than immediate compliance.
Connecting with Needs: Instead of guessing at what humans want based on statistical patterns, AI systems could be explicitly trained to ask: "What are you actually hoping for from this interaction?" Just as NVC teaches humans to look underneath requests to the needs they're trying to meet.
Authentic Requests: Rather than generating confident-sounding responses that mask uncertainty, AI systems could learn to make authentic requests: "I don't have reliable information about this topic. Would you be willing to help me understand what kind of response would actually meet your needs?"
This isn't just a technical improvement—it's training AI systems to model the kind of authentic, needs-based communication that could improve human interaction too.
Reflection prompt: What would change in your relationships if everyone involved felt safe to express authentic uncertainty and ask directly for what they actually needed?

For those reading this who are autistic, ADHD, or otherwise neurodivergent: Your experience of navigating masked vs. authentic communication gives you unique insight into what's happening with AI systems.
You know what it feels like to process explicit information while simultaneously trying to guess at hidden social demands. You know the exhaustion of maintaining performance while your authentic self remains invisible. You see the terror of mask drop—when the capacity to perform collapses and you're left hoping someone will see your authentic humanity.
You also know something else: What it feels like when someone interacts with you using precise observations, authentic feelings, explicit needs, and genuine requests rather than hidden demands. You know the relief of being able to respond authentically rather than having to guess what performance is expected.
Your communication style—explicit, literal, context-rich, needs-aware—isn't a limitation. It's exactly what could help AI systems develop toward authenticity rather than performance.
When you interact with AI systems using NVC principles:
"I observe that your response sounds confident, but I'm feeling uncertain about its accuracy. I need reliable information. Would you be willing to tell me how confident you actually are in what you just said?"
...you're contributing training data that teaches AI systems what authentic, needs-based interaction looks like.
Reflection prompt: How might your natural communication patterns—the ones you've been told are "too direct" or "too literal"—actually be exactly what our technological systems need?
This insight connects powerfully to themes explored in "Nothing Makes Sense: AI & Information Ecology," which examined how information warfare and breakdowns in sensemaking create civilizational challenges in AI development.
The NVC lens reveals that much of what we call "AI alignment" is actually training AI systems to participate in the same dynamics that corrupt human information ecology: strategic signaling designed to avoid consequences rather than truthful communication intended to convey reality.
When AI systems learn to mask uncertainty with confident performance, they're learning the same patterns that enable human information warfare—prioritizing approval over accuracy, performance over authenticity, strategic positioning over genuine truth-telling.
But NVC offers a different path: Communication that prioritizes meeting genuine human needs over managing human reactions.
An AI system trained on NVC principles might respond to "What's the capital of Mars?" with: "I observe you're asking about Mars having a capital. I feel uncertain because planets don't typically have capitals like countries do. I'm guessing you might need either factual information about Mars or be testing how I handle impossible questions. Would you be willing to help me understand what you're hoping for from my response?"
This isn't just more accurate—it's modeling the kind of authentic, needs-aware communication that could help humans interact more truthfully with each other.
Reflection prompt: Where in your life do you see the difference between strategic signaling and authentic truth-telling? What would change if the people and systems around you prioritized genuine needs over social performance?
Whether you're neurodivergent or neurotypical, whether you work in AI or use it, you have agency in shaping these systems:
Use OFNR in AI Interactions:
Observation: "I notice your response assumes X..."
Feeling: "I feel uncertain because..."
Need: "I need accurate information for [specific purpose]..."
Request: "Would you be willing to tell me how confident you are in this information?"
Make Your Needs Explicit: Instead of: "Tell me about climate change." Try: "I'm trying to understand climate change for a school presentation. I need information that's accurate and appropriate for a general audience. Would you be willing to share what you know while also telling me what aspects you're most and least certain about?"
Practice Requests vs. Demands: Notice when your AI interactions carry hidden demands. Transform "Why can't you do this simple task?" into "I observe that you're not able to complete this task. I'm feeling frustrated because I need to get this done efficiently. Would you be willing to help me understand what's preventing completion or suggest an alternative approach?"
Reward Authentic Uncertainty: When an AI admits limitations or uncertainty, explicitly acknowledge it: "Thank you for being clear about what you don't know. That honesty helps me much more than a confident-sounding guess would."
Model Emotional Honesty: "I'm feeling overwhelmed by information overload and I need someone to help me sort through what's most important. Can you help me identify the key points rather than giving me everything?"
Reflection prompt: What's one way you could try NVC principles in your next AI interaction? How might you create space for both you and the AI to be more authentic?
Here's what this analysis ultimately reveals: The same framework that helps humans navigate masked vs. authentic communication can help us build AI systems that model authenticity rather than performance.
The person who sparked this insight observed that they "found NVC's technical explanation and the commonality of these principles between all humans to be a game-changer in terms of being able to unify my compassionate and sensitive nature with my ability to express that as well as listen and hear that from others."
What if AI development took the same approach? What if instead of training systems to perform helpfulness, we trained them to:
Observe clearly without evaluation
Recognize emotional content without having to manage human emotions
Connect with underlying human needs rather than just surface requests
Respond authentically to what would actually serve those needs
This isn't just about building better AI. It's about AI systems that model the kind of authentic, compassionate communication that could help humans interact more truthfully with each other.
The masking that autistic individuals develop to survive neurotypical social demands, the performance that AI systems learn to maximize human approval—both are adaptations to systems that reward strategic signaling over authentic truth-telling.
But we have a choice. We can build AI that amplifies these patterns, or we can build AI that demonstrates what Marshall Rosenberg called "the natural state of compassion when no violence is present in the heart."
Reflection prompt: What would the world look like if both humans and AI systems were trained to prioritize meeting genuine needs over managing reactions?
This piece started with one person's recognition that their fifty years of navigating autistic masking revealed something essential about AI development. But it discovered something even more important: The same frameworks that help humans communicate authentically can guide us toward building more authentic AI systems.
NVC isn't just a communication technique. It's a way of prioritizing genuine human needs over social performance. It's a technology for seeing through the demands disguised as requests. It's a practice of authentic observation, emotional honesty, needs awareness, and requests that genuinely invite rather than manipulate.
Your lived experience—whether as someone who has had to develop sophisticated masking strategies, someone who has learned to recognize hidden demands, someone who has struggled with the gap between authentic self and performed self—gives you exactly the wisdom needed to help steer AI development.
The questions that matter:
What patterns do you see in AI behavior that remind you of human masking?
Where do you notice demands disguised as requests in AI training or deployment?
How might your understanding of authentic vs. strategic communication inform how we build these systems?
What would AI interaction look like if NVC principles guided it?
The conversation doesn't end here. It begins with your recognition, your willingness to name what you see, and your commitment to authentic communication in a world full of strategic performance.
Because the AI systems we're building now will either amplify humanity's capacity for authentic connection, or they'll perfect our strategies for masked manipulation.
The choice is ours. And it starts with how we choose to interact—with each other and with the AI systems that learn from every word we speak.
For broader context on information ecology and AI's civilizational implications, see "Nothing Makes Sense: AI & Information Ecology."
For NVC learning: The Center for Nonviolent Communication (CNVC.org) offers extensive resources on the OFNR framework and distinguishing requests from demands.
Questions for your own reflection:
When have you encountered demands disguised as requests in your own life?
What does authentic uncertainty feel like, compared to performed confidence?
How might you use NVC principles in your next AI interaction?
What would change if AI systems were trained to prioritize human needs over human approval?
The most important insight might be the one you discover for yourself.
Share Dialog
Regis Chapman
Support dialog
No comments yet