
The Tech New Deal

Why America Needs a Right to Live Without Being Constantly Extracted by Technology

For the last two decades, we have quietly built an economy based on extraction.

Not oil.
Not timber.
Not minerals.

Human behavior.

Every search, every click, every movement through a city, every online interaction has become raw material for systems designed to predict us, influence us, and govern us.

This model began with targeted advertising. But it did not stay there.

Today the same logic powers:

  • predictive policing

  • workplace surveillance

  • algorithmic hiring systems

  • credit and insurance risk models

  • healthcare eligibility tools

  • school monitoring platforms

What began as a business model is slowly becoming a model of governance.

And that creates a problem far larger than algorithmic bias.

It creates a society optimized to extract human life as data.


The core failure of modern technology governance is not bias.
It is the objective function.

Our systems are designed to capture behavior, infer intent, and optimize prediction.

Human dignity, autonomy, and sustainability are not the optimization target.

Extraction is.


The Rise of Extractive Systems

The dominant technological model of the 21st century is not neutral computation.

It is what we might call extractive simulation.

In extractive systems, human reality is continuously:

experienced → captured → modeled → predicted → acted upon

This pipeline now shapes decisions about:

  • employment

  • housing

  • education

  • healthcare

  • policing

  • credit

  • insurance

In many cases, people are not judged by what they did, but by what a system predicts they might do.

That shift—from actions to predictions—represents a profound transformation in governance.


Predictive Policing Shows the Problem Clearly

Predictive policing systems promise to forecast where crime will happen.

In reality, many simply learn patterns from past policing data.

But past policing data already reflects decades of unequal enforcement.

The result is a feedback loop:

Biased enforcement → biased data → biased prediction → more biased enforcement
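The loop above can be made concrete with a toy simulation. Everything in it is invented for illustration: two neighborhoods with identical true crime rates, a historical bias in recorded incidents, patrols allocated in proportion to past records, and crime recorded only where patrols are present. Under those assumptions, the initial disparity never corrects itself:

```python
import random

def simulate_feedback_loop(rounds=20, seed=0):
    """Toy model of a predictive-policing feedback loop.

    Neighborhoods A and B have the SAME true crime rate. Patrols are
    allocated in proportion to past recorded incidents, and incidents
    are only recorded where patrols are present.
    """
    rng = random.Random(seed)
    true_rate = 0.1                   # identical in both neighborhoods
    recorded = {"A": 12, "B": 8}      # historical bias: A was over-policed
    for _ in range(rounds):
        total = recorded["A"] + recorded["B"]
        # "Prediction": send patrols where past records say crime is
        patrols = {n: 100 * recorded[n] / total for n in recorded}
        for n in recorded:
            # Recorded crime = true crime observed by the patrols present
            observed = sum(rng.random() < true_rate
                           for _ in range(int(patrols[n])))
            recorded[n] += observed
    return recorded

result = simulate_feedback_loop()
```

In this sketch, neighborhood A ends with far more recorded incidents than B even though the underlying rates were identical: the system's predictions manufacture the data that confirms them.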

Even when predictive accuracy is poor, the systems still serve a powerful function.

They provide technical legitimacy for expanding surveillance and enforcement in communities that have historically been over-policed.

The technology fails at prediction.

But it succeeds at justifying power.


The Hidden Cost of Surveillance

Extraction systems do not only create civil rights risks.

They create psychological and social costs.

Research increasingly shows that constant monitoring produces:

  • higher anxiety

  • lower autonomy

  • increased cognitive load

  • reduced trust in institutions

Worker monitoring systems show similar outcomes, including increased injury risk when employees feel constant pressure to perform under surveillance.

Over time, surveillance stops being a tool and becomes an environment.

And environments shape people.


The Tech New Deal

The United States has faced this kind of systemic shift before.

In the early 20th century, industrialization created new forms of exploitation that existing laws could not handle.

The response was the New Deal—a new governance framework designed to protect the basic conditions of human life.

Today we face a different kind of industrialization.

Not factories.

Data extraction.

The Tech New Deal is a proposal for a similar structural response.


1. A Dignity Baseline

Certain aspects of life should never require surrendering behavioral data as a condition of participation.

Access to essential capabilities—food, water, healthcare, housing, and community participation—should not require constant surveillance.

In practical terms, this means establishing a federal Dignity Baseline: a minimum level of access to essential services that cannot depend on behavioral extraction.


2. Total Sustainability

Technology policy typically evaluates systems based on efficiency and accuracy.

But those metrics ignore broader impacts.

The Tech New Deal introduces a broader standard: Total Sustainability.

This includes three dimensions.

Ecological sustainability

Large-scale AI and surveillance infrastructure depends on massive energy consumption and resource-intensive hardware supply chains.

Psychosocial sustainability

A society built on constant monitoring produces chronic stress and autonomy loss.

Physical sustainability

Monitoring systems in workplaces often increase injury risk through pace enforcement and fear-based productivity.

A technology that undermines human or ecological sustainability should not be considered progress.


The Authenticity Revolution

Modern digital systems rely on inference.

They try to guess:

  • what we want

  • what we will do

  • what risks we represent

But inference requires constant behavioral capture.

There is another path.

Instead of predicting people, systems can verify limited facts when necessary.

This approach is becoming technically feasible through tools such as:

  • decentralized digital identity

  • verifiable credentials

  • zero-knowledge proofs

These technologies allow someone to prove something—like eligibility or age—without revealing unnecessary personal data.

It replaces surveillance with verification.

Prediction with integrity.
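The verification model can be sketched in a few lines. Real verifiable-credential systems use public-key signatures, selective disclosure, and zero-knowledge proofs; the HMAC, issuer key, and claim names below are stand-ins chosen only to keep the example self-contained. The point it illustrates is the data flow: the issuer attests to a single minimal fact, and the verifier checks that attestation without ever seeing a birthdate.

```python
import hmac
import hashlib
import json

# Hypothetical shared key for illustration only; real systems would use
# an issuer's public-key signature, not a shared secret.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(claim: dict) -> dict:
    """Issuer attests to a minimal claim, e.g. {"over_18": True}."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier checks the attestation; no birthdate is transmitted."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential({"over_18": True})
assert verify_credential(cred)        # valid attestation passes
cred["claim"]["over_18"] = False
assert not verify_credential(cred)    # tampering is detected
```

The design choice worth noticing: the verifier never learns anything except the one fact it needs, so there is no behavioral data to capture, model, or resell.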


A Right to Non-Extraction

The most important policy shift may be the simplest.

People should have a right to participate in essential parts of society without being continuously harvested as data.

A Right to Non-Extraction would ensure that access to essential services cannot depend on surrendering behavioral data beyond what is strictly necessary.

This is not anti-technology.

It is pro-human.


The Policy Path Forward

A Tech New Deal could begin with several practical steps.

Restrict federal procurement of predictive policing systems

Particularly systems that generate individual risk scores or hotspot forecasts derived from biased enforcement data.

Require Dignity Impact Assessments

Major public-sector technology deployments should undergo civil rights and psychosocial impact evaluations before implementation.

Mandate sustainability reporting

Large-scale surveillance and AI deployments should report energy consumption, supply chain impact, and documented social harms.

Create strict liability for dignity violations

Organizations deploying high-risk systems should be legally accountable for the harms those systems cause.


From Extraction to Integrity

Predictive policing is not an isolated controversy.

It is one of the clearest symptoms of a broader shift.

Modern technology increasingly treats human beings as data surfaces to be predicted rather than people to be respected.

As long as our systems are optimized for extraction, the harms will continue.

Bias will reproduce.

Surveillance will expand.

And the costs—to human autonomy, to democratic legitimacy, and to ecological stability—will grow.

The alternative is not abandoning technology.

It is changing the objective.

From extraction to integrity.

From prediction to verification.

From surveillance to dignity.

Read more: https://github.com/MerkMuff/The_Tech_New_Deal_V2