Xuewu Liu’s Substack

From Cancer to AI Hallucinations: The Principle of Predictable Intervention
Xuewu Liu
Jul 12, 2025

I hold a bachelor's degree in meteorology and a master's degree in economics. These disciplines laid the foundation for my invention of intra-tumoral chlorine dioxide therapy. More crucially, though, I possess a philosophical tool that has guided both my scientific and technological insights: the Principle of Predictable Intervention (PPI): https://doi.org/10.5281/zenodo.15861785

This idea didn't arise suddenly. It evolved quietly during the years I spent developing this cancer treatment. PPI was implicit in my thought process long before I formally articulated it, shaping how I interpreted intervention, feedback, and complexity in biological systems.

My path to the invention involved chance. But through these contingencies I was gifted something rare: a philosophical breakthrough. I realized that traditional cancer therapies fail because they attempt violent interventions at levels where predictability is absent. Researchers launch biochemical and radiological attacks without recognizing that intervention on an unpredictable layer can never achieve therapeutic stability.

Although my therapy is still in its early stages, I am confident that it can eventually replace conventional cancer treatments—not just because it has shown strong efficacy and minimal toxicity in early animal and human trials, but because it was born from, and remains fully consistent with, my philosophical theory. More precisely: PPI did not follow the invention—PPI emerged from the very logic that made the invention work.

Yesterday, I published the first written formulation of this philosophy on the preprint platform Zenodo. Today, I am thinking about what it would mean to apply this same idea—the Principle of Predictable Intervention—to the development of modern artificial intelligence.


AI Is Following the Same Mistaken Path as Traditional Cancer Therapy: Brutal Force Against Unpredictability

Let’s begin with a bold analogy. The way mainstream AI models are trained today is strikingly similar to how mainstream oncology treats tumors:

  • Cancer therapies lack causal models of tumors and rely on chemotherapy, radiation, or molecular toxins to forcibly disrupt cellular systems.

  • AI development lacks causal clarity in data and relies on massive scale—more data, more parameters, more feedback—to drive learning.

The core assumption shared by both:

  • That complex systems can be “solved” by resource accumulation and brute force.

  • That feedback loops can be ignored or overridden.

  • That intervention at any layer is fair game, regardless of the feedback stability at that layer.

And the results?

  • In cancer: systemic collapse, immune failure, recurrence.

  • In AI: hallucination, semantic drift, data contamination, exploding costs.

This is not a coincidence. It’s a methodological failure.


The Principle of Predictable Intervention (PPI): The Only Sustainable Approach to Complex Systems

The core assertion of PPI is simple but profound:

"When outcomes are unpredictable relative to inputs, intervention is no longer epistemically valid."

This rule draws a firm boundary: not all system layers are eligible for intervention.

In any complex system—tumor, economy, neural net—there are three kinds of feedback regions:

  1. Zone A: Predictable Zone
    Intervention is valid. Causality is clear. Feedback is stable and verifiable.

  2. Zone B: Chaotic Zone
    Intervention yields unstable feedback. Effects are nonlinear, probabilistic, or inverted.

  3. Zone C: Uncoupled Zone
    Intervention has no observable effect. There is no measurable feedback loop.

PPI is not anti-intervention; it is pro-structure. It says: intervene only in Zone A. To do otherwise is to burn energy and trust in the dark.
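As a toy illustration only (my own operationalization, not taken from the Zenodo paper): one could decide a layer's zone empirically by repeating the same intervention and examining the feedback. No measurable mean effect suggests Zone C; a large mean effect with wildly unstable responses suggests Zone B; a stable, repeatable effect suggests Zone A. The thresholds below are hypothetical placeholders.

```python
import statistics

def classify_zone(responses, effect_threshold=0.1, stability_threshold=0.2):
    """Toy zone classifier for PPI (illustrative thresholds, not canonical).

    `responses` are the observed effects of repeating one identical
    intervention on a system layer.
    """
    mean_effect = statistics.mean(responses)
    spread = statistics.stdev(responses) if len(responses) > 1 else 0.0

    if abs(mean_effect) < effect_threshold:
        # No measurable feedback loop at all.
        return "Zone C: uncoupled (no observable effect)"
    if spread > stability_threshold * abs(mean_effect):
        # Feedback exists but is unstable relative to its size.
        return "Zone B: chaotic (unstable feedback)"
    # Clear causality, stable and verifiable feedback.
    return "Zone A: predictable (intervention is valid)"

print(classify_zone([0.01, -0.02, 0.0, 0.01]))     # near-zero effect -> Zone C
print(classify_zone([1.0, -0.8, 2.1, -1.5, 0.9]))  # wild swings -> Zone B
print(classify_zone([0.95, 1.05, 1.0, 0.98]))      # stable effect -> Zone A
```

The point of the sketch is only that the zones are, in principle, decidable from feedback data rather than asserted by fiat.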


AI Training Today: A Chaotic Mix of Zone B and C

Now let’s examine the behavior of modern LLM development:

  • Web-scale corpora (containing gossip, opinion, lies, idle chatter) are indiscriminately scraped.

  • These are used to train models on co-occurrence patterns and token probabilities.

  • Then RLHF and prompt tuning are used to patch the outputs.

But what’s missed is this:

  • The majority of this data belongs to Zone B (chaotic) or Zone C (uncoupled).

  • There is no stable causal structure between input and output.

  • The model is mimicking illusions, not extracting structure.

This is akin to chemotherapy: targeting everything in sight, hoping the system will comply. But it doesn't. It collapses. It grows confused.

