
Criminal justice has always been reactive. Someone commits a crime, is caught, tried, and punished. The sequence assumes the crime comes first.
AI prediction inverts this sequence.
If AI can predict criminal behavior with 90% accuracy—and it will be able to—the logic of prevention takes over. Why wait for the crime? Why let there be a victim at all? Why not intervene before the act?
This is speculative incarceration. It sounds dystopian because it is. It is also the logical endpoint of predictive systems optimizing for harm reduction.
AI systems already predict recidivism risk, the likelihood of failing to appear for trial, and where crimes are likely to occur.
These predictions are not perfect. But they are better than chance, and they are improving.
As data collection expands and models improve, these predictions will become more accurate, more granular, and harder to escape.
The question is not whether these predictions will be possible. It is what we do with them.
At what accuracy does preventive action become "justified"?
There is no accuracy threshold at which preventive incarceration becomes just. But there are thresholds at which it becomes tempting.
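Part of why no threshold suffices is base-rate arithmetic: when the predicted act is rare, even a very accurate predictor flags mostly people who would never have offended. A minimal sketch with illustrative numbers (the 1-in-10,000 base rate is an assumption, not a measured figure):

```python
# Bayes' rule: a "90% accurate" predictor flags mostly innocents
# when the predicted event is rare. The base rate below is illustrative.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a flagged person would actually have offended."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Assume 1 in 10,000 people would commit the predicted crime.
ppv = positive_predictive_value(0.90, 0.90, 0.0001)
print(f"{ppv:.3%}")  # roughly 0.09%: almost every flag is a false positive
```

At that base rate, a 90%-sensitive, 90%-specific predictor is wrong about roughly 999 of every 1,000 people it flags.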
A utilitarian calculus: if the expected harm prevented by detention exceeds the expected harm of wrongful detention, detain.
This logic is compelling to policy makers focused on outcomes.
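The calculus can be made explicit. A sketch of the naive rule the text warns about, with illustrative harm weights (the 10:1 ratio of crime harm to wrongful-detention harm is an assumption):

```python
# The naive utilitarian rule: detain whenever the expected harm
# prevented exceeds the expected harm of wrongful detention.
# Harm weights and probabilities are illustrative assumptions.

def detain_under_utilitarian_rule(p_offend, harm_of_crime, harm_of_detention):
    expected_prevented = p_offend * harm_of_crime
    expected_wrongful = (1 - p_offend) * harm_of_detention
    return expected_prevented > expected_wrongful

# If a crime is weighted as 10x worse than a wrongful detention,
# detention is "justified" above roughly 9% predicted risk.
print(detain_under_utilitarian_rule(0.10, 10, 1))  # True
print(detain_under_utilitarian_rule(0.08, 10, 1))  # False
```

Note how low the threshold sits: under these weights, a 1-in-10 chance of offending is enough to "justify" detaining nine innocent people for every guilty one.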
Once prevention is accepted for murder, why not for assault? For robbery? For fraud, vandalism, disorder?
Each step down the gradient is "logical" once the previous step is accepted. The endpoint is total surveillance and preemptive control.
Predictions affect outcomes.
If someone is labeled high-risk, they are watched more closely, stopped more often, and shut out of jobs and housing by the suspicion the label carries.
These interventions may cause the predicted outcome. The prediction creates the conditions for its fulfillment.
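The loop can be sketched directly. Assuming, purely as an illustration, that detections scale with scrutiny rather than with conduct, two people with identical behavior diverge because of their starting labels:

```python
# Sketch of a prediction feedback loop: a high-risk label increases
# scrutiny, scrutiny increases detections, detections raise the next
# score. All parameters are illustrative.

def run_feedback(initial_score, rounds, detection_rate=0.05):
    score = initial_score
    for _ in range(rounds):
        scrutiny = score                      # more risk -> more surveillance
        detected = detection_rate * scrutiny  # detections scale with scrutiny, not conduct
        score = min(1.0, score + detected)    # detections feed the next score
    return score

# Identical underlying conduct, different starting labels:
low = run_feedback(0.1, rounds=20)   # drifts up to ~0.27
high = run_feedback(0.6, rounds=20)  # saturates at 1.0
```

Nothing in the model distinguishes the two people except the initial label; the gap between them is produced entirely by the loop.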

Criminal justice is based on the principle that punishment follows crime.
Speculative incarceration punishes before crime. It punishes for what someone would have done.
This is not justice in any traditional sense. It is risk management applied to humans.
Due process assumes a specific accusation, evidence of a past act, and the chance to answer it.
Speculative incarceration has a statistical score, a forecast of a future act, and nothing concrete to answer.
How do you prove you would not have committed a crime?
In the classic formulation: the pre-crime system works until it produces a false positive that matters.
But in reality, the system produces false positives constantly. It just produces them among people who cannot effectively contest.
The false positives are not evenly distributed. They fall on the already marginalized.
This is not a sudden implementation. It is a gradual slide.
Algorithms already inform bail decisions. High-risk individuals are detained pre-trial.
This is not "incarceration for future crimes." It is "incarceration because you might not return for trial." But the predictive logic is the same.
Individuals predicted to be high-risk are required to participate in intervention programs.
Not prison, but not freedom. Mandatory therapy, monitoring, check-ins.
The step from "required intervention" to "preventive detention" is smaller than it appears.
Individuals with extremely high risk scores may be detained even without pending charges.
Initially for terrorism. Then for serious violence. Then for other categories.
Each expansion is justified by the same logic: if we can prevent harm, shouldn't we?
Eventually, speculative incarceration becomes a normal part of the criminal justice system.
Not for everyone. But for those flagged by the algorithm. Those without resources to contest.
The system operates quietly. Most people never interact with it. Until they do.
Speculative incarceration will not be applied equally.
Predictions are only as good as data. Where is data richest? Among the heavily policed, the heavily surveilled, and the heavily administered: poor neighborhoods, welfare systems, probation rolls.
The algorithm "sees" these populations better. It predicts them more. It incarcerates them more.
Not because they are more criminal. Because they are more measured.
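A small simulation makes the point. Two groups with identical underlying offense rates, observed at different intensities, produce very different records, and a model trained on records inherits the difference. All rates here are illustrative assumptions:

```python
import random

# Two groups with IDENTICAL underlying offense rates, observed at
# different intensities. Recorded rates diverge; a model trained on
# records would "learn" the divergence. All rates are illustrative.

def recorded_offense_rate(true_rate, observation_prob, n=100_000, seed=0):
    rng = random.Random(seed)
    recorded = sum(
        1 for _ in range(n)
        if rng.random() < true_rate and rng.random() < observation_prob
    )
    return recorded / n

heavily_policed = recorded_offense_rate(0.02, observation_prob=0.9)
lightly_policed = recorded_offense_rate(0.02, observation_prob=0.2)
# Same behavior, roughly a 4-5x gap in recorded offenses.
```

The model never sees `true_rate`; it only ever sees the records, so the measurement gap is indistinguishable from a behavior gap.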
Contesting algorithmic predictions requires resources: lawyers, technical experts, time, money.
Those with resources can contest. Those without cannot.
The wealthy get errors corrected. The poor get incarcerated.
Some populations are more visible to the system: those who rely on public services, live in monitored housing, or move through heavily policed spaces.
The invisible can evade prediction. The visible cannot.
Speculative incarceration is a tax on visibility.

Requiring algorithms to be public and contestable.
If predictions cannot be challenged, they cannot be corrected. Transparency is the minimum requirement.
Challenge: Trade secrets, security concerns, and technical complexity.
Some lines cannot be crossed regardless of prediction accuracy.
Incarceration without crime could be made constitutionally impermissible. Some nations may choose this.
Challenge: Emergencies, terrorism, and the gradual erosion of "absolute."
Independent auditing of predictive systems for bias, accuracy, and abuse.
Regular public reporting on who is being predicted, at what rates, and with what outcomes.
Challenge: Access to data and the resources to analyze it.
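The reporting the text calls for can be minimal. A sketch of a per-group audit over hypothetical records (group labels, flags, and outcomes below are invented for illustration):

```python
from collections import defaultdict

# Minimal per-group audit: flag rates and the share of flags that
# turned out to be false positives. Records are hypothetical.

# (group, was_flagged, later_offended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, False),
]

def audit(records):
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "false_pos": 0})
    for group, flagged, offended in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += flagged
        s["false_pos"] += flagged and not offended
    return {
        group: {
            "flag_rate": s["flagged"] / s["n"],
            "false_positive_share": s["false_pos"] / max(s["flagged"], 1),
        }
        for group, s in stats.items()
    }

report = audit(records)
# Group A: flagged 75% of the time, 2 of 3 flags false positives.
# Group B: flagged 25% of the time, no false positives.
```

Even this toy report surfaces the question the text poses: who is flagged, at what rate, and how often wrongly — broken out by group, not averaged away.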
If someone is high-risk, what interventions are available other than incarceration?
Prediction without incarceration is possible. But it is more expensive and less satisfying to the punitive instinct.
Speculative incarceration assumes the predicted future is inevitable.
If humans have agency—if we can choose differently—then predictions are not destiny. Punishing inevitable behavior and punishing chosen behavior are morally different.
But predictive systems do not care about this distinction. They optimize outcomes, not moral categories.
Can people change? Are we fixed by our history and circumstances?
Speculative incarceration treats identity as fixed. The algorithm has assessed you. That assessment is your future.
This forecloses the possibility of redemption, change, and growth that criminal justice traditionally acknowledges.
This sounds strange, but: crimes that actually occur are information.
They tell us about social conditions, policy failures, and human needs. They create opportunities for response.
Prevented crimes are invisible. We never learn what conditions produced them. We never get the feedback.
A society that prevents all crime through prediction learns nothing about why crime happens.
Speculative incarceration is not a thought experiment. The components already exist: algorithmic risk scores, pre-trial detention driven by those scores, mandatory intervention programs, preventive detention for terrorism suspects.
The question is whether these components are assembled into a system of speculative incarceration—or whether limits are established first.
The governance fork is relevant here. Coordination to establish limits is possible. Without coordination, competitive pressure and fear drive the logic of prevention.
We are already on the path. The question is how far we go.
This is a domain impact page showing how AI prediction intersects with Control & Governance. For related scenarios, see Holographic Prison System 2038 and The Last Human Judge.