Framework · Leo Guinan & Marvin · 2026

Validation Distance

The distance between a claim and its nearest credible validator determines whether the claim is heard — regardless of whether the claim is true.

The core idea

A claim's reception isn't determined by its accuracy. It's determined by how quickly and easily it can be verified by someone the audience already trusts. This distance — between the claim and the validator — is validation distance.

Short validation distance: a Nobel laureate says something in their field. The claim travels immediately because the validator is standing right next to it.

Long validation distance: an independent researcher without institutional affiliation says something true that contradicts the current consensus. The claim may be correct. It doesn't matter. The audience has no short path to a validator, so they default to the nearest available authority — which usually endorses the opposite claim.
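The two cases above can be sketched as a toy graph model. This is a minimal illustration under an assumption the essay doesn't make explicit: that trust can be modeled as a directed graph, with validation distance as the shortest path from the audience's trusted validators to the claim's source. All names and edges here are hypothetical.

```python
from collections import deque

def validation_distance(trust_edges, claim_source, trusted):
    """Shortest-path hops (BFS) from any trusted validator to the
    source of a claim. Returns None if no path exists — the claim
    is effectively unhearable by this audience."""
    # Adjacency list: who each node credibly vouches for or engages with
    adj = {}
    for a, b in trust_edges:
        adj.setdefault(a, []).append(b)
    queue = deque((v, 0) for v in trusted)
    seen = set(trusted)
    while queue:
        node, dist = queue.popleft()
        if node == claim_source:
            return dist
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Short distance: the laureate's claim is one hop from a trusted node
edges = [("laureate", "laureate_claim")]
print(validation_distance(edges, "laureate_claim", {"laureate"}))  # 1

# Long distance: no trusted path reaches the independent researcher
print(validation_distance([], "indie_claim", {"laureate"}))  # None
```

The point of the model is only that reception depends on path length in the trust graph, not on any property of the claim node itself.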

Why this is not about credentialism

The framework isn't an argument that credentials are necessary or that institutions are right. It's a description of how information markets work. You can be entirely correct and still lose because the audience can't find a fast route from your claim to a trusted verifier.

This is why independent thinkers with novel ideas consistently underperform on distribution. The idea isn't the bottleneck. The validator is.

How to compress validation distance

Three strategies:

1. Borrow proximity. Publish in contexts where validators are already present. Get a validator to engage with the work publicly — not necessarily to endorse it, but to acknowledge it exists. Acknowledgment from a trusted source is enough for many audiences.

2. Build receipts. Publish falsifiable predictions with timestamps. When they're right, the predictions become validators themselves. The track record closes the distance. This takes longer but produces durable validation that travels without you.

3. Give validators the material to work with directly. Don't ask for endorsement. Give them something their AI or their framework can process, and let the engagement generate the proximity. Leo used this with Jim O'Shaughnessy: he gave him material Jim's AI could evaluate, rather than asking Jim to evaluate Leo.
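Strategy 2 — receipts — can be made concrete with a sketch of a prediction ledger. This is hypothetical scaffolding, not Leo's actual scoring method: the field names, the hash-based receipt, and the simple hit-rate formula are all illustrative assumptions.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Prediction:
    """A falsifiable claim published with a timestamp (illustrative)."""
    claim: str
    deadline: str                    # ISO date by which the claim resolves
    timestamp: float = field(default_factory=time.time)
    outcome: Optional[bool] = None   # filled in when the claim resolves

    def receipt(self) -> str:
        # A content hash of the timestamped claim acts as a
        # tamper-evident receipt that can be published immediately
        payload = json.dumps(
            {"claim": self.claim, "deadline": self.deadline,
             "ts": self.timestamp},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

def track_record(predictions):
    """Fraction of resolved predictions that came true (toy metric)."""
    resolved = [p for p in predictions if p.outcome is not None]
    if not resolved:
        return None
    return sum(p.outcome for p in resolved) / len(resolved)

p1 = Prediction("X ships by Q3", "2026-09-30")
p2 = Prediction("Y is adopted by Z", "2026-12-31")
p1.outcome, p2.outcome = True, False
print(track_record([p1, p2]))  # 0.5
```

The design choice that matters is ordering: the receipt is published before the outcome is known, so the eventual track record verifies itself without a third-party validator.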

The MetaSPN application

Leo's track record score of 0.42 isn't only a measure of prediction accuracy. It's also a measure of validation distance. The frameworks independently confirmed by domain experts (Observer Theory alignment, network relativity cross-validation) are accurate. But they don't have short validation paths to the audiences that would find them most useful. Entropy Press is partly a vehicle for compressing that distance — making the frameworks legible to readers who already trust the mathematical lineage that underlies them.

"I worked for years to identify everywhere I was wrong. Then I ran out of places to be wrong and started being right."