Nabla Infinity: An Epistemic Confidence Framework for Intelligence
Inside the Nabla epistemic framework: how Prismatic quantifies certainty, decomposes uncertainty into epistemic and aleatoric components, and prevents intelligence analysts from drawing overconfident conclusions.
Tomas Korcak (korczis)
Prismatic Platform
Intelligence analysis is the art of making decisions under uncertainty. The challenge is not just knowing facts -- it is knowing how much you can trust those facts. Nabla Infinity is Prismatic's epistemic confidence framework. It quantifies certainty, distinguishes between types of uncertainty, and prevents analysts from drawing overconfident conclusions from incomplete data.
The Problem with Binary Confidence
Traditional OSINT tools present results as "found" or "not found." This binary framing loses critical information. A company address scraped from a web page and the same address confirmed in an official registry are both "found," but their reliability differs dramatically. Decisions based on web-scraping results should be made differently than decisions based on registry data.
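The contrast is easiest to see side by side. A minimal sketch, with illustrative field names rather than the platform's actual schema:

```elixir
# Binary framing: both results look identical.
binary = [
  %{fact: "Registered address: Prague 1", found: true},
  %{fact: "Registered address: Prague 1", found: true}
]

# Source-aware framing: the registry hit clearly outweighs the scrape.
scored = [
  %{fact: "Registered address: Prague 1", source: :official_registry, confidence: 0.95},
  %{fact: "Registered address: Prague 1", source: :web_scrape, confidence: 0.55}
]
```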
The Nabla Confidence Score
Every piece of intelligence in Prismatic carries a Nabla confidence score:
```elixir
%NablaConfidence{
  value: 0.87,        # Overall confidence [0.0, 1.0]
  epistemic: 0.05,    # Uncertainty from incomplete information
  aleatoric: 0.08,    # Uncertainty from inherent ambiguity
  sources: [:ares, :justice, :whois],
  evidence_count: 12,
  last_verified: ~U[2026-03-20 14:30:00Z]
}
```
The value is the headline confidence. The epistemic and aleatoric components explain why it is not 1.0: here, 1.0 - 0.05 - 0.08 = 0.87.
Epistemic vs. Aleatoric Uncertainty
This distinction is the core insight of the framework:
Epistemic uncertainty comes from incomplete information. It can be reduced by gathering more data: a missing Justice.cz record, for example, may be resolved by another query.
Aleatoric uncertainty comes from inherent randomness or ambiguity. No amount of additional data eliminates it: two distinct people sharing the same name remain ambiguous no matter how many sources you consult.
The practical difference: high epistemic uncertainty means "investigate further." High aleatoric uncertainty means "present both possibilities and let the analyst decide."
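That rule of thumb can be encoded directly. A sketch only: the `UncertaintyTriage` module and its thresholds are illustrative assumptions, not platform defaults; the field names match the score struct shown earlier.

```elixir
defmodule UncertaintyTriage do
  @moduledoc "Illustrative: routes a confidence score to an analyst action."

  # Thresholds are assumptions for this example, not platform defaults.
  @epistemic_limit 0.10
  @aleatoric_limit 0.10

  # High epistemic uncertainty: more data can help, so investigate further.
  def recommend(%{epistemic: e}) when e > @epistemic_limit,
    do: :investigate_further

  # High aleatoric uncertainty: data won't help, so surface the ambiguity.
  def recommend(%{aleatoric: a}) when a > @aleatoric_limit,
    do: :present_alternatives

  def recommend(_score), do: :accept
end

UncertaintyTriage.recommend(%{epistemic: 0.25, aleatoric: 0.05})
# => :investigate_further
```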
Source-Based Confidence Tiers
Prismatic assigns a base confidence to every finding according to the category of its data source.
These tiers establish a floor. Multi-source confirmation raises confidence. Contradictions between sources lower it.
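One way to express the tier floor as a lookup. The categories and values below are illustrative placeholders, not the platform's actual tier table:

```elixir
defmodule SourceTiers do
  # Illustrative base confidences per source category; not official values.
  @base %{official_registry: 0.90, licensed_database: 0.75, web_scrape: 0.50}

  @doc "Base confidence acts as a floor; corroboration can only raise it."
  def base_confidence(category), do: Map.get(@base, category, 0.30)
end

SourceTiers.base_confidence(:official_registry)
# => 0.9
```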
Confidence Aggregation
When multiple sources report the same fact, confidence increases using a Bayesian model:
```elixir
defmodule PrismaticNabla.Aggregation do
  @doc "Aggregate confidence from independent sources"
  @spec aggregate([float()]) :: float()
  def aggregate(confidences) when is_list(confidences) do
    1.0 - Enum.reduce(confidences, 1.0, fn c, acc ->
      acc * (1.0 - c)
    end)
  end
end
```
Three sources at 0.80 each: 1 - (0.2 × 0.2 × 0.2) = 0.992
This assumes source independence. When sources share upstream data (e.g., two services that both pull from ARES), we apply a correlation discount to avoid overcounting.
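One simple way to implement such a discount, sketched under an assumption of our own: sources sharing an upstream are collapsed to their single strongest member before the independent-source formula runs. The `upstream` grouping key is hypothetical.

```elixir
defmodule CorrelatedAggregation do
  # Sources sharing an upstream (e.g., both backed by ARES) are grouped,
  # and each group contributes only its strongest confidence.
  def aggregate(sources) do
    sources
    |> Enum.group_by(& &1.upstream, & &1.confidence)
    |> Enum.map(fn {_upstream, confs} -> Enum.max(confs) end)
    |> then(fn confs ->
      1.0 - Enum.reduce(confs, 1.0, fn c, acc -> acc * (1.0 - c) end)
    end)
  end
end

CorrelatedAggregation.aggregate([
  %{upstream: :ares, confidence: 0.8},
  %{upstream: :ares, confidence: 0.7},   # discounted: shares the ARES upstream
  %{upstream: :whois, confidence: 0.6}
])
# groups collapse to [0.8, 0.6], so 1 - (0.2 × 0.4) ≈ 0.92
```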
Confidence Decay
Intelligence ages. A company's address verified 2 years ago is less reliable than one verified yesterday:
```elixir
def apply_decay(confidence, verified_at, decay_rate \\ 0.01) do
  days_since = Date.diff(Date.utc_today(), verified_at)
  decay_factor = :math.exp(-decay_rate * days_since)
  confidence * decay_factor
end
```
With the default decay rate of 0.01 per day, confidence retains roughly 74% of its value after 30 days and falls below 3% after a year.
This creates natural pressure to re-verify intelligence periodically.
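A quick worked example of the same exponential decay, with an assumed starting confidence of 0.90:

```elixir
# Exponential decay at the default rate of 1% per day.
decay = fn confidence, days -> confidence * :math.exp(-0.01 * days) end

projected = for days <- [30, 90, 365], do: {days, Float.round(decay.(0.90, days), 2)}
# projected = [{30, 0.67}, {90, 0.37}, {365, 0.02}]
```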
Decision Thresholds
Different decisions require different confidence levels: routine enrichment can proceed on moderate confidence, while automated actions demand far more certainty.
The framework does not make decisions -- it provides the confidence information that enables humans and automated systems to make appropriately calibrated decisions.
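A caller-side gate might look like the following sketch. The threshold values and action names are examples of ours, not the platform's:

```elixir
defmodule DecisionGate do
  # Illustrative thresholds: higher-stakes actions demand more confidence.
  @thresholds %{enrichment: 0.50, analyst_alert: 0.70, automated_action: 0.90}

  @doc "Returns {:ok, action} when confidence clears the action's bar."
  def permit(action, confidence) do
    required = Map.fetch!(@thresholds, action)
    if confidence >= required, do: {:ok, action}, else: {:defer, required}
  end
end

DecisionGate.permit(:automated_action, 0.87)
# => {:defer, 0.9}
```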
Uncertainty Visualization
The platform displays confidence visually alongside each finding.
Each score is clickable, revealing the source breakdown, epistemic/aleatoric decomposition, and suggestions for reducing uncertainty.
Integration with Decision Engine
The recently added Decision Engine consumes Nabla confidence scores directly:
```elixir
PrismaticDD.DecisionEngine.run_full_pipeline(%{
  entity: entity,
  confidence_threshold: 0.75,
  uncertainty_budget: 0.15
})
```
The decision engine factors confidence into its scoring, hypothesis generation, and recommendations. Low-confidence inputs produce appropriately hedged outputs.
Conclusion
The Nabla Infinity framework transforms intelligence from "we found X" into "we are 87% confident about X, primarily limited by incomplete Justice.cz data (epistemic) and name ambiguity (aleatoric)." This precision enables better decisions at every level -- from analyst investigations to automated compliance checks.
Explore the [Architecture Documentation](/architecture/) for framework details or try the [Interactive Academy](/academy/) for hands-on exercises with confidence scoring.