Product March 04, 2026 | 11 min read

Risk Scoring Methodology: Multi-Dimensional Due Diligence Assessment

How Prismatic calculates risk scores across sanctions, insolvency, ownership, financial, and litigation dimensions with calibrated weighting

Tomas Korcak (korczis)

Prismatic Platform

Every due diligence investigation ultimately reduces to a question: how risky is this entity? Answering that question requires synthesizing signals from disparate domains into a coherent, calibrated risk score. This post details Prismatic's multi-dimensional risk scoring methodology, covering the factor taxonomy, weighting system, calibration process, and the Elixir implementation that powers it.


The Five Dimensions of Risk


Prismatic decomposes entity risk into five orthogonal dimensions, each capturing a distinct category of concern:


Sanctions Risk measures exposure to international sanctions lists, politically exposed persons (PEP) databases, and adverse media. A hit on the OFAC SDN list carries different weight than a mention in a local media article about tax evasion.


Insolvency Risk tracks financial distress signals: active insolvency proceedings, historical bankruptcies, overdue tax obligations, and social security debts. In the Czech Republic, the ISIR (Insolvency Register) provides structured data, but interpretation requires context.


Ownership Risk evaluates the complexity and opacity of ownership structures. Nominee shareholders, circular ownership chains, offshore holding companies in non-cooperative jurisdictions, and frequent ownership changes all contribute.


Financial Risk captures quantitative financial health: declining revenues, negative equity, unusual related-party transactions, aggressive accounting practices, and liquidity deterioration.


Litigation Risk measures legal exposure: pending court proceedings, enforcement actions, historical judgments, regulatory investigations, and the monetary exposure of active cases.


Factor Taxonomy


Each dimension contains multiple factors, each with a defined scoring function and weight:



defmodule Prismatic.DD.Scoring.Factors do
  @moduledoc """
  Factor definitions for multi-dimensional risk scoring.

  Each factor maps raw data to a 0.0-1.0 risk signal.
  """

  @type factor :: %{
          id: atom(),
          dimension: dimension(),
          weight: float(),
          scorer: (map() -> float()),
          description: String.t()
        }

  @type dimension :: :sanctions | :insolvency | :ownership | :financial | :litigation

  @spec all_factors() :: [factor()]
  def all_factors do
    sanctions_factors() ++ insolvency_factors() ++ ownership_factors() ++
      financial_factors() ++ litigation_factors()
  end

  defp sanctions_factors do
    [
      %{
        id: :ofac_sdn_match,
        dimension: :sanctions,
        weight: 0.95,
        scorer: &score_sanctions_match(&1, :ofac_sdn),
        description: "OFAC SDN list match"
      },
      %{
        id: :eu_sanctions_match,
        dimension: :sanctions,
        weight: 0.90,
        scorer: &score_sanctions_match(&1, :eu_consolidated),
        description: "EU consolidated sanctions list match"
      },
      %{
        id: :pep_exposure,
        dimension: :sanctions,
        weight: 0.70,
        scorer: &score_pep_exposure/1,
        description: "Politically exposed person connection"
      },
      %{
        id: :adverse_media,
        dimension: :sanctions,
        weight: 0.45,
        scorer: &score_adverse_media/1,
        description: "Adverse media mentions"
      }
    ]
  end

  defp insolvency_factors do
    [
      %{
        id: :active_insolvency,
        dimension: :insolvency,
        weight: 0.98,
        scorer: &score_active_insolvency/1,
        description: "Active insolvency proceedings"
      },
      %{
        id: :historical_bankruptcy,
        dimension: :insolvency,
        weight: 0.60,
        scorer: &score_historical_bankruptcy/1,
        description: "Historical bankruptcy events"
      },
      %{
        id: :tax_debt,
        dimension: :insolvency,
        weight: 0.55,
        scorer: &score_tax_debt/1,
        description: "Outstanding tax obligations"
      }
    ]
  end

  # ownership_factors/0, financial_factors/0, litigation_factors/0 and the
  # score_* scorer functions are defined similarly and elided for brevity.
end


Each factor's scorer function receives the entity data map and returns a normalized risk signal, where 0.0 means no risk signal detected and 1.0 means maximum risk signal; when the underlying data is unavailable, a scorer returns nil, which lowers the scoring confidence (covered below) rather than the risk score itself. The weight determines how much the factor contributes to its dimension's aggregate score.
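To make the scorer contract concrete, here is a minimal sketch of what one scorer might look like. The :tax_debt_czk field name and the 10M CZK saturation threshold are illustrative assumptions, not Prismatic's actual data model:

```elixir
defmodule ScorerSketch do
  # Map an outstanding tax debt (in CZK) to a 0.0-1.0 risk signal.
  # nil means the register returned no data for this entity.
  def score_tax_debt(%{tax_debt_czk: nil}), do: nil
  def score_tax_debt(%{tax_debt_czk: debt}) when debt <= 0, do: 0.0

  def score_tax_debt(%{tax_debt_czk: debt}) do
    # Saturate at 10M CZK: anything above that is maximum signal.
    min(debt / 10_000_000, 1.0)
  end

  # No field at all: treat as missing data, not as zero risk.
  def score_tax_debt(_entity), do: nil
end
```

The nil-versus-0.0 distinction matters: 0.0 asserts "we checked and found nothing", while nil asserts "we could not check", and only the former should pull a dimension score down with full confidence.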


Weighted Aggregation


Dimension scores are computed as weighted arithmetic means, with each factor's weight normalized against the dimension's total weight. This keeps the score interpretable and bounded in 0.0-1.0: a severe factor such as an active insolvency (weight 0.98) pulls its dimension strongly toward its signal, while lighter factors such as adverse media (weight 0.45) contribute proportionally less.



defmodule Prismatic.DD.Scoring.Engine do
  @moduledoc """
  Multi-dimensional risk scoring engine with weighted aggregation.
  """

  # Simplified result type; the full shape is built in score_entity/2.
  @type scoring_result :: map()

  @dimension_weights %{
    sanctions: 0.30,
    insolvency: 0.25,
    ownership: 0.20,
    financial: 0.15,
    litigation: 0.10
  }

  @spec score_entity(entity :: map(), factors :: [map()]) :: scoring_result()
  def score_entity(entity, factors \\ Prismatic.DD.Scoring.Factors.all_factors()) do
    dimension_scores =
      factors
      |> Enum.group_by(& &1.dimension)
      |> Enum.map(fn {dimension, dim_factors} ->
        score = compute_dimension_score(entity, dim_factors)
        {dimension, score}
      end)
      |> Map.new()

    composite = compute_composite_score(dimension_scores)

    %{
      composite_score: composite,
      risk_level: classify_risk(composite),
      dimensions: dimension_scores,
      factor_details: compute_factor_details(entity, factors),
      confidence: compute_scoring_confidence(entity, factors),
      scored_at: DateTime.utc_now()
    }
  end

  defp compute_dimension_score(entity, factors) do
    scored_factors =
      Enum.map(factors, fn factor ->
        # nil (no data for this factor) contributes no risk; data coverage
        # is tracked separately by confidence scoring.
        raw_score = factor.scorer.(entity) || 0.0
        %{factor_id: factor.id, raw_score: raw_score, weight: factor.weight}
      end)

    total_weight = Enum.reduce(scored_factors, 0.0, fn f, acc -> acc + f.weight end)

    if total_weight > 0.0 do
      weighted_sum =
        Enum.reduce(scored_factors, 0.0, fn f, acc ->
          acc + f.raw_score * (f.weight / total_weight)
        end)

      Float.round(weighted_sum, 4)
    else
      0.0
    end
  end

  defp compute_composite_score(dimension_scores) do
    {weighted_sum, total_weight} =
      Enum.reduce(@dimension_weights, {0.0, 0.0}, fn {dim, weight}, {sum, tw} ->
        dim_score = Map.get(dimension_scores, dim, 0.0)
        {sum + dim_score * weight, tw + weight}
      end)

    if total_weight > 0.0 do
      Float.round(weighted_sum / total_weight, 4)
    else
      0.0
    end
  end

  defp classify_risk(score) when score >= 0.75, do: :critical
  defp classify_risk(score) when score >= 0.50, do: :high
  defp classify_risk(score) when score >= 0.25, do: :medium
  defp classify_risk(_score), do: :low

  # compute_factor_details/2 is elided; compute_scoring_confidence/2 is
  # shown in the Confidence Scoring section below.
end
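To ground the arithmetic, here is a worked example of the composite aggregation using the default dimension weights. The dimension scores are invented for illustration (clean-ish sanctions screening, but opaque ownership and weak financials):

```elixir
weights = %{sanctions: 0.30, insolvency: 0.25, ownership: 0.20, financial: 0.15, litigation: 0.10}

# Hypothetical dimension scores for one entity.
scores = %{sanctions: 0.10, insolvency: 0.20, ownership: 0.80, financial: 0.60, litigation: 0.30}

composite =
  weights
  |> Enum.reduce(0.0, fn {dim, w}, acc -> acc + Map.fetch!(scores, dim) * w end)
  |> Float.round(4)

# 0.30*0.10 + 0.25*0.20 + 0.20*0.80 + 0.15*0.60 + 0.10*0.30 = 0.36
```

A composite of 0.36 classifies as :medium (at or above 0.25, below the 0.50 :high threshold). The default weights sum to 1.0, so the engine's division by total weight is a no-op here; it only matters for custom weight profiles.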


Score Calibration


Raw scores are meaningless without calibration. Prismatic uses a historical calibration approach: we maintain a corpus of previously scored entities with known outcomes (did they default? were they sanctioned? did litigation materialize?). Factor weights are periodically adjusted using logistic regression against this outcome data.
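The post does not show the fitting routine itself, but the idea behind fitting weights with logistic regression can be sketched as plain gradient descent on logistic loss. Everything below (module name, sample shape, hyperparameters) is an illustrative assumption, not Prismatic's actual Calibration.fit/1:

```elixir
defmodule CalibrationSketch do
  # samples: list of {factor_signals :: [float], outcome :: 0 | 1},
  # where outcome 1 means the adverse event actually materialized.
  def fit(samples, epochs \\ 500, lr \\ 0.1) do
    n = samples |> hd() |> elem(0) |> length()
    initial = List.duplicate(0.0, n)
    Enum.reduce(1..epochs, initial, fn _epoch, w -> step(samples, w, lr) end)
  end

  # One full-batch gradient descent step on the logistic loss.
  defp step(samples, w, lr) do
    grads =
      Enum.reduce(samples, List.duplicate(0.0, length(w)), fn {x, y}, acc ->
        err = sigmoid(dot(x, w)) - y
        acc |> Enum.zip(x) |> Enum.map(fn {g, xi} -> g + err * xi end)
      end)

    w
    |> Enum.zip(grads)
    |> Enum.map(fn {wi, gi} -> wi - lr * gi / length(samples) end)
  end

  defp dot(x, w), do: x |> Enum.zip(w) |> Enum.map(fn {a, b} -> a * b end) |> Enum.sum()
  defp sigmoid(z), do: 1.0 / (1.0 + :math.exp(-z))
end
```

Factors whose signals reliably precede bad outcomes end up with larger positive weights; factors with no predictive power drift toward zero, which is exactly the behavior the periodic recalibration is meant to enforce.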


The calibration process runs as a scheduled mix task:



defmodule Mix.Tasks.Dd.Calibrate do
  @moduledoc """
  Calibrate risk scoring weights against historical outcomes.
  """

  use Mix.Task

  @impl Mix.Task
  def run(args) do
    Mix.Task.run("app.start")

    {opts, _, _} =
      OptionParser.parse(args,
        switches: [
          lookback_days: :integer,
          min_samples: :integer
        ]
      )

    lookback = Keyword.get(opts, :lookback_days, 365)
    min_samples = Keyword.get(opts, :min_samples, 100)

    outcomes = Prismatic.DD.Outcomes.fetch_historical(lookback)

    if length(outcomes) < min_samples do
      Mix.shell().error("Insufficient samples: #{length(outcomes)} < #{min_samples}")
    else
      new_weights = Prismatic.DD.Calibration.fit(outcomes)
      Prismatic.DD.Calibration.apply_weights(new_weights)
      Mix.shell().info("Calibration complete. Updated #{map_size(new_weights)} weights.")
    end
  end
end


Confidence Scoring


Not all risk scores are equally reliable. A score based on 15 data sources is more trustworthy than one based on 3. Prismatic tracks scoring confidence separately from the risk score itself:



defp compute_scoring_confidence(entity, factors) do
  # Fraction of factors for which the entity actually has underlying data
  # (scorers return nil when their data source is empty for this entity).
  data_coverage =
    Enum.count(factors, fn factor ->
      factor.scorer.(entity) != nil
    end) / max(length(factors), 1)

  # How many distinct source groups contributed data, capped at five.
  source_diversity =
    entity
    |> Map.get(:sources, [])
    |> Enum.map(& &1.source_group)
    |> Enum.uniq()
    |> length()
    |> then(fn unique_groups -> min(unique_groups / 5.0, 1.0) end)

  # How fresh the entity data is, decaying linearly over a year.
  recency =
    entity
    |> Map.get(:last_updated)
    |> case do
      nil ->
        0.5

      date ->
        days_old = Date.diff(Date.utc_today(), date)
        max(1.0 - days_old / 365.0, 0.1)
    end

  Float.round(data_coverage * 0.4 + source_diversity * 0.35 + recency * 0.25, 3)
end


Confidence below 0.5 triggers a warning in the DD report, indicating that the score should be interpreted with caution and additional data collection is recommended.


Dimension Weight Selection


The default dimension weights (sanctions 0.30, insolvency 0.25, ownership 0.20, financial 0.15, litigation 0.10) reflect a compliance-first perspective appropriate for most M&A contexts. However, these weights are configurable per investigation type. An acquisition-focused DD might increase the financial dimension weight to 0.30 and reduce sanctions to 0.20. A supplier vetting scenario might maximize ownership transparency at 0.35.


The weight configuration is stored at the DD case level, allowing different investigations to use different scoring profiles while maintaining a consistent underlying methodology.
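As a sketch of what a case-level profile might look like (the profile shape is an assumption; the post does not specify the storage format), overrides can simply be merged over the defaults:

```elixir
defaults = %{sanctions: 0.30, insolvency: 0.25, ownership: 0.20, financial: 0.15, litigation: 0.10}

# Acquisition-focused profile from the text: financial up to 0.30, sanctions down to 0.20.
acquisition_overrides = %{financial: 0.30, sanctions: 0.20}

weights = Map.merge(defaults, acquisition_overrides)
# Because the engine divides by the total weight during composite scoring,
# a profile need not sum exactly to 1.0 to produce a valid composite.
```

This keeps profiles small: a case only records the dimensions it deviates on, and everything else inherits the compliance-first defaults.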


Practical Impact


In production, the multi-dimensional scoring approach has surfaced risk patterns that single-dimensional analysis misses. A company might have clean sanctions screening but alarming ownership opacity combined with declining financials. The composite score captures this interaction effect, while the dimension breakdown gives analysts the interpretability they need to justify their conclusions.


The scoring engine processes an average entity in under 50 milliseconds, making it suitable for both interactive exploration and batch processing of large entity portfolios. Combined with the two-phase pipeline architecture, this lets users receive calibrated risk scores in real time as investigation data flows in.

Tags

risk-scoring due-diligence methodology compliance analytics
