
Risk score fatigue: Which one actually works?

Lucas Zaichkowsky

Principal Field Architect, Axonius

Frederico Hakamine

Technical Evangelist Director, Axonius

Every security vendor has a risk score. And every vendor will tell you theirs is the best.

If you’re a security practitioner doing vulnerability and exposure management, you’ve probably felt the fatigue setting in. Each tool presents one (or a few) risk scores to choose from, along with compelling arguments for why it’s the best answer to quantify risk for your organization. It’s reaching the point where the term itself is starting to feel like an eye-roll-inducing buzzword.

Let’s shed some light on what actually makes a risk score useful, where it belongs in your program, and how to implement one that drives real outcomes.


Start with the “Why” (and unpack it!)

When thinking about risk scores (or anything, really), it’s a good idea to start with the “Why.” Why do you need a risk score in the first place? The simplest answer to this question is “to focus on the security risks that truly matter to our organization.” But that answer needs to be unpacked.

Organizations have multiple stakeholders with competing priorities. While all stakeholders understand a CVSS score of 10 is higher than 8, they care a lot more about:

  • The security risk versus the risk of delaying or not delivering on other tasks: e.g., rolling out a new system in production or investing in other areas of the business.

  • The direct association with their responsibilities, environment, and business impact: e.g., a vulnerability on a system that's missing our endpoint protection and houses Personally Identifiable Information (PII) used by data scientists for training.

As soon as you unpack the why, you discover a key requirement that helps you ditch the fatigue: does this score pass the bar set by your stakeholders? If not, it isn’t delivering focus to your organization.

Next comes the “How”

Now that we know what the bar is for a good risk score, let's review the modern vulnerability management concepts that get us there:

Unified Vulnerability Management (UVM) consolidates security findings from multiple tools into a single operational layer. Most organizations have overlapping tools producing findings, and trying to operationalize each one separately creates duplicative effort and inconsistent workflows.
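To make the consolidation concrete, here's a minimal sketch of deduplicating overlapping findings into one work item per (asset, CVE) pair at a unified layer. The tool names and record fields are illustrative assumptions, not any specific product's schema:

```python
# Sketch: unify findings from overlapping tools into one record per
# (asset, CVE) pair, so dedup and routing happen once, at one layer.
# Tool names and fields are hypothetical.
from collections import defaultdict

findings = [
    {"asset": "web-01", "cve": "CVE-2024-0001", "source": "scanner_a"},
    {"asset": "web-01", "cve": "CVE-2024-0001", "source": "edr_b"},
    {"asset": "db-02",  "cve": "CVE-2024-0002", "source": "scanner_a"},
]

unified = defaultdict(set)
for f in findings:
    unified[(f["asset"], f["cve"])].add(f["source"])

# Two tools saw the same issue on web-01; it becomes one work item
# instead of two tickets in two separate workflows.
assert unified[("web-01", "CVE-2024-0001")] == {"scanner_a", "edr_b"}
assert len(unified) == 2
```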

Risk-Based Vulnerability Management (RBVM) goes beyond raw vulnerability severity to quantify risk using additional factors. Common examples include threat intelligence feeds, business criticality, internet exposure, and mitigating controls. For example, a high-severity vulnerability known to be actively exploited on an internet-facing server is far riskier than a critical-severity vulnerability on a non-persistent VDI.
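A toy sketch of how those extra factors flip the priority order in the example above. The multipliers here are made-up illustrations of the RBVM idea, not a real scoring model:

```python
# Sketch: contextual risk can outrank raw severity.
# All multipliers below are illustrative assumptions.

def contextual_risk(cvss: float, actively_exploited: bool,
                    internet_facing: bool, persistent: bool) -> float:
    """Scale a raw CVSS base score (0-10) by context factors."""
    score = cvss
    if actively_exploited:
        score *= 1.5   # known exploitation in the wild raises urgency
    if internet_facing:
        score *= 1.3   # directly reachable attack surface
    if not persistent:
        score *= 0.3   # e.g., a non-persistent VDI resets on logoff
    return score

# High-severity, actively exploited, internet-facing server...
server = contextual_risk(8.1, actively_exploited=True,
                         internet_facing=True, persistent=True)
# ...outranks a critical-severity CVE on a non-persistent VDI.
vdi = contextual_risk(9.8, actively_exploited=False,
                      internet_facing=False, persistent=False)
assert server > vdi
```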

Continuous Threat Exposure Management (CTEM) expands on UVM and RBVM with additional processes: scoping all the assets you need to protect, validating that vulnerabilities are actually exploitable with simulated pentests, and automating resolution (rather than overrelying on tickets). TL;DR, it wraps automated, orchestrated processes around UVM and RBVM, turning prioritization into an ongoing, operationalized program rather than a one-time exercise.

What makes a good risk score

To get to meaningful prioritization, you have to unify three types of context:

  • Security context from your security tools, threat intelligence sources, and breach attack simulations.

  • Asset context such as criticality, accessibility from the internet, mitigating controls like endpoint protection, and whether the system is even turned on.

  • Business context, which tells you how much damage a compromise would actually cause. 

[Diagram: unified security, asset, and business context]

A risk score that draws from only one of these categories, no matter how sophisticated, is giving you an incomplete picture (aka: it doesn't pass the prioritization threshold of all stakeholders in your company). The magic happens when all three converge.

Where risk scoring belongs (and where it doesn’t)

Risk scoring belongs wherever all the ingredients to automatically calculate a good score (security, asset, and business context) are readily available. If they aren't, you cannot calculate a meaningful risk score, period.

In other words, where you implement it matters just as much as how you build it.

Many organizations implement risk scoring inside the individual tools that produce their security findings. Vendor X gives you findings, so you prioritize them there. But what happens when you add Vendor Y? Then Vendor Z? You end up building and maintaining separate scoring models, separate workflows, and separate SLAs in every tool. And to top it off, each tool is missing the critical context that makes a risk score actually move the needle in your org.

Although migrating risk scoring and workflows to a single layer might sound painful, there are solid reasons the effort has ROI:

  • CTEM programs are complex to build. Think of all the work it takes to set up automations that route findings to the right owners and track SLAs. If you do that for each source separately, the effort multiplies fast compared to doing it once at a unified layer that they all feed into. And implementing CTEM in separate tools will undoubtedly lead to inconsistencies that cause confusion and erode trust.

  • Individual tools lack the full picture. The vendors producing security findings typically don’t have your asset and business context. Without those inputs, RBVM is effectively out the window.

  • Proprietary scores break unification. Vendor-specific risk scores may be more nuanced than raw CVSS, but they still lack asset and business context. Worse, when multiple tools identify the same issue, such as a vulnerability scanner and an endpoint protection product, the proprietary score only applies to one of them. I often see organizations ignore findings from secondary sources for this reason, even when their primary source has a visibility gap. That should set off alarms for anyone paying attention. Think about the liability of deliberately choosing to ignore findings.

  • Your tool stack will change. The number of sources producing security findings keeps growing. Think of how difficult it is to rebuild risk scoring and CTEM workflows every time you add or remove a tool. Implementing at the right layer frees you to swap vendors, absorb acquisitions, introduce new types of findings, and keep moving without rearchitecting your program. 

What the right risk scoring solution looks like

If risk scoring doesn’t belong in siloed tools, what does the ideal solution look like?

As with any evolution in the security industry, a few competing terms describe this solution: EAP (Exposure Assessment Platforms) from Gartner, PSP (Proactive Security Platforms) from Forrester, Exposure Management solutions, and CTEM solutions. Amid the acronym confusion, it's important to stay grounded in the primitives that make a good risk score so you can evaluate solutions effectively:

The ideal solution is the one that aggregates your entire attack surface (assets) and security findings (CVEs, non-CVEs) in a single place, with their business, asset, and security context altogether to prioritize risk resolution. It connects to the investments you have, can be tailored to your specific characteristics, and doesn't require you to do work manually to compensate for its gaps. It can orchestrate and automate remediation tasks or assign tickets to the correct owners if manual steps are necessary.

What's a good initial risk score formula?

What does a good risk score look like in practice? The best advice I can give is to keep it simple and don’t over-engineer it. 

Remember to ask “Why.” What’s really going to change if something is 7.5 instead of 7.3? Is that going to get the finding remediated faster or more effectively? When you send a list of findings to remediation owners with scores attached, how much of a difference is a decimal point actually making? 

In the real world, what works is having standard SLAs alongside risk-elevated SLAs for the issues that exceed a threshold. Do that first and congratulate yourself once it’s implemented and the highest-risk issues in your organization are getting resolved faster. One exception to this might be tracking quantified organizational risk over time for reporting purposes, but I’d argue you can get there with less complexity.

My recommendation is to start with one or two inputs from each of the three context categories. Once you have a successful implementation, you can subdivide into more granular inputs with smaller weights. Just keep asking yourself if adding an input is really going to change an outcome. If multiple inputs tell you the same thing, like whether a vulnerability is being actively exploited in the wild, aggregate them into one.
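One simple way to do that aggregation: if several feeds all answer the same question, such as “is this being exploited in the wild?”, collapse them into a single boolean input instead of weighting each feed separately. A sketch with hypothetical feed names:

```python
# Sketch: collapse redundant threat-intel signals into one input.
# The feed names are hypothetical placeholders.

def exploited_in_wild(signals: dict) -> bool:
    """Any one source confirming active exploitation is enough."""
    return any(signals.values())

signals = {
    "cisa_kev": True,      # listed in CISA Known Exploited Vulnerabilities
    "vendor_feed": False,  # commercial threat-intel feed
    "epss_high": False,    # EPSS probability above your chosen cutoff
}
assert exploited_in_wild(signals) is True
```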

Example: a simple risk score weighting. Threshold: a score of 50%+ triggers a risk-elevated SLA.

  Input                                               Weight
  Critical device (business context)                  30%
  Internet-facing (asset context)                     20%
  EPP missing or unhealthy (asset context)            20%
  CISA KEV (security context)                         15%
  CVSS severity high or critical (security context)   15%
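The example weighting boils down to a weighted sum of booleans checked against the threshold. A minimal sketch, with the weights from the example and a made-up finding:

```python
# Sketch: the example weighting as a weighted sum of booleans.
# Weights mirror the example table; the finding itself is made up.

WEIGHTS = {
    "critical_device": 0.30,   # business context
    "internet_facing": 0.20,   # asset context
    "epp_unhealthy":   0.20,   # asset context
    "cisa_kev":        0.15,   # security context
    "cvss_high_crit":  0.15,   # security context
}
THRESHOLD = 0.50

def risk_score(finding: dict) -> float:
    """Sum the weights of the inputs that are true for this finding."""
    return sum(w for k, w in WEIGHTS.items() if finding.get(k))

finding = {
    "critical_device": True,
    "internet_facing": True,
    "epp_unhealthy": False,
    "cisa_kev": False,
    "cvss_high_crit": True,
}
score = risk_score(finding)  # 0.30 + 0.20 + 0.15
sla = "risk-elevated" if score >= THRESHOLD else "standard"
assert sla == "risk-elevated"
```

Note the design choice: the score exists only to pick an SLA tier, which is the outcome the prose above argues for; chasing extra decimal places adds complexity without changing who fixes what, or when.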

So … which risk score is best?

There is no single answer. As much as vendors try to prove that their special risk score is the one to rule them all, that’s just not how it works. 

The best risk score is the one that focuses on the security risks that truly matter to your organization and actually reduces risk. It combines security finding context, business context, and asset context. It drives CTEM programs regardless of which tool produced a finding. And it's operationalized in a way that drives real outcomes, like risk-elevated SLAs that get the highest-priority issues resolved faster.

Don’t chase the perfect score. Build one that works, keep it simple, and iterate from there.


Want to go deeper? Everything I've outlined here comes down to turning risk scores into real action. My team at Axonius asked over 600 security leaders how they're actually making that shift, from consolidating data into a single source of truth to automating remediation at scale.

Read the 2026 Axonius Actionability Report.
