TL;DR
Only 158 out of 39,000+ vulnerabilities are actually exploited in the wild, yet most teams waste time patching based on CVSS scores alone—ignoring the Medium/Low CVEs attackers really use.
Key Takeaways:
- CVSS does not equal real-world risk. Severity measures the potential impact of an issue, not how likely it is to be exploited.
- Medium and Low CVEs see targeted exploitation. Attackers go after what is reachable and simple, not just "Critical."
- You cannot patch everything. CVE volume guarantees a permanent backlog, and some fixes take far longer than others.
- Severity-first prioritization creates blind spots. Teams fix non-exploitable Criticals while exploitable Mediums sit in the queue.
- Attackers favor vulnerabilities that are network-reachable and low-effort. Privilege and user-interaction requirements matter more than rankings.
- Context beats scores. Exploitation evidence, exposure, and asset criticality drive better decisions.
- SSVC drives action, not score-chasing. Decision trees determine when to patch, defer, or escalate.
- Context-based VM cuts noise and MTTR. Fewer findings, faster remediation, lower risk.
Introduction: The Failure of the "Patch Everything" Model
For decades, vulnerability management (VM) programs have operated on a compliance-centric model: detect a vulnerability, check its CVSS score, and patch if the score exceeds a certain threshold (typically 7.0 or 9.0).
Managing vulnerabilities by CVSS score alone is no longer practical because it ignores the actual risk involved.
A study of over 39,000 CVEs found that attackers actively exploit Medium and Low severity vulnerabilities, while many High and Critical CVEs are never weaponized.
Severity alone says nothing about whether a vulnerability is being exploited, how exposed the affected systems are, or whether the affected asset matters to the business; the result is blind spots that leave patch teams chasing the wrong fixes.
Context-based vulnerability management layers in threat intelligence (KEV, ExploitDB, EPSS), environmental exposure, and asset criticality to prioritize decisively, cutting through irrelevant findings, reducing MTTR by 45-55%, and concentrating remediation on exploitable, exposed vulnerabilities affecting critical assets.
Why Severity Scores No Longer Reflect Real Risk
The core issue is the divergence between technical severity and actual risk. Recent analysis of 39,000+ CVEs indicates that while many are labeled "critical" (CVSS 9.0+), only 158 appear in CISA's KEV catalog and 124 have public exploits in ExploitDB—meaning the vast majority pose no immediate threat to specific organizations due to environmental factors, lack of exploit availability, or inapplicability to the attack surface.
Our analysis of 39,000+ vulnerabilities disclosed in 2024, cross-referenced against KEV and ExploitDB data, yields a powerful insight: the traditional severity-based approach (fix Critical and High first, park Medium/Low) no longer aligns with real-world exploitation patterns. The data exposes two uncomfortable truths:
- Exploitation is not limited to Critical and High vulnerabilities.
- The volume of disclosed CVEs is far too large for a “patch everything” strategy.
Let’s break down the insights.
1. Real-World Exploitation Is Not Severity-Based
The analysis shows:
- 158 CVEs appear in CISA KEV (Known Exploited Vulnerabilities)
- 124 have public exploits on ExploitDB
- 108 CVEs overlap between KEV and ExploitDB (i.e., actively exploited and weaponized publicly)
This is important because KEV does not care about CVSS scores—it is purely exploitation-in-the-wild.
Key insights:
Attackers exploit what is exploitable, not what is “Critical.”
From the comparative analysis:
- KEV contains Medium and even Low CVEs
- These lower-severity CVEs are also found in ExploitDB, meaning attackers do use them
- Many Medium CVEs have:
- Network attack vector
- No user interaction
- Low privileges required
These are the same conditions found in typical Critical RCEs.
Therefore:
A Medium severity CVE with public exploit code + network reachability is higher risk than a High severity CVE that requires local access or complex interaction.
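The AV/PR/UI pattern described above can be checked mechanically. Below is a minimal sketch that parses CVSS v3.x vector strings and flags "attacker-friendly" vulnerabilities regardless of their severity label; the sample vectors are illustrative, not drawn from the study's dataset.

```python
# Parse CVSS v3.x vector strings and flag vulnerabilities whose
# conditions match attacker preferences: network-reachable,
# little or no privilege required, no user interaction.

def parse_cvss_vector(vector: str) -> dict:
    """Turn 'CVSS:3.1/AV:N/AC:L/...' into a metric dict like {'AV': 'N', ...}."""
    parts = vector.split("/")
    return dict(p.split(":", 1) for p in parts if ":" in p and not p.startswith("CVSS"))

def attacker_friendly(vector: str) -> bool:
    m = parse_cvss_vector(vector)
    return (
        m.get("AV") == "N"             # reachable over the network
        and m.get("PR") in ("N", "L")  # no or low privileges required
        and m.get("UI") == "N"         # no user interaction required
    )

# Hypothetical examples: a Medium CVE matching the attacker-friendly
# profile, and a High CVE that requires local access and interaction.
medium_net = "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:N"
high_local = "CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H"

print(attacker_friendly(medium_net))  # True
print(attacker_friendly(high_local))  # False
```

In practice this filter runs over the full scan output first, so a Medium with the profile above surfaces ahead of a local-only High.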
2. Blind Spots Created by Current Prioritization
The current approach focuses on:
- Fix all Critical
- Fix all High
- Park Medium and Low
But analysis proves several problems:
Medium/Low CVEs are being actively exploited
The data shows active exploitation patterns across all severity categories.
Example pattern from the data:
- Network-based Medium CVEs in KEV
- Low-privilege Medium CVEs in ExploitDB
This means attackers can use these weaknesses for initial access, lateral movement, or privilege escalation.
Some “High/Critical” CVEs are NOT exploited
Of the 39,000+ CVEs analyzed, the overwhelming majority appear in neither KEV nor ExploitDB.
So blindly focusing on all High/Critical CVEs means:
- Spending time patching vulnerabilities that are NOT exploited
- Meanwhile ignoring Medium/Low ones that actually ARE used by attackers
This is how critical blind spots emerge—teams patch non-exploited Criticals while internet-facing Mediums with public exploits remain unaddressed.
3. Resource Exhaustion: You Cannot Patch Everything
NVD lists 39,355 CVEs in the dataset.
Of those:
- Only 158 are KEV
- Only 124 are ExploitDB
- Only 108 are in both (the truly dangerous ones)
Meaning:
If you try to patch all CVEs:
- You will drown in volume
- Patch cycles will become longer
- Many assets will remain unpatched due to overload
- Your MTTR will increase
- Some high-risk assets may get skipped due to backlog
The data supports that “fix everything” is a mathematically impossible strategy.
4. Medium & Low Severity CVEs Can Be Just as Dangerous
Data clearly shows:
- Medium CVEs appear in KEV
- Medium CVEs appear in ExploitDB
- Many Medium CVEs have:
- NETWORK attack vector
- NONE or LOW privileges required
- NO user interaction required
These conditions are exactly what attackers need.
Important:
CVSS severity ≠ exploitation likelihood.
Severity tells you impact if exploited, not whether exploitation is likely.
KEV and ExploitDB data fill that gap.
5. Attack Vector Patterns: Network-Based CVEs Dominate
Both KEV and ExploitDB show concentration in:
- NETWORK attack vector
- LOW or NONE privileges required
- NO user interaction
These attributes show attacker preference:
Attackers go for vulnerabilities that are easy, remote, and require no privileges—regardless of severity.
This pattern is consistent across Critical, High, and Medium CVEs.
6. Why We Need a Better Prioritization Model
The data exposes the limitations of severity-only prioritization:
Severity-only model
- Ignores exploit availability
- Ignores exploitation-in-the-wild
- Fails to consider external threat intelligence
- Does not factor asset importance or exposure
- Overloads patch teams
- Creates patching delays
- Introduces blind spots on internet-facing assets
Recommended Modern Prioritization Strategy
A better approach should prioritize based on:
- KEV presence
- ExploitDB / public exploit availability
- Network exposure of the asset
- Privilege requirements (PR)
- User interaction (UI)
- Attack vector (AV)
- Criticality of the asset / business impact
This is essentially risk-based vulnerability management, not severity-based patching.
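One way to operationalize the list above is a simple triage function that buckets findings by exploitation evidence, exposure, and asset criticality instead of CVSS alone. The bucket rules and field names here are illustrative assumptions, not a published standard.

```python
# Sketch of risk-based triage: bucket a finding by exploitation
# evidence, exposure, and asset criticality rather than severity.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    in_kev: bool              # present in CISA KEV
    has_public_exploit: bool  # e.g., an ExploitDB entry exists
    internet_facing: bool     # asset network exposure
    asset_critical: bool      # business-critical asset

def triage(f: Finding) -> str:
    if f.in_kev and f.internet_facing:
        return "Immediate"        # actively exploited and exposed
    if f.in_kev or (f.has_public_exploit and f.asset_critical):
        return "Out-of-Cycle"     # weaponized, or exploit hits a critical asset
    if f.has_public_exploit or f.internet_facing:
        return "Scheduled"        # plausible path; handle in the normal cycle
    return "Defer"                # no evidence, low exposure

print(triage(Finding("CVE-2024-0001", True, True, True, True)))     # Immediate
print(triage(Finding("CVE-2024-0002", False, False, False, False))) # Defer
```

Note that severity never appears in the rules: a Medium in KEV on an internet-facing host lands in "Immediate", while an unexploited Critical on an isolated host lands in "Defer".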
Final Thoughts
The data leads to an unavoidable conclusion:
- Severity is not a reliable predictor of real-world risk.
- Medium and Low vulnerabilities are actively exploited and publicly weaponized.
- Focusing only on Critical/High leaves exploitable gaps.
- Attempting to patch everything is resource‑prohibitive and leads to patch delays.
- A risk-based, intelligence-driven prioritization model is necessary.
Theoretical Frameworks for Contextualization
To address these limitations, the industry is shifting toward decision-centric frameworks that prioritize action based on context.
Risk-Based Vulnerability Management (RBVM)
RBVM represents a philosophy that prioritizes vulnerabilities based on the risk they pose to the specific organization rather than generic severity. Unlike traditional VM, which measures success by patch counts, RBVM focuses on risk reduction and resilience. It requires the integration of threat intelligence, asset criticality, and exposure analysis to filter out noise.
Stakeholder-Specific Vulnerability Categorization (SSVC)
Developed by the Software Engineering Institute at Carnegie Mellon University, SSVC operationalizes RBVM by rejecting the "one-size-fits-all" approach. SSVC asserts that vulnerability management is a decision-making process, not a math problem. It avoids summing disparate metrics into a single number, instead using decision trees to guide stakeholders (Suppliers, Deployers, and Coordinators) toward a qualitative action: Defer, Scheduled, Out-of-Cycle, or Immediate.
The Three Pillars of Context
Context-driven VM relies on three primary inputs to triage vulnerabilities effectively: Threat Context, Environmental Context, and Asset Context.
Threat Context: Exploitation and Utility
The most critical filter for prioritization is the state of exploitation. SSVC proposes a decision point called Exploitation, which categorizes threats as:
- None: No evidence of exploitation
- PoC: Proof of concept exists (exploit code published but not weaponized)
- Active: Reliable evidence of attacks in the wild (confirmed by threat intelligence)
- Active Threats: Tools like CISA's Known Exploited Vulnerabilities (KEV) catalog allow teams to focus on vulnerabilities confirmed to be weaponized in real-world attacks.
- Predictive Scoring: The Exploit Prediction Scoring System (EPSS) estimates the likelihood of future exploitation, allowing defenders to anticipate threats rather than just reacting to them.
- Utility: SSVC also evaluates the "Utility" of a vulnerability to an adversary, assessing whether the attack is Automatable and the Value Density of the target. A vulnerability that is easy to automate against high-value targets represents a "Super Effective" utility, demanding higher priority.
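These threat signals map naturally onto SSVC's Exploitation decision point. The sketch below combines KEV membership, public exploit availability, and an EPSS probability; the 0.5 EPSS cutoff and the choice to treat a high EPSS score as PoC-equivalent concern are illustrative assumptions, not part of SSVC.

```python
# Map threat-intelligence signals onto SSVC's Exploitation values.

def exploitation_state(in_kev: bool, has_poc: bool, epss: float) -> str:
    """Return an SSVC Exploitation value: 'active', 'poc', or 'none'.

    EPSS estimates the probability of exploitation in the next 30 days;
    the 0.5 threshold below is an illustrative assumption.
    """
    if in_kev:
        return "active"   # KEV listing = confirmed exploitation in the wild
    if has_poc or epss >= 0.5:
        return "poc"      # published exploit code, or high predicted likelihood
    return "none"

print(exploitation_state(in_kev=True, has_poc=False, epss=0.01))   # active
print(exploitation_state(in_kev=False, has_poc=True, epss=0.02))   # poc
print(exploitation_state(in_kev=False, has_poc=False, epss=0.01))  # none
```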
Environmental Context: Path and Exposure
A vulnerability in a software library is only a risk if the vulnerable code path is actually executed and the system is accessible to attackers.
- Attack Path Analysis: This technique determines if a vulnerability can be triggered via a real attack path in your specific environment. By analyzing whether the vulnerable code is reachable, whether required conditions (network access, privileges, user interaction) are present, and whether the attack vector aligns with actual exposure, teams can filter out non-exploitable findings. Case studies show attack path analysis reducing vulnerability noise by up to 90%—for example, a scan flagging 500+ vulnerabilities was reduced to just 50 actionable findings.
- System Exposure: SSVC defines "System Exposure" as a key decision node, categorizing assets as:
- Small: Isolated/local systems with no network access
- Controlled: Networked systems with access restrictions (VPN, firewall rules)
- Open: Internet-accessible systems with public exposure
A "critical" vulnerability on an isolated system (Small exposure) often requires a less urgent response than a moderate vulnerability on an internet-facing web server (Open exposure).
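A reachability-plus-exposure filter along these lines can be sketched as follows. The finding fields (`code_reachable`, `exposure`) and the urgency ordering are hypothetical, standing in for the outputs of a real attack path analysis tool.

```python
# Filter scanner findings down to those that are both reachable
# (a real attack path exists) and rank them by SSVC System Exposure.
EXPOSURE_URGENCY = {"open": 3, "controlled": 2, "small": 1}

def actionable(findings: list[dict]) -> list[dict]:
    """Keep reachable findings, most exposed first."""
    reachable = [f for f in findings if f["code_reachable"]]
    return sorted(reachable, key=lambda f: EXPOSURE_URGENCY[f["exposure"]], reverse=True)

findings = [
    {"cve": "CVE-A", "code_reachable": False, "exposure": "open"},   # filtered out
    {"cve": "CVE-B", "code_reachable": True,  "exposure": "small"},
    {"cve": "CVE-C", "code_reachable": True,  "exposure": "open"},
]
print([f["cve"] for f in actionable(findings)])  # ['CVE-C', 'CVE-B']
```

The unreachable "open" finding drops out entirely, which is exactly the noise reduction attack path analysis is meant to deliver.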
Asset Context: Business and Human Impact
The final layer of context is the value of the asset to the organization.
- Asset Criticality: Metrics must account for business process dependencies, data sensitivity, and operational impact. A vulnerability affecting a "mission-essential function" (MEF)—such as payment processing, patient care systems, or critical infrastructure—demands immediate attention compared to non-essential systems like test environments or archived databases.
- Human Impact: SSVC Version 2 explicitly incorporates Safety Impact (physical, economic, and psychological well-being) and Mission Impact. For example, a vulnerability affecting a medical device or power grid (Hazardous/Catastrophic impact) triggers an "Immediate" or "Out-of-Cycle" response regardless of CVSS score or other technical factors.
Operationalizing Context: Decision Trees and Maturity
Implementing context requires moving from ad-hoc decisions to structured logic.
The Decision Tree Approach
Instead of relying on opaque algorithms, SSVC uses transparent decision trees that show exactly how prioritization decisions are made. For a "Deployer" (an organization patching systems), the tree combines the values of Exploitation, Exposure, Utility, and Human Impact to derive a priority—making the logic auditable and explainable.
- Example: If Exploitation is "Active" and System Exposure is "Open," the decision leads to Immediate action, regardless of other factors.
- Example: If Exploitation is "None" and Utility is "Laborious," the decision defaults to Defer or Scheduled.
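The two example paths above can be expressed as a heavily simplified deployer tree. This sketch collapses SSVC's full tree to a few of its decision points and is an illustration of the approach, not the published SSVC logic in full.

```python
# Simplified SSVC-style deployer decision tree (illustrative subset).

def deployer_decision(exploitation: str, exposure: str,
                      utility: str, human_impact: str) -> str:
    # Active exploitation against an internet-facing system:
    # act immediately regardless of other factors.
    if exploitation == "active" and exposure == "open":
        return "Immediate"
    # No exploitation evidence and attacks would be laborious: defer.
    if exploitation == "none" and utility == "laborious":
        return "Defer"
    # Severe safety/mission impact escalates outside the normal cycle.
    if human_impact in ("hazardous", "catastrophic"):
        return "Out-of-Cycle"
    return "Scheduled"

print(deployer_decision("active", "open", "efficient", "low"))     # Immediate
print(deployer_decision("none", "small", "laborious", "low"))      # Defer
print(deployer_decision("poc", "controlled", "efficient", "low"))  # Scheduled
```

Because the branches are explicit `if` statements, the rationale for any given priority is auditable, which is the property SSVC's decision trees are designed to provide.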
Program Maturity
The transition to context-driven VM is a journey described by the Vulnerability Management Maturity Model (VMMM). Organizations evolve from Level 1 (Reactive)—focusing on basic patching and compliance—to Level 5 (Adaptive), which utilizes real-time risk scoring, predictive analytics, and automated reachability analysis. Pilot studies of organizations moving to higher maturity levels demonstrated significant efficiency gains, including a 40% reduction in average time-to-remediate and a 35% decrease in the overall backlog of vulnerabilities.
FAQs
Why isn’t CVSS enough for vulnerability prioritization?
CVSS measures theoretical impact if a vulnerability is exploited, not whether it will be exploited. It ignores exploit availability, attacker behavior, asset exposure, and business impact—leading to misaligned priorities.
Are Medium and Low severity vulnerabilities really dangerous?
Yes. KEV and ExploitDB data show Medium and even Low CVEs are actively exploited, especially when they are network-accessible, require low privileges, and need no user interaction.
What is context-driven vulnerability management?
It’s an approach that prioritizes vulnerabilities using real-world context—threat intelligence, exposure, attack paths, and asset importance—rather than static severity scores.
How does SSVC differ from traditional scoring models?
SSVC avoids reducing risk to a single number. Instead, it uses decision trees to guide stakeholders toward actions like Immediate, Out-of-Cycle, Scheduled, or Defer based on context.
What role does attack path analysis play?
Attack path analysis determines whether a vulnerability is actually reachable in your environment. This alone can reduce vulnerability noise by up to 90%.
Does context-driven VM ignore Critical vulnerabilities?
No. It ensures Critical vulnerabilities are addressed when they are exploitable, exposed, and relevant, rather than patched blindly.
Conclusion
The way organizations manage vulnerabilities is changing, and static severity scores are no longer enough. Real-world data shows that only a small subset of so-called “critical” vulnerabilities are actually exploited or even reachable in most environments, making the traditional approach increasingly ineffective.
This reality forces a rethink. By using decision-centric frameworks like SSVC and applying attack path analysis, security teams can cut through the noise, often eliminating up to 90% of findings that pose no real risk.
This shift doesn’t mean ignoring high-severity issues. Instead, it allows teams to focus their limited engineering time where it truly matters: on vulnerabilities tied to active threats, exposed systems, and business-critical assets. When vulnerability management is driven by context rather than static scores, it stops being a patching chore and becomes a practical way to strengthen long-term resilience.