You’ve been told that there’s a vulnerability in your software application. You’ve spent hours trying to find the problem. You’ve double-checked the alert. You’ve double-checked the software. No patches are available.
Finally, after ruling out every other option, you’re forced to conclude that there was never a vulnerability in the first place. You’ve just wasted your whole afternoon trying to fix a problem that wasn’t there.
This is the problem with false positives in software security.
False positives are false alarms—and each one can send you on a wild goose chase that costs your organization time, money, and emotional overhead.
That’s why it’s important to invest in security tools that reduce false positives. The more accurate your security software, the more easily you can mitigate the very real costs that false positives represent to your business.
We wanted to understand how false positives affect security teams, so we surveyed cybersecurity professionals on the subject. Twenty-nine individuals shared their thoughts on how false positives impact organizations.
We surveyed software security professionals on the costs of false positives, and the results indicate that false alarms are not only economically costly but also have long-term negative effects on the teams that deal with them.
In fact, given their businesses’ resources and constraints, most of our respondents would rather reduce false positives than increase true positives. False positives are a big problem in software security today.
This report will dig into these findings one by one, but here’s a summary of what our survey found:

- 58% of respondents said false positives take more time to resolve than true positives.
- 72% agreed that false positives damage team productivity, including 28% who strongly agreed.
- Responses skewed toward agreement that false positives damage overall team morale.
- A slight majority agreed that false positives damage relationships between their teams and other teams in their organizations.
- Given their businesses’ resources and constraints, most respondents would rather reduce false positives than increase true positives.
A false positive occurs when your tool reports a vulnerability that isn’t really there. If you’re not familiar with statistical classification, it’s worth taking a moment to pin down exactly what we mean by “false positives.” (If you already know the difference between a false positive and a true positive, feel free to skip this section.)
You monitor the security of your software using various scanning tools, which check for vulnerabilities in your product. These tools might include static application security testing (SAST), dynamic application security testing (DAST), and/or software composition analysis (SCA). If a tool doesn’t find any vulnerabilities, it reports a negative—but if a tool detects a vulnerability, it reports a positive.
However, that’s just what the tool says. There’s the tool’s report, and there’s the actual truth: either there is a vulnerability (a positive) or there isn’t (a negative).
In an ideal world, our tools would always match reality, but tools make mistakes. If the report matches reality, the report is true. If it doesn’t, the report is false.
To illustrate this mismatch, programmers use what’s (appropriately) called a confusion matrix. This compares the tool reports to the actual condition of your software, and it looks like this:
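| | Vulnerability actually present | No vulnerability present |
| --- | --- | --- |
| Tool reports a vulnerability | True positive | False positive |
| Tool reports no vulnerability | False negative | True negative |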
So there are two kinds of positives your software security tools are going to give you: true positives and false positives. True positives alert you to real vulnerabilities in your software. A false positive is a false alarm.
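To make these counts concrete, here’s a minimal Python sketch (our own illustration, not part of the survey or any particular scanner) that turns confusion-matrix counts into the standard accuracy metrics. The `scanner_metrics` helper and the numbers in the example are hypothetical:

```python
def scanner_metrics(tp, fp, fn, tn):
    """Summarize a vulnerability scanner's confusion-matrix counts."""
    precision = tp / (tp + fp)  # share of alerts that are real vulnerabilities
    recall = tp / (tp + fn)     # share of real vulnerabilities the tool catches
    fpr = fp / (fp + tn)        # share of clean components that trigger alerts
    return precision, recall, fpr

# Hypothetical scan: 90 real finds, 30 false alarms,
# 10 missed vulnerabilities, 870 components correctly cleared.
precision, recall, fpr = scanner_metrics(tp=90, fp=30, fn=10, tn=870)
print(f"precision={precision:.0%}  recall={recall:.0%}  false positive rate={fpr:.1%}")
# precision=75%  recall=90%  false positive rate=3.3%
```

In this hypothetical run, a quarter of all alerts are false alarms, and every one of them has to be investigated before it can be dismissed.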
We queried software security professionals on this matter, and the responses indicated a consistent sentiment: false positives waste valuable dev time and damage team morale.
When a scanner detects a vulnerability, the security team gets to work fixing it. If it’s a true positive, the issue is resolved when the vulnerability is patched.
However, if it’s a false positive, the issue can only be resolved when the security team can demonstrate that there’s no real vulnerability present. In other words, a false positive is only resolved once you can prove it’s a false positive.
We asked software security professionals which task, on average, takes more time to resolve: true positives or false positives. Of the people we surveyed, 58% said false positives take more time to resolve than true positives.
It follows that false positives damage team productivity: because resolving them detracts from more useful activities, like building your software or fixing true vulnerabilities, every false alarm puts drag on the team’s output.
Our survey responses reflect this: 72% of respondents agreed with the statement that “False positives damage team productivity,” including 28% who said they “strongly agree.”
Reducing false positives frees up dev time for things that actually contribute to your business.
In addition to the economic costs discussed above, false positives carry several soft, hidden costs. Our survey brought several examples of this to the surface, one of which was the sentiment that false positives erode team morale.
While this sentiment isn’t as pronounced as the sentiments discussed earlier, responses do skew toward agreement with the statement, “False positives damage overall team morale.”
This erosion of morale can lead to “vulnerability fatigue” or “patching fatigue,” which set in when a team becomes desensitized to vulnerability alerts, true and false alike. An abundance of false positives can weaken even the most zealous team’s motivation to patch vulnerabilities.
And once that attitude works its way into your operations, it can be very, very difficult to reverse.
A slight majority of respondents agreed that false positives damage relationships between their teams and other teams within their organizations.
From an internal political perspective, false positives can easily undermine a security team’s credibility.
Security teams are already fighting an uphill political battle within their organizations.
Unfortunately, because false positives take more time to resolve than true positives, they’re also what other teams are most likely to remember. Even if most alerts turn out to be true positives, it doesn’t take many false alarms for a security team to get labeled the proverbial boy who cried wolf around the office.
False positives are an unfortunate side effect of vigilant tools, but you can reduce false positives by investing in high-accuracy vulnerability scanners.
At Finite State, we’ve built our software composition analysis tool with accuracy in mind. When customers switch to us, they commonly report a significant decrease in false positives and an increase in true positives.
For example, many of our customers are former users of OWASP Dependency-Check, a free SCA solution. Dependency-Check was built to be a super-vigilant vulnerability checker and, therefore, has a bias toward positives—including false positives. When companies upgrade to Finite State (a premium SCA), they see an immediate decrease in false positives.
If you compare Finite State’s performance against OWASP Dependency-Check’s on a set of libraries pre-seeded with publicly known vulnerabilities (like this one), the difference in false positives (and true positives) becomes abundantly clear: Finite State catches more vulnerabilities while raising far fewer false alarms.
| Tool | Vulnerabilities found (true positives) | Misses (false negatives) | False positives |
| --- | --- | --- | --- |
| OWASP Dependency-Check | 301 | 214 | 421 |
| Finite State | 515 | 0 | 1 |
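Put in precision terms: in this test, roughly 42% of Dependency-Check’s alerts pointed to real vulnerabilities (301 of 722), compared to about 99.8% for Finite State (515 of 516).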
We’ve built a tool that catches more vulnerabilities and sounds fewer false alarms when it comes to scanning your software supply chain—and if you’d like to see how it can help mitigate the costs of false positives for your team, try Finite State now!