How to prioritize open-source risk with susceptibility analysis – TechBeacon


It’s rare today to find an application that isn’t built on open source. Using open-source components reduces time-to-market. It allows you to focus on what you do well and not worry about what you don’t do well. And it lets you take advantage of the skills of many, sometimes thousands, of developers who have built a common component. You’d have to be insane to write all your code yourself.

Not only is more open source code being used, but that code is also more complex, making open-source frameworks and libraries enormous. While all that has enabled developers to offer more fully featured applications, it’s also made the software development cycle much more challenging, especially from a security perspective.

In a perfect world, when you encounter a vulnerability in an open-source component, you should be able to upgrade to a vulnerability-free version of it, plug it into your application, and distribute a new release of your app with the clean component.

But we don’t live in a perfect world. For example, there isn’t always an upgrade that gets rid of a vulnerability. Moreover, open-source components run the gamut from small widgets that perform minor tasks to large frameworks that have millions of lines of code and can take months to upgrade. That’s why it’s not always realistic to upgrade components that you’re using to the next non-vulnerable version.

Susceptibility analysis can help you to address those problems by prioritizing the risk posed to a project by open-source components.

How susceptible is your software?

Susceptibility analysis, which looks at your code and determines the impact a vulnerability might have on it, gives you the truest picture of actual risk. That can be a time-saver for developers because it can identify false positives produced by static analysis security tools.

A static tool will identify the vulnerability, but if the vulnerable function isn’t being used by the application, it poses little risk to it. Those kinds of false positives can be identified with susceptibility analysis, which is an evolution of open-source analysis. It provides the next level of analytics and data for making smarter decisions about addressing vulnerabilities.

Susceptibility analysis is a nascent technology, with the first commercial product appearing only about 18 months ago. Because it's so new, it isn't in widespread use yet, and it's limited to certain programming languages. Libraries of CVE signatures for susceptibility analysis programs are also still being built.

For example, at Micro Focus, our susceptibility analysis product is focused on Java. We’ve created signatures for 25,000 Java vulnerabilities. Signatures are composed of the library for the open-source component and the function calls made by a developer to that library.

The signature looks for a function name and determines whether a developer has called that function, either directly or indirectly, and whether that call makes the code susceptible to attack. The distinction matters because you can call a function that contains a vulnerability and still be safe: the way your code is written may leave an attacker no way to reach that vulnerability.
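The direct-or-indirect call check described above amounts to a reachability search over a call graph. The sketch below illustrates the idea with a breadth-first search; the graph, the function names, and the whole setup are invented for illustration and don't reflect any actual product's signature format:

```python
# Minimal sketch of the reachability check a susceptibility signature performs:
# can any of the app's entry points reach the vulnerable library function,
# directly or through intermediate calls? All names here are hypothetical.
from collections import deque

def is_susceptible(call_graph, entry_points, vulnerable_fn):
    """Return True if any entry point can reach vulnerable_fn in call_graph."""
    seen = set()
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        if fn == vulnerable_fn:
            return True
        queue.extend(call_graph.get(fn, []))
    return False

# The app calls the library's formatDate, which never reaches the vulnerable
# scheduling parser, so this vulnerability would triage as a false positive.
call_graph = {
    "app.main": ["lib.formatDate"],
    "lib.formatDate": ["lib.pad"],
    "lib.parseSchedule": ["lib.vulnerableParse"],  # present, but unreached
}
print(is_susceptible(call_graph, ["app.main"], "lib.vulnerableParse"))  # False
```

A real tool builds the call graph from bytecode or source rather than a hand-written dictionary, but the triage decision rests on the same question.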

You can implement all sorts of logic that prevents a vulnerability from being exploited even though it still exists in the open-source component. For example, you might be using a component that, in order to be exploited, needs an attacker to feed it JavaScript. However, when you use the component, you add input limitations, as you would if you expected the input to be a credit card number and nothing else. That would keep the vulnerability from being exploited, but not from showing up as a false positive in a scan.
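An input limitation of that kind might look like the following sketch. The wrapper function, the card-number pattern, and the commented-out component call are hypothetical examples, not a real API:

```python
# Hypothetical illustration: imagine a component whose render function is
# exploitable only when fed JavaScript. The caller restricts input to a
# credit-card pattern first, so the vulnerable path is unreachable in practice.
import re

CARD_RE = re.compile(r"\d{13,19}")  # digits only, typical card-number lengths

def safe_render(user_input: str) -> str:
    if not CARD_RE.fullmatch(user_input):
        raise ValueError("input must be a card number")
    # vulnerable_component.render(user_input) would go here; with the check
    # above, a script payload like "<script>..." can never reach it.
    return f"card ending in {user_input[-4:]}"

print(safe_render("4111111111111111"))  # card ending in 1111
```

A signature-only scan would still flag the component, which is exactly the class of finding susceptibility analysis is meant to triage away.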

Half of all known vulnerabilities in open-source components can be triaged as false positives because of how custom code actually uses the components. It's very unlikely that you're leveraging 100% of a component. You might be using a calendar widget because you need a timestamp for your application, for example, but you don't need to access the scheduling functionality. So if that's where a vulnerability exists, it shouldn't matter to your app.

False positives demand automation

As with every issue turned up by a code scan, you have to examine those false positives. Done manually, that can take hours. Susceptibility analysis can shave some hours off the process by using automation to triage vulnerabilities that are irrelevant to the operation of an application.

If you have 20 known vulnerabilities in the components you've used to build your software, you've got a problem. Without susceptibility analysis, you'll need a developer to look at each vulnerability and figure out whether your code is susceptible to it. When that review is finished, you may have five vulnerabilities you need to worry about, while you can forget about the other 15.

An even worse scenario may occur. Instead of investigating the vulnerabilities, your developers may recommend upgrading to eliminate the flaws, a move that could delay the next release of your software for months. So your software schedule gets skewed because you wasted time taking care of vulnerabilities that did not affect the security of your app and upgrading your component before it was necessary to do so.

Both scenarios could have been avoided with susceptibility analysis, eliminating the need for an investigation or upgrade of open-source components. You would have known immediately which issues had to be fixed, and you could have stopped worrying about those that didn't.

Susceptibility analysis: Onward and upward

Right now, susceptibility analysis is tied to static application security testing (SAST), but in the future it could be used with dynamic application security testing (DAST), too. SAST is the first foray into susceptibility analysis. It does what it does well: finding functions and following code flow paths to determine whether unvalidated data reaches functions or methods.
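The flow-following that SAST performs can be illustrated with a toy taint-propagation pass: mark untrusted sources, propagate taint along assignments, clear it at sanitizers, and flag any tainted value that reaches a sensitive sink. The statement format and every source, sanitizer, and sink name below are invented for illustration:

```python
# Toy sketch of static taint tracking over a straight-line program.
# Each statement is ('assign', dst, src) or ('call', fn, arg).
def find_tainted_sinks(statements, sources, sanitizers, sinks):
    tainted = set(sources)
    findings = []
    for stmt in statements:
        if stmt[0] == "assign":
            _, dst, src = stmt
            if src in tainted:
                tainted.add(dst)      # taint propagates through the copy
            else:
                tainted.discard(dst)  # overwritten with clean data
        else:
            _, fn, arg = stmt
            if fn in sanitizers and arg in tainted:
                tainted.discard(arg)  # sanitizer cleans the value
            elif fn in sinks and arg in tainted:
                findings.append((fn, arg))  # unvalidated data reaches a sink
    return findings

program = [
    ("assign", "query", "request_param"),      # taint copied into query
    ("call", "execute_sql", "query"),          # tainted data hits a sink
    ("call", "escape_html", "request_param"),  # sanitized from here on
    ("call", "render_page", "request_param"),  # clean, so no finding
]
print(find_tainted_sinks(
    program,
    sources={"request_param"},
    sanitizers={"escape_html"},
    sinks={"execute_sql", "render_page"},
))  # [('execute_sql', 'query')]
```

Real SAST engines work over full control-flow and call graphs rather than a flat statement list, but the source-to-sink question is the same.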

As you look for other edge cases and other vulnerabilities that can’t be found unless an application is running, you’ll need dynamic testing. That’s why the evolution of susceptibility analysis will eventually include DAST. You’ll inevitably need dynamic analysis to validate certain classes of vulnerabilities or use it in conjunction with static analysis to reduce false positives.

As the use of open-source components continues to grow, so does the need to identify the impact of publicly known vulnerabilities on the custom code you create. The most efficient way to meet that need is through susceptibility analysis, which can save you time and money that you’d otherwise spend scrutinizing false positives and upgrading component libraries, with no security benefits.

I recently presented on susceptibility analysis at SecureGuild with my talk, “Do You Know How to Prioritize Your Open-Source Findings?” Registrants have full access to the recorded session.
