
A key problem of automated ABAP code analysis is that it does not work reliably. As a consequence, developers and QA teams often waste considerable time on false alerts. This blog post discusses the resulting effects.
As already explained in some technical detail in our blog post "ABAP code scanners: True shields or false confidence?", code scanners operate with a certain degree of vagueness in their algorithms. Their results are not precise, neither technically nor semantically.
Not technically, because they cannot reliably determine whether a given sequence of commands actually causes a vulnerability.
Not semantically, because they cannot reliably determine whether misuse of a given vulnerability actually poses a business risk.
Consequently, they report a huge number of potential security risks that may or may not be dangerous. For readers not familiar with ABAP code scanning results: we are talking about tens of thousands of potential security risks in a mid-sized corporate SAP landscape.
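To make this concrete, consider a hypothetical snippet of the kind such tools routinely flag (table and variable names are illustrative). A dynamic WHERE clause is a classic SQL injection pattern, but here the condition is assembled entirely from a hard-coded literal, so no attacker-controlled input can ever reach the statement:

```abap
* Hypothetical example: many scanners flag any dynamic WHERE clause
* as a potential SQL injection. Here lv_where is built solely from
* a constant literal, so no user input can reach the statement.
DATA: lv_where TYPE string,
      lt_users TYPE TABLE OF usr02.

lv_where = `BNAME = 'TECHNICAL_USER'`.

SELECT * FROM usr02
  INTO TABLE lt_users
  WHERE (lv_where).
```

A purely syntactic check sees only "dynamic WHERE clause" and raises a finding; deciding that the condition is constant requires data-flow analysis, and deciding that the result would matter to the business requires context no tool has.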
Why are potential issues problematic?
First reason
Once a code scan is done, there is a report. Financial auditors love code scan reports. Since they are usually not ABAP security experts themselves, they cannot tell which of the reported issues are actual risks and which are not. They may therefore conclude that the company needs to fix all of these issues, including the ones that pose no risk. Such a generous decision can lead to unexpected, extensive and expensive mitigation projects. It therefore makes perfect sense to consider whether reports of this kind provide value to a company.
Second reason
When companies buy a code scanner, they are sooner or later tempted to start a "get clean" project that deals with all existing findings. Unfortunately, such projects require mitigating all findings above a certain criticality, based on the tool's judgement. And this judgement includes the potential issues, i.e. the false positives. We have been engaged in projects in which more than 70% of the tool's findings were either false or irrelevant.
Third reason
Companies often perform automated code checks when code changes are about to be transported to a QA system for functional analysis. The idea of this "best" practice is to ensure that only code with a sufficient level of security can reach a QA system. Vendor marketing calls this "stay clean". While this approach sounds good in theory, it has side effects. Developers have many quality aspects to consider when creating code, and as we all know, development projects rarely come with plenty of time. Many code changes therefore happen under pressure. If you want to release a new transport under time pressure, the last thing you need is a code scanner preventing you from doing so, especially if the flagged code practices are not real security risks. Such events do not exactly contribute to the happiness of developers or to trust in the security process. They cause stress, trigger discussions, kick off an exemption process, and frequently also lead to complaints and escalations. We already discussed the side effects of exemptions in our blog post "Exemptions are backdoors".
Fourth reason
In some instances, developers get so frustrated with having their time wasted by false positives that they start patching the affected programs with placebo solutions. Such corrections are not intended to mitigate the reported security vulnerability. Instead, they are designed to trick the scanner into thinking that the vulnerability has been mitigated. This works by placing validation functions in the code that the scanner "recognizes" as an actual security measure, even though the supposed mitigation does nothing at all. If this becomes a habit, real security issues will eventually be cloaked by the same useless techniques.
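A placebo fix of this kind might look like the following hypothetical sketch (all names are invented): the value is routed through a routine whose name suggests sanitization, which a scanner's data-flow analysis may accept as a mitigation, although the routine returns its input unchanged.

```abap
* Hypothetical placebo mitigation (all names invented): the call to a
* routine named like a sanitizer may satisfy the scanner's data-flow
* analysis, although the routine returns its input unchanged.
PERFORM validate_condition USING    lv_user_input
                           CHANGING lv_where.

SELECT * FROM usr02
  INTO TABLE lt_users
  WHERE (lv_where).

FORM validate_condition USING    iv_raw  TYPE string
                        CHANGING cv_safe TYPE string.
  cv_safe = iv_raw.  " no actual check takes place here
ENDFORM.
```

The data flow now passes through something that looks like a check, so the finding disappears, while the actual injection risk remains exactly as it was.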
To summarize: vague scan results, and especially false positives, are demotivating, time-consuming and expensive, and they draw attention to the wrong places.
But how to do it better?
The root cause of this issue is that raw code scanner results are turned directly into reports or sent to developers and QA teams. A better approach is to have experts analyze the results first and then report only those findings that pose a relevant risk to the company. While this approach may not be fully scalable, it prevents all of the negative effects discussed above.
This is the third article in our series on a secure ABAP coding process, which provides you with insights into issues with tool-driven security initiatives.
If you'd like to know more about optimizing your secure coding process, please contact us.