I believe that Whitebox testing is more effective and represents better value for money than Blackbox testing, and in this article I’ll explain why.
There are different ways to gain assurance that a product or service is secure, and each has different strengths and weaknesses. Deciding between them can be difficult, but the decision should always start by understanding the threat.
Threat modelling is the process by which you can identify your primary sources and types of attacks, their objectives, and possible points of entry into a secure system. It can help with gauging the likelihood of attacks being successful, and, most importantly, where you can apply controls to most effectively manage risk.
While CODA can support you through the process of threat modelling, a detailed discussion of this is beyond the scope of this article.
Once you have established your threat models, you have the following main options for testing.
Vulnerability Scanning and Assessment
Your primary threat is low-skill, low-motivation, opportunistic attack. Generally, this comes from the Internet, from groups such as script kiddies.
Typically, this can be simulated effectively by vulnerability scanning and assessment, with the testing team given very little information.
I consider this a constrained form of Whitebox testing, with a reduction in the amount of information shared. My personal view is that restricting information in this way is rarely beneficial outside of these scenarios, and that the same effect can be achieved more reliably by careful scoping than by withholding information.
Penetration Testing and Red-Team Engagements
You have identified a range of potential threats, including differing levels of skill, motivation and resources. Attacks may come from anywhere, but are generally based on readily available tooling and are focused on information theft or financial gain.
Typically, this can be simulated effectively by penetration testing and red-team engagements, with as much information as possible shared.
Targeted Security Research
As well as the threats above, you have identified a realistic threat from targeted, well-resourced groups. Alternatively, you want to make sure your system is robust and secure beyond the level required for compliance with typical good practice, or you’re operating in a constrained environment with devices that must remain operable for years, e.g. telecoms, IoT and SCADA.
Targeted security research seeks to identify deeper or more insidious flaws within a system that a point-in-time spot check is likely to miss.
Based on my own experience, I’ve found that Whitebox testing offers a sensible balance between effective assurance testing and value for money. However, delivering a practical Whitebox assessment requires careful planning.
Benefits of Whitebox Testing
It’s not possible to know everything, but the more we do know, the less time we spend on areas that are unlikely to yield meaningful results. This increased efficiency ultimately means that we’re saving you money.
It does not mean that Blackbox testing is ineffective, merely that it is less efficient. The following example demonstrates this.
A Blackbox assessment has identified that a web page is highly likely to be vulnerable to a very dangerous flaw: SQL injection. To determine how dangerous, we either need to exploit the bug, or to know precisely what the page (and, more specifically, the underlying database query) does.
Let’s imagine that the page does one of the following:
Something innocuous such as,
SELECT * FROM users WHERE id = 'admin';
Or something dangerous like,
DELETE FROM users WHERE id = 'admin';
In the first instance, entering a typical attack string such as ' OR 1=1; -- could print the entire user database; in the second, it might wipe it out.
Through Blackbox testing, I can infer which is more likely, but I’d need to move carefully and iteratively to minimise potential harm. Through Whitebox testing, I can quickly look at the source code and immediately determine which is the case.
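To make this concrete, here is a minimal, hypothetical sketch in Python using an in-memory SQLite database. The table, columns and data are invented, and I’ve dropped the semicolon from the attack string so it remains a single statement, but the effect is the same: the injected input rewrites the WHERE clause.

```python
import sqlite3

# Hypothetical demo database: names and contents are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("admin", "s1"), ("alice", "s2"), ("bob", "s3")])

def lookup_vulnerable(user_input):
    # Vulnerable: user input is concatenated straight into the SQL string.
    query = f"SELECT * FROM users WHERE id = '{user_input}'"
    return conn.execute(query).fetchall()

# Normal use returns a single row...
assert len(lookup_vulnerable("admin")) == 1
# ...but the injection string turns the WHERE clause into id = '' OR 1=1,
# with the trailing quote commented out, dumping every row in the table.
assert len(lookup_vulnerable("' OR 1=1 --")) == 3
```

A parameterised query, e.g. `conn.execute("SELECT * FROM users WHERE id = ?", (user_input,))`, would defeat this entirely.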
Reduced Risk to Service Availability
Another benefit of Whitebox testing, and one which relates to the above example, is the significantly reduced likelihood of causing system damage or outages. Blackbox testing relies heavily on inference and triggering of characteristic behaviour.
The most common approach is fuzzing of parameters and services to trigger unintended actions. These are then investigated further to refine the attacks, but this is not an exact science.
When attempting to identify flaws such as memory corruption or buffer overflow, there is a significant risk that a service outage or data corruption may occur. Whitebox testing rarely has this problem as we can review the suspect system components directly.
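As a rough sketch of what fuzzing looks like in practice, here is a toy Python example. Both the parser and its length-validation bug are invented for illustration; real targets fail far less predictably, which is exactly where the risk of outages and data corruption comes from.

```python
import random

def naive_parser(data: bytes) -> bytes:
    # Hypothetical parser with a planted bug: the first byte declares the
    # payload length and is trusted without validation.
    if len(data) < 2:
        raise ValueError("too short")
    declared_len = data[0]
    payload = data[1:]
    if declared_len > len(payload):
        # Stands in for the crash or memory corruption a real bug would cause.
        raise IndexError("declared length exceeds payload")
    return payload[:declared_len]

def fuzz(seed: bytes, rounds: int = 1000):
    # Flip one random byte per round and collect inputs that trigger the bug.
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            naive_parser(bytes(data))
        except IndexError:
            crashes.append(bytes(data))
    return crashes

# A valid record parses cleanly; fuzzing finds mutations that do not.
assert naive_parser(b"\x04abcdef") == b"abcd"
assert fuzz(b"\x04abcdef")
```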
Reduced Testing Time
Closely related to the efficiency savings above, Whitebox testing typically requires less time than Blackbox testing, although these time savings depend on the actual scope of the assessment.
An excellent example of time saved is simple port scanning. If we know which services are active (for example, from a locally produced list taken from the target system), we can target port scanning at those, rather than at all 65,535 possible ports.
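As a sketch of the difference, here is a simple TCP connect() scanner in Python. The host and port lists in the comments are placeholders, and a real engagement would use a dedicated scanner, but the principle holds: a known service list shrinks the search space from 65,535 ports to a handful.

```python
import socket

def scan(host: str, ports, timeout: float = 0.5):
    """Attempt a TCP connect() to each port; return those that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Blackbox: no prior knowledge, so sweep the full range (slow):
#     scan("192.0.2.10", range(1, 65536))
# Whitebox: the client's service list tells us exactly where to look:
#     scan("192.0.2.10", [22, 80, 443, 3306])
```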
Another is WiFi security, especially password policies. Typically, a WiFi penetration test begins with arrival on-site, followed by capturing a WPA2 handshake and then attempting to crack it.
Maybe we’re successful, maybe not. Failure could mean that your password isn’t in our wordlist, or that you have excellent password security.
From a Blackbox perspective, it’s impossible to know which without allowing an extremely long time window. From a Whitebox perspective, a review of your device and password configuration is typically all that’s required.
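To illustrate why the wordlist matters so much: WPA2-PSK derives its pairwise master key from the passphrase and SSID using PBKDF2-HMAC-SHA1 with 4,096 iterations. The sketch below compares derived keys directly rather than verifying against a captured handshake, and the SSID, passphrase and wordlists are invented.

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK key derivation: PBKDF2-HMAC-SHA1, 4096 iterations,
    # 32-byte key, salted with the SSID.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    # Derive a key for each candidate and compare; success depends
    # entirely on the passphrase being in the wordlist.
    for candidate in wordlist:
        if wpa2_pmk(candidate, ssid) == target_pmk:
            return candidate
    return None

# Hypothetical network and passphrase; this stands in for a captured handshake.
ssid = "CorpWiFi"
captured = wpa2_pmk("summer2024", ssid)

# The attack succeeds only when the passphrase is in our wordlist.
assert dictionary_attack(captured, ssid, ["password", "summer2024"]) == "summer2024"
assert dictionary_attack(captured, ssid, ["password", "letmein"]) is None
```

The slow key derivation is deliberate: each guess costs thousands of HMAC operations, which is why cracking time is dominated by wordlist quality rather than raw speed.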
Common Objections
A typical objection when discussing testing options is that the required information isn’t available.
Ideally, I’d like as much information as possible. Still, I’m a realist: no one person or organisation knows everything about an entire system, down to the transistors used in a chip. You might if you’re Intel, but most of us aren’t, and there are a few laws of physics that limit how much even they can honestly know.
Below are a couple of examples of what might be useful to us.
- Source code and build configurations
- Shift changes for security guards
- Switch configurations and passwords
- And a lot more, but this depends on your threat model
Does knowing this devalue a test or add organisational risk? No, not at all. I believe it only adds value while reducing risk.
Another objection is that the information is sensitive, possibly from national security or defence perspectives, but more often from a commercial perspective. I believe that the benefits of sharing the information far outweigh the potential risk, but, regardless, I also believe that this risk is far lower than typically believed.
My reasoning is as follows:
- There’s a good chance we’ll figure this information out eventually
- We have an NDA with you
- We are security cleared
- You’re already trusting us with access to your systems, often at a very high level, and
- We have agreed, and objectively assessed, security controls in place governing our use of this information
Here are some personal examples I’ve dealt with that could have benefited from a Whitebox approach.
- A manufacturer that provided no debug information about their device. We had the source code, and we had potential vulnerabilities, but we still spent three months trying to gain debug access to the device for a test.
- A cryptographic system that used the same key everywhere. To us, the data looked random and encrypted, primarily because of the initialisation vectors, but a glance at the system configuration would have shown that all keys were hardcoded. It was tough for us to figure out, but very easy for a disgruntled employee you’ve just let go to abuse.
- We spent several days writing exploit code for a system, only to find out at the last minute that it wasn’t supposed to be on the network. At least the device was removed, but a cursory look at the network diagram would have got us there far more quickly.
- We often spend significant time brute-forcing web directories in case something has been left lying around. This doesn’t test for vulnerabilities, only the quality of our directory list and your ability to look in a folder. A simple directory listing is far more efficient.
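A toy comparison makes that last point. The server contents and wordlist below are invented, but the asymmetry is real: brute forcing can only ever find what our wordlist happens to contain, while a directory listing reveals everything.

```python
# What is actually on the server (hypothetical)...
actual_paths = {"/admin", "/backup_2019", "/.git", "/uploads"}
# ...versus what our brute-force wordlist happens to contain.
wordlist = ["/admin", "/backup", "/test", "/old"]

# A Blackbox brute force only finds the overlap between the two.
found_by_brute_force = [p for p in wordlist if p in actual_paths]
assert found_by_brute_force == ["/admin"]

# A directory listing supplied under a Whitebox engagement reveals the rest,
# including the forgotten backup and the exposed .git folder.
missed = actual_paths - set(found_by_brute_force)
assert missed == {"/backup_2019", "/.git", "/uploads"}
```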
By becoming more open to sharing internals, primarily source code, network diagrams and credentials, with the people assessing the security of their systems, companies gain a far higher level of assurance. Testing and assessment activities also become more efficient and more cost-effective.
Rigorous use of Blackbox testing may be a more accurate reflection of an external attack within the confines of the time allowed. However, a targeted attack won’t stick to a defined time window, is unlikely to be concerned about causing system outages, and can afford to spend days, weeks or months gathering sensitive information through other means.
We can’t. So give us the edge. We can do a much better job of protecting your business if you give us the information upfront.