We’ve all heard the horror stories of software malfunctions. There was the Pentium’s infamous floating-point bug, which caused certain division calculations to return incorrect results; Apple’s Maps app famously gave directions to nowhere; and who can forget the Mars Climate Orbiter, which disintegrated in the Martian atmosphere thanks to a mix-up between metric and imperial units in its navigation software.
The lesson here is obvious. As amazing as modern software can be, it comes with some very real risks, which is why rigorous and extensive testing is always necessary. Knowing where to focus your efforts can be challenging, however – and this is where risk analysis comes in.
What is risk analysis?
Once risks have been highlighted in the identification stage of the software testing process, each one must be thoroughly analyzed to determine its potential impact, the severity of the consequences should it occur, and the probability of the threat materializing. By categorizing threats as higher or lower risk, risk analysis makes it easier to know where to focus time, effort and resources during testing, and provides an important guideline for planning cost-effective mitigation strategies.
What constitutes a risk?
In short, anything that could potentially interfere with the smooth running of an application is considered a risk. It could be a design-level flaw that fails to consider common user behavior, an implementation-level issue that makes a program vulnerable to bugs, or a loophole in the security system that leaves the software open to attack. Risk can also come from dealing with new tools, hardware or technology, from having only limited access to testing resources, or simply from having a short timeline for completing the testing process prior to delivery.
Determining the severity of a risk
Considering that the severity of each risk is dictated primarily by the probability of it happening and the potential impact if it does, a great deal of experience and knowledge is required to pinpoint these values effectively. Below is a brief guide to how these all-important factors are determined during risk analysis.
Establishing the likelihood of a threat materializing requires technical knowledge of everything from the software architecture and the business model it supports through to any relevant legislation and contractual obligations. The following factors should also be considered:
- Size, scope and complexity of the application
- Experience and knowledge of the development team
- Maturity of the tools, technology and hardware being used
- The means and motivation of potential attackers
The impact of any potential threat, should it materialize, will play a significant role in determining its priority level during testing. The following are some of the major impacts that should be considered:
- Losing customers
- Financial losses
- Reputation damage
Quantitative vs qualitative approaches to assessing risk
The most common approaches to risk analysis are either quantitative or qualitative.
Quantitative approach – Allocates numerical values to both the likelihood and impact of a potential risk, with likelihood often given as a percentage and impact measured according to financial cost. These numbers are then multiplied together to get an overall risk value.
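As a rough sketch of the quantitative calculation, the snippet below multiplies likelihood by financial impact and ranks the results. The risk names, probabilities and cost figures are purely illustrative assumptions, not data from any real project:

```python
# Quantitative risk analysis: risk value = likelihood x impact.
# All names and figures below are illustrative assumptions.
risks = [
    {"name": "Payment API outage", "likelihood": 0.25, "impact_cost": 40_000},
    {"name": "UI rendering bug", "likelihood": 0.5, "impact_cost": 4_000},
    {"name": "Data breach", "likelihood": 0.125, "impact_cost": 800_000},
]

# Multiply likelihood (a probability) by impact (a financial cost)
# to get an overall risk value for each threat.
for risk in risks:
    risk["risk_value"] = risk["likelihood"] * risk["impact_cost"]

# Rank from highest to lowest exposure to guide testing priorities.
for risk in sorted(risks, key=lambda r: r["risk_value"], reverse=True):
    print(f'{risk["name"]}: ${risk["risk_value"]:,.0f}')
```

Here the low-probability, high-cost data breach comes out as the largest exposure, which is exactly the kind of prioritization signal the multiplication is meant to surface.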
Qualitative approach – Used when there is a lack of statistically valid data to conduct a quantitative analysis and is therefore based on knowledge-driven factors and anecdotal evidence. With this approach risks are simply ranked high, medium or low.
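A qualitative ranking is often captured as a simple likelihood-versus-impact lookup rather than a calculation. The sketch below assumes a three-level scale and an illustrative combination rule; the thresholds are an assumption, not a standard:

```python
# Qualitative risk analysis: combine two three-level ratings
# (likelihood and impact) into an overall high/medium/low rank.
# The combination thresholds here are illustrative assumptions.
LEVELS = ("low", "medium", "high")

def qualitative_rank(likelihood: str, impact: str) -> str:
    """Map a pair of low/medium/high ratings to an overall rank."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 3:   # e.g. high/high or high/medium
        return "high"
    if score >= 2:   # e.g. medium/medium or high/low
        return "medium"
    return "low"

print(qualitative_rank("high", "medium"))  # high
print(qualitative_rank("high", "low"))     # medium
```

Teams often draw this as a 3×3 matrix on a whiteboard; the function is just that matrix in code form.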
To be most effective, risk analysis should be considered a continuous process rather than a single step, as the identification, ranking and mitigation of vulnerabilities is vital throughout the entire lifecycle of software development.
So, if you want to keep your business off the software wall of shame, conduct a thorough risk analysis of any program you attach your name to; after all, time, money and reputations are on the line with every application you deliver.