
Rating with Incomplete Information

It would be extraordinarily helpful if the standard insurance risk equation could be calculated for information security risks.

Probability × Annualized Loss = Risk

However, this equation requires data that simply are not available in sufficient quantities for a statistical analysis comparable to actuarial data that are used by insurance companies to calculate risk. In order to calculate probability, one must have enough statistical data on mathematically comparable events. Unfortunately, generally speaking, few security incidents in the computer realm are particularly mathematically similar. Given multivariate, multidimensional events generated by adaptive human agents, perhaps it wouldn’t be too far a stretch to claim that no two events are precisely the same? Given the absence of actuarial data, what can a poor security architect do?
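
Where the inputs do exist, the arithmetic itself is trivial. Here is a minimal sketch in Python of the classic annualized loss expectancy (ALE) form of the equation, with invented figures:

    # Classic annualized loss expectancy (ALE) arithmetic, invented figures.
    # SLE (single loss expectancy): expected dollar loss from one incident.
    # ARO (annualized rate of occurrence): expected incidents per year.

    def annualized_loss_expectancy(single_loss_expectancy: float,
                                   annual_rate_of_occurrence: float) -> float:
        """Risk, expressed as expected annual loss in dollars."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Hypothetical example: a breach costing $250,000 that comparable-event
    # data says happens about once every four years (ARO = 0.25).
    risk = annualized_loss_expectancy(250_000, 0.25)
    print(f"Annualized loss (risk): ${risk:,.0f}")  # $62,500

The equation is not the hard part; obtaining a defensible ARO for information security events is.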

Gut Feeling and Mental Arithmetic

Security architects generate risk ratings on a regular basis, perhaps every day, depending on job duties. Certainly, most practitioners make risk ratings repeatedly and regularly. There’s a kind of mental arithmetic that includes some or all of the elements of risk, factors in organizational preferences and capabilities, the intentions of the system, and so on, and then outputs a sense of high, medium, or low risk. For many situations referencing many different types of systems and attack vectors, a high, medium, or low rating may be quite sufficient. But how does one achieve consistency between practitioners, between systems, and over time?

This is a thorny and nontrivial problem.
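
One partial answer is to codify the mental arithmetic into a shared rubric, so that different practitioners apply the same weights at different times. The sketch below is illustrative only; the factor names, scales, and thresholds are invented, and a real organization would have to calibrate its own:

    # A deliberately simple, shared rating rubric. Factor names, scales,
    # and thresholds are invented placeholders, not a published scheme.

    FACTORS = ("exposure", "attacker_value", "vulnerability_severity", "impact")

    def rate(scores: dict[str, int]) -> str:
        """Each factor scored 1 (low) to 3 (high); returns high/medium/low."""
        total = sum(scores[f] for f in FACTORS)
        if total >= 10:
            return "high"
        if total >= 7:
            return "medium"
        return "low"

    print(rate({"exposure": 3, "attacker_value": 3,
                "vulnerability_severity": 2, "impact": 3}))  # high

A rubric like this does not make the ratings more *true*; it makes them more *repeatable*, which is the property at issue here.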

I don’t mean to suggest that one cannot approach risk calculation with enough data and appropriate mathematics. Jack Jones, the author of Factor Analysis of Information Risk (FAIR), told the author that the branch of mathematics known as casino math may be employed to calculate probability in the absence of sufficient, longitudinal actuarial data. Should you want it, the FAIR “computational engine” is a part of the published standards. For our purposes, it is sufficient to note that the probability term is difficult to calculate without significant mathematical investment. It isn’t too hard to calculate likely loss in dollars, in some situations, although it may be tricky to annualize an information security loss.
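
The FAIR engine itself is specified in the published standards; purely as a loose illustration of what “casino math” can mean, the sketch below Monte Carlo-samples invented frequency and loss ranges instead of relying on actuarial tables:

    import random

    # Loose illustration of Monte Carlo ("casino math") risk estimation.
    # The triangular ranges are invented stand-ins for calibrated expert
    # estimates; this is not the FAIR standard's computational engine.

    def simulate_annual_loss(trials: int = 100_000) -> float:
        losses = []
        for _ in range(trials):
            # Attacks per year: min 0, max 10, most likely 2 (assumed).
            frequency = random.triangular(0, 10, 2)
            # Loss per successful attack, in dollars (assumed range).
            loss_per_event = random.triangular(10_000, 500_000, 50_000)
            losses.append(frequency * loss_per_event)
        return sum(losses) / trials

    print(f"Mean simulated annual loss: ${simulate_annual_loss():,.0f}")

Even this toy version shows the cost: calibrated input ranges and many simulation runs, rather than a quick judgment.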

Annualizing loss may not be an appropriate impact calculation for information security, anyway. Annualizing a loss obviously works quite well for buying insurance. It’s also useful for comparing and prioritizing disparate risks. But we’re not talking about an ability to directly insure the risk of a computer system attack in this book. In order to create some risk treatment, some protection, periodic premiums are usually paid to a trusted third party, the insurer.

Generally, the cost of security technology occurs one time (with perhaps maintenance fees). It may be difficult to predict the useful life of technology purchases. The personnel needed to run the technology are a “sunk cost.” Unless services are entirely contracted, people are typically hired permanently, not on a yearly subscription basis (though some security firms are attempting a subscription model). In any event, this book is about applying computer security to systems such that they will be well-enough defended. The insurance, such as it may be, comprises the capabilities and defenses that protect the systems. Although logically analogous to insurance, the outlay structures for preventative system security follow a different cost model than paying for insurance.
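
A toy comparison, with invented figures, makes the difference in outlay structure concrete:

    # Toy comparison of the two cost models, with invented figures.

    def insurance_cost(years: int, annual_premium: float) -> float:
        """Recurring premiums paid to a third party."""
        return years * annual_premium

    def security_tech_cost(years: int, purchase: float,
                           annual_maintenance: float) -> float:
        """One-time purchase plus ongoing maintenance fees."""
        return purchase + years * annual_maintenance

    for years in (1, 3, 5):
        print(years, insurance_cost(years, 40_000),
              security_tech_cost(years, 100_000, 15_000))
    # In year 1 the premium is far cheaper; by year 5 the one-time
    # purchase plus maintenance totals less. The curves differ in kind.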

The problem with sophisticated math is that it may take too long to calculate for the purpose of assessment. Generally, during an analysis, the security assessor must repeatedly calculate risk over multiple dimensions and over multiple items. If there are numerous attack surfaces, then there are numerous risk calculations. It would be onerous if each of these calculations required considerable effort, and worse if the calculations each took an extended period. Any extended time period beyond minutes may be too long. If there’s significant residual risk[1] left over after current protections and future treatments are factored into the calculation, then the assessor must come up with an overall risk statement based upon the collection of risk items associated with the system. Taking a few hours to produce the overall risk is usually acceptable. But taking weeks or even months to perform the calculations is usually far too long.
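
The shape of that workflow can be sketched: rate each attack surface cheaply, then roll the items up into a single overall statement. The rating values and the “worst item wins” roll-up rule below are deliberately naive placeholders:

    # Sketch of the assessment loop: rate each attack surface quickly,
    # then roll the items up into one overall risk statement. The ratings
    # and the "worst item wins" roll-up rule are deliberately naive.

    ORDER = {"low": 0, "medium": 1, "high": 2}

    def overall_risk(residual_ratings: list[str]) -> str:
        """Overall risk for the system: the worst residual item."""
        return max(residual_ratings, key=ORDER.__getitem__)

    attack_surfaces = {
        "internet-facing API": "high",
        "admin interface (restricted network)": "low",
        "batch file import": "medium",
    }
    print(overall_risk(list(attack_surfaces.values())))  # high

Whatever the roll-up rule, each per-item rating must stay cheap enough to repeat across every attack surface within the time the assessment allows.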

Experienced security architects do these “back of the napkin” calculations fairly rapidly. They’ve seen dozens, perhaps hundreds, of systems. Having rated risk for hundreds or perhaps many more attack vectors, they get very comfortable delivering risk pronouncements consistently. With experience comes a gut feeling, perhaps an intuitive grasp, of the organization’s risk posture. Intimacy with the infrastructure and security capabilities allows the assessor to understand the relative risk of any particular vulnerability or attack vector. This is especially true if the vulnerability and attack vector are well understood by the assessor. But what if one hasn’t seen hundreds of systems? What does one do when just starting out?

Indeed, the computer security language around risk can be quite muddled. Vulnerabilities are treated as risks in and of themselves. Tool vendors often do this in order to produce some kind of rating based upon vulnerability scans. Typically, the worst possible scenario and impact are taken as the likely scenario and impact. Unfortunately, in real systems, the worst case is often far from the likely case.

As an example, take a cross-site scripting (XSS) error that lies within a web page. As was noted previously, depending upon the business model, an organization may or may not care whether its website contains XSS errors.[2] But how dangerous is an XSS error that lies within an administrative interface that is only exposed to a highly restricted management network? Even further, if that XSS can only be exercised against highly trained, highly trusted, reasonably sophisticated system administrators, how likely are they to fall for an email message from an unknown source with their own administrative interface as the URL? The attacker must know that the organization uses the vulnerable web interface. The attacker must somehow induce technically savvy staff to click a faulty URL, and must further hope that staff are logged into the interface such that the XSS will fire. Indeed, on many administrative networks, external websites are restricted such that even if the XSS error did get exercised, the user couldn’t be redirected to a malicious page, because of network and application restrictions. Since no URL payload can be delivered via the XSS, the exploit will most likely fail. In this scenario, there exist a number of hoops through which an attacker must jump before the attacker’s goal can be achieved. What’s the payoff for the attacker? Is surmounting all the obstacles worth attempting the attack? Is all of this attacker effort worthwhile when literally millions of public websites, reaching hundreds of millions of potential targets, are riddled with XSS errors that can be exploited easily?

I would argue that an XSS error occurring in a highly restricted and well-protected administrative interface poses considerably less risk, due to factors beyond the actual vulnerability: exposure, attack value, and the difficulty of deriving an impact. However, I have seen vulnerability scanning tools that rate every occurrence of a particular variation of XSS precisely the same, based upon the worst scenario that the variation can produce. Without all the components of a risk calculation, the authors of such software don’t have a complete enough picture to calculate risk; they are working with only the vulnerability and taking the easiest road in the face of incomplete information: assume the worst.
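
Restated as arithmetic: once exposure and attacker payoff enter the calculation, the same XSS variation should not rate the same everywhere. The weights below are invented for illustration only:

    # Invented weights illustrating why identical vulnerabilities should
    # not receive identical ratings once context is factored in.

    def contextual_score(base_severity: float, exposure: float,
                         attacker_value: float) -> float:
        """All inputs 0.0-1.0; higher means riskier."""
        return base_severity * exposure * attacker_value

    xss_base = 0.8  # the same XSS variation in both cases

    public_site = contextual_score(xss_base, exposure=1.0, attacker_value=0.9)
    restricted_admin = contextual_score(xss_base, exposure=0.1, attacker_value=0.3)

    print(f"public site: {public_site:.2f}")            # 0.72
    print(f"restricted admin: {restricted_admin:.2f}")  # 0.02

A scanner that sees only xss_base must report 0.8 in both cases; the context terms are what separate the two scenarios.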

In fact, what I’ve described above are the basic parts of an information security risk calculation: threat agent, motivation, capability, exposure, and vulnerability. These must come together to deliver an impact to the organization in order for there to be risk.

A threat is not a risk by itself. A vulnerability is not a risk in and of itself. An exploitation method is not a risk on its own; it becomes one only when a capable, motivated threat agent can exercise the exploit against a vulnerability that has been exposed to that particular threat agent’s methodology. This linking of dependencies is critical to the success of an attack. And, thus, understanding the dependency of each of the qualities involved becomes key to rating the risk of occurrence. But even the combination so far described is still not a risk. There can be no risk unless exploitation incurs an impact to the owners or users of a computer system. “Credible attack vector” was defined in Chapter 2:

Credible attack vector: A credible threat exercising an exploit on an exposed vulnerability.
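
That chain of dependencies can be expressed directly as a conjunction. The field names below are hypothetical; the point is that removing any single term leaves no credible attack vector, and hence no risk:

    from dataclasses import dataclass

    # Hypothetical field names; the conjunction is the point: remove any
    # one term and there is no credible attack vector, and hence no risk.

    @dataclass
    class AttackVector:
        threat_agent_active: bool    # a threat with motivation and capability
        exploit_exists: bool         # a known exercise of the vulnerability
        vulnerability_exposed: bool  # exposed to that agent's methodology
        causes_impact: bool          # exploitation harms owners or users

    def is_risk(v: AttackVector) -> bool:
        return (v.threat_agent_active and v.exploit_exists
                and v.vulnerability_exposed and v.causes_impact)

    # An unexposed vulnerability, however severe, yields no risk:
    print(is_risk(AttackVector(True, True, False, True)))  # False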

  • [1] “Residual risk” is that risk that remains after all treatments and/or mitigations have been applied. It is the risk that cannot or will not be adequately mitigated.
  • [2] An attack against the session depends upon other vulnerabilities being present (it’s a combination attack) or it depends upon the compromise of the administrator’s browser and/or machine, which are further hurdles not relevant to this example.
 