Securing Systems: Applied Security Architecture and Threat Models

How Much Risk to Tolerate?

As we have seen, different threat agents have different risk tolerances. Some attempt near perfect secrecy, some need anonymity, and some require immediate attention for success. In the same way, different organizations have different organizational risk postures. Some businesses are inherently risky; the rewards need to be commensurate with the risk. Some organizations need to minimize risk as much as possible. And, some organizations have sophisticated risk management processes. One only needs to consider an insurance business or any loan-making enterprise. Each of these makes a profit through the sophisticated calculation of risk. An insurance company’s management of its risk will, necessarily, be a key activity for a successful business. On the other hand, an entrepreneurial start-up run by previously successful businesspeople may be able to tolerate a great deal of risk. That, in fact, may be a joy for the entrepreneur.

Since there is no perfect security, and there are no guarantees that a successful attack will always be prevented, especially in computer security, risk is always inherent in the application of security to a system. And, since there are no guarantees, how much security is enough? This is ultimately the question that must be answered before the appropriate set of security controls can be applied to any system.

I remind the reader of a definition from the Introduction:

Securing systems is the art and craft of applying information security principles, design imperatives, and available controls in order to achieve a particular security posture. [1]

In my experience, it’s a great deal easier to make these difficult choices when one has a firm grasp on what is needed. A system that I had to assess was subject to a number of the organization’s standards. The system was to be run by a third party, which brought it under the “Application Service Provider Policy.” That policy and its standard were very clear: All third parties handling the organization’s data were required to go through an extensive assessment of their security practices. Since the proposed system was to be exposed to the Internet, it also fell under standards and policies related to protection of applications and equipment exposed to the Public Internet. Typically, application service provider reviews took two or three months to complete, sometimes considerably longer. If the third party didn’t see the value in participating or was resistant for any other reason, the review would languish waiting for their responses. And, oftentimes the responses would be incomplete or indicate a misunderstanding of one or more of the review questions. Though unusual, a review could take as long as a year to complete.

The Web standards called for the use of network restrictions and firewalls between the various components, as they change function from Web to application to data (multi-tier protections). This is common in web architectures. Further, since the organization putting forth the standards deployed huge, revenue-producing server farms, its standards were geared to large implementations, extensive staff, and very mature processes. These standards would be overwhelming for a small, nimble, poorly capitalized company to implement.
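As a sketch of how such a tiered standard can be made checkable, the permitted flows can be written down as data and any proposed connection tested against them. This is a minimal illustration; the tier names and the adjacency-only rule below are assumptions for the example, not the organization’s actual standard.

```python
# Hypothetical multi-tier flow policy: each tier may talk only to the
# next tier down (Web -> application -> data), mirroring the firewall
# restrictions between components described above.
ALLOWED_FLOWS = {
    ("internet", "web"),  # public traffic terminates at the web tier
    ("web", "app"),       # web tier may call the application tier
    ("app", "data"),      # application tier may query the data tier
}

def flow_permitted(src: str, dst: str) -> bool:
    """Return True if the (src, dst) flow crosses a permitted boundary."""
    return (src, dst) in ALLOWED_FLOWS

# A direct web-to-data connection skips the application tier: denied.
assert flow_permitted("web", "app")
assert not flow_permitted("web", "data")
```

Encoding the policy as data, rather than prose, makes exceptions explicit: granting one (as in the example that follows) means consciously adding a pair to the set.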

When the project manager driving the project was told about all the requirements that would be necessary and the likely time delays that meeting the requirements would entail, she was shocked. She worked in a division that had little contact with the web security team and, thus, had not encountered these policies and standards previously. She then explained that the company was willing to lose all the money to be expended on this project: The effort was an experiment in a new business model. That’s why they were using a third party. They wanted to be able to cut loose from the effort and the application on a moment’s notice. The company’s brand name was not going to be associated with this effort. So there was little danger of a brand impact should the system be successfully breached. Further, there was no sensitive data: All the data was eminently discardable. This application was to be a tentative experiment. The goal was simply to see if there was interest for this type of application. In today’s lexicon, the company for which I worked was searching for the “right product,” rather than trying to build the product “right.”

Any system connected to the Internet, of course, must have some self-protection against the omnipresent level of attack it must face. But the kind of protections that we would normally have put on a web system were simply too much for this particular project. The required risk posture was quite low. In this case, we granted exceptions to the policies so that the project could go forward quickly and easily. The controls that we actually implemented were just sufficient to stave off typical, omnipresent web attack. It was a business decision to forgo a more protective security posture.

The primary business requirements for information security are business-specific. They will usually be expressed in terms of protecting the availability, integrity, authenticity, and confidentiality of business information, and providing accountability and auditability in information systems. [11]

There are two risk tolerances that need to be understood before going into a system security assessment.

  • What is the general risk tolerance of the owners of the system?
  • What is the risk tolerance for this particular system?

Systems critical to the functioning of an organization will necessarily have far less risk tolerance and a far higher security posture than systems that are peripheral. If a business can continue despite the loss of a system or its data, then that system is not nearly as important as a system whose functioning is key. It should be noted that in a shared environment, even the least critical application within the shared environment may open a hole that degrades the posture of the entire environment. If the environment is critical, then the security of each component, no matter how peripheral, must meet the standards of the entire environment. In the example above, the system under assessment was both peripheral and entirely separate. Therefore, that system’s loss could not have had a significant impact on the whole. On the other hand, an application on that organization’s shared web infrastructure with a vulnerability that breached the tiered protections could open a disastrous hole, even if the application itself were completely insignificant. (I did prevent an application from doing exactly that in another, unrelated, review.)

It should be apparent that organizations willing to take a great deal of risk as a general part of their approach will necessarily be willing to lose systems. A security architect providing security controls for systems being deployed by such an organization needs to understand what risks the organization is willing to take. I offer as an example a business model that typically interacts with its customers exactly one single time. In such a model, the business may not care if customers are harmed through their business systems. Cross-site scripting (XSS) is typically an attack through a web system against the users of the system. In this business model, the owners of the system may not care that some percentage of their customers get attacked, since the organization won’t interact with these customers again; they have no need for customer loyalty.[2]

On the other hand, if the business model requires the retention, loyalty, and goodwill of as many customers as possible, then having one’s customers get attacked because of flaws in one’s commerce systems is probably not a risk worth taking. I use these two polar examples to illustrate how the organization’s operational model influences its risk stance. And, the risk tolerance of the organization significantly influences how much security is required to protect its systems.

How does one uncover the risk tolerance of an organization? The obvious answer is to simply ask. In organizations that have sophisticated and/or mature risk management practices, it may be a matter of simply asking the right team or group. However, for any organization that doesn’t have this information readily available, some investigation is required. As in the case with the project manager whose project was purely experimental and easily lost, simply asking, “What is the net effect of losing the data in the system?” may be sufficient. But in situations where the development team hasn’t thought about this issue, the most likely people to understand the question in the broader organizational sense will be those who are responsible and accountable. In a commercial organization, this may be senior management, for instance, a general manager for a division, and others in similar positions. In organizations with less hierarchy, this may be a discussion among all the leaders—technical, management, whoever’s responsible, or whoever takes responsibility for the success of the organization.

Although organizational risk assessment is beyond the scope of this book, one can get a good feel simply by asking pointed questions:

  • How much are we willing to lose?
  • What loss would mean the end of the organization?
  • What losses can this organization sustain? And for how long?
  • What data and systems are key to delivering the organizational mission?
  • Could we make up for the loss of key systems through alternate means? For how long can we exist using alternate means?

These and similar questions are likely to seed informative conversations that will give the analyst a better sense of just how much risk and of what sort the organization is willing to tolerate.
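The answers gathered in those conversations can be captured in a simple structured record, so they can be revisited when each individual system is later assessed. The field names and example answers below are hypothetical, chosen only to mirror the questions above.

```python
from dataclasses import dataclass

@dataclass
class RiskToleranceProfile:
    """One record of an organization's answers to the pointed risk questions."""
    willing_to_lose: str    # How much are we willing to lose?
    fatal_loss: str         # What loss would mean the end of the organization?
    sustainable_loss: str   # What losses can we sustain, and for how long?
    key_assets: list        # What data and systems deliver the mission?
    alternate_means: str    # Could we make up for a loss, and for how long?

# Illustrative answers only; a real profile comes out of the conversations
# with those responsible and accountable.
profile = RiskToleranceProfile(
    willing_to_lose="the full cost of the experiment",
    fatal_loss="the customer ledger",
    sustainable_loss="one region offline for 48 hours",
    key_assets=["customer ledger", "payment gateway"],
    alternate_means="manual processing for one week",
)
assert "customer ledger" in profile.key_assets
```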

As an example, for a long time, an organization at which I worked was willing to tolerate accumulating risk through its thousands of web applications. For most of these applications, loss of any particular one of them would not degrade the overall enterprise significantly. While the aggregate risk continued to increase, each risk owner, usually a director or vice president, was willing to tolerate this isolated risk for their particular function. No one in senior management was willing to think about the aggregate risk that was being accumulated. Then, a nasty compromise and breach occurred. This highlighted the pile of unmitigated risk that had accumulated. At this point, executive management decided that the accumulated risk pile needed to be addressed; we were carrying too much technology debt above and beyond the risk tolerance of the organization. Sometimes, it takes a crisis in order to fully understand the implications for the organization. As quoted earlier, in Chapter 1, “Never waste a crisis.” [12] The short of it is, it’s hard to build the right security if you don’t know what “secure enough” is. Time spent fact finding can be very enlightening.

With security posture and risk tolerance of the overall organization in hand, specific questions about specific systems can be placed within that overall tolerance. The questions are more or less the same as listed above. One can simply change the word “organization” to “system under discussion.”

There is one additional question that should be added to our list: “What is the highest sensitivity of the data handled by the system?” Most organizations with any security maturity at all will have developed a data-sensitivity classification policy and scale. These usually run from public (available to the world) to secret (need-to-know basis only). There are many variations on these policies and systems, from only two classifications to as many as six or seven. An important element for protecting the organization’s data is to understand how restricted the access to particular data within a particular system needs to be. It is useful to ask for the highest sensitivity of data, since controls will have to fit that level, irrespective of other, lower-classification data that is processed or stored.
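A data-sensitivity scale of this kind can be sketched as an ordered enumeration, with a helper that reports the highest classification a system handles, since the controls must fit that level. The four-level scale below is an assumption for illustration; as noted, real policies range from two to six or seven levels.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Ordered classification scale; higher value = more restricted."""
    PUBLIC = 1        # available to the world
    INTERNAL = 2
    CONFIDENTIAL = 3
    SECRET = 4        # need-to-know basis only

def highest_sensitivity(data_items: list) -> Sensitivity:
    """Controls are sized to the most sensitive data handled, not the average."""
    return max(data_items)

# A system mixing public and secret data must be controlled as secret.
assert highest_sensitivity(
    [Sensitivity.PUBLIC, Sensitivity.SECRET, Sensitivity.INTERNAL]
) == Sensitivity.SECRET
```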

Different systems require different levels of security. A “one-size-fits-all” approach is likely to over-specify security for some systems, or to under-specify it for most systems, especially key, critical systems. Understanding the system’s risk tolerance and the sensitivity of the data being held is key to building the correct security.

For large information technology (IT) organizations, economies of scale are typically achieved by treating as many systems as possible in the same way, with the same processes, with the same infrastructure, and with as few barriers to information flow as possible. In the “good old days” of information security, when network restrictions ruled all, this approach may have made some sense. Many of the attacks of the time were at the network and the endpoint. Sophisticated application attacks, combination attacks, persistent attacks, and the like were extremely rare. The castle walls and the perimeter controls were strong enough. Security could be served by enclosing and isolating the entire network. Information within the “castle” could flow freely. There were only a few tightly controlled ingress and egress points.

Those days are long gone. Most organizations are so highly cross-connected that we live in an age of information ecosystems rather than isolated castles and digital city-states. I don’t mean to suggest that perimeter controls are useless or passé. They are one part of a defense-in-depth. But in large organizations, certainly, there are likely to be several, if not many, connections to third parties, some of whom maintain radically different security postures. And, on any particular day, there are quite likely to be any number of people whose interests are not the same as the organization’s but who’ve been given internal access of one kind or another.

On top of this high degree of cross-connection, many people own many connecting devices. The “consumerization” of IT has opened the trusted network to devices that are owned, and not at all controlled, by the IT security department. Hence, we don’t know which applications are running on the devices that may be connecting (through open exchanges like HTTP/HTML) to our applications. We can authenticate and authorize the user. But how safe is the device from which the user is connecting? Generally, today, it is safer to assume that some number of the devices accessing the organization’s network and resources are already compromised. That is a very different picture from the highly restricted networks of the past.

National Cyber Security Award winner Michelle Guel has been touting “islands of security” for years now. Place the security around that which needs it rather than trusting the entire castle. As I wrote above, it’s pretty simple: Different systems require different security postures. Remember, always, that one system’s security posture affects all the other systems’ security posture in any shared environment.

What is a security posture?

Security posture is the overall capability of the security organization to assess its unique risk areas and to implement security measures that would protect against exploitation. [13]

If we replace “organization” with “system,” we are close to a definition of a system’s security posture. According to Michael Fey’s definition, quoted above, an architecture analysis for security is a part of the security posture of the system (replacing “organization” with “system”). But is the analysis to determine system posture a part of that posture? I would argue, “No.” At least within the context of this book, the analysis is outside the posture. If the analysis is to be taken as a part of the posture, then simply performing the analysis will change the posture of the system. And our working approach is that the point of the analysis is to determine the current posture of the system and then to bring the system’s posture to a desired, intended state. If we then rework the definition, we have something like the following:

System security posture: The unique risk areas of a system against which to implement security measures that will protect against exploitation of the system.

Notice that our working definition includes both risk areas and security measures. It is the sum total of these that constitute a “security posture.” A posture includes both risk and protection. Once again, “no risk” doesn’t exist. Neither does “no protection,” as most modern operating environments have some protections in-built. Thus, posture must include the risks, the risk mitigations, and any residual risk that remains unprotected. The point of an ARA—the point of securing systems—is to bring a system to an intended security posture, the security posture that matches the risk tolerance of the organization and protects against those threats that are relevant to that system and its data.
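A minimal sketch of this working definition, under the assumption of hypothetical field names: a posture modeled as its risk areas plus the security measures implemented against them, with residual risk as whatever remains unmitigated.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityPosture:
    """Posture = risk areas + mitigations + the residual risk left over."""
    risk_areas: set = field(default_factory=set)
    mitigated: set = field(default_factory=set)

    @property
    def residual_risk(self) -> set:
        """Risk areas with no implemented security measure."""
        return self.risk_areas - self.mitigated

# Illustrative risk-area names only; "no risk" never occurs, so the goal
# of an assessment is to shrink residual_risk to what the organization
# is willing to tolerate, not to empty it.
posture = SecurityPosture(
    risk_areas={"xss", "sql_injection", "credential_theft"},
    mitigated={"sql_injection"},
)
assert posture.residual_risk == {"xss", "credential_theft"}
```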

Hence, one must ascertain what’s needed for the system that’s under analysis. The answers that you collect to the risk questions posed above point in the right direction. An analysis aims to discover the existing security posture of a system and to calculate, through some risk-based method, the likely threats and attack scenarios. It then requires those controls that will bring the system to the intended security posture.

The business model (or similar mission of system owners) is deeply tied into the desired risk posture. Let’s explore some more real-life examples. We’ve already examined a system that was meant to be temporary and experimental. Let’s find a polar opposite, a system that handles financial data for a business that must retain customer loyalty.

In the world of banking, there are many offerings, and competition for customers is fierce. With the growth of online banking services, customers need significant reasons to bank with the local institution, even if there is only a single bank in town. A friend of mine is a bank manager in a small town of four thousand people in central California. Even in that town, there are several brick-and-mortar banks. She vies for the loyalty of her customers with personal service and by paying close attention to individual needs and the town’s overall economic concerns.

Obviously, a front-end banking system available to the Internet may not be able to offer the human touch that my friend can tender to her customers. Hopefully, you still agree that loyalty is won, not guaranteed? Part of that loyalty will be the demonstration, over time, that deposits are safely held, that each customer’s information is secure.

Beyond the customer-retention imperative, in most countries, banks are subject to a host of regulations, some of which require and specify security. The regulatory picture will influence the business’s risk posture, alongside its business imperatives. Any system deployed by the bank for its customers will have to have a security posture sufficient for customer confidence, one that meets jurisdictional regulations, as well.[3]

As we have noted, any system connected to the Public Internet is guaranteed to be attacked, to be severely tested continuously. Financial institutions, as we have already examined, will be targeted by cyber criminals. This gives us our first posture clue: The system will have to have sufficient defense to resist this constant level of attack, some of which will be targeted and perhaps sophisticated.

But we also know that our customers are targets and their deposits are targeted. These are two separate goals: to gain, through our system, the customers’ equipment and data (on their endpoint). And, at the same time, some attackers will be targeting the funds held in trust. Hence, this system must do all that it can to prevent its use to attack our customers. And, we must protect the customers’ funds and data; an ideal would be to protect “like a safety deposit box.”

Security requirements for an online bank might include demilitarized zone (DMZ) hardening, administration restrictions, protective firewall tiers between the HTTP termination, the application code, and the databases that support the application, robust authentication and authorization systems (which mustn’t be exposed to the Internet, but only to the systems that need to authenticate), input validation (to reject malformed and malicious input), stored procedures (to prevent SQL injection), and so forth. As you can see, the list is quite extensive. And I have not listed everything that I would expect for this system, only the most obvious.
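One control from this list can be illustrated concretely. Parameterized queries serve the same purpose as the stored procedures mentioned above: attacker-supplied input is bound as a value rather than spliced into the SQL text. The sketch below uses Python’s standard-library sqlite3 driver; the table, column names, and data are hypothetical.

```python
import sqlite3

# Hypothetical accounts table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (holder TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

def get_balance(holder: str):
    # The ? placeholder binds the value; input such as "' OR '1'='1"
    # stays a literal string and never becomes SQL syntax.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE holder = ?", (holder,)
    ).fetchone()
    return row[0] if row else None

assert get_balance("alice") == 100.0
assert get_balance("alice' OR '1'='1") is None  # injection attempt fails
```

Had the query been built by string concatenation, the second call would have matched every row; with binding, the malicious string simply matches no account holder.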

If the bank chose to outsource the system and its operations, then the chosen vendor would have to demonstrate all of the above and more, not just once, but repeatedly through time.

Given these different types of systems, perhaps you are beginning to comprehend why the analysis can only move forward successfully with both the organization posture and the system posture understood? The bank’s internal company portal, through which employees get the current company news and access various employee services, would, however, have a different security posture. The human resources (HR) system may have significant security needs, but the press release feed may have significantly less. Certainly, the company will prefer not to have fake news posted. But fake company news postings would have a much less significant impact on the bank than losing the account holdings of 30% of the bank’s customers.

Before analysis, one needs to have a good understanding of the shared services that are available, and how a security posture may be shared across systems in any particular environment. With the required system risk posture and risk tolerance in hand, one may proceed with the next steps of the system analysis.

  • [1] I have emphasized “a particular security posture.” Some security postures will be too little to resist the attacks that are most likely to come. On the other hand, deep, rigorous, pervasive information security is expensive and time consuming. The classic example is the situation where the security controls cost more than the expected return on investment for the system. It should be obvious that such an expensive security posture would then be too much. Security is typically only one of many attributes that contribute to the success of a particular system, which then contributes to the success of the organization. When resources are limited (and aren’t they always?), difficult choices need to be made.
  • [2] I do not mean to suggest that ignoring your customers’ safety is a particularly moral stance. My own code entreats me to “do no harm.” However, I can readily imagine types of businesses that don’t require the continuing goodwill of their customers.
  • [3] I don’t mean to reduce banking to two imperatives. I’m not a banking security expert. And, online banking is beyond our scope. I’ve reduced the complexity, as an example.
 