Who Consumes Requirements?
Depending upon the skill of the receivers and implementers, it may be enough to write a requirement that says something on the order of, “Traffic to the application server must only be allowed from the web termination server. Only allow HTTP traffic. Disallow all other traffic from the web termination network to the application server.” In situations in which changes to the exposed networks must pass through significant architecture and design functions within the networking team, a very high-level requirement may be all that’s necessary. If the networking function already has significant skill and investment, along with good pre-existing networking architectures and equipment, the requirement can assume those capabilities. On the other hand, if web layers are an entirely new concept, more specificity may be required, even down to the particular equipment that will manage the layering.
The maxim for getting requirements to the right level of specificity is, “just enough to deliver an implementation that will meet the security goals.” In this example, the security architect is not so much concerned with how the restrictions are implemented as with ensuring that it will be difficult for an attacker to use the terminating network (DMZ) as a beachhead to attack the application server. The security architect is interested in preventing a loss of control of the bastion network (for whatever reason) from cascading into a loss of the entire environment, starting with the application server. That means traffic to the application server must be restricted to only those systems that should be communicating with it, with traffic originating from termination to application server, never the other way around. That’s the goal. Any networking method employed to achieve the goal is sufficient. Assuming people who understand how to achieve these kinds of security goals, the requirement can be written at a reasonably high level, as I did in the example above.
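The goal-oriented form of the requirement can even be captured as a checkable policy. The sketch below is purely illustrative, assuming hypothetical zone names and a default-deny posture; it expresses *what* must hold (only HTTP, only from termination to application server), not *how* any particular equipment enforces it.

```python
# Hypothetical sketch: the security goal expressed as a checkable policy,
# rather than as a prescription for particular equipment. All names are
# illustrative, not taken from any real product or standard.

ALLOWED_FLOWS = {
    # (source zone, destination zone, protocol)
    ("web_termination", "app_server", "http"),
}

def flow_permitted(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone, protocol) in ALLOWED_FLOWS

# The goal, not the mechanism: HTTP from termination to app server passes,
# while everything else, including the reverse direction, is denied.
assert flow_permitted("web_termination", "app_server", "http")
assert not flow_permitted("app_server", "web_termination", "http")
assert not flow_permitted("web_termination", "app_server", "ssh")
```

Any firewall, router ACL, or software-defined network that satisfies these checks meets the requirement; the policy leaves the mechanism entirely open.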
There’s another reason not to get too specific with security requirements, unless it is absolutely necessary. In situations in which security requirements have a good deal of organizational empowerment, perhaps even force, a highly specific requirement can be a technical straitjacket.
Technical capabilities change. I don’t want to write requirements that hamstring the natural and organic evolution and improvement of capabilities. In our example, if a particular piece of equipment had been specified and then, for some reason, was deprecated or even had become obsolete, a hard, specific requirement for a particular piece of equipment, perhaps configured in a very particular manner, might very well prevent a key system from being upgraded when the opportunity arises.
Besides, the wise security architect may not have the time to assess every shift and technical capability in the many areas in and around complex systems. That is, for large, complex systems, knowing every piece of networking equipment that’s involved, and specifying these in great detail, may be way too much effort and may not actually produce security value. I try to stay out of these sorts of details unless there’s a good reason to become involved. By writing my security requirements focused on the ultimate security goal, I can avoid dropping myself, or an organization, into a technical dead end.
Sometimes, there is a tendency for security teams to want to specify particular cryptography algorithms and, perhaps, key lengths. In the not-too-distant past, the MD5 hash algorithm was considered quite sufficient and cryptographically secure. Then, upon the publication of a single paper, the ability of MD5 to resist collision attacks was called into question. As of the writing of this book, MD5 is considered deprecated and should be replaced.
Consider a requirement that specified MD5 at a time when it was still considered sufficient protection. Not only would every system that had implemented MD5 be subject to change, but all requirements specifying MD5 would suddenly become obsolete. What if MD5 were specifically called out in a corporate standard or, even worse, in a policy? In large organizations, policies are only rarely changed, and only with approval at a fairly high level in the organization, often with several stakeholder organizations (for instance, a Legal Department). In response to the loss of a particular cryptography algorithm that has been specified in a policy, changing the policy and all the requirements to meet that policy becomes quite an expensive proposition.
But what if the standards writers had said something on the order of “an accepted, cryptographically proven hash algorithm implemented by a well-vetted, standard library”? Given such a requirement, some systems might have chosen MD5, while other systems might have used SHA-1, or perhaps SHA-256, or any number of algorithms that are currently considered cryptographically strong. In this case, only the systems that have used MD5 need to be changed. The standard doesn’t need to be rewritten. If the policy merely says something on the order of “follow the standard,” then the policy doesn’t need to be changed either. And, as new, more resistant, or stronger hash algorithms become available, implementers and maintainers are free to use something better. I want my security requirements to be as future proof as possible.
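The same principle can be seen in code. In the minimal sketch below, the requirement is expressed as “use an approved hash,” so the approved set can evolve without the calling code, the standard, or the policy changing. The particular set of approved algorithms is illustrative, not an endorsement.

```python
import hashlib

# Hypothetical sketch: a requirement phrased as "use an approved hash
# algorithm" lets the approved set evolve over time without rewriting the
# standard. The set below is illustrative; MD5 is deliberately absent.
APPROVED_HASHES = {"sha256", "sha384", "sha512"}

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hash data with an approved algorithm from the standard library."""
    if algorithm not in APPROVED_HASHES:
        raise ValueError(f"{algorithm!r} is not an approved hash algorithm")
    return hashlib.new(algorithm, data).hexdigest()
```

When an algorithm falls, it is simply removed from `APPROVED_HASHES`; systems already using a still-approved algorithm are untouched, which is exactly the future-proofing the goal-level requirement was designed to buy.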
Part of the art of writing security requirements is understanding, in some depth, existing capabilities. You may remember that when the ATASM process was introduced, one of the prerequisites to analysis was having a firm grasp of current capabilities. Understanding “current security capabilities,” that is, existing infrastructure and control systems, allows those security capabilities and mitigations to be factored into a system analysis. This understanding also helps to express security requirements at an appropriate level, such that the current capabilities will achieve their security objectives. If well understood, the existing capabilities can be assumed behind requirements, without the need to exhaustively describe them.
In small organizations in which there are only a few people involved, there may not be much separation between architecture and the details of the engineering implementation. In these situations, a security assessor may also be intimately involved in the details of implementation. The assessor and the implementer are often the same person. In this type of small, tightly bounded situation, there might not be a need for formal security requirements; the team can simply build the security in. This “just do it” approach often obviates the need for process documentation and process governance. Still, even in this situation, if customers are going to ask what security requirements were built into a system, it may be useful to have documented that security, even if the implementers don’t actually need any formal requirements. Perhaps customers will?
Once organizations reach any sort of organizational complexity, with stakeholders responsible for implementing security requirements becoming different from those who write them, then the security requirements become a document of record of the assessment. The security requirements become the formal statement about the future or finished security posture of a system. The requirements, taken in concert with the “mitigations,” that is, existing security features and defenses, describe the security posture of the system in a systematic manner. Since each requirement is tied to the set of attack vectors and surfaces, the requirements set out what has been defended and indicate what has not, as well. Consequently, if no other formal documentation is produced from a system assessment, the essential document will be the security requirements. The threat model can be inferred from a well-expressed set of security requirements.
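One way to see how a set of requirements doubles as a document of record is to imagine recording, alongside each requirement, the attack vector it addresses. The sketch below is a hypothetical illustration, with invented field names; it is not a prescribed format.

```python
from dataclasses import dataclass

# Hypothetical sketch: tying each requirement to the attack vector it
# mitigates, so the requirements collectively describe what has been
# defended and, by omission, what has not. Field names are invented.

@dataclass
class SecurityRequirement:
    requirement: str
    attack_vector: str
    status: str  # e.g., "planned", "implemented", "existing mitigation"

requirements = [
    SecurityRequirement(
        requirement="Restrict app-server traffic to HTTP originating "
                    "from the web termination network only",
        attack_vector="DMZ compromise used as a beachhead",
        status="planned",
    ),
]

def defended_vectors(reqs):
    """The attack vectors the requirements claim to defend against."""
    return {r.attack_vector for r in reqs}
```

From such a record, a reader can reconstruct much of the threat model: the listed vectors are what the assessment considered, and anything absent was either out of scope or accepted.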
Different stakeholders of the security requirements will need different levels of specificity. The level of specificity that comes out of an assessment, as we noted, depends highly on the capabilities and skill of the functions that will act on the requirements by turning them into implementations. In situations in which the assessor (security architect) will not be involved in the implementation, generating requirements that focus on goals, as opposed to how the requirements will get done, has proven quite effective as requirements are handed off. Even more so, expressing the security value of requirements will avoid technological traps, obsolescence, and dead ends.
When requirements cannot be met, for whatever reason, a risk analysis will help decision makers to prioritize effectively. It’s useful to remember that different stakeholders in a risk decision may need to understand the impacts expressed in terms of each stakeholder’s risks. We covered this topic somewhat in the chapter on risk (Chapter 4). Although there are many places in the security cycle where risk may need to be calculated and expressed, the prioritization of security requirements against resource constraints, budgets, and delivery schedules remains one of the most common. This is typically a place where the security architect, who has a fundamental understanding of risk and organizational risk tolerance, can offer significant value. When decision makers have built trust that the security function has a method for rating risk in a consistent and fair manner, they may come to depend upon those risk ratings in their decision-making process.
What the security person can’t do is repeatedly rate every situation as “high.” There always exists the possibility that sophisticated attackers, chaining attack methods and exploits together, with unlimited time and unlimited resources, can turn even minor issues into major impacts. What must be avoided is the tendency to string all the technical “worst-case scenarios” together, thus making nearly every situation a dire one. If the decision maker doesn’t simply dismiss the rating out of hand, certainly, over time, she or he will become numb to a repeated barrage of high and critical ratings.
Don’t try to get things accomplished by inflating risk ratings. Truly, not every situation has a fair certainty of serious consequences. Some method must be employed to separate the wheat from the chaff, the noise from the signal, the truly dangerous from that which can be tolerated, at least for a while.
Earlier, we proposed some derivative of Just Good Enough Risk Rating (JGERR). I’ve suggested JGERR not because it’s the best or the easiest, but because I’ve used it, at scale, and I know that it works. Any methodology based on reasonable understandings about what risk is and how it should be calculated will do. Whatever you use, it must produce reasonably consistent results.
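To make “reasonably consistent results” concrete, consider a deliberately tiny rating scheme: a fixed set of factors, each scored on a small shared scale, combined the same way every time. This sketch is illustrative only; the factors, weights, and thresholds are invented for the example and are not JGERR itself.

```python
# Hypothetical sketch of a JGERR-like scheme: score a few fixed factors
# on a shared 1-3 scale so that any two assessors, given the same inputs,
# produce the same rating. Factors and thresholds are invented here.

FACTORS = ("exposure", "exploit_ease", "impact")

def rate_risk(scores: dict) -> str:
    """Each factor is scored 1 (low) to 3 (high); total ranges 3-9.
    Determinism, not sophistication, is the property being illustrated."""
    total = sum(scores[f] for f in FACTORS)
    if total >= 8:
        return "high"
    if total >= 6:
        return "medium"
    return "low"

rating = rate_risk({"exposure": 3, "exploit_ease": 2, "impact": 3})  # "high"
```

The point is not the particular arithmetic but that the method is written down: the same situation, assessed twice, yields the same answer, which is what lets decision makers learn to trust the ratings.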
One of the keys to helping stakeholders understand how technical computer risks affect their area is to express the risk as it impacts those things that they are charged to defend. For instance, system administrators don’t want unauthorized people mucking with their systems. They don’t want unauthorized changes, because these changes will make more work for them. Senior managers seek to enhance the brand, and they generally don’t want bad marks from their peers on their ability to deliver on the organization’s goals. Telling senior decision makers that they might have someone inappropriately on a system may not get results. Likewise, telling a system administrator responsible for a few hundred of an organization’s thousands of hosts that there might be damage to the company’s customers, which might affect the trustability of the brand or lower the stock price, may not get the desired result. Probably neither of these audiences wants to know the details of a remote, unauthenticated buffer overflow that allows an attacker to shell out to the operating system command line at high privilege.
Personally, I save the technical details for the background notes at the bottom of the risk assessment. The details are there for anyone to understand how I’ve arrived at my risk rating. These details can bolster the logic of the risk analysis, if anyone’s curious. Instead, I highlight the impact upon the stakeholder’s goals and what I believe to be the likelihood of occurrence.
That doesn’t mean that I win every risk discussion. I don’t. There are many factors that have to be taken into account in these decisions. The likelihood of successful attack is only one consideration. I believe that it’s my job to deliver quality information so that security can be considered appropriately. As one vice president told me, “I guess it’s my job to stick my neck out.” Yeah, it probably is.
It is a truism that in order to build security into a system, security should be considered as early as possible, some security being considered even at the conceptual stage. Certainly, if an architecture is ready for implementation and it does not contain elements to support the security features that the system will require, that architecture must be considered incomplete. Worse yet are designs that make it impossible to build appropriate security functionality.
In fact, assessing an architecture for its security, creating the threat model after the system has been implemented, is usually too late. If the system has many future phases of development, perhaps security requirements can be gathered during one or more of the future phases? On the other hand, if the system is considered essentially complete, then what does one do with unfulfilled security requirements? Probably, one will end up writing security exceptions and risk assumptions. Although these activities do help your organization understand its shifting risk posture, they’re essentially nonproductive with respect to the actual system going into production. A risk assumption provides no security value to a real, live computer system.
The various phases of the secure development lifecycle (SDL) are described in depth in Core Software Security: Security at the Source, by James Ransome and Anmol Misra, including the chapter that I contributed (Chapter 9).4 Considerable attention was devoted to getting the security of an architecture correct. Security architecture, that is, the application of information security to systems, the art of securing systems, is best done as an early part of any development process, as well as being an ongoing conversation as architectures, designs, and implementations evolve. Certainly, there is very little that an architectural risk assessment of a system can do if the system cannot be changed.
Consequently, threat modeling is an activity for early in the development cycle. This is not to suggest that threat models are static documents. In fact, in high-velocity or Agile processes, the breadth and depth of the threat model will emerge throughout the development process. Architectures will change during the development process, which necessitates revisiting the system analysis and threat model as necessary. In Chapter 9 of Core Software Security: Security at the Source,5 I encourage deep engagement between those assessing and creating the security posture of a system and all the other implementers and stakeholders. Agile works best, in my opinion, when security subject matter experts are readily available during the entire process.
The Early Bird Gets to Influence
But there’s another, social reason for an early capture of security requirements. People, implementers, and teams need time to incorporate the material into their thinking. This is particularly true of complex matters, which security can sometimes be. In a way, it might be said that “the early requirement gets the worm.” When designing and building complex systems, the matters that have been under consideration the longest will seem like an essential part of the system. Surprises, especially surprises about possible technical misses and mistakes, can be difficult to accommodate, and all the more so the later in the development process that issues are discovered.