Presumably, you will want to collect data on risk scores of systems (if you’re rating or scoring risk), the number of projects reviewed, the number of escalations, the number of exceptions, and the number of risk assumptions written. None of these alone is a particularly good measure of anything. However, by establishing baselines based upon your most seasoned and trusted high-performing architects, those who seem to get things done and are repeatedly sought out, you can get a feel for what good performance looks like.
I like to establish baselines of expectations against which I can measure. First, I need to have some architects whose performance I trust. Then, it will take time to establish baselines based upon the throughput of my experienced architects. Doing so will also avoid judgment based upon one or a few outlier projects because outliers always exist in any security architecture practice.
I take the time necessary to establish baselines based upon trusted performance. That means, of course, that I have one or more performers whom I trust. I get a sense that security requirements are being met. Certainly not all security requirements will be met, and not all requirements will drive to completion. Some requirements will drive to completion with some noise or friction. But when there’s a good flow of projects and requirement completion, I know I can start measuring.
Risk scores are interesting artifacts. First, these have to be gathered twice: before assessment, and again sometime around the final governance checkpoint or project go-live. Intuitively, one would expect risk scores to decrease. But this is not always what happens.
Take the example of a project that, before analysis, presents as wholly following organizational standards. This adherence to standards will cause the risk score to be low. Then, as the onion is peeled, it is discovered that one or more implementation details of the project present significant organizational risk and are not amenable to the organization’s standards. Such a situation is not common, but it does recur from time to time; I cited an example in a previous chapter. In this situation, the risk score is going to go up during the project delivery lifecycle. But the security architect is doing her or his job correctly: upon thorough evaluation, security issues are brought into visibility. That’s exactly the point of architecture analysis: to identify unmitigated attack surfaces.
The point is that risk scores won’t always go down, though that is the intuitive expectation. Assessments do sometimes uncover significant or even severe issues, and it is not always possible to mitigate all of the risk that the requirements address. Each of these outcomes is an indication that the security architect is actually doing the job correctly. Even so, in my experience, the aggregate risk across many projects, from start of project to delivery, should trend downward. Project teams don’t necessarily know what security requirements will be needed; that’s why there is a security architect, a subject matter expert, to perform an architectural assessment and threat model.
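The aggregate-versus-individual distinction above can be sketched in a few lines. This is a hypothetical illustration, not any standard tool: the score scale, project tuples, and function name are all my own assumptions. The idea is simply that one project’s score may rise (because analysis surfaced real risk) while the portfolio as a whole still trends downward.

```python
# Hypothetical sketch: each project carries a risk score captured at the start
# of assessment and again at delivery. One project may rise while the
# aggregate still falls. Scores and the scale are illustrative assumptions.

def aggregate_delta(projects):
    """Mean change in risk score from assessment start to delivery.

    projects: list of (initial_score, final_score) tuples.
    A negative result means aggregate risk trended downward.
    """
    deltas = [final - initial for initial, final in projects]
    return sum(deltas) / len(deltas)

portfolio = [
    (7, 3),  # typical: requirements drive the score down
    (6, 2),
    (2, 8),  # looked standards-compliant; analysis surfaced real risk
    (5, 3),
]
print(aggregate_delta(portfolio))  # negative: aggregate risk trends down
```

Note that the third project’s score rises, exactly the peeled-onion case described earlier, yet the portfolio average still moves in the right direction.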
Over time, if the relationships between the security architects and other team members are strong, you may see a downward trend in the number of security requirements written by the security architect. The teams will take the initiative and write the appropriate requirements for themselves. Several IT architects with whom I worked for many years began to finish my sentences, quite literally. Over the years of ongoing collaboration, these architects developed assessment skills that they readily applied to their architectures. When security responsibility starts being taken throughout the architecture practice, this, in and of itself, is a sign of success for the program.
Due to complexity, technical challenges, and dependencies, it can be very difficult to compare the timelines of the projects working their way through a security architect’s queue. Hence, the number of projects assessed is not a very meaningful measure. However, over many projects and a significant period of time, say every six months or over the course of a year, one would expect some of the projects for each architect to move to production. It is a red flag if projects go into the architect’s queue and they don’t come out again. It is also a red flag if every project completes successfully. Considering the complexity and difficulty of security architecture, it is almost impossible, working with a broad range of projects, not to encounter some challenges. Both 0% success and 100% success are indicators of some sort of failure. Anything approaching the minimum or maximum should be examined carefully. And, as noted before, by setting an expected baseline, a manager can get some sort of feel for whether an architect is performing successfully. Some projects should complete.
Which brings me to some red flag indicators for the astute manager of the security architecture program.
Smart people working collaboratively will disagree. There should be some amount of friction and some amount of conflict as teams try to juggle security against other priorities. An architect who never has an escalation and never needs help working through to a solution should be a worry. Obviously, an architect who constantly or repeatedly has conflicts is a management problem as well. There may very well be something amiss in that person’s style or communication skills. That would be no different from any other individual who caused more conflict than is normally encountered as people try to work together. Still, people will occasionally push back against a security requirement. This is normal and should be expected from time to time. If there is never any friction, something is wrong.
Along with escalations for decisions, occasionally, risk owners or other decision makers will not see eye to eye with security. At such times, the typical practice is to write a risk assumption so that the decision is formally made and can be referred to in the future. Again, like escalations, risk assumptions are a normal part of a well-functioning security practice. At an organization moving around 430 projects a year, with about twelve security architects, we experienced about a single risk assumption each year. If I were managing a program like that and significantly more than one or two risk assumptions were written in a year, that might be an indicator that something was amiss in my program.
Of course, not every security requirement will get into the version of the system for which it was written. The usual approach in this situation is to write a time-bound security exception with a plan and schedule for remediation. Like escalations, exceptions are normal. The security architect who writes no exceptions is a worry. On the other hand, if many requirements, or even most requirements, end up as exceptions, this may be an indication that communication is strained or that the architect has not been empowered properly. It could also be an indication of a problem with the amount of influence an architect has within an organization. Presumably, over enough projects and a significant length of time, one could set a baseline of the expected number of exceptions and measure against this. I would expect the trend to be relatively flat in a mature program, neither rising nor falling.
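The expectation that a mature program’s exception count stays flat, neither rising nor falling, can be expressed as a trend check. The sketch below is an illustration under my own assumptions: the per-quarter counts and the flatness threshold are invented, and the least-squares slope is computed by hand to keep the example self-contained.

```python
# Hypothetical sketch: track security exceptions written per review period
# and check whether the trend is roughly flat. Period data and the flatness
# threshold are illustrative assumptions.

def trend_slope(counts):
    """Least-squares slope of counts over equally spaced periods."""
    n = len(counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def is_stable(counts, threshold=0.5):
    """True when exceptions per period neither rise nor fall sharply."""
    return abs(trend_slope(counts)) <= threshold

exceptions_per_quarter = [4, 5, 3, 4, 5, 4]  # a mature, flat-trending program
print(is_stable(exceptions_per_quarter))     # → True
```

A steadily climbing series would fail the check, which is the signal that communication is strained or the architect lacks influence; a series collapsing to zero would warrant the same scrutiny as the architect who writes no exceptions at all.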
Measuring the success of the security architecture program is not a matter of simply collecting totals for a period of time. As we have seen, however, it is possible not only to measure the relative success of the program but even that of individual architects by maintaining metrics over periods of time and developing baselines. Some measures that intuitively may seem like failures, such as escalations and exceptions, are in fact a normal part of the business of computer security and are to be expected. Even so, extremes are possible indicators of issues. Still, it is possible to quantitatively get a feel for program effectiveness over time.
Of course, I’ve learned a tremendous amount about system architecture through my collaboration with the extraordinary architects with whom I’ve worked.