Much airtime is given to various standards for information security and risk management, but how much value can really be derived from them? At what point do they cross the line from "useful" to "too much effort and cost"? How can you best leverage standards to improve quality and performance? These questions, and more, will be addressed in this session as we explore the most common standards and how to best leverage them in managing the operational risk portfolio.
1. The Strengths & Limitations of
Risk Management Standards
TOG Baltimore, July 20, 2015
Ben Tomhave
2. Let’s be frank…
Frank Gehry responds to critics during a press conference in Oviedo, Spain
Photo via: Faro de Vigo
https://news.artnet.com/in-brief/frank-gehry-gives-spanish-critics-the-finger-143262
4. The strength of standards is that
they provide a reasonable,
common starting point.
5. Key Limitations
By virtue of being generalized to a relatively broad audience…
1. Standards, and their associated frameworks, require
customization and are rarely directly implementable.
2. As a result, while standards do provide the starting point
for an effort, they still require expending resources to
achieve a desirable result.
6. What are we talking about?
• Standards related to cybersecurity and risk
management. Not protocols.
• Typically large, general-purpose works.
• Examples:
– ISACA’s COBIT 5
– ISO 31000 and 27000 series
– NIST SP/FIPS/etc.
– Standards from orgs like TOG (e.g., Open FAIR)
9. COBIT 5 Details…
• The primary standard is hundreds of pages
long, and overall is a collection of several
documents.
• “COBIT 5 for Risk” alone is 244 pages.
• This is incredibly unwieldy!
19. Lessons from NIST?
• There’s a LOT to the standards.
• There’s a lot of misunderstanding, too.
• You still need to do “stuff”…
• In fact, if under FISMA, you have a LOT to do.
• In private industry, take time to understand.
21. Closing thoughts
• Standards are useful, but no panacea.
• Standards can reduce some planning efforts,
but still require work.
• Semper Gumby!
22. Bonus Point!
Right-Sizing: Just how much do you need??
Is…
Data Value + System Value + Resilience/Defensibility
…generally adequate?
23.
24. Ben Tomhave @falconsview www.secureconsulting.net
tomhave@secureconsulting.net
Editor's notes
Let’s have a frank discussion, shall we?
I’ve reached the point in my career where I’m really starting to hate this topic of discussion. Risk management is not so hard as IT people make it out to be.
However, by virtue of being generalized to a relatively broad audience, there are a couple key limitations.
Standards, and their associated frameworks, require customization and are rarely directly implementable (I say "rarely" here because there are exceptions).
As a result, while standards do provide the starting point for an effort, they still require expending resources to achieve a desirable result.
Now, to be clear here, when I'm talking about standards as related to cybersecurity and risk management, I am not talking about protocol standards that are designed to improve interoperability. Rather, I'm talking about a handful of large, often general-purpose, standards or series of standards, such as COBIT 5, ISO 31000 and the 27000 series, the collected works of NIST, and, of course, standards from The Open Group such as Open FAIR and TOGAF (as well as, by extension, SABSA).
At this point, I think it's fitting to drill down into these samples to gain a better understanding of what it is we're talking about, and then we can, as time allows, open the floor to discussion.
First up, let's look at COBIT 5.
What do you suppose is your starting point for doing all of this <gesturing with hand toward screen>? If you guessed "massive amounts of customization," then you're absolutely correct. While at Gartner, we produced research comparing frameworks and methodologies for security and risk management, and our conclusion was that, while COBIT 5 can be an excellent resource, it requires fairly substantial expertise and effort to conform it to your organization. Moreover, it has largely grown up around the financial services industry, which means it can be somewhat obtuse when you try to fit it into a non-financial-services organization, a bit akin to ramming a large square peg into a small round hole.
Next up, let's look at the ISO series of publications. ISO 31000 in particular is often much-maligned, but for no good reason as far as I can tell, outside of people simply not understanding its intended purpose.
ISO 31000, contrary to what its critics claim, is not a standard in the sense of something to which you must strictly conform, but rather a general guideline that is meant to be followed by ancillary standards (such as 27005). Overall, it provides a general risk management process that is easily leveraged in constructing the foundations of a risk management program.
What I really like about ISO 31000 is how cleanly and clearly the model is presented in this simple flow-chart format. Of particular interest to me is the breakdown into Context, Assessment, and Treatment. Interestingly, this very basic breakdown highlights exactly where we see a lot of failures in risk management: people often try to skip over the Context stage and jump right into "risk assessment," even though you can't actually do risk assessment without first establishing context (in FAIR, or more correctly the old-school FAIR-lite, this context-setting is typically done as part of the scenario definition step).
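That Context → Assessment → Treatment ordering can be made concrete with a small sketch. Everything here is hypothetical (the class names, the 1-5 ordinal scales, and the likelihood × impact scoring are illustrative choices, not part of ISO 31000 itself); the point is simply that the treatment decision is gated on criteria established during context-setting, so skipping Context leaves Assessment with nothing to compare against.

```python
from dataclasses import dataclass

@dataclass
class RiskContext:
    """Context stage: scope and risk criteria must come first."""
    scope: str
    risk_appetite: int  # maximum tolerable risk score, set by the org

@dataclass
class Risk:
    name: str
    likelihood: int  # 1-5 ordinal scale (illustrative)
    impact: int      # 1-5 ordinal scale (illustrative)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def assess_and_treat(ctx: RiskContext, risks: list) -> dict:
    """Assessment, then a treatment decision gated on the context's appetite."""
    return {
        r.name: ("treat" if r.score > ctx.risk_appetite else "accept")
        for r in risks
    }

ctx = RiskContext(scope="customer data platform", risk_appetite=9)
decisions = assess_and_treat(ctx, [Risk("phishing", 4, 4), Risk("flood", 1, 3)])
# phishing scores 16 > 9 -> "treat"; flood scores 3 <= 9 -> "accept"
```

Note that `risk_appetite` lives on the context object, not on the risks: change the context and the same assessment yields different treatment decisions, which is exactly why context-setting can't be skipped.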
Using ISO 31000 as a starting point, which - by the way - has been almost universally adopted by the other major standards bodies (with the exception of ISACA's COBIT 5), we can then look at an actual implementation-oriented standard in ISO 27005, which is part of the Information Security Management System ISO 27000 series.
Note here that we now start to see a bit better detail emerge while still adhering to the general layout of 31000. However, in keeping with the key takeaway that standards do NOT equate to "no effort required," bear in mind that the #1 step in the ISO 27000 certification prep and implementation process is... scoping! Which means you still need to customize all of this to your environment.
Ok, pivoting away from standards oriented toward the private sector, let's take a look at NIST for a little bit. Allow me to preface this part of this discussion a bit by noting that NIST standards are like onions... they have many layers and may make you cry if not handled properly. :)
Here we see the big baddie, the Risk Management Framework. Our tax dollars at work. haha. But seriously... this doesn't look too daunting at first until you realize that each of these boxes (*gesturing*) has at least one or two standards behind it. Also, note that this is really a view of *system* risk management, not *information* (or cyber) risk management. For that, we want to look at SP 800-30 and 800-39, which drill us down into a more useful point of view for the purposes of this talk.
When I spoke with Dr. Ron Ross of NIST a few years back, he indicated that these standards are intended to be flexible enough to allow for the use of different risk analysis methods, including FAIR, which I found to be quite interesting. Within info risk mgmt circles, NIST had long been derided because of RMF, even though it turns out that RMF wasn't even the right process to evaluate.
Ok, so what can we learn from NIST? Well, first off, my trusty ax of "you still need to do stuff"... and, in fact, with the entire suite of NIST and FIPS standards, especially if under FISMA regulations as a federal agency, you have a LOT of work to do... that work can either look rote and bureaucratic, or it can be flexible and innovative…
Here’s the risk taxonomy within OpenFAIR. We can drill down into each specific box and get all “quanty” if we want, but I want to highlight three key points here.
First, FAIR is about as close to implementation-ready as a standard can be.
Second, FAIR can just as easily be used qualitatively as it can be used quantitatively.
Third, guess what? YOU STILL MUST DO WORK.
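Those three points can be illustrated with a minimal sketch of the top of the FAIR taxonomy, where risk decomposes into Loss Event Frequency times Loss Magnitude. The function name, the uniform ranges, and the example dollar figures are all hypothetical choices for illustration (FAIR practitioners typically use calibrated PERT/beta estimates, not uniform draws); the "work" the slide insists on is precisely coming up with defensible input ranges.

```python
import random

def simulate_risk_exposure(lef_min, lef_max, lm_min, lm_max, trials=10_000):
    """Monte Carlo sketch of annualized loss exposure.

    Draws Loss Event Frequency (events/year) and Loss Magnitude
    (cost/event) from uniform ranges and multiplies them, per the
    top level of the Open FAIR taxonomy (Risk = LEF x LM).
    """
    samples = sorted(
        random.uniform(lef_min, lef_max) * random.uniform(lm_min, lm_max)
        for _ in range(trials)
    )
    return {
        "p10": samples[int(trials * 0.10)],
        "median": samples[trials // 2],
        "p90": samples[int(trials * 0.90)],
    }

# Hypothetical scenario: 0.5-2 loss events/year, $10k-$250k per event.
result = simulate_risk_exposure(0.5, 2.0, 10_000, 250_000)
```

The same decomposition works qualitatively: replace the numeric draws with ordinal buckets (low/medium/high) for LEF and LM and the structure of the analysis is unchanged, which is the second point above.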
Bonus point: right-sizing - how much do you really need? If you can baseline relative data sensitivity and business importance, and then estimate how defensible (or resilient) the target environment may (or may not) be, then isn't that enough as a starting point? (allude to use of decision trees)
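A literal reading of the slide's "Data Value + System Value + Resilience/Defensibility" baseline can be sketched as a simple additive score. Everything here is an assumption layered on top of the slide: the shared 1-5 ordinal scale, scoring the third factor as a defensibility *gap* (so all three factors push in the same direction), and the tier thresholds are all arbitrary illustrative choices, not anything the talk prescribes.

```python
def rightsizing_score(data_value, system_value, defensibility_gap):
    """Illustrative right-sizing baseline.

    All inputs are 1-5 ordinal scores. 'defensibility_gap' rates how far
    the environment falls short of resilient (5 = very exposed), so a
    higher total always means more security effort is warranted.
    """
    total = data_value + system_value + defensibility_gap
    # Map the 3-15 total to a coarse control tier (thresholds are arbitrary).
    if total >= 12:
        return total, "high assurance"
    if total >= 7:
        return total, "standard"
    return total, "baseline"

# Hypothetical: sensitive data (5), moderate system (3), weak defenses (4).
total, tier = rightsizing_score(5, 3, 4)
```

A decision tree, as the note alludes to, would refine this by branching on each factor in turn rather than summing them, but the additive score is arguably adequate as a starting point, which is the slide's question.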