
Getting Beyond Compliance

In my last post, I discussed compliance frameworks, arguing that they should be a starting point for our efforts to secure our networks, not a be-all and end-all goal.  Getting beyond compliance is the goal of this post.

I don’t wish to be taken as bashing compliance.  As I’ve previously discussed, compliance is a strong corporate motivator to exercise at least the minimum recognized security controls and thereby show due diligence.  Compliance frameworks also serve as a common language, ensuring that practitioners, academics, and business managers alike can reach a shared understanding.  Frameworks normally cover the most common situations and so reduce the amount of work required to develop a reasonably secure network.

The problem comes, however, when threat or technology changes outpace changes to the framework, or when our business requirements don’t fit neatly into the mold of common implementations.  In the latter two cases, a network developed with new technology or to support a non-standard business requirement could be deemed non-compliant; but is it insecure?  The answer: it depends.  I have personally seen reviews of new networks or network changes that flag a non-compliant architecture and recommend whether or not the resulting level of risk should be assumed.

All too often, these compliance reviews recommend disapproval due to non-compliance, with no attempt to explain what the resulting level of risk might be or how it might be mitigated or transferred.  I believe this reflects an inherent weakness in our ability as information security practitioners to explain and justify security expenses to management.  Managers generally understand risk and genuinely care about what is likely to threaten their desired business outcomes, whether it’s a competitor’s marketing strategy or a hacktivist’s disclosure of sensitive information.  They merely need to understand how a potential threat event would impact business outcomes and how likely that event is to occur.  Given that severity and probability, managers can make clear, sound decisions about risk mitigation, transference, or assumption.
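To make the severity-and-probability framing concrete, here is a minimal sketch of the kind of comparison a manager might make.  All figures and the specific control and insurance costs are hypothetical, chosen purely for illustration:

```python
# Hedged sketch: comparing risk responses by expected annual loss.
# Every number below is a hypothetical placeholder, not real data.

def expected_annual_loss(probability_per_year: float, impact_dollars: float) -> float:
    """Severity times probability: the simplest quantitative risk measure."""
    return probability_per_year * impact_dollars

# A hypothetical threat event: 10% annual likelihood, $500k business impact.
eal = expected_annual_loss(0.10, 500_000)  # $50,000/year exposure

# Mitigate: a control costing $30k/year cuts the likelihood to 2%.
cost_mitigate = expected_annual_loss(0.02, 500_000) + 30_000  # $40,000/year

# Transfer: hypothetical cyber insurance premium covering the impact.
cost_transfer = 45_000

# Assume: accept the risk as-is.
cost_assume = eal  # $50,000/year

best = min(("mitigate", cost_mitigate),
           ("transfer", cost_transfer),
           ("assume", cost_assume),
           key=lambda option: option[1])
print(best)  # ('mitigate', 40000.0)
```

The point is not the arithmetic, which any manager can do, but that the security practitioner must supply the probability and impact estimates in business terms before a decision like this is even possible.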

The weakness I have observed manifests most often as a risk-averse culture among information security professionals.  Many automatically assume that because something bad can happen, we must act to counteract it, and cost rarely seems to be a consideration.  The focus falls solely on the vulnerability rather than on the potential business outcome and the probability that the negative outcome will actually occur.  Information security professionals must become experts in risk, and equally expert in articulating that risk in a manner management can understand.

There are multiple risk frameworks available.  One of the most promising I have seen recently is Factor Analysis of Information Risk (FAIR) (for more information, see riskmanagementinsight.com/media/documents/FAIR_Introduction.pdf).  I’d be interested in hearing what other risk frameworks are in use today, and to what depth.
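FAIR's core move is to decompose risk into loss event frequency and loss magnitude and derive a loss distribution rather than a single number.  Below is a much-simplified sketch of that idea; the distribution choices and their parameters are hypothetical stand-ins, not part of the FAIR taxonomy itself:

```python
# Simplified FAIR-style sketch: simulate annual loss as
# (loss event frequency) x (loss magnitude per event).
# Distributions and parameters are hypothetical illustrations.
import random

random.seed(42)  # reproducible runs for this sketch

def simulate_annual_losses(trials: int = 10_000) -> list[float]:
    losses = []
    for _ in range(trials):
        # Frequency: events per year, from a hypothetical triangular estimate
        # (min 0, most likely 1, max 4), rounded to a whole event count.
        events = round(random.triangular(0, 4, 1))
        # Magnitude: per-event loss from a hypothetical lognormal estimate
        # (median around $60k); sum across the year's events.
        total = sum(random.lognormvariate(11, 0.8) for _ in range(events))
        losses.append(total)
    return losses

losses = sorted(simulate_annual_losses())
median = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
print(f"median annual loss ~${median:,.0f}, 95th percentile ~${p95:,.0f}")
```

A distribution like this gives management the severity-and-probability picture discussed above in one artifact: the likely case, the bad year, and everything in between.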
