In writing an editorial on Mobile Threat Management there are many paths one could take, from an in-depth technology assessment aimed at identifying and mitigating potential threats to a more holistic approach in which the problem of threat management is discussed and dealt with within the traditional IT context of process, people and technology.
This piece takes the more generalized, holistic approach. It rests on the premise that using technology in a vacuum ignores the two most critical factors in the management of cybersecurity, namely process and people, which are in effect the major causes of security breaches.
The approach is basic, systematic and thoughtful rather than reactionary, fueled by market hype and hysteria. To understand how threats affect an organization, you must understand at an intimate level the technology landscape you operate in and, importantly, its “event horizon”, i.e. the moment at which a breach reaches a point of no return and spawns a series of cascading failures. The level of ignorance that exists around the technologies supporting an organization is surprising.
A systematic approach begins with the following concepts, each posed as a question you must be able to answer unequivocally in the affirmative, backed by the required level of process and people.
The Life of a (Data) Packet
Can you trace or map, at the packet level, data that enters or leaves your organization? That is, do you clearly understand the processes involved in its creation and termination, its ingress and egress, and its transformation as it traverses the physical and abstracted boundaries of the enterprise? This understanding is central: it lets you identify not only the data that is important and the systems responsible for managing it as it moves and morphs across the enterprise, but also the risks of losing it through unauthorized access, exfiltration or corruption. Lacking an understanding of your information model means your universe is undefined, and that equates to being vulnerable.
"We are playing in a zero-sum game therefore we need to return to the more fundamental practice of IT management and focus as much on process and people as we do on technology to manage threats"
Do you understand, from the principles of queuing theory and predictive modeling, the systems-performance characteristics that define your capacity management and planning strategies? Ultimately everything fails, technology predictably so, and consumer technology faster still. IT deals with systems and systems of systems, not pieces of technology. In a world of failing technology, we inadvertently create complexity that no one understands and cultivate the perfect environment for attackers to exploit that lack of understanding. How do we know we have architected the optimal suite of integrated, interoperable systems and considered its failure-mode effects?
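To make the queuing-theory point concrete, here is a minimal sketch (the rates and names are illustrative, not drawn from any particular system) of the classic M/M/1 model, which shows why response times degrade non-linearly as a system approaches its capacity limit:

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Classic M/M/1 single-server queue metrics.

    arrival_rate (lambda): requests per second arriving.
    service_rate (mu): requests per second the server can process.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    rho = arrival_rate / service_rate                  # utilization
    avg_in_system = rho / (1 - rho)                    # L: mean requests in system
    avg_response = 1 / (service_rate - arrival_rate)   # W: mean time in system (s)
    return {"utilization": rho,
            "avg_in_system": avg_in_system,
            "avg_response_s": avg_response}

# Response time grows non-linearly as utilization approaches 100%:
for load in (50, 80, 95):
    m = mm1_metrics(arrival_rate=load, service_rate=100)
    print(f"{m['utilization']:.0%} utilized -> "
          f"{m['avg_response_s'] * 1000:.0f} ms avg response")
```

Doubling load from 50% to near-saturation does not double response time; it multiplies it, which is why capacity planning cannot be an afterthought.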
Technology Portfolio Management
Do you have a catalogue that defines the life cycle of every component of a system and system of systems (SoS)? Such a catalogue details a rolling 24-month technology roadmap, listing technical specifications at the hardware and software level (including OS and firmware). More importantly, it details the relationships between components at the SoS level. This dependency mapping is key to understanding the impact on upstream and downstream systems and to managing obsolescence, an easy target for attacks. It also reduces variability in your architecture. Consider it a very advanced extension of the configuration management database typically associated with ITIL.
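As an illustrative sketch (the component names and end-of-support dates are hypothetical), such a catalogue can be modelled as a dependency graph, which makes it straightforward to find components past end-of-support and everything that transitively depends on them:

```python
from datetime import date

# Hypothetical component catalogue: each entry carries an end-of-support
# (eos) date and the components it depends on.
catalogue = {
    "payments-api": {"eos": date(2027, 6, 30), "depends_on": ["app-server"]},
    "app-server":   {"eos": date(2025, 12, 31), "depends_on": ["os-image"]},
    "os-image":     {"eos": date(2024, 10, 1),  "depends_on": []},
}

def impacted_by_obsolescence(catalogue: dict, as_of: date):
    """Return (obsolete, impacted): components past end-of-support, and
    those plus everything that transitively depends on them."""
    obsolete = {name for name, c in catalogue.items() if c["eos"] < as_of}
    impacted = set(obsolete)
    changed = True
    while changed:                       # propagate impact up the graph
        changed = False
        for name, c in catalogue.items():
            if name not in impacted and impacted.intersection(c["depends_on"]):
                impacted.add(name)
                changed = True
    return obsolete, impacted
```

One obsolete OS image drags its entire dependency chain into the risk assessment, which is exactly the upstream/downstream impact the catalogue exists to expose.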
Change to a system threatens the stability of its ecosystem. Can you effectively identify and quantify the impact of any change made to the architecture, such as adding, modifying or removing components of a system? Are such changes made non-disruptively and without undermining the integrity of the system? This concept speaks to the reliability and maintainability of your architecture, precursors to a redundant systems architecture that mitigates threats (fail-safe models and containerization).
People, People Everywhere
Operationalizing systems is not as inviting as designing and engineering them, yet operationalization is the critical success factor in a robust security posture. Do you possess a well-documented technical blueprint (as-built) and a current operating manual for each system? More importantly, does it document in detail the run-maintain tasks, linked to people’s skills, ensuring from an engineering perspective that each system is tuned and optimized to achieve its intended design goal? Within such a document you would record each task’s type, frequency, duration and required level of competency; these map directly to the technical skills needed to maintain a robust and secure environment.
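A minimal sketch of such a run-maintain record (the systems, tasks and competency levels are hypothetical) might look like this; rolling the tasks up yields the annual hours of each skill the environment actually demands:

```python
from dataclasses import dataclass

# Hypothetical run-maintain task record, per the operating manual:
# task type, frequency, duration and the competency required to do it.
@dataclass
class RunMaintainTask:
    system: str
    task_type: str        # e.g. "patch", "rule-review", "backup-verify"
    frequency_days: int   # how often the task recurs
    duration_hours: float # effort per occurrence
    competency: str       # e.g. "L1-operator", "L3-engineer"

tasks = [
    RunMaintainTask("firewall",     "patch",         30, 2.0, "L3-engineer"),
    RunMaintainTask("firewall",     "rule-review",   90, 4.0, "L3-engineer"),
    RunMaintainTask("mdm-platform", "backup-verify",  7, 0.5, "L1-operator"),
]

def skills_demand(tasks: list) -> dict:
    """Roll tasks up into annual hours required per competency level."""
    demand: dict = {}
    for t in tasks:
        hours_per_year = (365 / t.frequency_days) * t.duration_hours
        demand[t.competency] = demand.get(t.competency, 0.0) + hours_per_year
    return demand
```

The roll-up turns a maintenance document into a staffing argument: it shows, in hours, the competencies a secure run-maintain posture consumes.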
Finally, Game Theory and the Attacker’s Playbook
In summary, attackers typically know your landscape better than you do and, more importantly, they have the time, patience and know-how to exploit its weaknesses. IT organizations cannot operate their systems in ignorance. Breaches occur because, most of the time, organizations enable them; a technology response alone will not help and may even cloud the issue. Mobility, which includes consumer-grade technology, has made systems more complex and even more difficult to understand. Add these to your technology portfolio and you have a disaster in the making, made worse by multiple standards, varying degrees of security hardening, ignorance and so on. We are playing in a zero-sum game, therefore we need to return to the more fundamental practice of IT management and focus as much on process and people as we do on technology to manage threats.