What Security Leaders Can Learn About Decision-Making
Bishop Fox’s Vincent Liu recently chatted with Richard Seiersen, General Manager of Cyber Security and Privacy at GE Healthcare, about decision-making, threat intelligence, the Internet of Things, and his new book, “How to Measure Anything in Cybersecurity Risk.”
You can read highlights of the interview in this Dark Reading piece. The long-form version is below.
The Art of Decision-Making
Vincent Liu: Rich, I want to start by talking to you about decision-making. I know you have an extensive and unique background in it; can you elaborate on that a little?
Richard Seiersen: I earned a Master in Counseling with an emphasis on decision-making ages ago. I focused on a framework that combined deep linguistic analysis with goal-setting to model effective decision-making. You could call it “agile counseling” as opposed to open-ended soft counseling. More recently, I started a Master of Science in Predictive Analytics. The former degree shaped how I frame decisions; the latter brings in more math to address uncertainty. Together they are a powerful duo, particularly when you throw programming into the mix.
VL: How has decision-making played a part in your role as a security leader?
RS: Most prominently, it’s led me to the realization that we have more data than we think and need less than we think when managing risk. In fact, you can manage risk with nearly zero empirical data. In the book “How to Measure Anything in Cybersecurity Risk,” we call this “sparse data analytics.” I also like to refer to it as “Small Data.” Sparse analytics are the foundation of our security analytics maturity model; at the other end sits what we term “prescriptive analytics.” When we assess risk with near-zero empirical data, we still have data, which we call “beliefs.” Consider the example of threat modeling. When we threat model an architecture, we are also modeling our beliefs about threats. We can abstract this practice of modeling beliefs to examine a whole portfolio of risk as well. We take what limited empirical data we have and combine it with our subject matter experts’ beliefs to quickly comprehend risk.
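To make “sparse data analytics” concrete, here is a minimal sketch in the spirit of the quantitative approach the book advocates: a subject matter expert supplies a probability that an event occurs this year plus a 90% confidence interval for the loss if it does, and a Monte Carlo simulation turns those beliefs into an expected annual loss. The function name and every number below are illustrative assumptions, not figures from the book.

```python
import math
import random

def simulate_expected_loss(p_event, ci_low, ci_high, trials=100_000):
    """Monte Carlo over a calibrated belief: p_event is the expert's
    probability the event occurs this year; (ci_low, ci_high) is their
    90% confidence interval for the loss if it does, modeled as a
    lognormal distribution (a common choice for loss magnitudes)."""
    # Recover lognormal parameters from the 90% CI (z-score 1.645).
    mu = (math.log(ci_low) + math.log(ci_high)) / 2
    sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.645)
    total = 0.0
    for _ in range(trials):
        if random.random() < p_event:  # does the event occur this year?
            total += random.lognormvariate(mu, sigma)
    return total / trials  # average loss across simulated years

# Hypothetical belief: 15% chance of a breach this year, with a
# 90% CI for the resulting loss between $50K and $2M.
print(f"Expected annual loss: ${simulate_expected_loss(0.15, 5e4, 2e6):,.0f}")
```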
VL: What about those folks who will object to this approach, stating that it leaves too much to uncertainty?
RS: To them, I say that we aim to highlight uncertainty in risk assessment as opposed to completely eliminating it. Our uncertainty, when measured correctly, guides our next action.
"If we believe we have too much uncertainty, then we can focus our efforts on gathering empirical data for that uncertainty."
We iterate in an agile fashion until the cost of data acquisition outweighs its value. Time and money are frequently wasted on unnecessary uncertainty reduction. Our book explores this subject at length.
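That stopping rule has a standard decision-analysis form: the expected value of perfect information (EVPI) caps what any further measurement can be worth, so a measurement that costs more than the EVPI is not worth buying. A hedged sketch, with all numbers hypothetical:

```python
# Illustrative only: for a binary decision, EVPI is the chance our
# current best choice is wrong times the loss we eat if it is.
p_wrong = 0.25            # probability our current decision is wrong
cost_if_wrong = 400_000   # loss if we act on the wrong belief

evpi = p_wrong * cost_if_wrong   # information is worth at most $100,000
measurement_cost = 150_000       # quoted cost of reducing the uncertainty

print(f"EVPI = ${evpi:,.0f}")
print("Worth gathering more data:", measurement_cost < evpi)  # False
```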
Useless Decomposition
VL: If you’re starting out as a leader, and you want to be more “decision” or “measurement” oriented, what would be a few first steps down this road?
RS: Remove the junk that prevents you from answering key questions. I prefer to avoid highs, mediums, and lows of any sort, what we call in the book “useless decompositions.” Instead, I try to keep decisions to on-or-off choices. When you have too much variation, risk can be amplified.
Most readers have probably heard of threat actor capability. This can be decomposed into things like nation-state, organized crime, etc. We label these “useless decompositions” when they are used out of context.
Juxtapose these with useful decompositions, which are based on observable evidence. For example, “Have we or anyone else witnessed this vulnerability being exploited?” More to the point, what is the likelihood of this vulnerability being exploited in a given time frame? If you have zero evidence of exploitability anywhere, your degree of belief would be closer to zero. And when we talk about likelihood, we are really talking about probability. When real math enters the situation, most reactions are, “Where did you get your probability?” My answer is usually something like, “Where do you get your 4 on a 1-to-5 scale, or your ‘high’ on a low, medium, high, critical scale?” A percentage retains our uncertainty. Scales are placebos that make you feel as if you have measured something when you actually haven’t. In the book, we say that this type of risk management based on ordinal scales can be “worse than doing nothing.”
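To show what a degree of belief updated by evidence can look like, here is a small hypothetical sketch, not taken from the book, that treats exploitation likelihood as a probability under a beta-binomial model; the prior parameters are assumptions chosen for illustration:

```python
def updated_belief(prior_a, prior_b, exploited, not_exploited):
    """Beta-binomial update: start from a Beta(a, b) belief about how
    often this kind of vulnerability is exploited in the time frame,
    then fold in observed outcomes to get a posterior mean."""
    a = prior_a + exploited
    b = prior_b + not_exploited
    return a / (a + b)

# Weak prior: little reason to believe exploitation is common (~10%).
print(updated_belief(1, 9, exploited=0, not_exploited=0))  # 0.10
# Evidence of in-the-wild exploitation shifts the belief upward.
print(updated_belief(1, 9, exploited=3, not_exploited=2))  # ~0.27
```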
VL: My takeaway is that the more straightforward and simple things are, the better. The more we can make a decision binary, the better. Take CVSS (the Common Vulnerability Scoring System): several numbers become an aggregate number that winds up devoid of context.
RS: The problem with CVSS is it contains so many useless decompositions. The more we start adding in these ordinal scales, the more we enter this arbitrary gray area.
"When it comes to things like CVSS and OWASP, the problem also lies with how they do their math."
Ordinal scales are not actually numbers. For example, let’s say I am a doctor in a burn unit. I can return home at night when the average burn intensity is less than 5 on a 1-to-10 ordinal scale. If I have three patients with burns that rank a 1, a 3, and a 10 respectively, my average is less than 5. Of course, I have one person nearing death, but it’s quitting time and I am out of there! That makes absolutely no sense, but it is exactly how most industry frameworks and vendors implement security risk management. This is a real problem. That approach falls flat when you scale out to managing portfolios of risk.
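The arithmetic behind that example is easy to reproduce. A toy sketch, illustrative only, contrasting the average-the-ordinals rule with a rule tied to the decision that actually matters:

```python
burns = [1, 3, 10]  # ordinal severity scores on a 1-to-10 scale

# The rule the ordinal framework implies: average, then compare.
average = sum(burns) / len(burns)       # 4.67 -> "go home"
go_home = average < 5

# A rule tied to the real decision: is anyone in critical condition?
critical_case = any(b >= 9 for b in burns)

print(f"average={average:.2f}, go_home={go_home}, critical={critical_case}")
# average=4.67, go_home=True, critical=True
```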
Threat Intelligence: Needed or Noise?
VL: The logical extension of your position on decision-making and the value of certain intelligence could read as an indictment of threat intelligence. How useful is threat intelligence?
RS: We have to ask—and not to be mystical here—what threat intelligence means. If you’re telling me it is an early warning system that lets me know a bad guy is trying to steal my shorts, that’s fine. It allows me to prepare myself and fortify my defenses (e.g., wear a belt) at a relatively sustainable cost. What I fear is that most threat intelligence data is probably very expensive, and oftentimes redundant, noise. The problem likely has to do with the distinctions being used in threat intelligence. In the field of decision analysis, we want to know if each distinction has “clarity, observability, and usefulness.” Clarity means unambiguous terms that stand alone without explanation. Observability means exactly that: can we see the evidence of the thing in question? Usefulness means, having seen the thing, is the action we should take clear? I am not suggesting we throw the baby out with the bathwater when it comes to threat intelligence; I merely think we need to clean house. Remove the useless decompositions and see what remains, then decide how much more adept it makes us at fighting the bad guys.
VL: Some threat intel services simply focus on whatever the bad guys are doing, but others are more specific to your digital footprint. Is that really threat intelligence, though? Many companies are providing that information under the umbrella of threat intel, and some of that information is highly suspect.
RS: Every company today faces the reality of an attack. But what do threats do specifically? A threat exploits vulnerabilities to steal treasure (value) or inflict other harm. If we pass that statement through the filter of “clarity, observability, and usefulness” (what the grandfather of decision analysis, Ron Howard, deems the “Clarity Test”), one thing stands apart for me. Vulnerability and treasure are readily measurable, whereas threat is more intangible. Intangibles are fine; they are often at the root of complex and interesting measurement challenges, like gauging the value of trust. When we have intangibles, we can decompose them to see where there are distinctions that pass the clarity test. Then, we can look at the cost of acquiring data based on those distinctions in relation to the value it has in reducing probable future loss. If my goal is to measurably improve my security program, I am laser-focused on this topic. I will focus on where I can get the most value in reducing loss.
VL: Where would you focus your energy then?
RS: For my money, I would focus on how I design, develop, and deploy products that persist, transmit, or manage treasure. Concentrate on the treasure; the bad guys have their eyes on it, and you should have your eyes directed there, too. This starts in design, and not enough of us who make products focus on it. Returning to threat modeling, this is where you can start protecting treasure earliest in the secure development lifecycle (SDL). Moreover, “secure in deploy starts in design.” This means considering early on the telemetry necessary for catching the bad guy once the product is in production. We need to arrive at a point where we design in telemetry and refine our deterministic and probabilistic monitoring during design and development. We need to steer toward products that are more self-aware and self-defending by default and, consequently, more future-proof. Of course, if you are dealing with the integration of legacy “critical infrastructure”-based technology, you don’t always have the tabula rasa of designing from scratch. One of the things we have yet to discuss, Vinnie, is the legacy reality of the Industrial Internet of Things (IIoT).
The Ins and Outs of IIoT
VL: You mean the integration of critical infrastructure with emerging Internet of Things technology, is that correct?
RS: Yes; we need to be thoughtful and incorporate the best design practices here. Also, due to the realities of legacy infrastructure, we need to consider “testing in” security. Ironically, practices like threat modeling can help us focus our testing efforts when it comes to legacy. I constantly find myself returning to concepts like the principle of least privilege and removing unnecessary software and services; in short, reducing attack surface where it counts most. Oldies, but goodies!
VL: When you’re installing an alarm system, you want to ensure it is properly set up before you worry about where you might be attacked. At least you have the peace of mind of knowing your home is as secure as you can manage. Reduce attack surface, implement secure design, execute secure deployments. Once you’ve finished those fundamentals, then consider the attackers’ origin.
RS: Exactly! As far as IIoT or IoT is concerned, I have been considering the future of risk as it relates to economic drivers. The hope of IIoT is optimization. This is particularly true as we move into more of a sharing-based economy with services like Airbnb and Uber. We profit as things stay in service, and we drive more value via partnership. Partnership looks a lot like sharing. I predict we will see this more in the industrial space; think shared manufacturing and the like. As a result, availability and optimization of resources will become key. For example, as long as the train is moving, the plane is flying, the windmill is generating power, and the car is available per SLA, I will receive a paycheck. This becomes interesting in health care, too, as it relates to wearables and health maintenance. You get paid while my health stays within certain real-time parameters, regardless of my location. Connectivity, and hence attack surface, will naturally increase due to a multitude of economic drivers. Transacting more business is predicated on more attack surface. That was true even in the analog days before electricity: if I wanted to sell more, I had to expose more value, potentially exposing treasure to untrusted sources. Now we have more devices, more users per device, and more application interactions per device per user. That is exponential growth in attack surface.
VL: And more attack surface signals more room for breach.
RS: As a security professional, I consider what it means to create a device with minimal attack surface that still plays well with others. I would also add that threat awareness should be more pervasive, individually and collectively. Minimal attack surface means less local functionality exposed to the bad guy, and possibly less compute on the endpoint as well. Push things that change, or need regular updates, to the cloud. Playing well with others means making services available for use and consumption; this can include monitoring from a security perspective. These two goals seem at odds with one another. Necessity then becomes the mother of invention. There will be a flood of innovation coming from the security marketplace to address the future of breaches caused by a massive growth in attack surface.
Security for Critical Infrastructure
VL: Switching gears, what has been your number-one priority since joining GE?
RS: GE makes and services industrial products. It also has an extensive IT infrastructure. I essentially function as the Chief Product Security Officer. We have a CISO; he is responsible for the administrative network. With that in mind, taking a talent-first approach to building a global team that spans device-to-cloud security is a priority. For security leaders who are transitioning into the industrial product domain, you need to hire a well-rounded mix of people. You need a solid Rolodex, and you should be hiring leaders with solid Rolodexes, too. First, you are hiring engineering-focused security pros for devices. These are a rare breed. If you can find security engineers with experience in your specific industrial domain, hire them. Furthermore, you cannot forget that this is product security and not IT security. IT security deals with more homogeneity and scale from a compute perspective; it deals with security as a shared service. Product security folks, on the other hand, are concerned with heterogeneity and the lifecycle of a set of products. The emphasis is on Secure by Design, Default, and Deploy, otherwise known as the SDL. You will also want some prodsec team members who understand threat modeling. And, yes, you need the more traditional IT types too, particularly on the cloud side. A quantitative risk management approach is needed as well. This is true for any type of security, but particularly true in the product sphere. Product security is the commercially intense side of security.
"When you add critical infrastructure to the blend, it is as security as security gets."
Personality Bytes
VL: Let’s revisit your career trajectory. What’s the 30,000-foot view of how you entered security and the path that you’ve taken?
RS: Security was a later interest for me; I was originally a classical musician and had transitioned into teaching music for a while as well. I also mentioned that my master’s degree capstone project was focused on decision analysis. It was through this study that I landed an internship at a company called TriNet, which was then a startup. They primarily support venture capital-backed companies; there were approximately 500 of them and 50 VCs by the time I left. My internship soon evolved into a risk management role with plenty of development and business intelligence. In terms of development, I was a very early LAMP (Linux, Apache, MySQL, PHP) stack user. At the time, I was more of a business guy just trying to get stuff done.
VL: So that was your epiphany. What came next for you?
RS: I went to work for nCircle. They had recently shifted into being a product company. I became a developer and created security tools. Two years later, I switched to Qualys. They needed someone to integrate their platform, from a web-services perspective, with other web services. I became a manager there before moving to Kaiser Permanente to initially run their vulnerability management program. I honed my interest in business intelligence and analytics at Kaiser, something I’d been intrigued by since my internship days. At Kaiser, I built out one of the industry’s first security data science and big data management teams. I also constructed their penetration testing practice and continued to expand their product security team. I have been blessed to have many of these same people stick by me through my career, all the way into GE.
VL: How did you land at GE Healthcare?
RS: GE wanted a domain-focused product security leader to build out a team with capabilities that could encompass both devices and the cloud. I had eight years of experience in health care and nearly a decade spent in startups in the Valley, half of that spent building security products. They co-located me at the GE Digital headquarters, which is focused on IIoT, data science, and the cloud. It was a great fit.
"All of that is how I landed where I am now. I genuinely believe it’s this cumulative experience – from decision-making to development."
VL: Thanks for talking to me today, Rich.
RS: No problem, Vinnie.