Whether navigating regulatory changes or confronting new threats, the connected device landscape is constantly evolving. Product security teams increasingly find themselves facing new requirements with fewer resources, and many struggle with a low signal-to-noise ratio as they try to prioritize threat response.
The stakes of getting it right have never been higher. In this second episode of Finite State’s new podcast, “The Internet of Threats,” Josh Corman, former Chief Strategist of the CISA COVID Task Force and founder of I Am The Cavalry, offers key insights into today’s product security teams: how to run them, how to get the most from scarce resources, and the evolving role of the Chief Product Security Officer. Josh also discusses the key role that a software bill of materials (SBOM) can play in creating transparency in the development process.
During this 23-minute episode, Josh and Eric Greenwald, General Counsel and Head of Cybersecurity Policy at Finite State, examine:
- How to protect an increasingly connected world
- The growing importance of product security roles
- Advice for discerning and prioritizing the severity of threat reports
- How to operate teams and control risk with fewer resources
All episodes of Finite State’s “The Internet of Threats” podcast can be heard on Spotify, Apple Podcasts, and Google Podcasts. Listen to Episode 2 in its entirety below:
Episode Guest: Josh Corman, former Chief Strategist, CISA COVID Task Force & Founder, I Am The Cavalry
Bio: Prior to joining the CISA COVID Task Force in July 2020, Josh served as Chief Security Officer and SVP at PTC, and as Director of the Cyber Statecraft Initiative at the Brent Scowcroft Center on International Security. Earlier in his career, he held positions of increasing responsibility at Sonatype, Akamai Technologies, The 451 Group, and IBM Internet Security Systems. Josh earned his B.A. at the University of New Hampshire.
Full Podcast Transcript:
Eric Greenwald: Josh, thank you very much for coming on the podcast, particularly on our inaugural interview episode. Really appreciate having you here.
Josh Corman: This will be fun.
Eric Greenwald: All right. Now, Josh, as you know, you are a cybersecurity expert of long standing with a lot of different credentials. We asked you to be the inaugural interview subject for the podcast because of some of the work you've done in the product security field, in particular your talk at RSA on the rise of the chief product security officer, which you did with Chris Wysopal almost a year ago, and the graduate course on product security that you teach at Carnegie Mellon, so that we can have a detailed conversation about what a product security team is and what the role of a chief product security officer is. But of course, your background also includes lots of in-depth work on cyber-physical security for medical devices, an enormous amount of work in other areas, including your work with I Am The Cavalry, and you are widely known as the godfather of the SBOM. All of this experience led to your recent work with the government as the Chief Strategist for the CISA COVID Task Force. What I wanted to do is give you an opportunity to talk a little bit about how all this work fits together.
Josh Corman: Wow. As you can see, I'm a fan of Marvel Comics and the like, and I like to run towards the fire. I think I've always kind of wanted to be a superhero; I just didn't have superpowers. But as the world came to depend on digital infrastructure for every aspect of our lives, well, we all know that software is hackable, and when you connect it to the internet, it's exposed. So, I've been on this journey of elevating consequences. I launched I Am The Cavalry almost nine years ago in recognition that this is where bits and bytes meet flesh and blood. We do a lot with medical, but it's really any industrial IoT or cyber-physical system. That was recognizing that the cavalry isn't coming, no one's going to save us, and we wanted to be what was missing in the world.

Fast forward, and they actually made a brand-new government agency called CISA to be a public-sector institution doing defense for critical infrastructure. And while it's still fledgling and finding its footing, who would have thought that my work with I Am The Cavalry, medical devices, and product security would get me to design and implement the CISA COVID Task Force, to keep hospitals up and running and make sure the vaccine supply chains were still saving Americans? Ultimately, during my time there, think of the bottom of Maslow's hierarchy of needs: we saw successful attacks on the water we drink, the food we put on our table, the oil and gas that fuels our cars and our homes, timely access to patient care for our families or ourselves during a pandemic, the schools our kids go to, municipalities running towns and cities. Stuff’s on fire.

So, there was already a trend behind that talk we did at RSA last year, on the difference between keeping bad guys out of your network and really ensuring the integrity, the trustworthiness, and the brand and reputational risks of the products that you put into the marketplace. I know we're going to have an entirely separate conversation about SBOM, but think about it; one of the lines we like to use is: we're all in a supply chain, and most of us are in the middle. As attacks and adversaries increasingly target that software supply chain, we need to have trusted and trustworthy relationships. The line I love that we got into the executive order, in the end, is that the trust we place in our digital infrastructure should be proportional to how trustworthy and transparent that infrastructure is, and to the consequences we will incur if that trust is misplaced. That's really been my through line.
Eric Greenwald: That's the recent executive order, 14028.
Josh Corman: Yeah. Not so recent. It’s over a year old now.
Eric Greenwald: That's right, actually. I think early May is when it'll hit a year. I've been tracking that; they're expected to release at least recommendations for the FAR Council on changes to the FAR, so we'll see where that lands. But you alluded to a conversation that we're going to have in the future on SBOM, and I'm looking forward to that. I think it's going to be really interesting to hear not just about where SBOM is going, but where it's come from, and to understand that it has a long history; it's not some Johnny-come-lately cybersecurity idea. So, on the question of product security, I want to ask: what has changed in the role of the chief product security officer since your RSA talk almost a year ago? But before we get to that, I want to at least touch on some of the trends and forces that have made product security roles increasingly important. You touched on this a moment or two ago, but I wonder if you could expand on it and get more specific about what product security folks are up against these days.
Josh Corman: I'd really encourage people to go watch the video from RSA with Chris Wysopal and myself on this topic. We also did an hour-long deep-dive Q&A that was recorded, or you can even join the CMU grad program for the CISO certificate, because we expand on a lot of this in my course. But I would say I love that William Gibson quote: the future is already here, it's just not evenly distributed. One important caveat is that very few people call this role the chief product security officer. So whether it's called that, whether it's a side duty or an emerging and rising duty for a CISO, or whether it carries a different title is less important. As the Cavalry, we successfully engaged regulators like the Food and Drug Administration on their pre-market and post-market guidance on cybersecurity for FDA-approved medical devices, and the regulatory scrutiny was increasing. We initially floated an IoT bill for Congress covering any federal procurement; it passed into law in December of 2020. The UK Code of Practice did something similar, the state of California did something similar, and there are some emerging companion documents in EMEA. As you start to see regulatory scrutiny, it becomes more of a boardroom issue; it becomes permission to ride for certain regulated industries, specifically critical infrastructure. And there have been some early leaders in medical devices or industrial control systems with this title, but it's the function, the collection of functions, that is becoming more important than, say, the title itself.
“The regulatory environment significantly lags the threat landscape.”
Eric Greenwald: And so, you reference regulatory changes. I'm assuming that part of it is driven by what is being required of product security teams and leaders. But to what extent is the change in the threat altering the landscape for product security?
Josh Corman: Yeah, the regulatory environment significantly lags the threat landscape. Very early in my career, I was worried about these things; I did see the trend of attackers starting to climb the stack and go after the application layer. I did not want to reinvent the entire security stack just for applications and software security. I think people likewise shared that concern, and there were a lot of pioneers in that space for app sec, but we weren't doing it strategically. I wrote the Rugged Software Manifesto to try to be a bit of a Hippocratic oath for software engineers, so they would know what their awesome and increasing responsibility was, but the manifest harm was not quite there yet. We were talking about what could happen, not what was happening.

But increasingly, even without intent, some of these software supply chain issues started to hurt cyber-physical systems. One of the watershed moments for healthcare, or really two, but the short version is: a week or maybe a couple of weeks before my congressional Health Care Industry Cybersecurity Task Force, there was a Java deserialization flaw in a single JBoss library in a single medical technology that took out Hollywood Presbyterian Hospital for a week. They didn't know what a JBoss was, and out of their 20,000 pieces of equipment, they couldn't answer: am I affected, and where am I affected? That really helped us push the idea of SBOM into congressional recommendations and ultimately FDA-required things, and then NTIA and Allan Friedman, but we'll get there some other day.

Prior to that, Billy Rios and several other researchers had found medical device flaws, but they had an adversarial relationship with FDA. We showed that, unlike other physical defects that need to wait for proof of harm, maybe an unmitigated pathway to harm was enough to trigger a corrective action. We used a lot of empathy and team-building with a very brave Suzanne Schwartz, and we got to a point where she realized we should be left of boom, that we could take a regulatory action. So they interpreted their law and their guidance and their authority to issue the first safety communication, for a bedside infusion pump that, without authentication, could empty a three-hour dose in 30 seconds and kill people. Why do we have to wait for people to die? Let's preserve the confidence of the public in the technology they already trust, even if it's not quite yet trustworthy.

So, a bit of a long answer. But the ability to demonstrate, in an empathetic and collaborative way, the potential for harm and to shatter institutional trust started getting us some early wins. And as other lawmakers and regulators better understood this, we call it being patiently impatient, but as we built trust with them, we wanted to be left of boom. We've had quite a few successes; every time you saw a public attack, it usually translated into more movement on some tech-literate policy.
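To make "am I affected, and where am I affected?" concrete: once you have an SBOM per product, that question becomes a simple lookup. Here's a minimal sketch in Python, assuming one CycloneDX JSON SBOM per product in a local directory; the component name and version set are hypothetical stand-ins for a real advisory, not details from the Hollywood Presbyterian incident.

```python
import json
from pathlib import Path

# Hypothetical advisory: which products ship an affected library?
# Assumes one CycloneDX JSON SBOM per product in ./sboms/ (illustrative layout).
AFFECTED_COMPONENT = "jboss-remoting"
AFFECTED_VERSIONS = {"4.2.0", "4.2.1"}

def affected_products(sbom_dir: str) -> list[tuple[str, str]]:
    """Return (product, version) pairs whose SBOM lists the affected component."""
    hits = []
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for component in sbom.get("components", []):
            if (component.get("name") == AFFECTED_COMPONENT
                    and component.get("version") in AFFECTED_VERSIONS):
                hits.append((sbom_path.stem, component["version"]))
    return hits

if __name__ == "__main__":
    for product, version in affected_products("sboms"):
        print(f"AFFECTED: {product} ships {AFFECTED_COMPONENT} {version}")
```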
“I'm a big fan of coordinated vulnerability disclosure programs … availing yourself of all willing allies to point out areas that you might have missed in your software development best efforts."
Eric Greenwald: Thinking about the threat reporting that any individual product security team may see in the course of a given day or week, I'm sure there are many, many people out there who feel like the signal-to-noise ratio is overwhelming. When you're looking at threat reporting, how do you interpret it? When you're talking about a flaw in a medical device that has the potential to kill somebody, that's pretty obvious. But do you have any general principles for interpreting threat reporting, so that people can understand more clearly its implications and the importance of acting on it?
Josh Corman: Yeah, I think that's pretty late in the story, in the strategy, and I'll just give you an unsatisfying blitz through a few superficial treetops here. When I started I Am The Cavalry, on our first birthday we wrote this thing called the Five Star Automotive Cyber Safety Framework, and later we made a companion with almost exactly the same elements, called the Hippocratic Oath for Connected Medical Devices. We were using language familiar to the target audience, but it's how I'd explain it to my neighbor. It has fancy names, and you can find it at iamthecavalry.org/oath, but for example, it says: all systems fail, and you have to handle failure across five dimensions. How do you avoid failure? Safety by design. How do you take help avoiding failure, without suing the helper? Coordinated vulnerability disclosure programs. How do you capture, study, and learn from failure? Tamper-evident forensic evidence capture. How do you respond quickly to failure? Security patches and updates, ideally over the air. And how do you contain and isolate failure, or fail safely? Do you separate critical systems from non-critical systems? In a lot of cases, these imply the really simple postures one might need towards cyber-physical systems.

I'm a big fan of starting with a threat model. Do you design and architect things to have the least potential for harm? Can a compromise of one area be contained and isolated so it doesn't allow cascading failures? In the context of a threat model, I'm a big fan of coordinated vulnerability disclosure programs. Notice I didn't say bug bounties, which can be branches and sequels to that, but availing yourself of all willing allies to point out areas that you might have missed in your software development best efforts. Do you have a PSIRT, a product security incident response team? Do you have the ability to ship security updates quickly in a modern software development pipeline, like a CI/CD pipeline, and automatically test your changes?

Can you respond to the barrage of new CVEs? Only about 3% of CVEs ever get exploited, but luckily CISA has been pushing out that Known Exploited Vulnerabilities (KEV) list, which can help you focus. Prior to that, I would use things like EPSS, or HD Moore's Law, which was the predecessor to EPSS, the Exploit Prediction Scoring System, from Michael Roytman and now FIRST. It showed the strong positive correlations that tell you, irrespective of your CVSS score, which vulnerabilities are most likely to be in that 3%. We have pretty good data science that helps us determine that.

So, when you have an OODA loop that lets you do a threat model to have more defensible, maintainable code; you have an SDLC that tries to anticipate and remove error; you have a CVD, or coordinated vulnerability disclosure, program to avail yourself of all willing allies who aren't afraid of retaliation; then when you get reporting, ideally quietly, you can triage it with something like EPSS or consult your threat model to know how bad it might be, and you have the ability to respond quickly and quietly. And when it's hair-on-fire and there's a new thing out there like Log4j, can you answer "am I affected, and where am I affected?" with SBOMs, in minutes instead of days or weeks? These start to bring sanity into it, instead of just throwing a bunch of technology at things. It's the right, artful use, and I've oversimplified, but does that partly answer your question?
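As an illustration of the triage Josh describes, here's a minimal sketch that puts KEV membership ahead of everything else and then sorts by EPSS probability. It uses the public CISA KEV JSON feed and the FIRST EPSS API; the feed URLs were current at the time of writing, and the 0.1 cutoff is an arbitrary illustrative threshold, not a recommendation.

```python
import json
import urllib.request

# Public data sources (verify current URLs before depending on them).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
EPSS_URL = "https://api.first.org/data/v1/epss?cve={cve}"

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def triage(cves: list[str], epss_cutoff: float = 0.1) -> list[str]:
    """Rank CVEs: known-exploited (KEV) first, then by EPSS probability."""
    kev = {item["cveID"] for item in fetch_json(KEV_URL)["vulnerabilities"]}
    scored = []
    for cve in cves:
        data = fetch_json(EPSS_URL.format(cve=cve)).get("data", [])
        epss = float(data[0]["epss"]) if data else 0.0
        scored.append((cve in kev, epss, cve))
    scored.sort(reverse=True)  # KEV membership dominates, then EPSS score
    return [cve for on_kev, epss, cve in scored if on_kev or epss >= epss_cutoff]

if __name__ == "__main__":
    # Log4Shell should rank at the top: it is on KEV with a very high EPSS score.
    print(triage(["CVE-2021-44228", "CVE-2020-0601"]))
```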
Eric Greenwald: It absolutely does. And I do think about this, because you've run through a number of different measures, models, approaches, and frameworks that product security teams can adopt. But, as you and I have previously discussed, not every team is equally resourced. There are plenty of teams that don't have a PSIRT, or don't have people with the expertise, or the labor, to do a lot of what arguably just flat-out needs to be done. So I'd love to ask: do you have any guidance for teams that are not well resourced? What are some of the things they absolutely have to focus their attention on? What are some of the steps they can take that would buy down a lot of risk with, relatively speaking, low effort?
Josh Corman: Well, I like to say everyone has a PSIRT; it might just not be written down, rehearsed, or staffed. In fact, in a lot of cases people won't have a team at first, they'll just have a procedure or a plan, and sometimes those are forged in fire when you get your first inbound disclosure. But you're going to be better prepared if you have some of these elements.

Not all products are created equal, either, so I usually encourage some risk stratification. I have a three-tiered model that I promote, with objective questions that let your executive stakeholders determine which products fit in which risk band, and different risk bands get proportional scrutiny. Maybe a tier two can have your own threat model, but a tier one has to have a third-party threat model. So the first thing is understanding the relative risk of the products you're working on, based on regulatory impact, fines, brand and reputational damage, and potential for harm. The second thing is: start somewhere, start anywhere. It could be a coordinated disclosure program and a rough idea of how you're going to handle these things. Even if you don't have a program or a staff, you can still have a plan for what to do when you get reports, so you avoid elective, unforced errors.

I absolutely like prioritizing, because not every CVE matters. CVSS scores are an anti-pattern; while they have a role, what's typically more relevant is the probability that a flaw will be one of that 3%, which is something like EPSS. There's another one I should have mentioned, because it's come out since we started those other journeys: it's called SSVC, which is CVSS backwards, for Stakeholder-Specific Vulnerability Categorization. You're making a decision tree for "should I care about this one?", based on things like: is there an exploit or proof of concept? How critical would it be to my environment? See the sketch after this answer.

So, risk management is about identifying the chosen few products, or flaws, or categories. But usually people look at the products they have, not the products they're making, and I think the best place to identify and buy down risk is in the architectures themselves. Threat model before you have code, instead of describing how bad a job you did after it's too late. If you do it well, that creates the opportunity to have a failure anywhere in the product without it being catastrophic; or, at the very least, getting a map of the terrain lets you respond to those inbound disclosures, or inbound Log4js, intelligently.
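SSVC is a decision tree rather than a score, and the tree below is a toy sketch of that idea, not one of the official SSVC trees published by CERT/CC and CISA; the inputs and outcomes are simplified, hypothetical labels.

```python
from dataclasses import dataclass

@dataclass
class VulnReport:
    exploit_public: bool   # is there a public exploit or proof of concept?
    network_exposed: bool  # is the affected product reachable over a network?
    safety_impact: str     # "none", "minor", or "hazardous" in this deployment

def decide(report: VulnReport) -> str:
    """Walk a small stakeholder-specific tree instead of reading a CVSS score."""
    if report.safety_impact == "hazardous":
        return "act-immediately"  # potential for physical harm trumps all
    if report.exploit_public and report.network_exposed:
        return "ship-out-of-band-fix"
    if report.exploit_public or report.network_exposed:
        return "fix-in-next-release"
    return "track"

# A public PoC against a network-reachable product, with minor safety impact:
print(decide(VulnReport(exploit_public=True, network_exposed=True,
                        safety_impact="minor")))  # ship-out-of-band-fix
```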
Eric Greenwald: I want to talk a little bit about SBOM. But before we turn to that, I want to ask the corollary to how you can effectively buy down the most risk for the least effort. Are there activities you see security teams engaged in that are effectively security theater, where they're spinning their wheels, expending effort, thinking or trying to show that they're buying down risk, when in fact they're buying down none, or very little?
Josh Corman: I mean, everything has some relative value. There are some that are pure theater, but I tend to have a positive message rather than too critical of one. Well, when everything's important, nothing's important. One of the worst clichés I find in security is "I'm aligning to the business," because it usually translates into "I'm abdicating my responsibility for the unique value proposition they hired me for, and I'm tired of losing, so I just claimed victory." Truly aligning with the business means you should understand how each of your stakeholders prioritizes the various products, because not everything is equally important, and then help them in that decision and triage process, because they might get it wrong. It also means knowing how your teammates across the organization are bonused and incentivized, so that your program is welcome. The anti-patterns I see are treating every vulnerability as a blocker, handing teams a mound of things to fix when most of them may not matter, and fighting harder, not smarter.

I'm also increasingly critical of the race to the bottom for pen tests and for bug bounties, because you end up with a high volume of low-importance issues. They're true issues, real issues, but if you don't have a sense of your own prioritization, you'll just be responding and reacting to the ones revealed to you, which may not be the most important. So I usually like to crawl, walk, run, and that's why I encourage merely a coordinated vulnerability disclosure program. Through Allan Friedman, our mutual friend, on an earlier project he did for NTIA, I helped write a one-page, maybe two-page, template for early-stage coordinated disclosure programs that avail yourself of the protectors and puzzlers before you introduce any sort of economic or financial motivation. Usually, that's enough for cyber-physical systems product companies. Famously, GM was going to dip their toe in the water, and I discouraged them from adding a cash prize; within 24 or 48 hours, they had over 100 submissions, 25 or so that were brand-new to them, and three of them were pretty serious. They said eventually they'd add a cash prize, but at this point they still haven't needed one. It's been a vibrant and valuable program that has not required introducing an economic, transactional element to the mix.
“Most of these really high-profile failures are somewhere in the software supply chain where you didn't write the software. So, it's not your fault, but it's still your problem. And I think a more sane approach is, are we conscious of what we're putting into these products? Are we looking for known vulnerabilities at the time we're selecting them? … We're only scratching the surface of the value of things like an SBOM and supply chain integrity. But it's certainly a more enlightened way to look at this.”
Eric Greenwald: Am I right in thinking that the software bill of materials fits right into that? Because, as part of the development process, you're actually creating some transparency into what's gone into the product, so that you can see into it instead of waiting until you deploy it and then having other people tell you: oh, by the way, you've included this bit of open-source code that's completely vulnerable.
Josh Corman: Yeah. This will take longer to explain in our other potential discussion on this, but we write very little code now. So, looking for bespoke, artisanal, possibly-false-positive flaws in the code you write, when it's maybe 5% or less of your total product, seems to be not following the Pareto principle. But it's not just how much of the code is not yours; it's also where the attacks manifest. They're increasingly manifesting deep in the supply chain: Struts 2, OpenSSL, Ripple20, Urgent/11, Log4j. Most of these really high-profile failures are somewhere in the software supply chain where you didn't write the software. So, it's not your fault, but it's still your problem. I think a more sane approach is: are we conscious of what we're putting into these products? Are we looking for known vulnerabilities at the time we're selecting them? That's the tactical use. But over time, what you're going to find is that we'll be more like Deming, more like Toyota supply chains, where you want to use fewer and better open-source suppliers, you want to use the least vulnerable version from those fewer suppliers, and you want to track which parts go where, so that when there is a problem, you can do a … recall. So, we're only scratching the surface of the value of things like an SBOM and supply chain integrity, but it's certainly a more enlightened way to look at this, because developers don't like break/fix. They want higher quality, higher performance, less entropy. There are aligned incentives in using fewer and better parts and tracking where they go. It's very DevOpsy, it's very Deming. It's very sane, and I hope we get there as soon as possible.
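Tracking which parts go where is what turns a recall from an investigation into a lookup. Here's a minimal sketch that builds a reverse index from component to products, under the same illustrative assumption as the earlier example: one CycloneDX JSON SBOM per product in a local directory.

```python
import json
from collections import defaultdict
from pathlib import Path

def build_parts_index(sbom_dir: str) -> dict[str, set[str]]:
    """Map each component's 'name@version' to the products that ship it."""
    index: dict[str, set[str]] = defaultdict(set)
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for component in sbom.get("components", []):
            key = f"{component.get('name')}@{component.get('version')}"
            index[key].add(sbom_path.stem)
    return index

if __name__ == "__main__":
    index = build_parts_index("sboms")
    # A "recall" of a specific part is now one dictionary lookup:
    print(sorted(index.get("log4j-core@2.14.1", set())))
```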
Eric Greenwald: Excellent. Well, Josh, thank you so much for taking the time to talk with us. This has been Josh Corman, the founder of I Am The Cavalry and the former Chief Strategist for the CISA COVID task force. Josh, thank you so much. In the show notes, we’ll include links to all the documents and other materials that Josh is referencing here so that you guys can take a look at them in case you need to get more information. Josh, thank you so much and we look forward to having you come back to talk about SBOM in more detail.
Josh Corman: Let’s do it.