When FERC ordered NERC to develop a supply chain cybersecurity risk management standard in 2016, it listed four areas it wanted that standard to address: (1) software integrity and authenticity; (2) vendor remote access; (3) information system planning; and (4) vendor risk management and procurement controls. When FERC approved CIP-013-1 in Order 850 in 2018, it did so in large part because NERC had incorporated all four of those items into the standard.
The first of those items was addressed in two Requirement Parts: CIP-013-1 Requirement R1 Part R1.2.5 and CIP-010-3 Requirement R1 Part R1.6. FERC summarizes the latter on page 18 of Order 850 with these two sentences:
NERC asserts that the security objective of proposed Requirement R1.6 is to ensure that the software being installed in the BES Cyber System was not modified without the awareness of the software supplier and is not counterfeit. NERC contends that these steps help reduce the likelihood that an attacker could exploit legitimate vendor patch management processes to deliver compromised software updates or patches to a BES Cyber System.
In reading these sentences yesterday, I was struck by a huge irony: This provision is meant to protect against a “poisoned” software update that introduces malware into the system. It accomplishes this purpose by requiring the NERC entity to verify that the update a) was provided by the supplier of the product and not a malicious third party (authenticity), and b) wasn’t modified in some way before or while it was being downloaded (integrity).
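To make that concrete, here is a minimal sketch (mine, not anything prescribed by the standard) of what those two checks typically amount to in practice, assuming the vendor publishes a SHA-256 digest and a detached GPG signature alongside the update. The file names and digest value are hypothetical placeholders.

```python
# A minimal sketch of integrity and authenticity verification for a software
# update, assuming the vendor publishes a SHA-256 digest and a detached GPG
# signature. File names and the digest value are hypothetical placeholders.
import hashlib
import subprocess

def verify_integrity(update_path: str, published_sha256: str) -> bool:
    """Integrity: the bytes we downloaded match the vendor-published digest."""
    sha256 = hashlib.sha256()
    with open(update_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest().lower() == published_sha256.lower()

def verify_authenticity(update_path: str, signature_path: str) -> bool:
    """Authenticity: the detached signature verifies against a vendor signing
    key already imported into the local GPG keyring."""
    result = subprocess.run(
        ["gpg", "--verify", signature_path, update_path],
        capture_output=True,
    )
    return result.returncode == 0

# Hypothetical usage: the digest and file names would come from the vendor's
# security advisory or download page.
ok = (verify_integrity("update.bin", "<digest from vendor advisory>")
      and verify_authenticity("update.bin", "update.bin.sig"))
print("update verified" if ok else "DO NOT INSTALL")
```

Note that checks like these establish only that the file is the one the vendor signed and published; they say nothing about whether the vendor's own build process or testing was sound.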
Yet what have probably been the two most devastating supply chain cyberattacks since FERC issued Order 850? I'd say they're the SolarWinds and CrowdStrike attacks. (You may want to tell me that CrowdStrike wasn't actually a cyberattack, since it was caused by human error rather than malice. However, that is a distinction without a difference, as I pointed out in this post last summer.)
Ironically, both attacks were conveyed through software updates. Could a user organization (of any type, whether or not it was subject to NERC CIP compliance) have verified integrity and authenticity before applying these updates and thereby prevented the damage? No, for two reasons:
First, both updates were exactly what their developers had created. In the SolarWinds case, the update had been poisoned during the software build process itself, through one of the most sophisticated cyberattacks ever. Since an attack on the build process had seldom been attempted, and in any case had never succeeded on a large scale, it would have been quite hard to prevent[i].
What might have prevented the attack was an improvement in SolarWinds’ fundamental security posture, which turned out to be quite deficient. This allowed the attackers to penetrate the development network with relative ease.
In the case of CrowdStrike, the update hadn’t been thoroughly tested, but it hadn’t been modified by any party other than CrowdStrike itself. Both updates would have passed the authenticity and integrity checks with flying colors.
Second, both updates were delivered completely automatically, albeit with the user's pre-authorization. While neither SolarWinds nor CrowdStrike users were forced to accept automatic software updates, I'm sure most of those users trusted the developers completely; they saw no point in spending a lot of time trying to verify the integrity or authenticity of these updates. Of course, it turns out their trust was misplaced. But without some prior indication that SolarWinds didn't do basic security very well, or that CrowdStrike didn't always test its updates adequately before shipping them, it's hard to believe many users would have gone to the trouble of trying to verify every update. In fact, I doubt many of them do so even now.
It turns out that, practically speaking, verifying integrity and authenticity of software updates wouldn’t have prevented either the SolarWinds or the CrowdStrike incidents, since a) both updates would have easily passed the tests, and b) both vendors were highly trusted by their users (and still are, from all evidence). What would have prevented the two incidents?
Don’t say regulation. I’m sure both vendors have plenty of controls in place now to prevent the same problem from recurring. Regulations are like generals; they’re always good at re-fighting the last war.
What’s needed are controls that can prevent a different problem (of similar magnitude) from occurring. The most important of those controls is imagination. Are there products that will imagine attack scenarios that nobody has thought of before? I doubt there are today, but that might be a good idea for an AI startup.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.
[i] Aficionados of the in-toto open source software tool point out that it might have prevented the SolarWinds attack, although that assertion always comes with qualifications about actions the supplier and their customers would need to have taken. The benefit of taking those actions (or similar ones) is much more apparent now than it was at the time.
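For readers curious what in-toto actually does, here is a toy sketch of the underlying idea. This is my own illustration, not in-toto's real API: each build step emits a signed "link" recording cryptographic digests of what it consumed and what it produced, and the verifier checks that each step consumed exactly what the prior step produced. HMAC with a per-step secret stands in here for the real public-key signatures in-toto uses.

```python
# Toy illustration of the idea behind in-toto (NOT its real API): each build
# step emits a signed "link" recording digests of what it consumed and what
# it produced; the verifier checks that the chain is unbroken.
import hashlib
import hmac
import json

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_link(step: str, materials: list, products: list, secret: bytes) -> dict:
    """Record what a build step consumed (materials) and produced (products)."""
    body = {
        "step": step,
        "materials": {p: digest(p) for p in materials},
        "products": {p: digest(p) for p in products},
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "sig": hmac.new(secret, payload, "sha256").hexdigest()}

def verify_chain(links: list, secrets: dict) -> bool:
    """Check every link's signature, and that each step consumed exactly
    what the previous step produced."""
    prev_products = None
    for link in links:
        body = link["body"]
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(secrets[body["step"]], payload, "sha256").hexdigest()
        if not hmac.compare_digest(expected, link["sig"]):
            return False  # tampered link, or signed by the wrong party
        if prev_products is not None and body["materials"] != prev_products:
            return False  # a file changed between steps: a poisoned build?
        prev_products = body["products"]
    return True
```

Even a scheme like this only helps to the degree that the attested steps are granular and trustworthy; a compromised build machine will faithfully attest to its own poisoned output. That is exactly the kind of qualification the aficionados' claim comes with.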