This Insights article was contributed by Dr. Ulku Clark, director of UNCW’s Center for Cyber Defense Education (CCDE), and Dr. Geoff Stoker, CCDE-affiliated faculty.
Good patch management is key to solid cybersecurity. By better understanding some of the differences among vulnerability types, CEOs, CIOs, and CSO/CISOs will be better able to set good policies and to properly support the employees charged with executing those policies.
Two years ago, Equifax announced that it had suffered a data breach resulting in the download of personally identifiable information (PII) for over 145 million consumers. This summer, Equifax finally reached a class action settlement with the FTC that includes $425 million to help those affected by the breach. This is on top of the $1.4 billion in related costs that Equifax has reported incurring since the breach.
Less than a month after the 2017 announcement, the CEO, CIO, and CSO had all retired early and the (former) CEO was called to testify before the U.S. House of Representatives’ Committee on Financial Services – all because a known software vulnerability had not been successfully patched.
Figure 1: Key events from the 2017 Equifax data breach.
While the primary cause of Equifax’s breach was its lack of a comprehensive view of its global IT infrastructure, part of what emerged from the story was that leaders lacked an appreciation for the many differences among vulnerabilities. The following brief exchange during the House hearing best highlights this point:
Chairman Hensarling: “…What do you believe is a reasonable amount of time for a critical vulnerability patch to be pushed out and implemented on all affected applications?...”
Mr. Smith (former Equifax CEO): “…Our policy, our program at the time was within 48 hours and we did that….”
While it’s not entirely clear what Congressman Hensarling meant by “critical” or how firm Mr. Smith felt about the 48 hours, this exchange provides some interesting insight into how two high-level leaders from government and business think about patching. Both the question and the answer seem to imply that rapidly pushing patches to vulnerable machines is a sound strategy and that there is a standard timeframe within which these actions should occur.
At a very high level, all patches are the same: pieces of software created to alter existing pieces of software. The term patching, however, originates from the era when instructions were conveyed to machines via paper cards punched with holes. If an error was made and a hole was in the wrong spot, you could apply a patch over it and then punch a new hole somewhere else.
While patches might be created for functionality or performance reasons, patches that plug security holes are probably what most people think of first in this era of heightened concern for cybersecurity. The way patches can be applied, and how easily, varies – sometimes quite dramatically.
Some software can cleverly apply patches to both volatile (RAM) and non-volatile (disk) memory with no impact to users, though this is not common. Application patches might replace a component specific to the application and then require only that application be closed and restarted. Patches to dynamic system libraries used by multiple applications or operating system (OS) services likely require all affected applications or services to restart in order to load the new, patched library code into RAM. Patches to key OS services or the kernel will often require rebooting the system so that the OS can be loaded with the new fix in place.
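The escalating disruption described above can be pictured as a simple lookup from the scope of the patched component to the action required before the fix actually takes effect. The scope names and action descriptions below are hypothetical labels for illustration, not any vendor’s taxonomy:

```python
# Hypothetical mapping (illustrative labels only) from the scope of a
# patched component to the disruption needed before the new code is
# actually running in memory.
REQUIRED_ACTION = {
    "live-patchable": "none: applied to RAM and disk with no user impact",
    "application": "close and restart the patched application",
    "shared-library": "restart every application/service that loads it",
    "kernel-or-core-service": "reboot the system",
}

def action_for(scope: str) -> str:
    """Return the restart action a patch of the given scope requires."""
    return REQUIRED_ACTION[scope]

print(action_for("shared-library"))
```

The point of the sketch is that “apply the patch” is only the first step; the cost of the second step (restart or reboot) grows with the scope of what was patched.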
The types of patches described so far are relatively straightforward; and if we ignore compatibility concerns and potential second-order impacts to business operations, they can be accomplished by users (think smart phone) or admins (think enterprise devices – PCs/laptops/servers) relatively easily.
Patches to static libraries are a little different, as they require that applications using those libraries be recompiled from source code, and that (typically) means getting software developers involved. This can get tricky if the hole being plugged has been around for a while. The Apache Struts vulnerability at the heart of the Equifax breach is a good example. Since the hole existed in Struts versions dating from 2012, any software application created with a flawed version of Struts between 2012 and 2017 needed to be recompiled. This means digging code out of a company repository that could be up to five years old and which may have been written by someone no longer at the company, then recompiling it with the new version of Struts. Ignoring again concerns about compatibility and operational impact, it’s clear that this kind of patching is inherently more complex than what is typically thought of as “patching.”
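Even finding which applications must be recompiled is itself a query over build records. A minimal sketch, assuming a hypothetical inventory that recorded which Struts version each binary was built against; the application names and version bounds here are invented for illustration and are not the actual advisory’s affected ranges:

```python
# Hypothetical build inventory: binary name -> statically linked
# library version recorded at compile time. All data is illustrative.
BUILD_INVENTORY = {
    "dispute-portal": (2, 3, 5),
    "reporting-batch": (2, 5, 12),
    "consumer-api": (2, 3, 30),
}

# Illustrative vulnerable range; a real effort would use the bounds
# published in the vendor's security advisory.
VULN_MIN, VULN_MAX = (2, 3, 5), (2, 5, 10)

def needs_recompile(version):
    """A binary needs rebuilding if its embedded library version falls
    inside the vulnerable range (tuple comparison is lexicographic)."""
    return VULN_MIN <= version <= VULN_MAX

affected = sorted(app for app, ver in BUILD_INVENTORY.items()
                  if needs_recompile(ver))
print(affected)
```

Without such an inventory (which Equifax lacked), even this first step of the “patch” is guesswork.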
Even if they knew they were running the vulnerable version of Struts (they didn’t – and this was the primary cause of their downfall), it’s not clear how Equifax developers could have applied the patch safely and without serious operational disruption within 48 hours of notification. Because Equifax leaders lacked an appreciation for the important differences among vulnerabilities, the email sent out to over 400 personnel contained their standard policy requirement:
“…as exploits are available for this vulnerability and it is currently being exploited, it is rated at a critical risk and requires patching within 48 hours as per the security policy.”
Applying this blanket policy over a long period of time may have led to an overemphasis on timeliness tracking and to pressure for task close-out, which in turn may have contributed to the breakdown in infrastructure visibility. Requiring that every critical vulnerability be patched at all (rather than mitigated in other ways), let alone within an arbitrary standard like 48 hours, fails to take important differences in IT context into account.
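One way to encode that differentiated context is to make the remediation deadline a function of more than severity alone. The following is a toy sketch, with every category name and threshold invented for illustration rather than drawn from any real policy:

```python
# Hypothetical, illustrative policy: the deadline to patch *or mitigate*
# depends on both severity and how disruptive the fix is, instead of a
# flat 48 hours for everything rated critical.
DEADLINE_HOURS = {
    # (severity, fix type) -> hours to remediate or mitigate
    ("critical", "binary-patch"): 48,
    ("critical", "recompile"): 14 * 24,  # static library: rebuild, test, redeploy
    ("high", "binary-patch"): 7 * 24,
    ("high", "recompile"): 30 * 24,
}

def deadline(severity: str, fix_type: str) -> int:
    """Hours allowed before the vulnerability must be patched or
    otherwise mitigated (e.g., by a firewall rule) under this toy policy."""
    return DEADLINE_HOURS[(severity, fix_type)]

print(deadline("critical", "recompile"))
```

The specific numbers matter less than the shape of the policy: it admits that a critical hole in a statically linked library cannot responsibly be closed on the same clock as a drop-in binary patch, and it allows mitigation to stop the bleeding while the rebuild proceeds.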
To learn more about business-relevant cybersecurity info, mark your calendars for UNCW's Cybersecurity Awareness Colloquia on October 18th. Visit https://csb.uncw.edu/ccde/colloquia.html for details.
Robert T. Burrus, Jr., Ph.D., is the dean of the Cameron School of Business at the University of North Carolina Wilmington, named in June 2015. Burrus joined the UNCW faculty in 1998. Prior to his current position, Burrus was interim dean, associate dean of undergraduate studies and the chair of the department of economics and finance. Burrus earned a Ph.D. and a master’s degree in economics from the University of Virginia and a bachelor’s degree in mathematical economics from Wake Forest University. The Cameron School of Business has approximately 60 full-time faculty members and 20 administrative and staff members. The AACSB-accredited business school currently enrolls approximately 2,000 undergraduate students in three degree programs and 200 graduate students in four degree programs. The school also houses the prestigious Cameron Executive Network, a group of more than 200 retired and practicing executives that provide one-on-one mentoring for Cameron students. To learn more about the Cameron School of Business, please visit http://csb.uncw.edu/. Questions and comments can be sent to [email protected].