
The Case for Why Better Breach Transparency Matters

by David Walker

Cybersecurity experts are calling for a major shift in how companies handle data breaches and security failures, arguing that greater transparency — including detailed disclosure of how and why incidents occur — is essential if the industry hopes to meaningfully reduce cyber-risk.

At the upcoming RSAC Conference, threat research experts Adam Shostack and Adrian Sanabria will make the case for greater incident transparency and the need for structured feedback loops in cybersecurity, in a session aptly titled “A Failure Is a Terrible Thing to Waste: The Case for Breach Transparency,” scheduled for Monday, March 23.

Shostack, founder of security consultancy Shostack and Associates, and Sanabria, founder and principal researcher at The Defender’s Initiative, posit that the cybersecurity field is sorely lacking in formal processes for providing feedback after a major data breach or other type of security incident. This type of feedback, they argue, is critical to how other safety-motivated industries such as aviation, medicine, and public health operate.


In an interview with Dark Reading, the researchers cite incidents such as plane crashes or patient deaths in medicine, which are heavily scrutinized, with measures often being put in place to avoid a repeat of the scenario. 

In the majority of cases, however, the cybersecurity industry treats breach investigations as legal liabilities or shameful incidents to be hidden, rather than as lessons to be shared from which other professionals can learn.

Security Culture Change Needed

This culture works against creating an environment in which organizations and professionals can draw valuable insight from security breaches — insight that could prevent future incidents, Shostack says.

Instead of hiding most of the details or deflecting blame, the guiding principle of all breaches should be, “If you’ve made a mistake, admit you’ve made a mistake and tell us what happened,” he tells Dark Reading. 

However, organizations often are viewed as negligent after a security incident, even though research suggests that many successful attacks involve chains of small failures — missing patches, misconfigured tools, weak monitoring, inadequate testing — rather than comprehensive incompetency, Sanabria says. 

“It’s rarely one thing,” he tells Dark Reading. “There are dozens of controls that should have stopped the attacker and didn’t.”

If organizations disclose these small details of incidents instead of hiding them for fear of shame or blame, it can benefit the industry as a whole and help prevent future breaches, Sanabria says.


Laws and policies for how data breaches are treated in the US vary from state to state and organization to organization, the researchers say. Publicly traded companies, for instance, are required to disclose major security incidents in SEC filings, but only if they have a material impact on the company. 

There are a couple of key reasons that the cybersecurity industry as a whole isn't conducting diligent, mandated post-mortems on major breaches and incidents. One is the legal ramifications for organizations that may be found at fault and thus liable, financially or otherwise, for the damage an incident causes, Shostack explains.

This points to a difference in organizational culture — more specifically, the difference between engineers and lawyers, he says. “Lawyers in their ethical code have an ethical requirement to zealously pursue the interest of their clients,” Shostack explains. “Engineers, in contrast, are required to consider things like public safety.”

Typically, when a cybersecurity incident happens, an organization’s lawyers warn the CEO not to talk about it, for fear that the company might get sued, he says. But this is contrary to “the normal way that engineering works,” Shostack says.


“This isn’t what we do when a bridge falls down, when an airplane falls out of the sky,” he says. “When we have any other technologically mediated system failure, we talk about what happened and we learn from it.”

With no formal governance over how organizations handle security incidents, this difference in culture continues to keep many details around breaches shrouded in mystery, Shostack says.

Another reason is the current lack of federal regulatory support or requirements for breach transparency. There was a short-lived federal effort, the Cyber Safety Review Board (CSRB), that aimed to create a model similar to the National Transportation Safety Board to investigate major cyber incidents and publish real-time feedback on them.

The board did manage to issue several reports, but the Trump administration fired all of its members soon after taking office in January, at which point the CSRB was in the middle of investigating the breach of numerous US telecommunications companies by the China-backed APT Salt Typhoon. While the board theoretically still exists, no one sits on it, and thus it is not currently doing the work it was formed to do.

The Data Is Out There

That’s not to say there isn’t ample, publicly available data on major security breaches and incidents if one knows where to look. In fact, Sanabria has spent months reviewing public breach documents — congressional reports, regulatory filings, lawsuits, and after-action reports — and says “there is a pile of gold” in breach data that people tend to miss.

“There are a lot of extractable lessons that you can find that are literally sitting there waiting to be picked up to be seen,” he says. 

The challenge in analyzing this data is that breach narratives tend to get both overlooked and oversimplified. In the 2017 breach at Equifax, headlines focused on an unpatched Apache Struts vulnerability as the cause of the breach. But later congressional materials revealed deeper cultural and process failures, including breakdowns in internal communication and testing.

“What everyone remembers is Day One,” Sanabria says. “The deeper lessons often come 18 months later.” By that time, however, the breach is old news, and rarely do people “get to page 171” of a report summarizing the failures of the breach, he says.

There are examples of organizations taking it upon themselves to release detailed accounts of breaches for the public good. For example, after a ransomware attack in 2023, The British Library published a detailed after-action report acknowledging mistakes and outlining lessons learned. In Canada, the federal privacy commissioner released findings into a breach of PowerSchool education technology and its effect on various educational institutions, providing insights into systemic failures. 

Here at home, the US Federal Trade Commission also has published detailed complaints in breach-related cases. However, rarely do these public resources offer comprehensive breach details that provide enough feedback to help cybersecurity professionals learn from others’ mistakes, Sanabria says. “None of these sources go deep,” he tells Dark Reading. “They can’t tell you the narrative or the how of the breach.”

The Way Forward

Without better data and empirical evidence to inform breach prevention, the industry risks investing in what Sanabria calls “busywork generators” — tools and compliance activities that may not significantly reduce real-world risk.

“Every other industry that cares about safety builds feedback loops so they can get better,” Sanabria says. “Reducing risk is more of a gamble without data.”

Indeed, without formalized transparency and governance, the researchers warn, the industry will struggle to determine whether organizations are truly improving their response to and prevention of security incidents.

To promote breach transparency in a way that will foster improvements in cybersecurity, the pair support building structured, institutionalized mechanisms for breach transparency, potentially including anonymized reporting, delayed public disclosures, or regulatory safe harbors for good-faith transparency. The goal, they say, is not public shaming, but collective learning.

“Modern engineering is built on studying failure,” Shostack says. “We don’t have enough of that in cybersecurity.”


