Gavin Millard, Deputy CTO and VP of Market Insights at Tenable, talks through one of the biggest security threats to businesses worldwide: ransomware. How can businesses mitigate threats in this landscape?
When you talk about the “Jay-Z rule of cyber”, what do you mean in relation to the current state of cybersecurity?
This has been a source of frustration for me for many years, observing numerous different attacks. Regrettably, behind every headline about a breach are known flaws. Examining any of the major breaches over the years, they typically involve an initial ingress point, or somewhere within the attack path, where known flaws exist that are easily exploited by attackers. Take, for example, the unfortunate incident with LastPass, which suffered a breach towards the end of last year. That breach, as I’ll elaborate shortly, unfortunately allowed attackers to penetrate LastPass’s cloud environment.
The details of how this occurred are quite intriguing. If you delve into the forensics of the attack, the initial point of entry exploited a vulnerability in a Plex server. For those unfamiliar, Plex is a multimedia server commonly used in homes for sharing videos with televisions. This vulnerability in Plex was actually identified by Tenable in 2020. The vulnerability itself wasn’t particularly severe, but it sat in software that many might not consider significant. What transpired in the attack was an exploitation of this vulnerability to access one of the developers’ home networks, move laterally within that network, gain access to their computer, and subsequently obtain keys to penetrate the LastPass environment. So, the initial point of entry was a known, albeit not critical, vulnerability, which was then exploited to escalate into the cloud environment. This pattern is commonplace. Consider JBS, the world’s largest meat producer. Their breach resulted in substantial impact.
Store shelves were left barren as they couldn’t supply meat, leading to a US$11 million ransom and the shutdown of their Australian production. MGM’s breach involved a social engineering attack targeting their Okta systems as the initial point of entry. In each of these breaches, known flaws and weaknesses were exploited. Now, considering the number of flaws out there, there’s an overwhelming number indeed. Just last year, 22,000 vulnerabilities were disclosed, with over half, 56%, classified as high or critical under the Common Vulnerability Scoring System (CVSS). However, only a fraction of these flaws are ever likely to be exploited in an attack. While many flaws are disclosed, not all are utilised by attackers; some are too complex, and others don’t grant the desired access.
But those flaws that are easy to exploit, offer the right level of privilege, and fit into the attackers’ strategy tend to persist for a long time. Hence, the Jay-Z rule applies: you’ve got 99 flaws, but an attacker only needs one. If we need to remediate vulnerabilities, let’s start with those an attacker is likely to target. Of all disclosed vulnerabilities, only about 3% are ever expected to be exploited. Last year, for instance, of the 22,000 vulnerabilities disclosed, only 2.7% were likely to be exploited.
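The figures quoted above can be sanity-checked with some quick arithmetic; the snippet below simply applies the rounded percentages from the interview to last year’s disclosure count.

```python
# Rough arithmetic on the figures quoted above (rounded, for illustration only).
disclosed = 22_000          # vulnerabilities disclosed last year
high_or_critical = 0.56     # share rated high or critical under CVSS
likely_exploited = 0.027    # share ever likely to be exploited

# Roughly 12,320 flaws rated high/critical, but only ~594 likely to be exploited.
print(f"High/critical: {disclosed * high_or_critical:,.0f}")
print(f"Likely exploited: {disclosed * likely_exploited:,.0f}")
```

The gap between those two numbers is the point of the Jay-Z rule: treating all 12,000-plus high/critical flaws as equally urgent spreads effort across twenty times more work than the attacker-relevant subset requires.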
How can organisations analyse which primary flaws could potentially be likely to make them vulnerable?
Indeed, it’s an exceedingly challenging task to undertake solo. With 22,000 vulnerabilities disclosed last year, staying abreast of those and identifying the ones actually being exploited is tremendously difficult. This is precisely why individuals turn to us and our machine learning algorithms, which help differentiate among the various vulnerabilities. We actually leverage about 9,000 different sources of threat data. As new vulnerabilities are disclosed, we analyse them, looking for indicators of their likelihood to be exploited. We also scrutinise that threat data to see if it’s currently being utilised. A prime indicator of a vulnerability’s likelihood of being exploited today is if it was exploited yesterday. We amalgamate all this information.
We perform this analysis nightly, as the beauty of machine learning is its lack of need for rest. So, we’re constantly analysing these vulnerabilities and prioritising them for our clients. Regarding specific use cases, we’ve observed that the vulnerabilities exploited across different environments are typically the same few. It’s not a matter of different vulnerabilities for financial institutions as opposed to SMEs. In reality, it’s more about the same 90 or 100 vulnerabilities that are commonly exploited. By focusing on these key vulnerabilities that we identify as the most critical, organisations can significantly reduce their risk of a breach.
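Tenable’s actual model is proprietary, but the idea of threat-informed prioritisation described above can be sketched in a few lines. Everything here is hypothetical: the field names, the weights, and the sample CVE labels are invented for illustration, and the key signal is the one Millard names, whether the flaw was exploited recently.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float               # base severity score (0-10)
    exploit_public: bool      # public exploit code exists (hypothetical signal)
    exploited_recently: bool  # seen in threat feeds (hypothetical signal)

def priority(v: Vuln) -> float:
    """Toy score: severity weighted up by exploitation evidence."""
    score = v.cvss
    if v.exploit_public:
        score *= 1.5
    if v.exploited_recently:  # "exploited yesterday" is the strongest indicator
        score *= 2.0
    return score

vulns = [
    Vuln("CVE-A", 9.8, exploit_public=False, exploited_recently=False),
    Vuln("CVE-B", 6.5, exploit_public=True,  exploited_recently=True),
    Vuln("CVE-C", 7.2, exploit_public=True,  exploited_recently=False),
]
for v in sorted(vulns, key=priority, reverse=True):
    print(v.cve_id, round(priority(v), 1))
```

Note how the medium-severity flaw with active exploitation (CVE-B) outranks the critical-but-unexploited one (CVE-A), which is the inversion of a pure CVSS-ordered queue that threat context makes possible.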
Once they find their biggest flaws, how can they begin to mitigate them?
To carry out this task effectively, we first need to distinguish between vulnerabilities that are likely to be exploited and those that are not. To do this, we require threat context. At Tenable, for instance, we utilise machine learning to predict which vulnerabilities are prone to exploitation. Once identified, these vulnerabilities must be located within your environment and then effectively addressed, either through mitigation or remediation. This requires operationalising the process of identifying and addressing these flaws. For me, it’s all about how you communicate the importance of cybersecurity, prioritising the identification and resolution of these issues. We often hear about security metrics, like the number of items blocked by a firewall or malware stopped by EDR, but these figures are somewhat superficial. As I mentioned, we measure the process instead, using effectiveness metrics.
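One way to read “effectiveness metrics” is measuring how well remediation keeps pace with the flaws that actually matter, rather than counting blocked events. The sketch below is an assumption on my part, not Tenable’s definition: it computes the share of likely-exploited flaws fixed within a remediation window, using invented sample data and an invented 30-day SLA.

```python
# Hypothetical remediation records: (likely_exploited, days_to_fix or None if still open)
records = [
    (True, 5), (True, 40), (True, None), (False, 90),
    (True, 12), (False, None),
]

SLA_DAYS = 30  # assumed remediation target for likely-exploited flaws

exploitable = [days for likely, days in records if likely]
fixed_in_sla = [d for d in exploitable if d is not None and d <= SLA_DAYS]
coverage = len(fixed_in_sla) / len(exploitable)
print(f"Likely-exploited flaws fixed within {SLA_DAYS} days: {coverage:.0%}")
```

Unlike a firewall block count, this number is actionable: if it stalls, you know exactly which open, attacker-relevant flaws are dragging it down.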