If you’ve ever read about cybersecurity, you’ve likely run into the term “zero-day” once in a while to describe vulnerabilities that have been exploited by hackers. You’ll also quickly find that these tend to be the deadliest. What they are and how they work has already been discussed succinctly by my colleague Simon Batt.
But as you get deeper into the subject, you’ll discover some things you might rather not have known, and you may begin to think twice about everything you run on your devices (which isn’t necessarily a bad thing). Cybersecurity studies such as this research from the RAND Corporation (a think tank with roots in the U.S. Armed Forces) demonstrate that zero-day exploits have many ways of showing us just how fragile our digital world is.
Zero-Day Exploits Aren’t That Hard to Make
The RAND study confirms something many programmers who have dabbled in proof-of-concept hacking have long suspected: it doesn’t take very long to develop a tool that exploits a vulnerability once it’s been found. Quoting the study directly:
Once an exploitable vulnerability has been found, time to develop a fully functioning exploit is relatively fast, with a median time of 22 days.
Keep in mind that this is the median, not the average: half of all exploits are finished even faster than that. Many are completed within days, depending on the complexity involved in crafting the software and how far-reaching you want your malware’s effect to be.
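The distinction between a median and an average matters here. A handful of unusually complex exploits can drag an average way up while the median stays put, which is why a 22-day median is consistent with many exploits being finished in days. A minimal sketch with entirely made-up development times (not RAND’s data) illustrates the effect:

```python
import statistics

# Hypothetical exploit-development times in days (invented numbers,
# skewed the way the text describes: many fast, a few very slow).
days = [2, 3, 5, 6, 10, 22, 30, 60, 150, 300]

median = statistics.median(days)  # the "middle" value, robust to outliers
mean = statistics.mean(days)      # pulled upward by the slow stragglers

print(f"median: {median} days, mean: {mean} days")
# → median: 16.0 days, mean: 58.8 days
```

With this invented sample the mean is more than three times the median, even though most entries sit well under a month. That is the shape of distribution the 22-day figure hints at.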
Unlike software built for millions of end users, malware has an audience of one as far as convenience is concerned: its creator. Since you “know” your own code, there is no incentive to invest in user-friendliness. Consumer software development takes longer because of hurdles like interface design and fool-proofing the code. When you’re writing for yourself, you don’t need any of that hand-holding, which makes the programming process extremely fluid.
They Live for a Shocking Amount of Time
It’s a point of pride for a hacker to know that an exploit they’ve made works for a very long time, so it might be a bit deflating for them to learn that this is actually common. According to RAND, the average vulnerability lives for 6.9 years, and even the shortest-lived one they measured survived for a year and a half. For hackers, this means an exploit that rampages across the web for years isn’t necessarily anything special. For their victims, however, it’s terrifying.
The average “long-lived” vulnerability takes 9.53 years to be discovered. That’s nearly a decade in which every hacker in the world can find it and use it to their advantage. This troublesome statistic comes as no surprise, since convenience often rides shotgun in the development process while security is left in the back seat. Another reason this phenomenon exists is the old adage, “You don’t know what you don’t know.” If your team of ten programmers can’t figure out that there’s a vulnerability in your software, surely one of the thousands of hackers actively looking for it will lend a hand and show it to you the hard way. And then you’ll have to patch it, which is another can of worms in itself: you could end up introducing a new vulnerability, or hackers could quickly find a way to circumvent whatever fix you’ve implemented.
Altruism Isn’t in High Supply
Finifter, Akhawe, and Wagner found in 2013 that, of all vulnerabilities discovered, only 2 to 2.5 percent were reported by good Samaritans who came across them and submitted them through a vulnerability rewards program (i.e. “We give you goodies if you tell us the ways our software can be hacked”). The rest were either discovered by the developers themselves or by a hacker who painfully “enlightened” everyone to their existence. Although the study doesn’t differentiate between closed and open source, it is my suspicion that open-source software gets more altruistic reporting, since having the source code out in the open makes it easier for people to report exactly where a vulnerability lies.
My hope is that this puts into perspective how easy it is to fall victim to an exploit, and how zero-day exploits are not as rare as they seem. There are still many unanswered questions about them, and perhaps closer study will help us equip ourselves with the tools we need to combat them. The takeaway is that we need to stay on our toes.
Were you surprised by these findings? Tell us all about it in a comment!