Why Do Data Breaches Happen So Often?

The news is packed with reports of huge data breaches. A quick survey of the last few years reveals a damaging array of breaches that exposed everything from credit card numbers (Target) to Social Security numbers and work histories (OPM) to personal identity data (Experian). An interested observer might wonder just how hard these companies are trying to protect your data.

If you ask, corporations will tell you they are doing everything they can to protect their systems. A major breach costs a lot of money and degrades customer trust.

However, securing computer systems is radically difficult. If you forget the mortar on a single brick, your house probably won’t fall down. In computer security, a single missing brick can bring down the whole house.


The complexity of security systems can be surprising to the uninitiated. If only there were a big “Hackers Off” switch.

While no educated person imagines that security is so simple, the true level of complexity is hard to grasp. Millions of lines of code describe thousands of functions, run within a handful of frameworks, and interact with an enormous range of complementary systems. Even when perfectly coded, each of these lines, functions, frameworks, and connections represents a potential security flaw, or “attack vector,” which can be exploited.

In security, the sum of the opportunities for compromise is called the “attack surface.” Even when applications contain the most sensitive data, their attack surface is frighteningly enormous. The complexity of modern applications prohibits any other reality.


A huge majority of the Internet runs on open-source software. This software is maintained by volunteers, with no formal rules for code review beyond those that they set themselves. Plans are based on the availability of unpaid members, their expertise, and their interests.

On one hand, open source is essential. We cannot have every coder re-securing the wheel. And truly, the people that build and maintain the mundane open-source projects that support the Internet are worthy of canonization. But the volunteer basis of many open-source projects means a blind expectation of security and interoperability is a serious risk.

While essential projects often get financial support from corporations, that support is often insufficient when compared to the work required to keep the project safe. Look no further than Heartbleed, the enormous SSL vulnerability that lurked in OpenSSL for roughly two years before its discovery.

Since so much of the Internet’s infrastructure runs on open-source software, there’s always the possibility of a hidden but devastating bug lurking in your favorite open-source framework.


There’s an expression in the world of digital security: defenders need to win every time, but attackers only need to win once. A single chink in the armor is all that’s required for a database to be compromised.

Sometimes that chink is the result of a developer taking a shortcut or being negligent. Sometimes it’s the result of an unknown zero-day attack. As careful as a developer might be, it’s a fool’s errand to imagine you’ve patched every security hole.

Declaring any lock “pick proof” is the quickest way to find out just how optimistic your lock designers were. Computer systems are no different. No system is unhackable; the only question is how many resources an attacker is willing to spend.

As long as humans exist in the system at any stage, from design to execution, the system can be subverted.


Security is always a balance between convenience and safety. A perfectly protected system can never be used. The more secure a system, the harder it is to use. This is a fundamental truth of systems design.

Security operates by throwing up roadblocks that must be overcome. No roadblock worth the name takes zero time to clear. Hence, the greater a system’s security, the less usable it is.

The humble password is the perfect example of these competing qualities in action.

You might say the longer the password, the harder it is to crack via brute force, so long passwords for everyone, right? But passwords are a classic double-edged sword. Longer passwords are harder to crack, but they’re also harder to remember. Now, frustrated users will duplicate credentials and write down their logins. I mean, what kind of shady character would look at the secret note under Debra’s work keyboard?
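The brute-force arithmetic behind “longer passwords are harder to crack” is easy to sketch. This is a minimal illustration; the 62-character alphabet (upper- and lowercase letters plus digits) and the function names are my own assumptions:

```python
import math

def brute_force_keyspace(length: int, alphabet_size: int = 62) -> int:
    """Worst-case number of candidates an attacker must try."""
    return alphabet_size ** length

def entropy_bits(length: int, alphabet_size: int = 62) -> float:
    """Password entropy in bits: log2 of the keyspace."""
    return length * math.log2(alphabet_size)

# Each extra character multiplies the attacker's work by the alphabet size,
# so the keyspace grows exponentially with length.
for n in (8, 12, 16):
    print(f"length {n}: {brute_force_keyspace(n):.2e} candidates, "
          f"{entropy_bits(n):.1f} bits")
```

The exponential growth is why length beats complexity rules: adding one character to an all-lowercase-plus-digits password does more than swapping a letter for a symbol.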

Attackers don’t need to worry about password cracking. They just need to find a sticky note on the monitor of the Assistant (to the) Regional Manager and they will have all the access they want. Not that Dwight would ever be so careless.

We have to balance security and convenience to keep our systems usable and safe. This means that every system is, in one way or another, insecure. Hackers just need to find a tiny hole and worm their way in.


The defining characteristic of a hacking attack is system-level deception. In one way or another you are tricking part of a security system into cooperating against its design. Whether an attacker convinces a human security guard to let them into a secure location or subverts the security protocols on a server, it can be called a “hack.”

The variety of attacks “in the wild” is extraordinary, so any summary can offer only a broad overview before digressing into the details of individual attacks. At the very least, an aspiring hacker must learn the system they are attacking.

Once a vulnerability is discovered, it can be exploited. From a practical perspective, an attacker could look for open ports on a server to find out which services a device is running, and at which version. Correlate that with known vulnerabilities, and, most of the time, you will find viable attack vectors. Many times, it’s a death by a thousand cuts: small vulnerabilities chained together to gain access. If that doesn’t work, there are password attacks, pretexting, social engineering, credential forgery… the list grows every day.
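The first reconnaissance step described above, checking which ports answer on a host, can be sketched in a few lines. This is a toy illustration using only Python’s standard library (real tools like nmap also fingerprint service versions); the function name is my own, and you should only probe hosts you are authorized to test:

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a few common service ports on the local machine.
for port in (22, 80, 443):
    state = "open" if check_port("127.0.0.1", port) else "closed"
    print(f"port {port}: {state}")
```

An attacker would take the list of open ports, identify the service and version behind each one, and look those versions up in a public vulnerability database such as the CVE list.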

Once inside, the hacker escalates that unauthorized access and exfiltrates the data.

And that’s how data breaches happen all the time.

One comment

  1. “Open Source Is a Risk”
Blaming open source for security breaches is disingenuous. Open source is called that for a reason – anybody who cares to can examine the code. This means that it is impossible to hide malicious code. Since only a privileged few can see closed-source code, who knows what errors and vulnerabilities are in there. Closed-source code is security by obscurity, and we know how successful that is.

    “This software is maintained by volunteers”
    That is a straw man argument for three reasons. 1) Much of the open software is developed by paid programmers working for companies. 2) You are implying that just because they are “volunteers” they are not as skilled as paid programmers. 3) Because the code is open source there is peer review by thousands of eyeballs, whereas closed source is reviewed only by the developer who believes that his/her source is perfect.

    “They just need to find a sticky note on the monitor of the Assistant (to the) Regional Manager and they will have all the access they want.”
    Another straw man. You make it sound like the desk is located on a traffic island in the middle of Times Square where every passer-by has access to the terminal or desk pad. To get to this apocryphal desk in any self-respecting company, the miscreant would have to pass through at least three security points.
