By: Renketsu Link
You’ve undoubtedly been trying to figure out what to do about what might be the worst data breach this year: the compromise of the multinational credit bureau Equifax. As it stands, the credit histories of millions of people in the United States and other parts of the world are in the wild and undoubtedly being sold on the black market at cost (nowadays, rather less than $20 per dossier), and if you’re reading this you’ve probably got free credit monitoring until the year 2030. But none of that answers the key question here: How the fuck did things get so bad?
The answer is a complex one, involving significant amounts of suck and fail at every layer. Let’s start at the top of the stack.
Instituting a security program requires funding from the company itself as well as buy-in from upper management. Without money, the system administrators at the company can only cobble things together in their spare time – hardening, monitoring, patching, reporting, and deploying the occasional passive security measure. In some companies to this day, they have to do this on the down low because management there is actively hostile toward security and will force the admins to remove “those useless things.” Some measures require the purchase of additional hardware and license keys, and not every budget has a couple of thousand dollars to spare on a few new boxes. If you don’t think places like that exist, they do – I’ve worked at a few, and they make our work much, much easier. Even if they have money, the C-levels (Chief * Officers), V-levels (Vice *), and D-levels (Directors) need to make it public that they support the security program, will abide by it, and officially order everyone who works there to abide by it, too. All it takes is one C-level who doesn’t give a fuck to cut the nuts off of the entire thing.
Second, let’s talk about so-called security practitioners. Probably 80% of the “cybersecurity experts” I’ve butted heads with are barely able to turn on a computer, let alone actually put up a fight. Most of the people the security industry pimps with certifications like the CISSP or the Certified Ethical Hacker sheepskin don’t actually know anything useful about security in any way, shape, or form. The Wikipedia pages talk a good game by throwing around words like “provable,” “experts,” “ethical,” and “cyber,” but if you actually read any of the training texts (which, of course, are published by those certification bodies and cost as much as your average college textbook) usable information is pretty scarce. Take the CEH: If you look at what it actually teaches you (things like not running telnet, setting up firewalls, installing patches, and not running all your shit as root), it reflects what was publicly known about security in the late 1990’s. You’d be hard pressed to find a Linux or real UNIX that even ships in.telnetd these days, and the coursework says nothing about the sorts of vulnerabilities one finds today (like process injection or memory hardening evasion techniques). As for the CISSP, it tells you up front that the Common Body of Knowledge is a mile wide and an inch deep (or, more recently, “at the thirty-thousand foot view”), but in the same breath they’ll also tell you that when you actually sit for the exam all you have to do is pick the least wrong answer. If you actually know anything about security, about 74% (if I did my math right (hey, the book’s a thousand pages, cut me some slack)) of the questions have only incorrect answers, and if you actually did what you were taught… well, you know how I make my living, so by all means, keep doing exactly what you were taught.
Practically every company out there has some legal or industry-specific guidelines that it has to at least make an attempt to comply with, and there’s no shortage of them: PCI-DSS, NIST SP 800-53, NSA IA, HIPAA, ISO 27001… I could go on and on, but all you need to know is that they all say basically the same thing: Google “how do I harden <insert operating system or appliance here>,” follow the instructions if the link isn’t from a perfectly legitimate Russian or Chinese business conglomerate, patch your shit every couple of days, read your logs and respond to what you see, and generally don’t be a dumbass. In practice, however, they get treated as lists of checkmarks or cells in a spreadsheet. A couple of meetings are scheduled and suffered through by everybody who bothered to show up (of course, at least one Android phone that now belongs to someone like me is on the table), and roadmaps are drawn up that are supposed to act as timelines for the security measures that need to be instituted. Sometimes, once or twice a year, a security assessment is held; more rarely, an outside security company is hired to do the work. Then, and here’s the fun part, the remediation loophole kicks in. It goes like this: Every security program has a requirement built into it that basically says, “You now have x months to fix the findings from this assessment, after which time we’ll run another assessment.” You probably see where this is headed. Nothing happens to fix the vulnerabilities found, the next assessment happens, nothing has changed (usually things have gotten worse in the meantime), and during the burndown meeting someone says, “Okay, you now have x months to fix the findings from this assessment, after which time we’ll run another assessment.” Over and over and over again.
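None of those frameworks is as mysterious as the cert mills make it sound. Mechanically, a compliance check boils down to comparing what a host is actually doing against a baseline and writing down every deviation. Here’s a toy Python sketch of that idea; the setting names and the baseline values are made up for illustration, not pulled from any real framework:

```python
# Hypothetical hardening baseline. The keys and values here are
# illustrative only -- they are not taken from PCI-DSS, NIST, or any
# other actual standard.
BASELINE = {
    "telnet_enabled": False,
    "root_ssh_login": False,
    "auto_patching": True,
    "password_min_length": 12,
}

def audit(facts, baseline=BASELINE):
    """Return one finding string per setting that deviates from the baseline.

    facts: dict of observed settings for a single host.
    """
    return [
        f"{key}: expected {expected!r}, found {facts.get(key)!r}"
        for key, expected in baseline.items()
        if facts.get(key) != expected
    ]

# Example: a host that still has telnet turned on.
host_facts = {
    "telnet_enabled": True,
    "root_ssh_login": False,
    "auto_patching": True,
    "password_min_length": 12,
}
findings = audit(host_facts)
```

The whole game is in who actually reads the findings list afterward, which is exactly the part the remediation loophole lets everyone skip.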
Of course, there are a few out there who actually have a clue. They’re the ones who don’t last very long, because they eventually get tired of being ignored, quit, and occasionally go into business as hired guns with their inside knowledge (come on in, the water’s fine!). They run their scans, tell the sysadmins to patch their shit and harden SQL Server, and read their logs. They’re also the ones who get told that installing patches adds bugs instead of fixing them, get told that complex passwords are unnecessary, get bitched out at all-hands meetings for trying to institute multifactor authentication because it adds an extra step to normal work (meaning that your Battle.net account probably has stronger authentication than your bank), and watch in horror as C-levels plug flash drives they found in the parking lot into workstations where the user’s logged in as the local admin. The lot in life of a real security professional is a sad one that often results in functional alcoholism, endless bitching at hacker cons (attended under the pretext of “vacation,” because actually hanging out with hackers can cost someone one of those expensive certifications if anyone finds out), and often early retirement to a log cabin in Appalachia. That’s if they don’t get fired for actually doing their jobs; nobody ever likes being shown that their security program doesn’t actually work, and the messenger always gets shot.
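And for the record, the multifactor authentication everyone bitches about is not even hard. The TOTP scheme behind those Battle.net-style authenticator codes is two short functions (HOTP from RFC 4226, fed a time-based counter per RFC 6238). A minimal sketch using nothing but the Python standard library; the function names are mine:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    now = time.time() if for_time is None else for_time
    return hotp(key, int(now // step), digits)
```

That’s the “extra step to normal work” in its entirety: the server and the token share a secret key, both compute the same six digits, and the code rolls over every thirty seconds.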
Next in line are the sysadmins. As with any group, there is a subset who actually know what they’re doing, and who go as far as they need to so they can do their jobs (which, if they know what they’re doing, consists of automating everything in the first month, fucking off the rest of the time, and having a boss key set up so they can look busy whenever somebody wearing a tie walks by). The rest are content to stand up a Windows or Linux box, throw an app or two on it, and leave it at that. Some don’t bother patching anything, either because it’d be too much work or because the developers won’t let them (“If you patch that, you’ll break our production app!”). Most have a patch cycle that’s entirely too long (weeks to months), which leaves them vulnerable for extremely long periods of time. Also, operating system ecosystems are becoming more security-hostile in very subtle ways (you’re welcome). There is no shortage of Windows APIs that let a creative user turn off or evade security policy entirely, and systemd has been a godsend to hackers the world over.
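If your patch cycle is measured in weeks to months, you can even quantify how screwed you are. A toy Python sketch that flags every host past its patch window; the hostnames, the dates, and the fourteen-day window are all hypothetical examples, not a recommendation from any standard:

```python
from datetime import date

def overdue_hosts(last_patched, today, max_age_days=14):
    """Flag hosts whose last patch run is past the allowed window.

    last_patched: dict mapping hostname -> date of last patch run.
    Returns [(hostname, days_overdue), ...], worst offender first.
    """
    findings = []
    for host, patched in last_patched.items():
        age = (today - patched).days
        if age > max_age_days:
            findings.append((host, age - max_age_days))
    return sorted(findings, key=lambda f: f[1], reverse=True)

# Example fleet with made-up patch dates.
fleet = {
    "web01": date(2017, 9, 1),   # exactly at the 14-day limit
    "db01":  date(2017, 6, 1),   # months behind
    "dev01": date(2017, 9, 14),  # patched yesterday
}
report = overdue_hosts(fleet, today=date(2017, 9, 15))
```

A script like this takes an afternoon to write; the hard part, as ever, is getting anyone to act on the output.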
Last and certainly not least are the end users, who may as well be on our payroll because they make it all possible. They’re the ones who use Password_1 as their password because password complexity guidelines don’t let them use strings like qu;;o5Eey9aiV-ai3FexiC<a7cu2hGhi|g}e (okay, so that’s not entirely their fault, but I’m not above a cheap shot now and then (trolled people are people who make exploitable mistakes)), open every document sent to them from a vaguely official-looking e-mail address, and are trained from an early age to click on buttons that make error messages go away. Let’s not forget those wonderful people who forward our trojaned documents to entire teams and make it rain shells. App developers are the ones who demand that sysadmins not lock their shit down because they don’t know how to write robust code (you’d be amazed at the e-mail threads where a stupid bug made by a dev was blamed on a security patch) and who say that many different classes of RCE are theoretical and thus a waste of time to mitigate (protip: getting caught selling 0-days in your own code is a career-limiting move). Analysts who spend more time at work surfing porn than looking at system logs or vuln reports are always fun; plus, if you pop their laptops, they occasionally have security reports that make life easier for us in the short term.
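For what it’s worth, generating passwords like that ugly string is trivial. A short sketch using Python’s standard-library secrets module (a CSPRNG intended for exactly this); the 94-character alphabet and 24-character length are my arbitrary choices, not a mandate from any guideline:

```python
import math
import secrets
import string

# Printable ASCII minus whitespace: letters, digits, punctuation (94 symbols).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length=24, alphabet=ALPHABET):
    """Draw each symbol uniformly with a cryptographically secure RNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(length=24, alphabet=ALPHABET):
    """Rough strength estimate: log2(|alphabet|) bits per symbol."""
    return length * math.log2(len(alphabet))

pw = make_password()
```

Twenty-four symbols from that alphabet works out to roughly 157 bits of entropy, which is why a password manager plus generated strings beats any Password_1-style scheme a complexity policy can coax out of a human.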