Lesson learned from Facebook storing user credentials in plaintext files

Check out this story about Facebook storing millions of user credentials in a plaintext file: https://techcrunch.com/2019/03/21/facebook-plaintext-passwords

Obviously this is bad security practice, but everyone makes mistakes (big and small). This story, however, stuck in the back of my mind because lately I’ve been thinking about why certain security tools, while effective in concept, aren’t so effective in practice. The first one that came to my mind was threat modeling. At my last company, threat modeling was hailed as the end of security bugs: if you just threat model, security vulnerabilities would be drastically reduced, yada yada. And yet, almost a decade later, we still see lots of security vulnerabilities, even really egregious ones.

Is it a lack of tools? Are developers not using those tools? A lack of education? I don’t think it’s any of these; I’m actually starting to feel it’s an issue of the scope of the tool. Techniques like threat modeling live at the design layer of an application (how you plan to engineer it), whereas things like code analysis tools live at the implementation layer (how the application is actually coded). The problem is that these layers don’t talk to each other, so it’s a case of “what you say you do is not necessarily what you actually do”.
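To make that gap concrete, here’s a minimal sketch of what an implementation-layer check might look like if it tried to enforce a design-layer statement like “credentials are never written to logs” directly against the code. The regex rule and the script are hypothetical, not any particular tool; a real analyzer would work on the AST or data flow rather than raw text:

```python
import re
import sys
from pathlib import Path

# Hypothetical rule: flag logging calls whose arguments mention a
# password-like name. A real code analysis tool would use the AST or
# data-flow analysis; a regex keeps this sketch short.
LOG_WITH_PASSWORD = re.compile(
    r"\blog(?:ger|ging)?\.\w+\(.*pass(?:word|wd)", re.IGNORECASE
)

def scan(path: Path):
    """Yield (line number, line) pairs that look like credential logging."""
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if LOG_WITH_PASSWORD.search(line):
            yield lineno, line.strip()

if __name__ == "__main__":
    for file_arg in sys.argv[1:]:
        for lineno, line in scan(Path(file_arg)):
            print(f"{file_arg}:{lineno}: possible credential in log call: {line}")
```

The interesting part isn’t the regex; it’s that the rule is a machine-checkable restatement of something the threat model or policy already claimed.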

This also applies to other areas. Compliance is the one that jumps out at me immediately: you create a bunch of policies and procedures, but are you actually following them (even with an audit) day to day?

So in the case of Facebook, no doubt they have policies and procedures prohibiting the storage of credentials in plaintext files, but clearly that’s not what the developers are doing day to day. How do you connect those two in a way that scales well? I smell opportunity. What do you think?
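For reference, the compliant alternative those policies point at looks roughly like this at the implementation layer: a salted, slow hash so the plaintext is never persisted. This is a minimal sketch using Python’s standard-library scrypt; the parameters and function names are illustrative, not a recommendation or Facebook’s actual approach:

```python
import hashlib
import hmac
import os

# Illustrative scrypt parameters only; pick real ones from current guidance.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, dklen=32)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the (salt, digest) pair is ever stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time; plaintext never persists."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)
```

None of this is novel; the gap is that nothing automatically checks whether the code paths that handle credentials actually go through functions like these.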

–Kevin