Evidence Suggests Security Training Is Ineffective

In their TechTarget article “Security Awareness Training,” Kinza Yasar and Mary K. Pratt note that security awareness training is a strategic approach IT and security professionals take to educate employees and stakeholders on the importance of cybersecurity and data privacy. The objective is to enhance security awareness among employees and reduce the risks associated with cyberthreats.

The article lends support to the message of Neel Lukka’s recent SC Media article, “The rise of employee IP theft—and what to do about it,” which lists employee training among the ways to mitigate risk.

Is employee security education the key to fixing our worsening security situation? Because it certainly does need fixing.

A year and a half ago, Tanium ran a series of full-page ads in the Wall Street Journal with headlines such as:

WE WILL SPEND $160B THIS YEAR ON SECURITY SOLUTIONS THAT ARE FAILING TO PROTECT US (in that year and a half that has grown to $200B!) and

WHY IS CYBERSECURITY GETTING WORSE?

Helpfully, that second headline was followed by

IT’S BECAUSE THE CURRENT APPROACH IS FLAWED.

Flawed indeed.

But the word “flawed” resides in the eye of the beholder. If you’re part of the security technology “solutions” industry, it becomes difficult to see the flaws in something that produces an annual revenue growth rate consistently above ten percent, with generous earnings to widen that blind spot.

Actually, security has been badly flawed since before 2005, when an MIT Technology Review cover story proclaimed THE INTERNET IS BROKEN, citing the same kinds of evidence as Tanium.

Is employee security awareness training really a significant part of the solution to steadily worsening security? Or is that like saying that more training is the solution to the problem of a defectively designed airliner that keeps crashing – thus providing an excuse for avoiding a costly redesign of the aircraft?

Allow me to cite some evidence that the training solution is much more difficult than the articles by Mary K. Pratt and Neel Lukka suggest.

Every year, I attend the RSA and AGC security conferences in San Francisco. RSA serves security technology experts, while AGC is for security industry executives. Like most attendees of both conferences, I also enjoy the many after-hours parties put on by exhibitors and others.

At least once per evening at those parties, I engage a security expert, typically a CISSP, in conversation. At some point, usually after a beer, I say, “I have to admit, I’ve clicked on bad links and attachments.”

That’s my land mine.

Over 50% of the time, my fellow partygoer steps on the mine, responding with “Yeah, I know, I’ve done that too.”

Sincere apologies for my disingenuousness to all those who have stepped on my mines, but they were planted for a good cause (and of course identities will never be disclosed). The cause, my reconnaissance mission, is to assess the validity of my suspicion that employee security education is a lot more difficult than it appears. Perhaps it simply does not work.

The question is obvious: if the security experts who teach the teachers how to recognize a phish themselves fail to recognize one, how can they expect the mass of employees to detect a phish?

Employee security education falls under a category of security approaches I will call CTBG security. Catch The Bad Guys security.

In my books I introduce Kussmaul’s Law of Security, which applies to all CTBG security techniques. Basically, it says that an incremental improvement in the attacker’s techniques requires a tenfold or larger improvement in the defender’s techniques. If the perpetrator crafts a slightly better phishing email, the defender must mount a hugely better detection effort. The same goes for the other methods attackers use besides phishing.

And let’s face it, the more ambitious attackers, with bigger goals, tend to be the smarter attackers. That’s the basis of my corollary to Kussmaul’s Law: When using CTBG security techniques, the difficulty of stopping an attack is exponentially proportional to both the amount at risk and the skills of the attacker. Stopping amateurs is easy. Stopping the skilled ones can be impossible using CTBG.

Does that mean that the security situation is hopeless?

The answer is yes, if we continue to rely on CTBG.

Meanwhile, a vastly superior approach has been hiding in plain sight since it was conceived in the seventies and eighties. It’s built on the same asymmetric cryptography we use every day when we go to websites whose address starts with https. If you use a blockchain-based service, that’s also built on asymmetric cryptography. (In fact, the crypto community seems to think that asymmetric cryptography was invented as part of blockchain/bitcoin.)

Another corollary to Kussmaul’s Law is that this approach reverses the first corollary: an incremental increase in the effort to apply this method results in a tenfold or larger increase in the effort required of an attacker to defeat it.

Asymmetric cryptography (AC) got its start in the 1970s, when James Ellis asked himself, and then his British government GCHQ colleagues Clifford Cocks and Malcolm Williamson, the fateful question: “What if we had a system where anything encrypted using one of a pair of keys could only be decrypted by the other key?”

This, along with other things such as the secure symmetric key exchange added by Whitfield Diffie and Martin Hellman, and other important pieces from Ralph Merkle, Ron Rivest, Adi Shamir and Leonard Adleman, allowed us to build tunnels between users and websites.
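Ellis’s question can be demonstrated with textbook-sized RSA numbers. What follows is a minimal sketch for illustration only (the numbers are far too small to be secure), showing that whatever one key of the pair encrypts, only the other key can decrypt:

```python
# Toy RSA demo of the Ellis/Cocks property: anything encrypted with
# one key of a pair can only be decrypted with the other.
# Textbook-sized numbers -- utterly insecure, illustration only.

p, q = 61, 53                 # two small primes
n = p * q                     # modulus, shared by both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent (coprime with phi)
d = pow(e, -1, phi)           # private exponent: modular inverse of e

m = 65                        # a message, encoded as a number < n

# Encrypt with the public key, decrypt with the private key:
c = pow(m, e, n)
assert pow(c, d, n) == m

# And the reverse: "encrypt" with the private key (a signature),
# verify with the public key:
s = pow(m, d, n)
assert pow(s, e, n) == m

print(c, pow(c, d, n))  # -> 2790 65
```

The asymmetry is the whole point: publishing one key while keeping the other secret is what lets strangers establish a private channel, and it is what HTTPS tunnels are built on.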

So let’s think about tunnels for a moment. A tunnel is just a tube, right? Very secure through the length of the tunnel, but wide open at the ends.

No one reading this can claim “I don’t understand security stuff, I won’t be able to follow this,” because physical tunnels and digital tunnels share exactly those same attributes: secure in the middle, wide open at the ends. If you understand physical tunnels, then you understand digital tunnels. Disregard those techy SSL and HTTPS acronyms; they’re not relevant for this discussion.

Now let’s imagine keeping your files, holding your meetings, and letting your kids hang out inside a “secure” tunnel. If an unauthorized person had to drill through the earth or swim through the water surrounding the tunnel and then break through the reinforced concrete, well, that is just unlikely to happen.

That’s especially true considering how much easier it would be to walk into the tunnel from one of its wide open ends!

A couple of paragraphs back I mentioned that AC has allowed us to build tunnels between users and websites. That bit of conventional wisdom is not exactly true. So far we have only built tunnels between browsers and the servers that host websites. The browser can be used by anyone. The browser is a wide-open tunnel end, as is the server. The server has a certificate, of course. But that leaves the question: what human being signed that certificate?

Answer: none. It’s a tunnel end that’s as wide open as the browser end of the tunnel.

Now, picture something that’s kind of like a tunnel but which exhibits an important difference: a pedestrian bridge between two office buildings.

One or both office buildings have a main lobby. In that lobby, before the turnstiles that let you into the elevator lobby, is a reception desk. Seated at the reception desk is a receptionist. The receptionist notices whether or not you’re wearing an employee ID. If not, you’re a visitor. You walk over to the receptionist, who greets you and asks who you’re there to visit. The receptionist also asks you for some form of ID – driver’s license, passport, or even just a business card – then issues a visitor badge with your name on it.

The buildings may also have a person in the basement watching monitors that display images of entrances, watching for anomalies. That’s the physical form of CTBG security.

By contrast, the receptionist represents ABE security. ABE stands for Accountability Based Environment. ABE is built on the assumption that catching bad guys is generally futile, while having an environment where everyone is accountable is the right way to establish security.

If you think about it, isn’t that what a building is? Isn’t a building just a set of accountability spaces? Isn’t accountability the main thing that distinguishes indoor spaces from outdoor spaces?

The internet used to be called an information highway. So what is a highway but an outdoor public transport facility?

And how do we typically use highways? Don’t we typically use outdoor highways to take us from one building to another? One indoor space to another indoor space?

“Quiet enjoyment” is a legal term that sums up in two words what one has a right to expect from a physical building: useful spaces, elevators that work, comfort, and security.

And that’s why (trigger warning: plug coming) the title of one of my books is Quiet Enjoyment. Quiet Enjoyment is all about building digital versions of these accountability spaces called buildings.

The answer to our security problems is Accountability Based Environments, also known as buildings.

We have the very best asymmetric cryptography construction materials with which to build these buildings. Let’s get going! Let’s fix our digital world with accountability – that is, with digital buildings!

By Wes Kussmaul