Deus ex something something…

The past few months, I keep finding myself in this conversation about “cyber” and insider threat. Generally speaking, it seems quite a few people think insider threat is a “cyber” issue – and I couldn’t disagree more.

I think there are three reasons people equate insider threat with cyber:

  1. In the media a “cyber” event is often ascribed to an “insider”. This is about as much as people hear about insider threats, so the words are assumed to be interchangeable.
  2. At least in the U.S., there’s a tendency to focus on spectacle – by this I mean never-before-seen technology or tactics and their spectacular employment – while simultaneously ignoring the less spectacular or historical tactic or technology.
  3. From a mitigation standpoint, organizations are rightfully focused on protecting critical assets, which these days tend to be information on a network. In this vein, network security is the “cyber” element – protecting assets from an insider, from an outsider who finds an opening in the network, or from an outsider who becomes an “insider” by obtaining existing insider credentials for access.

Viewed that way, the confusion is understandable. On the second point, there’s a valid psychological underpinning to the bias toward the “unknown” and newly perceived threat. On the third, a mitigation avenue for a particular critical asset begins to color views of insider threat.

So why isn’t insider threat a “cyber” thing?

An insider event is precipitated by a trusted person with access.

A “cyber” event could be precipitated by an outsider or an insider.

If an insider, then the individual already has access to the victim organization (as an employee, supplier, contractor, etc.), and they leverage that access to sabotage computer resources (physical or not), leak data, steal data, or otherwise attack the confidentiality, integrity, or availability of the organization’s data.

If an outsider, then they are not a trusted member of the victim organization; rather, they pose as one.

The outsider may manipulate a person within the organization, wittingly or unwittingly – say, through social engineering – to enable the outsider’s access into the victim network, but the outsider is only presenting as someone with legitimate access. In this case we might call the manipulated person in the organization an insider – either unintentional or intentional, depending on their malicious intent or lack thereof – and the external hacker the outsider.

That all said, cyber insider events do not make up the majority of insider events – the figure might be as low as 22 percent, in fact. Cyber insider events are the spectacular kind – they do a lot of harm in a seemingly short period – but they are not necessarily the most devastating.

Consider the following:

Edward Snowden managed to take a whole bunch of data from the US government, using his placement and access. Then he “sneaker-netted” that information overseas. Sure, he got the information from a virtual data source, but it wasn’t a “cyber” event.

Say what you will, insider threat is older than computers. It is as old as espionage and plain old vengeance, and that’s pretty old.

Cyber isn’t insider, much like a hammer isn’t the only way to open a lock.




Quick Post – Insiders and Religion

Just about 24 hours after my last post our second child was born, hence the lack of updates.

Real quick, I saw this study this morning, which indicates that children raised in religious households are less altruistic and tolerant. The article at Forbes goes into the evolutionary basis for morality versus the human development of religion, something Tooby and Cosmides have written quite a bit about (evolution and morality, that is). As the article states, religion was an effective way to develop cohesive groups – to define the inside group versus the outsiders. The point is that this in-group/out-group function is often at odds with our present-day world, and it is the focus of much conflict.

I’m wondering: if this is the case, are insiders more likely to hold religious convictions (or convictions tied to comparable organizational belief systems)? Do secular societies have a lower rate of insider events? What are your thoughts?

Information supply chain



I was listening to a CERT talk on supply chain issues recently. At some point one of the commentators said something to the effect that supply chain issues are getting attention because businesses must interact with vendors and suppliers. I imagine the commentator was pointing to the increased complexity of products, the increased complexity of these business relationships, and the ever-shrinking (and increasingly complex) world we live in, along with the perception that these risks are on the rise as a result.

As someone who looks at supply chain issues on a regular basis, I don’t see a light at the end of the tunnel. Information exchange is probably one of the earliest forms of supply chain dynamics and threats. The animal drive to exploit advantages that maximize survivability and reproduction (success) is not limited to interactions in the physical realm; it includes access to information otherwise denied to others. Eventually, the barriers involved in compartmentalizing information break down; the systems once put in place to restrict information flow and maintain survival advantages (within a family, tribe, company, or nation) fall victim to entropy, or the death of a thousand leaks. The information becomes commonplace, and its value decreases.

From an evolutionary standpoint, it’s probably safe to say the benefits of social exchange outweigh the risks. Social exchange has an element of Locard’s principle; something of each party is left behind. Each party, to the extent they are capable, becomes aware of the other’s strengths and weaknesses, many of which will not even be primary to the issue being discussed. On the other hand, much of this information could be ascertained through observation absent social interaction. Social exchange affords the chance to misrepresent oneself while still reaping the reward from the exchange. Either way, information is transmitted, and may be ‘lost’ to another entity, which is not entirely beneficial. With this in mind, each of us goes into the social contract, or really any interaction, with a degree of acceptable risk.

The increasing interconnectivity of the modern world seems to be negatively correlated with the window of time in which individuals can effectively exploit emerging relationships. Information cannot be effectively managed simply because there is too much of it to process. Although some might claim efforts to analyze “big data” allow for such management, the effectiveness is limited by the inputs, some of which simply have no collection mechanisms. The human mind has not evolved beyond its hunter-gatherer roots; our minds are essentially tied to a world in which you might only meet tens of persons in a lifetime. Automated crunching of big data is a boon to interpreting an increasingly complex world with a limited ability to process information, but we are generally kept in a reactive state.

So what is industry to do in the face of the lightning speed of supply chain issues? No longer is it just a question of where materials or sub-components come from; rather, it is source code development, the development of universal standards, the academic thought train, and the emerging political realities, all interwoven and changing.

Obviously industry must continue to monitor and react to the relationships which affect their overall survivability, as do all animals, but getting beyond a purely reactive stance means more than that now. NIST and CERT both address the defensive mechanisms all industries should establish, but beyond that we are faced with a supernova of information which needs to be processed to completely get in front of supply chain issues. That’s where we all need to focus: on determining what level of risk is acceptable and what level is manageable. Once those domains are established, looking one level beyond the traditional supply chain vectors becomes more digestible. We can and still should watch where the widgets come from, but now perhaps we also pay attention to the human climate those widgets come from.


Turn risk to reward



I was talking to a person in the education industry recently, and she related how schools are having issues with the tablets they provide to students and with school computer networks; namely, the kids get around security measures and in some instances are selling access credentials or methods. Naturally the schools are paying some sort of third party for firewalls, intrusion detection systems, etc., but I was surprised/not surprised when I heard how they made these vendor/product selections: they never consult the kids. They don’t consult the kids on promotional videos, they don’t consult the kids on network vulnerabilities, and they don’t involve the kids in securing the networks.

This isn’t too different a picture from what we see everywhere else today in the security industry – the C-level knows they need to care about the insider threat, but too often they consult only the point of sale for possible solutions, when a reality check from the workforce could go much further. Beyond a reality check of our assets and security posture, we can go even further and actually give employees ownership over security. In my military days there was a saying along the lines of “every soldier is a sensor.” It’s certainly not a new concept, and really its efficacy lies in the execution. It’s one thing to say “we have a policy that employees report” – that’s great, you should have a policy – but policy does little to motivate on its own.

Integrating what employees (or the kids) are telling you into visible changes, or some level of transparent feedback, feeds into that sense of ownership – that’s what motivates people. In the case of the kids, I’d love to hear of a school teaching pen testing and essentially creating a white hat team responsible for the day-to-day maintenance of the school network. That’s not really practical in a workforce situation, but figuring out how to make your workforce part of your insider threat hub, rather than just a data feed you may or may not get, could change that dynamic.

Why Insider Threat Detection Fails



Virtually anyone who works in industry or government can tell you what the reportable warning signs of insider threat are – sudden behavioral changes, unexplained affluence, odd working hours, etc. Yet every time an espionage incident, intellectual property theft, or mass shooting takes place, it seems as though indicators are either not reported, or somehow fail to reach those who need to know. So what exactly is going on here?

There are a variety of mechanisms responsible for the failure of insider threat detection; reporting mechanisms, inter-organizational communications, and the existence and enforcement of policy are just a few laid out in CERT’s Common Sense Guide to Mitigating Insider Threats (2012). While any valid insider threat program should certainly address the nineteen components presented in the guide, it must also examine how detection is communicated to employees.

In a discussion pertaining to evolutionary psychology and business ethics, Cosmides and Tooby (2004) delve into a crucial element of the human mind that gets overlooked when discussing threat detection and reporting – humans are poor at detecting procedural rule violations that are not precautionary or social in nature. The hunter-gatherer mind that humans have developed is equipped with specific machinery to detect social contract violations – instances wherein one receives the benefit (Q) without paying the price (not P) or vice versa – but the majority of humans fail at detecting violations of non-social “if then” rules.
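To make the logic concrete, the social-contract rule Cosmides and Tooby describe can be phrased as a simple predicate. This is an illustrative sketch, not code from their work; the function and case names are mine:

```python
# Social contract rule: "if you take the benefit, you must pay the cost."
# A violation (a "cheater" or free-rider) is someone who takes the benefit
# without paying the cost. This is the pattern human minds detect reliably,
# unlike violations of arbitrary non-social "if P then Q" rules.

def violates_social_contract(took_benefit: bool, paid_cost: bool) -> bool:
    """True when the benefit was taken but the cost was not paid."""
    return took_benefit and not paid_cost

# The four possible cases, as in a Wason selection task:
cases = [
    {"who": "pays and benefits", "took_benefit": True,  "paid_cost": True},
    {"who": "free-rider",        "took_benefit": True,  "paid_cost": False},
    {"who": "pays, no benefit",  "took_benefit": False, "paid_cost": True},
    {"who": "uninvolved",        "took_benefit": False, "paid_cost": False},
]

cheaters = [c["who"] for c in cases
            if violates_social_contract(c["took_benefit"], c["paid_cost"])]
print(cheaters)  # ['free-rider']
```

Only the free-rider case violates the rule – and that is precisely the case experimental subjects spot easily when a rule is framed as a social contract, yet miss when the same logical structure is framed abstractly.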

The reason for this selective reasoning specialization is simple; our minds are the product of millions of years of natural selection. In terms of scale, we have just recently emerged from hunter-gatherer societies, and our minds largely remain within this realm. Our mental machinery has been tailored for a world starkly different from the one we live in today. In the past, societies were smaller, and people often lived with extended family and spent most of their time outdoors. The number of people an individual might have encountered throughout his or her lifetime was far less than that of an individual in 2014. In a world where people spent most of their days simply trying to stay alive, being able to detect social contract cheaters, or free-riders, was an essential skill, because every individual had the incentive to reap benefits without expending personal resources.

Within the context of natural selection, the fact that humans are adept at detecting violations of precautionary rules (e.g. if you’re going to take risk A, then you must take precaution B) makes perfect sense. Possessing this skill provides palpable utility to an individual, and that utility is survival. However, the procedural rules of the workplace are another matter. They are not social or precautionary rules, and they generally do not identify a benefit or risk to the individual. For example, most insider threat programs can be boiled down to “if you see something, say something.” While straightforward, it simply does not hit the same mental circuits that, say, walking through a pit of snakes might. If there is no obvious risk to the individual, and no potential personal benefit, humans are less engaged.

What threats and benefits to an organization mean to an individual remains largely ambiguous. The human mind developed in an environment in which social exchanges were face to face, in real time, and the results were often observable. The indirect relationship between benefits to the individual and benefits to the group was more readily observable (e.g. if I spend time crafting tools in order to allow the hunters more time to hunt, I will eat better). Reporting a coworker who fails to lock their computer may not activate the same mechanisms. The value to the individual through the group is not as apparent, and both the threat and the benefit are obscured. Even within organizations that are serious about implementing security measures through negative reinforcement (counseling, performance reviews), individuals generally do not lose their jobs. With that said, a culture of enforcement and repercussions can be advantageous.

To put it in more everyday terms, this is one of the reasons why it’s so difficult to get the public out of traditional ways of doing things. For example, it is common knowledge that studies show a direct correlation between smoking tobacco and cancer; it’s usually just a matter of time. Yet for decades that knowledge did little on its own. We all knew smoking led to cancer, but it took serious public campaigns and incentives to curb smoking – even though people could rationally understand that smoking might kill them, the lengthy process between cause and effect simply wasn’t observable enough to command the public’s attention.

If there isn’t a negative repercussion directly associated with an action, our minds fail to acknowledge the association. This is the substance of modern parenting: in order to curb dangerous behaviors, punishment must be swift, consistent, and enforceable; otherwise the lesson is lost. The concept can be likened to ocean thermal lag – when actions and reactions are separated by timeframes that exceed the normal human attention span, we are less apt to acknowledge (and accept) the connection.

So how can an organization take steps to effectively address insider threat? Anchor the threat of observable impact to the employee. Simply providing training on the machinations of “if you see something, say something” does not go far enough; insider threat detection needs to be tied to livelihood. Consider the impact of the following two statements:

  1. All personnel must badge into facility X, never allow a person to “tailgate” into the building.
  2. Reviews of security incidents over the past two years have found tailgating to be the most common method for unauthorized personnel to gain access to intellectual property at facility X. As a result, several companies are now selling our product at a lower price. We will likely have to find ways to streamline budgets, to include no bonuses or pay increases, and the possibility of layoffs.

The first statement is valid, but it fails to emphasize the bottom-line impact. Even the second statement is insufficient, since the damage has already occurred; the threat could therefore be perceived as no longer existing.

Another aspect to contemplate is the likelihood of a perceptual difference in security stance between management and the average employee. There are very good reasons for employees to nod in accordance with management when security edicts are discussed, but the underlying truth can be acutely different. Management may be oblivious simply because no one wants to tell the emperors they have no clothes.

In order to address this issue, organizations might consider a neutral third party assessment that compares attitudes and perceptions of security from the viewpoint of both employees and management on a scheduled basis. Industrial psychologists could also assist organizations through framing security training in a manner that elicits not only compliance, but active participation from employees as well.

The combination of impartial active listening, conveyance of threats to the individual employee, and the implementation of swift, observable repercussions can create a proactive culture of security awareness, but the organization must be willing to invest.

Reposted with permission from TSC. Content originally authored for the TSC blog.