Countering Information Operations

For me, one of the most interesting developments in the 21st century so far has been the adaptation of old-school active measures to new technologies. If you’ve been awake the past few years you’re probably familiar with it. Short version: the Russian government has combined the internet, psychological operations, and its existing intelligence services into an effective society-cleaving scythe.

Psychological operations and active measures are nothing new. I won’t go into the long history; suffice it to say every nation has engaged in them to some extent over the course of history. It’s cheaper than firing missiles, and plausible deniability does go a long way. Today it’s even cheaper to launch a multi-pronged operation on multiple fronts, as the attack surface has expanded exponentially with the emergence of social media. Some, myself included, might even say the use of such weapons against democracies is a veiled effort to push more open societies toward internet sovereignty, using the very tool those adversaries believe threatens them – freedom of speech.

If we are to go down that road – mandating that all online interactions have some human attribution for identity and location – well, there’s a saying about hunting monsters.

Still, that seems to be the path we are on now here in the U.S. Long the champion of an unbounded internet, we now have to contend with what that means – that anyone with the technical know-how can launch disinformation and misinformation campaigns, with very real ground results, against the entire population with relative ease.

Yes, real ground results – in case you hadn’t heard, false personas and groups literally created the environment for rallies and protests on both sides of the same issue. Divide and conquer?

So what is a democracy (or democratic leaning) nation to do? Close up the borders? Require everyone exercising their free speech to provide proof of identity – regardless of the type of speech? Do we abandon privacy in the name of security, or does free speech – regardless of type – mandate publicity? Do we monitor everyone? If so, will we use that information?

In World War II, my grandfather was a tail-gunner on a B-17 who was shot down over Nazi-occupied France. I don’t know what relationship, if any, that has to the following, but I present it for context of the man. When I was a kid he told me, “believe half of what you see and none of what you hear.” Sometimes I think he set me up to be something of a cynic; he certainly set me up to think critically for myself.

So, as an alternative to becoming the monsters we seek to defeat: encourage critical thinking. Checking sources is fine, but that too has its limits, as the technology to impersonate anyone has now reached terrifying levels. Everyone would like you to make your decision in 30 seconds or less – that’s the way everything is served here in the West – but it’s your life; you don’t have to be rushed. Stop, take a breath, and be mindful of whatever is going on between those ears. Who is thinking? Is it you? What assumptions are you making? Do you care about your biases? Why do you believe what you believe?

Believing half of what you see and none of what you hear is the best guidance anyone could give in the face of information operations. I don’t know if that was what my grandfather was getting at – maybe he just didn’t want me to be duped by car salesmen – but how different is that?


Image of the B-17 #42-3538 / Ten Knights in a Bar Room, taken shortly before it was shot down on October 4th, 1943.

Information unchained

The study of the corvid family, specifically crows and ravens, has given cognitive scientists much to think about in the past few decades. While they went down a very different evolutionary path from the primates, scientists have found these birds able to problem solve, build specialized tools for tasks, demonstrate long-term memory, observe and employ social contracts, seemingly communicate specific lessons learned across generations, and employ theory of mind. While their biology is notably different, their brains have evolved to pack quite a bit of cognitive punch relative to brain mass – something traditionally reserved for the primates.


In short, theory of mind is the ability to infer what another is planning, thinking, observing, wanting, and so on. Theory of mind is thinking about how another thinks; at the most basic level it involves observing where another’s eye gaze is directed. In the case of ravens, scientists have conducted experiments demonstrating food caching behavior in which the corvids would hide food more quickly and return to hidden food caches less often if they thought they were being observed by another raven. While not conclusive, the evidence across these kinds of experiments leads researchers to consider that crows and ravens can track the knowledge competitors have or are obtaining, consider the information available to others, and modify their own activities to maintain or increase their survival capabilities.

Certainly most people, and probably primates, ravens, and some pets, use theory of mind to navigate life. Theory of mind is essential for complex social relationships, specifically social exchanges, the “if-then” logic rules which humans are fairly good at detecting violations of. Of note, humans in general are particularly bad at detecting violations of logic which are not in the context of social exchange.
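
To make that contrast concrete, here is a minimal sketch of the underlying logic – the cards, framings, and names below are my own illustration of the classic selection task, not anything drawn from the research alluded to above. A rule of the form “if P then Q” can only be violated by a case that is P and not-Q, yet most of us miss those cases in the abstract framing and catch them instantly in the social-contract framing.

# Minimal sketch of the selection-task logic (illustrative only).
# A rule "if P then Q" is violated only by cases that are P and not-Q,
# so those are the only cards worth turning over.

def faces_to_flip(visible_faces, shows_p, shows_not_q):
    """Return the visible faces that could hide a violation of 'if P then Q'."""
    return [f for f in visible_faces if shows_p(f) or shows_not_q(f)]

# Abstract framing: "if a card shows a vowel, the other side is even."
cards = ["D", "A", "4", "7"]
print(faces_to_flip(
    cards,
    shows_p=lambda f: f.isalpha() and f.lower() in "aeiou",  # vowel showing
    shows_not_q=lambda f: f.isdigit() and int(f) % 2 == 1,   # odd number showing
))  # -> ['A', '7'] -- the two cards people routinely miss in the abstract version

# Social-contract framing: "if someone is drinking beer, they must be over 21."
patrons = ["drinking beer", "drinking soda", "age 25", "age 17"]
print(faces_to_flip(
    patrons,
    shows_p=lambda f: f == "drinking beer",
    shows_not_q=lambda f: f == "age 17",
))  # -> ['drinking beer', 'age 17'] -- the same logic, suddenly obvious

Same rule, same logic; only the framing changes – and with it, our hit rate.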


Why all this thinking about how others think? Aside from the obvious advantage of being able to dupe or trick another who doesn’t have the same level of insight or reflection, it functions as a protective mechanism – as highlighted in the food caching experiments. It seems all living organisms that have theory of mind are recognizing a principle of information: in its natural state it is free, fast, and available to anyone positioned to observe it. In this classic sense of information, the confidentiality and availability of the information are limited only by who the observers are. The integrity of the information is likewise limited by the physiology of the observing organism, and potentially by its mental state and its history with the observed event in question. Availability may also be affected by the willingness of any observer to relay the information through some medium, but naturally this second- and third-order transfer of information can have all sorts of effects on the integrity of the original data.

While observers may modify the availability, integrity, and confidentiality of any ingested information – by sharing, not sharing, or altering what they recall – the original information is of course unchanged. The fact that a tree fell in the forest does not change (hopefully) with the retelling or observation of said tree falling, although quantum mechanics raises some questions about reality and observation.

Today, technology can certainly make information available remarkably close to the precipitating event, but confidentiality and integrity have seen significant losses for the same reason. There have certainly been temporary successes – encryption technologies, methods to detect potential integrity issues in relayed information, temporal assurances of confidentiality – but in the scheme of things these advances are eventually overtaken by counter-technologies. As information is capital in a competitive world, one group or another will always attempt to circumvent one or all elements of the security triad. All organisms have biological underpinnings which drive them to do so, to minimize stressors and maximize the likelihood their genes will be successfully passed on.
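
As a concrete, hedged illustration of one of those temporary successes – hash-based integrity checking – here is a minimal sketch. The messages are invented for the example, and in practice the digest itself has to travel over a channel the adversary can’t tamper with.

import hashlib

def digest(text: str) -> str:
    """Return a SHA-256 fingerprint of the information as originally observed."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "A tree fell in the forest at dawn."
relayed  = "A tree fell in the forest at dusk."   # a second-hand retelling, slightly altered

original_digest = digest(original)

# The receiver recomputes the digest over what they were handed and compares.
print(digest(original) == original_digest)  # True  - integrity holds
print(digest(relayed)  == original_digest)  # False - the retelling changed the data

The mathematics holds only until the counter-technology catches up, which is exactly the cycle described above.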


Perhaps, rather than investing all this time and energy into maintaining information dominance while actively attempting to undermine the attempts of others to do the same, it is time to reconsider the nature of information. Perhaps information wishes to be free and resists our attempts to render it otherwise; or as some would say, the truth will come out.

We are probably not too far off from being able to look at the quantum nature of information – that is, any attempt to change the integrity of information would become apparent due to entanglement. Early experiments in quantum mechanics imply even the observation of information changes the nature of the observed, again with implications regarding entanglement (the very loose short version – everything is connected; touch an atom in Wisconsin and one near Alpha Centauri vibrates too).

If this is the case – that the information resulting from a tree falling in primordial Pangea could be retrieved, given only the technology – then it may well be time to rethink how we treat information, and therefore each other. The ethical implications were always there, and they only become more stark in a situation where only one society maintains access to the “true” state of information.

If we strive to be a species in conflict – the idea being that competition gives rise to... something – then fine, treat information as capital. If we strive to move beyond a species simply fighting for generational turf, then perhaps we should allow information to move more freely rather than less.

Of course this is all fine and good until we reach that fine line between private and public data. The right of the individual to own his or her data, and for time to wipe away memories – true and false – is already at odds with a world hungry for data to construct its latest chrysalis. It seems the choice has already been made by us, through our collective actions. At the end of the day, progress as a species may be more likely if we begin to change our default perspectives rather than our default environments.

Something completely different. Sorta.

Hello blog, it’s been about 2 years and I have some thoughts, more on the psychology side and less so on insider threat, but there are connections to be made. Apologies if this gets stream of consciousness.

First up, work. My last post dealt with the evolution of work and the potential for a universal basic income, so I suppose it makes sense to start here.

I heard a news story this morning about Snag Work. It’s essentially a hiring service for menial labor jobs in the service industry – hotels, food service, and retail. From what I gather, it’s a pretty good setup for people who need an income (all of us) and flexibility (all of us), but hires are treated as 1099 contractors rather than employees of the organization they work for.

Never having been a 1099, I wasn’t aware of how few legal protections and regulations these workers have, and the counterargument to the model is essentially about those downsides – no unemployment coverage, issues with healthcare coverage, etc.

I could see more and more of society shifting to this kind of gig work. In many ways it seems inevitable with the amount of automation and the reduction of traditional manufacturing labor needs. So perhaps the argument shouldn’t be about the downsides of an increasingly gig-based economy (which seems to go hand in hand with global volatility, doesn’t it?), but rather that, if we want an economy that works, we need to build more robust and useful social support programs – you know, like guaranteed health coverage or a UBI.

I’ll get off that soapbox now… and onto the next.

From the time my children could move about on their own, I’ve noticed how they choose to squat or stand, rather than sit. Increasingly I get the impression sitting is less about comfort and more about group dynamics and institutional control of subordinates. Why do I think this? I’ll get there.

As others have put it, our ancestors were essentially on a millennia-long camping trip until very recently. Our minds (as I’ve expressed elsewhere) are the result of THAT world and best suited for THOSE purposes, regardless of where we find ourselves today. Our bodies, too, are suited for a lifetime of movement, and yet we seem to avoid movement, to nearly enforce sitting or lying down whenever possible, rather than standing or squatting.

I digress. As anyone with kids, or who works with kids, will tell you, they have a hard time sitting still. At our home, if one of our kids has a hard time sitting still during dinner, they are welcome to stand. It is not a punishment; it’s a recognition that their body needs to move. Restricting the body that houses the mind is going to place negative stressors on the mind, which will have some net effect. So why not just stand up?

Back to why I think sitting is about power dynamics – I got a report a certain young man had trouble sitting still in circle time. It’s not a big deal and wasn’t made a big deal of, just noted, but it got me thinking. Why does he need to sit? Can he stand up? Can he squat? So I asked.

The answer, of course, was yes. Kids who want to stand are allowed to, at the back of the circle. I get it – there is a certain degree of crowd control required to do anything with more than two kids at a time, and distractions are just that. But how about we all just stand then? My guess is the kids who might want to stand see it as a punishment, because they are nearly removed from the group. In a sense they are the “out group” because they are not acting in sync with what has now been established as the “right” way to act, the in-group acceptable norms. Certainly I’ve seen this myself as a preferred stander at meetings. Others will practically treat it as a hostage situation if you stand while they sit, trying to cajole you into a seat. I’ve also seen people fall asleep at said meetings, something which happens with much less frequency in standing meetings.

I’m heartened to hear of schools that are experimenting with solutions for kids such as standing desks or pedaling desks, so their bodies can exert the energy necessary to stay in shape – which also means healthier.

On the other hand, I was dumbfounded to learn recently that many people cannot actually squat in place – we have been so trained out of it that the muscles have weakened from lack of use or misuse. I’m no physiologist, but squatting must help posture, reduce tension on the spine, blah, blah – which should result in a positive net effect for our mind.

When we feel better physically, we feel better mentally. When we feel better mentally, we are less likely to be jerks to others, and really isn’t that what the hokey pokey is all about?

In summary – evolutionary psychology is informed by evolutionary physiology. In establishing power dynamics (e.g. “I talk, you listen”; “I important, you not”) we have created cultures which insist upon the suppression of valid physiological needs (e.g. body moves, and in doing so gets exercise, stays healthy). So next time you see me standing next to a chair at a meeting, or squatting at the bus stop, maybe give it a try first and see if you feel better too. Then, pass it along.

 

7.4 billion monkeys with enlarged craniums and a trillion devices….


If you’ve been following artificial intelligence and the coming revolution like I have, you’ve probably picked up on a few common themes:

  1. Universal Basic Income
  2. People are freed of work
  3. People’s lives are less meaningful without work

I’m on board with one and two, but point three is really sticking in my craw.

But let me back up. If you’re not following the general conversation on AI, here’s basically what’s going on. The development of machines eliminated some human jobs (routine manual labor), and the development of computers eliminated others (routine cognitive labor). Up to now, non-routine work – cognitive, and to an extent manual – has been the safe haven of humanity; machines weren’t able to make the mental shortcuts required for these tasks. Well, that day is basically here.


A few months ago, using “deep learning,” an AI named AlphaGo was able to beat the world champion of Go (a strategy game, essentially chess on PCP). Only months before, the thinking was that this level of AI was a decade out – not so much. Let me put this in perspective: Go has more possible board positions than there are atoms in the observable universe, and by more I mean by an enormous factor. Traditional computing couldn’t grind through the permutations in a reasonable amount of time to take down the world champion. This AI essentially played the game and developed its own concepts, allowing it to earn the game’s “divine” rank and a level of play no human has demonstrated.
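
For the curious, here is a heavily simplified, hedged sketch of the self-play idea behind that result – a toy value table learned from games the program plays against itself, standing in for the deep networks and tree search AlphaGo actually used. The game (single-pile Nim), the parameters, and the names are my own illustration, not DeepMind’s method.

import random
from collections import defaultdict

# Toy self-play learner for single-pile Nim (take 1 or 2 sticks; taking the
# last stick wins). A simple value table stands in for AlphaGo's networks.
values = defaultdict(float)   # state (sticks left, player to move) -> estimated value
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate (arbitrary choices)

def play_one_game(start=10):
    """Both 'players' share the same value table and learn from the outcome."""
    history, sticks = [], start
    while sticks > 0:
        moves = [m for m in (1, 2) if m <= sticks]
        if random.random() < EPSILON:
            move = random.choice(moves)                        # explore
        else:                                                  # exploit: leave the opponent a bad state
            move = min(moves, key=lambda m: values[sticks - m])
        history.append(sticks)
        sticks -= move
    # The player who just moved took the last stick and won (+1); states are
    # credited alternately (+1, -1, +1, ...) from the end of the game backwards.
    reward = 1.0
    for state in reversed(history):
        values[state] += ALPHA * (reward - values[state])
        reward = -reward

for _ in range(20000):
    play_one_game()

# After self-play, states where sticks % 3 == 0 should look losing (negative)
# for the player to move -- the known optimal theory for this toy game.
print({s: round(values[s], 2) for s in range(1, 11)})

Even this toy version tends to rediscover the known strategy (leave your opponent a multiple of three) purely from the outcomes of its own games – which is the conceptual point, scaled down by many orders of magnitude.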

Basically, the consensus now is that the era of machines outperforming humans in all task types – routine manual, routine cognitive, non-routine manual, and non-routine cognitive – is effectively upon us. The commercialization and deployment of this technology will only be as slow or as fast as the economy mandates, but it’s already taking out call centers (millions of jobs globally), being added to your messaging services, to how you buy things, and so on.


A nightmarish depiction of the author via Deep Dream

Even the arts are not free from AI – just last year you probably saw a few Google deep-dream art pieces on your friends’ feeds; well AI is moving into music and writing as well. Given the massive repository of data for AI to digest and learn from, everything is really a matter of time – and not much time at that.

If you’re saying, “So what?” or “sounds good!” we’re on the same page. A world in which machines do the labor while people rest on their laurels sounds like a place that might know no war, know no famine, and is free to pursue higher aspirations, whatever those might be.

The train of thought continues – if there’s no work, then we end up with a universal basic income for all. I think this, coupled with the idea of no “jobs”, is what really bothers a bunch of folks, particularly here in America where some consider hours spent at work to be a status symbol.

If you haven’t seen it, here’s a sample from a recent article in the Guardian discussing Yuval Noah Harari’s thoughts on the matter:

“What might be far more difficult is to provide people with meaning, a reason to get up in the morning,” Harari says. For those who don’t cheer at the prospect of a post-work world, satisfaction will be a commodity to pay for: our moods and happiness controlled by drugs; our excitement and emotional attachments found not in the world outside, but in immersive VR.

I would counter that the Western concept of a life well lived – going to work 60+ hours a week and dying of a heart attack immediately after retirement – is not actually in line with our innate biological drives or mechanisms, and at their core I imagine most folks would agree.


I’m not sure who gets out of bed for this.

In America at least, if you’re not working in coal mines or tied to a desk, you’re usually perceived as a free-rider, a ne’er-do-well, a parasite. I recall one day in college calculus when a professor said to me, seemingly at random, “idle hands, Mr. Whalen, are the devil’s workshop.” Strangely I had never heard the phrase before, but it seems to encapsulate the American attitude towards work – if you’re not actively engaged in a task, and by that we mean unpleasant effort, then you are evil. In other words, a life of pain is good, a life free of pain is bad.

I won’t bother delving into the roots of this cultural phenomenon, but it’s fair to say this mental framework is in exact opposition to the biological drives of all animals: avoid stressors, conserve energy. Yes, all animals are driven to eat and reproduce, but not to the extent that they create a calorie debt; biologically we are no different.


Praying mantis in its natural state.

In the cultural context, I’m defining work as a non-self-fulfilling task which one engages in to obtain either the means to purchase, or to directly obtain, food, shelter, or other goods and services. In this respect an artist who enjoys their work is not engaged in work in this cultural sense, even though they may be paid for it. I’m not belittling the value of the artist’s work, simply stating that, culturally speaking in the West, this is usually not considered “work” in the classical sense.

So does everyone sit in their apartment, play video games and smoke pot? Maybe, but what exactly is the problem with that? Those individuals are making risk-benefit decisions about how they want to live their lives – decisions which, frankly, would not receive as much disdain if they had won the lotto in a pre-UBI world. Perhaps it is the leveling of the field which bothers many of us so much – that it becomes harder to demonstrate being one up on our neighbors? Perhaps it’s jealousy, at being unable to allow oneself to relax the way one would like to.

When I left active duty in the military I had 75 days of leave saved up. Rather than “buy back” some of that time, I took the full 75 days off from work. I did not play video games the whole time – technically I could have, but I just couldn’t sit around that much. And that’s not because I was an army guy – I didn’t really fit the stereotype (and few of us do). My body and brain needed me to get out and see the sun, to do something, to interact with the world. This is where I think a UBI would take us: to a new concept of work.

With UBI, work could become a choice – a task we engage in for the enjoyment of the task or the fruit of the task. Farmers could farm, for the joy of it. Gamers could game, for the joy of it. Writers could write. Hikers could hike. Politicians could… stay at home?

And everyone would have the freedom to change their new work as they pleased. That’s what UBI gives us: a chance to be human beings in the way our ancestors dreamed of – free to decide when to work, how to work, what work to do, and when not to work. Our ancestors didn’t dream of tethering our bodies to plows, desks, or war machines. That’s just how we got here.


This is not the master plan.


We’ve Identified the Enemy and it’s Us!

The overwhelming majority of insider threat events are not the result of a malicious employee’s actions; rather, they are caused by the unintentional insider – someone falls for a spear phishing email, data spillage occurs, documents are destroyed improperly, a data storage device is lost or stolen, or people become the victims of social engineering and elicitation.

Research shows that while well-known events like the Ashley Madison compromise, which involved an insider, get a lot of attention, organizations may be too focused on the spectacular threat vectors. A 2013 CERT Software Engineering Institute (SEI) study on unintentional insider threat showed that 17% of cases were unintentional hacks, while 49% were unintentional disclosure. The SEI highlights that employees should be cognizant of the non-spectacular risks which are far more common than the over-exaggerated spectacular risks. In other words, it may be more often the case that organizations are the victims of 10,000 paper cuts rather than a single atomic event.

While a lot of time and energy has gone into examining the root elements of the malicious insider, the unintentional vector has received less focus. Available research on the topic points to risk perception, biases, the influence of environment, and everyday stressors, though we shouldn’t discount simple ignorance. So, how can organizations address the very real risk of unintentional insider threat? Start by getting inside the mind of the average employee as you roll out your strategies.

Insider threat and cybersecurity training during onboarding or even annually may not be enough. The constantly evolving threat landscape requires ongoing training. For example, phishing emails used to be fairly obvious – spelling errors, an obviously incorrect sender email address, etc. Now, spear phishers commonly spoof legitimate sender email addresses, or have taken control of a legitimate user’s account through earlier attacks.
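
For what it’s worth, here is a hedged sketch of the kind of automated check that can back employees up – reading the Authentication-Results header a receiving mail server typically stamps onto a message with SPF/DKIM/DMARC outcomes. The sample message is invented for the example, and in real deployments this logic lives in the mail gateway, not in a script an employee runs.

from email import message_from_string

# Invented raw message; a real gateway would add the Authentication-Results
# header after checking SPF, DKIM, and DMARC for the claimed sending domain.
RAW_MESSAGE = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=ceo@example.com; dkim=none; dmarc=fail
From: "The CEO" <ceo@example.com>
Subject: Urgent - need gift cards today

Please handle this quietly.
"""

def looks_spoofed(raw: str) -> bool:
    """Flag messages whose recorded authentication results include a failure."""
    msg = message_from_string(raw)
    results = " ".join(msg.get_all("Authentication-Results", [])).lower()
    return any(token in results for token in ("spf=fail", "spf=softfail", "dkim=fail", "dmarc=fail"))

print(looks_spoofed(RAW_MESSAGE))  # True - worth a phone call before acting

Even then, a clean pass only means the sending domain checks out; a compromised legitimate account sails right through, which is why the phone call discussed next still matters.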

There may be little that employees can do other than call the sender to verify unusual requests for information or action, but that highlights another challenge to security.

Many of us don’t want to question a superior over a seemingly simple request, at the risk of looking insubordinate or appearing to challenge their authority. This deference to authority is exactly what attackers are preying on. Of course, this could be addressed by management or even incorporated into organizational policy – such as using voice confirmation for certain types of requests – but it must be part of a larger cultural shift to have any lasting effect.

People in general have a deferential attitude towards what happens on their workstations – if there is no error screen or warning, then things must be ok, right? Most people who are not intimately familiar with the mechanics of computing and the internet trust their machine or an administrator to tell them when something is wrong. The message to employees and the public in general usually amounts to something like “run anti-virus software and don’t provide your Social Security number to anyone asking for it in an email.”

Consider the feet-on-the-ground workplace culture – and here I am referring to culture as the system of beliefs, practices, orientations, acceptable norms, and demonized and praised attributes that organically emerges, not the practices as written. Individuals may be trained to respond in a certain way when risk presents itself, but they face a cost-benefit analysis in terms of cultural compliance. When there is no obstacle between the individual and cultural practices, or when individual and cultural norms are aligned, there is no pressure on the individual to act in a contrary manner (e.g. not follow security protocol). When the individual’s behavior is not in line with, or is contrary to, cultural norms, then the individual must make a cost-benefit decision, the result of which could depend on any number of factors.

Consider the recent Department of Justice “hack” in which 20,000 FBI employee names were released. According to media reports, the hacker said they were able to access systems by telling a help desk attendant they lacked a token code (two-factor authentication), and the attendant provided one because the caller posed as a “new employee.”

It seems unbelievable at first, but consider it from the attendant’s point of view. The caller seemed to know what they were talking about in terms of access. The caller might have been in a position of authority and could have posed a risk to the attendant’s job had the request gone unfulfilled. The attendant’s primary duty is to resolve issues, not to analyze them.

Going back to the authority statement: while most of us have been trained to know when to deny a privilege escalation, it’s another thing to get a request from a person who seems to be in charge – who might be able to affect our day-to-day stress level. So what do you do? What does a low-level system administrator in today’s economy do?

A dysfunctional work culture, or a work culture incongruent with documented policies and procedures, tends to be the result of some incentivized behavior, whether through perceived or actual punishment or reward. This isn’t too far a stretch from mixed messaging in parenting psychology – inconsistent rule application and messaging, and unbalanced, sometimes opposing, responses to behaviors, result in confusion for the child and an impaired ability to function accordingly.

As I pointed out in my blog post “Why Insider Threat Detection Fails”, humans are poor performers when it comes to detecting rule violations of anything other than social contract or personal safety rules. While we function in a super-connected world of relationships, the human mind still functions in a hunter-gatherer world, designed to monitor maybe 50 relationships. Simply put, the human mind is really concerned with its own survival and, by extension, its progeny; abstract threats to the corporation, such as supply chain risk, are not natural to the human mind and do not present as an immediate threat to the self.

As such, if training and communications about cybersecurity are only presented as a series of “if-then” concepts without tying those to the individual’s health and well-being, they will fall on deaf ears. That message – that the health of the company is the health of the individual – needs to be articulated, repeated, demonstrated, and believable. Rote memorization of “if-then” rules will yield some measure of protection, but it does nothing to build a culture or to take real residence in the mind of the employee.

Your employees are your first responders, your first line of defense, and the most critical asset. There are certainly a variety of factors which might cause them to become the next unintentional insider threat, but nothing is worse than apathy.

5 Ways to Combat Insider Risk

  • Climate surveys by a third-party industrial psychologist can clarify what the culture really is.
  • Messaging to the workforce – if in doubt, question. Build a culture of rewarding good security posture and questioning suspect requests.
  • Tie organizational risk to real-life employee risk in training. Don’t just say it’s bad for the company to lose money from IP theft via insider threat. Tie it all to the employee’s bottom line.
  • Be consistent – what’s on paper needs to match what managers exude.
  • Encourage questions. It might save you a lot of money. Employees who think they might be facing a security issue, insider or cyber, should feel that reporting and questioning is a duty rather than a burden. Make this a value and you could very well save a lot of pain in the end.

*Originally written for tscadvantage.com, reposted with permission.