
Through Your Mind’s Eye: How to Address Biases in Cybersecurity – Part 2

In Part 1 of our Through Your Mind’s Eye series, we explored how our brains don’t give every decision equal attention; instead, we take mental shortcuts known as biases. These shortcuts allow us to react quickly, but they can also lead to mistakes and oversights. Because we all have biases that shape who we are, our decisions in and out of cybersecurity can be affected in both good and bad ways.

Safety Bias

Safety bias is the tendency to focus on what could go wrong rather than take a risk. Many studies have shown that we humans dislike losing money even more than we like gaining it. You may have heard of studies in which people are offered a smaller amount of money now or a larger amount in two years; most participants take the sure thing now rather than wait for more. This preference reverses, however, when people face a loss. For instance, when asked whether they would rather lose $100 for certain or take a 50% chance of losing $1,000, most say they would take the gamble, even though its expected loss is far larger. Because of safety bias, decision making slows and healthy forms of risk taking are held back.
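To see why the gamble is the objectively worse choice, compare the expected values of the two options. Below is a minimal sketch in Python using the dollar amounts from the example above; the figures are illustrative, and real studies vary them.

```python
# A minimal sketch of the loss-framing example above. Dollar amounts
# are the illustrative ones from the text, not from a specific study.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

sure_loss = [(1.0, -100)]          # definitely lose $100
gamble = [(0.5, -1000), (0.5, 0)]  # 50% chance of losing $1,000

print(f"Sure loss EV: {expected_value(sure_loss):,.0f}")  # -100
print(f"Gamble EV:    {expected_value(gamble):,.0f}")     # -500

# The gamble is five times worse in expectation, yet most people
# choose it – a hallmark of risk-seeking in the loss domain.
```

The arithmetic shows the pull of safety bias: a certain loss feels worse than a gamble with a far larger expected loss.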

Safety bias shows up in security development operations, risk assessment, policies and procedures, decision making, and identity and access management. In security development operations, is your DevOps team applying traditional network controls to the cloud, or are they looking at how they can refactor to take the organization to the next level? Are they stuck in the past or moving toward the future?

When was the last time you reviewed your security products and their capabilities as part of a risk assessment? Are you keeping what you have because you already purchased those solutions, or are you reviewing them to ensure they are the best at keeping your organization safe? For example, does your current solution include a vulnerability scanner that can identify advanced vulnerabilities? Would you upgrade if it didn’t? If you aren’t regularly evaluating your security products against emerging threats, your risk exposure can grow without you realizing it.

There are also parallels with the example above, where participants took the immediate sure thing. The same thinking leads companies to invest in solutions that may be overkill, built to address overly specific, high-impact/low-probability risks. They are solving for something with a low probability of happening and, as a result, may spend far more on policies and procedures than necessary.
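One common way to sanity-check that kind of spending is the classic annualized loss expectancy (ALE) calculation, where ALE = single loss expectancy (SLE) × annualized rate of occurrence (ARO). The sketch below uses illustrative figures, not data from any real assessment.

```python
# A minimal annualized loss expectancy (ALE) sanity check.
# All figures are illustrative assumptions, not real data.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE x ARO: the expected loss per year from a given risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# A high-impact, low-probability scenario: a $2,000,000 incident
# expected roughly once every 50 years (ARO = 0.02).
expected_annual_loss = ale(2_000_000, 0.02)  # $40,000 per year

control_cost_per_year = 150_000  # hypothetical cost of the safeguard

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Cost of control:      ${control_cost_per_year:,.0f}")

# Spending $150k a year to avert an expected $40k a year in losses is
# exactly the kind of mismatch that safety bias tends to hide.
```

If the control costs more per year than the loss it is expected to prevent, the “safe” purchase is actually the uneconomical one.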

When a decision is ambiguous, system owners may be reluctant to upgrade or apply the latest patches. There may also be an unwillingness among end users to configure security features, and a lack of interest from developers in adding new security features to an existing application. These system owners err on the side of caution: breaking or changing something feels like a bigger risk to them than running unpatched. Likewise, developers may opt for cost savings rather than build in security features.

As you move from on-prem to cloud solutions, have you considered which software applications need to be retooled to optimize identity and access management in the cloud? What new identity analytics solutions need to be put in place to prepare for the future? Or are you keeping things “as is” because that feels like the safe thing to do?

Some social scientists lump the ostrich effect in with safety bias. The ostrich effect is named for the myth that ostriches bury their heads in the sand when they sense danger. Is your team “burying their heads in the sand” when they need to make a risky decision?

To overcome safety bias, put some distance between yourself and the decision being made. Imagine a past self who already made the choice successfully; this weakens the perception that a loss is coming. Another option, if you suspect safety bias in your environment, is to balance your team with both risk-taking and risk-averse members.

Other Biases That Could Arise

Framing Effect

The framing effect also influences safety bias and relates to how something is “framed,” or described. For instance, if something is worded negatively to emphasize the potential for loss, the receiver may be afraid to take a risk. You may have seen commercials for cyber services that say, “1 in 5 companies lost their data while using another service.” Instead of focusing on the four companies that did not lose their data, the ad focuses on the one that did, so you will think about that vendor protecting you rather than its competition. Another example that drives home the point comes from health care. Say you needed an operation. How would you feel if the doctor told you that you had an 80% chance of recovery? Now what if the doctor said you had a 20% chance of death from the same operation? Would you approach the operation differently? Pay attention to how statements are phrased so you can overcome gut reactions when deciding.

Affinity Bias

Affinity bias is gravitating toward what we know or are comfortable with as opposed to the unknown. For example, when you see a stranger in another city wearing your college alma mater’s sweatshirt, you instantly feel a connection even though you have never met. This creates an “in-group” bias. In cyber, this can manifest as an aversion to new product offerings. Are you still using the same solutions you’ve used for the last 20 years because they are familiar and comfortable, or are you using an XDR solution now? You may also feel your direct team alone has all the right answers and no one else knows how to secure the environment or application better. Is that because it’s true, or because you are most comfortable with them?

Similarity Bias

Similarity bias occurs because we as humans are highly motivated to see ourselves, and those who are similar to us, in a favorable light. We unconsciously create “in-groups” and “out-groups.” These can relate to the city or country where we grew up or live today, where we went to school, areas of interest, and so on. Are you hiring people who are similar to those already on the team, or are you looking for individuals and skills that bring diverse perspectives or meet your needs over the next one to two years?

Loss Aversion

Loss aversion can be observed when companies reason that they have already invested in traditional IT infrastructure, so why move to the cloud? Moving to the cloud takes time and resources. Instead of modernizing, they keep buying new servers and storage to keep the environment running as it has for decades.

Distance Bias

Distance bias is prioritizing what is nearby, whether in physical space, time, or another domain. Before the pandemic, when we held conversations in conference rooms, how many times did you see people in the room fail to gather input from remote colleagues on the phone? Or have you made a decision based on what needed to be done soonest rather than considering the long-term effects of what was best for the company?

How to Address Biases in Cybersecurity

As you saw with each of the biases featured in both of our articles, they are not mutually exclusive; the different types of cognitive bias overlap considerably. How do we address them?

  1. Acknowledge – Security is not just one product but a combination of products, processes, and technology, all of which depend on human behavior, and human behavior lends itself to biases. Acknowledging this helps us uncover which biases we fall victim to.
  2. Seek & Review the data objectively before deciding – Don’t base a decision on what was done previously or of the only the opinion of an “expert”. Review the data, look at how the options were framed and provide feedback. This can help address availability bias, confirmation bias, and framing effect.
  3. Include everyone who needs to have input on the decision or incident (including those with whom you may not agree). This addresses confirmation bias and unconscious bias.
  4. Utilize third-party companies to help evaluate in an unbiased way. Third parties can review your policies and procedures, perform penetration testing, and conduct risk assessments, to name just a few services. This objective opinion can address all the biases we discussed.
  5. Look to the future without attachment to the past. Ensure you are using monitoring tools capable of understanding human weakness and providing proper analysis based on user behavior analytics. This can address safety bias, loss aversion, affinity bias, and similarity bias.
  6. Don’t group human behaviors. Instead, look at individual behaviors – including your own. Educate your employees that many cyber issues are due to cognitive biases that attackers target in combination with technical flaws.
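As a concrete illustration of step 2, a weighted scoring matrix is one lightweight way to keep a review grounded in data rather than in framing or familiarity. The sketch below is a minimal Python example; the criteria, weights, and scores are hypothetical placeholders, not recommendations.

```python
# A minimal weighted-scoring sketch for comparing options against the
# same criteria before deciding. Criteria, weights, and scores are
# hypothetical placeholders – substitute your own.

CRITERIA = {  # weight of each criterion (weights sum to 1.0)
    "threat coverage": 0.4,
    "total cost": 0.3,
    "operational fit": 0.3,
}

options = {  # score each option from 1 (poor) to 5 (excellent)
    "incumbent solution": {"threat coverage": 3, "total cost": 5, "operational fit": 4},
    "candidate solution": {"threat coverage": 5, "total cost": 3, "operational fit": 4},
}

for name, scores in options.items():
    total = sum(weight * scores[criterion] for criterion, weight in CRITERIA.items())
    print(f"{name}: {total:.2f}")

# Scoring every option against the same criteria makes it harder for
# familiarity (affinity bias) or wording (framing effect) to tip the call.
```

Because every option is scored on identical criteria agreed on in advance, the decision is anchored to the data rather than to whichever option was described most persuasively.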

Where to Go from Here

Awareness of the cognitive biases at play for you and your teams is one of the first steps toward reducing your company’s risk. After you have acknowledged that biases and flaws may exist in your environment, examine where they may be influencing your cybersecurity posture. This requires personal insight and empathy from everyone involved.

Begin to educate others on where and how biases could be affecting your cybersecurity posture. Then conduct a thorough review of that posture and adjust as necessary. Over the following months, build habits across the team to ensure you are consciously removing the biases that could be influencing your decisions.

Our adversaries understand human biases and actively try to exploit them. Removing these biases as much as possible can help you and your team improve your security posture and defend your organization across all levels.
