
Cybersecurity: It’s about your people, not your systems




People who 'do cybersecurity' tend to be IT systems specialists. Good for them! …at least, as far as their skillset goes. But in truth, these days the biggest risk to IT security isn’t the tech part of the system. It’s the people using it.


The first step to a better defence is to invest time in understanding human behaviour. Read up on (or brush up your knowledge of) human biases, especially the ones that affect #riskperception and #trusting. In evolutionary terms, our dear old monkey-derived human brains are full of bias-shaped holes that present a huge 'attack surface' to a would-be intruder. The latest generation of data thieves don't just hack machine code; they are #socialengineers: they hack people. And – trust me, I'm a behavioural scientist – people are terrifyingly easy to hack. Over on the dark side, the new social engineers have done their homework on human brain science. They've found out how to repurpose #behavioural insights as a route map to systems hacking, and they'll use that knowledge to play your staff like a violin.


AI cuts both ways

Whilst the developers are busy convincing us that AI is going to solve all our old problems, in reality, of course, AI brings a stack of new ones. Granted, AI's pattern-detection skills mean it can spot, in a split second, some threats that human auditors would never have detected. But let's always remember that AI has been trained on a broad diet of 'human cultural' content – meaning not just the good bits of human output (scientific analysis, art, ethics) but also the entire loonytune sphere that is the world wide web. AI has soaked up our human biases and is now busy amplifying and spreading them. So… let's not default to relying on AI to keep us safe.
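To be fair, that pattern-spotting bit is real enough. Here's a deliberately toy Python sketch of the idea – flagging a login hour that falls way outside a user's usual pattern. The history data, the threshold, and everything else here are made-up illustrations, not a real security control:

```python
# Toy illustration of the kind of pattern-spotting that AI/statistics can do:
# flag logins that fall far outside a user's usual hours. A real system would
# use far richer features; the history and threshold here are invented.
from statistics import mean, stdev

usual_login_hours = [9, 9, 10, 8, 9, 10, 9, 8]  # hypothetical history (24h clock)

def is_unusual(hour: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Crude z-score check: is this login hour wildly out of pattern?"""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

print(is_unusual(3, usual_login_hours))   # a 3 a.m. login -> True (flag it)
print(is_unusual(9, usual_login_hours))   # business as usual -> False
```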


Cultivating cyber-smart working

A properly resilient approach is to get your own people involved in actively defending their work sphere. This means far more than just 'locking the doors' of system access. It means keeping everyone actively engaged: teaching them about the latest forms of scam, and keeping their #situationawareness skills sharp. Where people have been trained to respect their own human instinct for risk-sensing – to use it and to keep using it – those (anti)social engineers are far less likely to come calling.


The way we respond to system-borne messages reveals a lot about how our brains work. Whether at home or in our massively connected workplaces, we're constantly the targets of the 'attention economy' – all kinds of actors pushing content at us. We consume content with special enthusiasm when it arouses us in some way – to joy, fear, or anger, say, or a wish to be helpful – and this is what online marketers use to snare us into buying stuff. Or at least, into reading their targeted ads.


And just as marketers trigger arousal states to get us to buy stuff, so too do the 'social engineers', who also want to part us from our money, or our valuable data. Here are just five bits of psychology the bad guys know that you may not have thought about much:


1. that it’s easy to pique our curiosity (hey, look at this! – did you know that?); 


2. that we’re creatures of habit (same password for everything?);


3. that our lovely human tendency towards empathy – the wish to 'be helpful' – swings the door open to all kinds of scams;


4. that, despite social change and flattening of work hierarchies, we still defer a bit too quickly to authority (the big boss, the visiting official, the new IT helper, the expert contractor);


5. that – despite all the scam warnings – we still engage when there's a sense of urgency to get results ('could you just help me quickly? 'cos I'll be in trouble if this isn't done before the boss gets back…').


The fact that behavioural science gives fancy names to all these effects doesn't make them any less common, or less stupid, or less exploitable. (You really want the fancy names? DM me and I'll put them in a comment.)
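You don't need the fancy names – or fancy AI – to start spotting these five levers in an incoming message, either. Here's a toy Python sketch of the idea: all the cue phrases are my own illustrative guesses, and real social-engineering detection is far subtler than this, but it shows how those 'psychology things' translate into checkable red flags:

```python
# A toy sketch (not a real filter!) mapping the five levers above onto
# simple red-flag heuristics. Every cue phrase below is an illustrative
# guess; real scams are far subtler, and wording varies endlessly.
RED_FLAGS = {
    "curiosity":   ["look at this", "did you know", "you won't believe"],
    "habit":       ["reset your password", "verify your login"],
    "helpfulness": ["could you help", "do me a favour"],
    "authority":   ["ceo", "compliance", "it support", "head office"],
    "urgency":     ["urgent", "right now", "before the boss", "asap"],
}

def red_flag_score(message: str) -> dict[str, int]:
    """Count how many cues from each psychological lever appear."""
    text = message.lower()
    return {lever: sum(cue in text for cue in cues)
            for lever, cues in RED_FLAGS.items()}

msg = "URGENT: Compliance needs this ASAP - could you help before the boss gets back?"
scores = red_flag_score(msg)
print(scores)  # per-lever cue counts
print("Pause before you click!" if sum(scores.values()) >= 2 else "Looks ok.")
```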


Tips for would-be Digital Neighbourhood Watchkeepers


Stay sharp: follow the news. Brief yourself, and colleagues, on the latest cyber fraud MOs ('how they did it').


Reward thoughtful working, such as 'pause before you click', to stop impulse-clicking and doom-scrolling. Who's behind that unsolicited message, in whatever form it arrives? How do we know the sender is who they say they are? (Really: the deepfakers can now perfectly imitate your 'big boss', your family members, your IT helpline, your Compliance officer. Think carefully about what that implies. Do you even know what your big boss, IT, or Compliance people really look and sound like?) A social hacker I recently interviewed said they get furthest in, fastest (as in, access to your deepest secret data) by pretending to be your Compliance team. Clever, right? Because who doesn't want to help Compliance, to get them off your back quickly?


Weave critical thinking and intuitive risk skills into your workplace #culture. Critical thinking can be easier than it sounds: just try this version of the 'Five Whys': Why this message, why from this person, why to me, why now, why on this channel? Intuition is our most under-used human superpower, our greatest evolutionary inheritance: 'the Gift of Fear', it's been called. Properly used, it can protect us against many kinds of unexpected, unwanted attention. Far too few organisations even know about its power or potential value, let alone that this instinct can be trained, like a muscle – perhaps because they're all still too busy ooh-ing and aah-ing over the AI that's going to solve all their problems.
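If it helps to make the 'Five Whys' concrete, here's a minimal sketch of it as a self-quiz script – the kind of thing you might adapt for an awareness exercise. The five questions come straight from the paragraph above; the scoring and wording around them are my own invention:

```python
# A minimal sketch of the 'Five Whys' message check as an interactive
# self-quiz. The questions are from the article; everything else
# (scoring, prompts) is illustrative, for awareness-training use only.
FIVE_WHYS = [
    "Why this message?",
    "Why from this person?",
    "Why to me?",
    "Why now?",
    "Why on this channel?",
]

def five_whys_check() -> None:
    """Walk through the five questions; any 'don't know' means slow down."""
    unanswered = 0
    for question in FIVE_WHYS:
        answer = input(f"{question} (type your answer, or '?' if unsure): ")
        if answer.strip() in ("", "?"):
            unanswered += 1
    if unanswered:
        print(f"{unanswered} question(s) unanswered - pause and verify out-of-band.")
    else:
        print("All five answered. Still: trust your gut.")

if __name__ == "__main__":
    five_whys_check()
```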


Evangelise: spread the word. Be your local enthusiast for the value of 'risk-aware working'. Yes, this skill can be taught. (Actually, I've been teaching it quietly, to anyone who's asked, for over 20 years now – and there are a lot of very happy workshop alums out there using it, even though I never did get round to actively marketing the service.)


Conclusion

We can be more cybersecure, but it takes the whole village to do this. I'll be taking a much deeper dive into all of this later this month, at the GDS CyberSecurity Summit in Amsterdam. As well as covering the all-important psychology bit, I plan to guide attendees through six types of 'bad actors' (no, not that kind), and to share my conversations with said actors. Yes, of course I interview the bad guys – always have done! How else d'you think we'd learn why they do what they do?
