Have you ever been stopped by an in-store security guard because the tag on a product you bought elsewhere triggered the door alarm in their shop? It turns out there’s a brand of stock-control chip with such a reputation for doing this that it actually causes a security vulnerability. The problem is especially interesting because it involves social engineering.
I set off the alarm on the way out of my local supermarket at the weekend. The security guard came over and asked whether I’d bought any electronic items. I hadn’t, but I did have a CD in my pocket that I’d bought elsewhere earlier, and its security tag had triggered the system. The guard gave me a knowing look and said that this particular brand of RFID chip is notorious for causing false positives in a number of other stores. He enthusiastically demonstrated the problem by using my CD to set off the alarm a couple more times, and then cheerfully waved me on my way. He also unwittingly revealed a vulnerability in the supermarket’s security procedures.
At no time did the guard examine the content of my shopping bags, which I had left on the street side of the sensors during our entire conversation. In other words, the existence of a false positive was enough of an explanation to convince him I wasn’t a thief.
Luckily for the supermarket, he was right.
Now I know what you’re thinking: surely at this point the guard should eliminate the CD as a possibility and then ask you to push the trolley through the sensor again, right? This is where the social engineering comes in. If you appear to be well dressed, articulate, polite and helpful, chances are you’ll fail to raise any suspicion, and the explanation for the alarm that you’re presenting will be accepted – especially if the guard has seen it happen before. The odds are good that you’ll get away with it.
It’s very difficult to defend against this sort of trickery with minimum-wage security guards and a system that is prone to false positives. I’m sure that if you asked any shop with one of these alarms, they would say their procedures should prevent this kind of con, but in the real world it’s often possible to get round systems that rely on humans to be effective: people are usually the weakest link in any security system.
Something to think about next time you go through an airport security checkpoint.
Now, this is an interesting point.
Unfortunately (or fortunately, I’ll come back to that later), people will always be a key link in any security system. Security systems vary in form and technology but they more or less always come back to comparing something to a threshold. Comparing a measure to a threshold induces, among other things, a probability of detection (i.e. true positive) as well as a probability of false alarm (i.e. false positive). Amazingly both physics and maths agree on the fact that the two are linked and that you can’t increase one without increasing the other.
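To illustrate the point, here is a minimal sketch (my own toy example, not from the post: I assume the sensor produces a Gaussian reading, with invented means of 0 for innocent shoppers and 2 for tagged goods) showing how moving the alarm threshold trades one probability against the other:

```python
# Toy model of a threshold detector: the sensor reading is assumed
# Gaussian with mean 0 for innocent shoppers and mean 2 for tagged
# goods (invented numbers, purely for illustration).
import math

def tail_probability(threshold, mean, sigma=1.0):
    """P(reading > threshold) for a Gaussian(mean, sigma) measurement."""
    return 0.5 * math.erfc((threshold - mean) / (sigma * math.sqrt(2)))

for threshold in (0.5, 1.0, 1.5, 2.0):
    p_detect = tail_probability(threshold, mean=2.0)  # probability of detection
    p_false = tail_probability(threshold, mean=0.0)   # probability of false alarm
    print(f"threshold={threshold:.1f}  "
          f"P(detect)={p_detect:.3f}  P(false alarm)={p_false:.3f}")
```

Raising the threshold lowers both probabilities and lowering it raises both: with a fixed sensor, you cannot buy more detection without paying in false alarms.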
That’s where the human factor comes into play: it’s up to people to ultimately make the distinction between a correct detection and a false alarm. And once you consider that what’s at the end of the detection chain could be a weapon system, I am glad a person, as imperfect as we all are, has the final call before any decision is triggered.
No doubt people are the weakest link in security, but what’s the alternative? No security system? We live in a society where unfortunately that’s not an option. I am not naive enough to think any security system will deliver absolute security, but for one, I don’t mind wasting a little time going through security checks at an airport knowing that, although it doesn’t guarantee a terrorist-free flight, it might deter some would-be terrorists from trying to get on my flight.
@French-Spy: I agree with your analysis, but I think there are ways that the weakest (human) link can be strengthened. Unfortunately they’re expensive.
Firstly to your point about detection thresholds and false positives, specifically thinking about airport security and terrorism: the trouble here is that terrorists are *incredibly* rare compared with the population of legitimate travellers. 99% of security guards manning airport x-ray machines will never come across one. Virtually every positive indication from the system (of machines and guards) will be a false positive, even if the false positive rate is vanishingly small.
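To put rough numbers on that (the figures are invented for illustration, not real statistics): suppose one traveller in ten million is a terrorist and the screening system is implausibly good, with a 99% detection rate and a 0.1% false alarm rate. Bayes’ theorem then gives the chance that any given alarm is real:

```python
# Base-rate calculation with invented numbers: 1 traveller in 10 million
# is a terrorist; the screen detects 99% of them and falsely flags 0.1%
# of innocent travellers.
p_terrorist = 1e-7
p_alarm_given_terrorist = 0.99
p_alarm_given_innocent = 0.001

# Total probability of an alarm, then Bayes' theorem.
p_alarm = (p_alarm_given_terrorist * p_terrorist
           + p_alarm_given_innocent * (1 - p_terrorist))
p_terrorist_given_alarm = p_alarm_given_terrorist * p_terrorist / p_alarm

print(f"P(terrorist | alarm) = {p_terrorist_given_alarm:.6f}")  # about 0.0001
```

Even with that generous detection rate, only about one alarm in ten thousand points at a real terrorist; the rest are false positives the guards have to wade through.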
Secondly, watching bags go through x-ray machines (or shoppers go through tag readers at shop exits) is a mind-numbingly boring job. After a while the brain starts to make efficiencies and basically stops paying attention – we subconsciously assume that the results will be constant, making false negatives more likely.
So to summarise, minimum-wage security guards watching something very dull, looking for something incredibly rare, are unlikely to succeed.
Now, as you rightly point out, a small degree of blanket screening is important. It helps to catch the “idiot terrorists”, and means that even smarter terrorists cannot *guarantee* they won’t be detected. But pouring vast resources into the screening effort isn’t smart – the point of diminishing returns is rapidly reached.
So what’s the alternative? Well, I like Bruce Schneier’s suggestion: take that extra money and invest it in training security officers to be extremely good at spotting “out of the ordinary” behaviour: nervousness, sweating, unusual interest in airport security apparatus. The training has to be good enough to eliminate profiling for race, gender and other stereotypes, but basically I think airports should employ experienced officers who can ‘just tell’ when something doesn’t add up.
The problem with this is, of course, cost. That’s why the foreseeable future has us removing our shoes and slowly filing past bored-looking guards :)