Why Social Media Is A Security Issue

With change accepted as an everyday part of life – and with Web 2.0 accelerating the pace of change – ambiguity in all spheres of business, government and society is reaching unprecedented levels. Technology writer Jason Feifer, writing in the US business magazine Fast Company (November 2014), bemoans the fact that at a policy retreat with Congressional and Obama staffers on the Internet of Things, “virtually all the discussion was about risk”.

Feifer admits that, given revelations about the US Government spying on its citizens, “it is understandable that people fret”, but adds that legitimate concerns can quickly turn to paranoia, and that progress can be impeded by a “neo-Luddite class … that has a stake in opposing technology”.

However, just because you are paranoid does not mean they are not out to get you. There is a very real risk that your email accounts may be hacked, your phone calls eavesdropped on, and your documents siphoned from cloud services – and “they” may not be your government, or any government for that matter, but activists, organised crime, or terrorist organisations. Or a 15-year-old Wisconsin kid with nothing better to do with his time.

An even bigger concern is the amount of information leaking from organisations through social media services – information that could jeopardise individuals, employers, and the public.

Then there are the unintended consequences of such services, most notably how quickly criminal elements figure out how to abuse them, not to mention the ability of pranksters and activists to use them to generate support, disrupt activities, or embarrass corporations and governments.

Meanwhile, mention, say, social media to a security manager, and the response is likely to fall into one of two camps: “that’s for the IT department”, or “corporate communications handle that stuff”.

Really? Consider just some real-life examples.

Case study 1: A young woman has a new job as cabin crew for an airline overseas. During training, she uploads an Instagram picture of the sunrise over her workplace. On the same Instagram page are pictures of t-shirts reading: “Don’t talk to the bitch” and “F*#K you”. Hardly the image the airline’s marketing budget strives to create. Armed with her name and her picture from Instagram, a stranger can now identify her Facebook page, where she writes about her new life in a foreign country, and includes her new telephone number. All publicly available. No invasion of privacy. No hacking. Just open search techniques. From a security point of view, the problem is not only organised crime or terrorists approaching a new employee, but the more mundane issue of disgruntled passengers or potential stalkers. The point is, would she walk into a bar and shout out her telephone number?

Case study 2: A woman applies for the position of PA to a person in a sensitive position. Open search reveals that in December she was posting comments about how much Christmas was costing. In January and February, she complains about the size of her credit card bills. In March she tells all that she has taken a job pole dancing to earn extra cash to pay off her credit cards. None of these revelations is particularly startling (or illegal), but in aggregate they create a different picture: a potentially vulnerable staff member with access to important information.
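Neither case required hacking. By way of illustration only – using entirely made-up posts, not any real profile or platform data – the following Python sketch shows how little effort it takes to aggregate publicly visible posts and scan them for risk indicators such as exposed phone numbers or signs of financial stress:

```python
import re

# Hypothetical posts, as might be copied from publicly visible profiles.
# No scraping code is needed to make the point: once data is public,
# aggregating it is trivial.
public_posts = [
    ("instagram", "Sunrise over the hangar on day one of training!"),
    ("facebook",  "Loving my new life abroad - call me on 050 123 4567"),
    ("facebook",  "Christmas has cost me a fortune this year"),
    ("facebook",  "These credit card bills are getting out of hand"),
]

PHONE = re.compile(r"\b\d{2,4}[ -]\d{3}[ -]\d{4}\b")
FINANCIAL_STRESS = ("credit card", "bills", "cost me a fortune")

profile = {"phone_numbers": set(), "stress_signals": []}

for platform, text in public_posts:
    # Collect any phone numbers posted in the clear.
    profile["phone_numbers"].update(PHONE.findall(text))
    # Flag posts suggesting money troubles.
    if any(term in text.lower() for term in FINANCIAL_STRESS):
        profile["stress_signals"].append((platform, text))

print("Exposed phone numbers:", profile["phone_numbers"])
for platform, text in profile["stress_signals"]:
    print(f"Possible financial stress [{platform}]: {text}")
```

The point is not the code, which any hostile could write in minutes, but that everything it needs is sitting in plain view.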

Case study 3: An activist group with no resources gathers open source intelligence on a power plant, including corporate videos on company websites. Armed with sufficient – and it is worth stressing here, publicly available – information, the group mounts a demonstration that closes the plant for weeks and costs millions of dollars. The group also posts a video online to show other activist groups how it is done.

At what point would an IT department be involved in any of these cases? To be sure, it may issue the likes of an acceptable use policy, but its focus would be keeping stuff off the network – i.e. stopping hackers or other hostiles from getting into systems. The real problem, however, is likely to be human factors. Even if you had a computer in a sealed room with no network connection whatsoever, at some point a human has to interact with it.

As for the corporate communications department, there is every chance of it unwittingly being part of the problem, if Twitter campaigns are anything to go by.

For example, when the New York Police Department’s media people ran a Twitter campaign inviting the public to post pictures, they expected pictures like this:

[Image: article1-pic1]

Instead, what they got was this:

[Image: article1-pic2]

And this:

[Image: article1-pic3]

And a lot more besides.

The fact is, media and public relations departments view social media as a broadcast tool: a means of communicating with constituencies in real time. Which it is. But it comes with inherent risks, and overlooking them – failing to engage with security and risk management – means those risks are not being anticipated, far less mitigated.

The risks entail not only staff misstepping online; a whole raft of new crimes and problems is beginning to surface, and quickly. For example, Airbnb, a popular online service that allows people to rent out a spare room or their entire home to strangers, was being used by people smugglers in Europe to house their illicit cargo. Cases have also surfaced of various scams, such as people renting homes, posing as real estate agents, and advertising the property for cheap rent to obtain deposits. All of which raises the question: how long before we hear an Airbnb defence after a drug raid – “It must have been the people who rented my place that left it, your Honour”?

Misuse and scams are not the only issue here. Too few people look under the hood of the apps they are downloading. As a rule of thumb, you can pretty much take it as read that, if it is free, you are the product – i.e. any data you are giving access to is being bought and sold to third parties you know nothing about. Some people respond that they do not care what happens to the data, because they are doing nothing wrong, illegal or immoral. But what if a future government decides that something is to be outlawed?

The problem, too, is that data fails to give context. You may be searching “Obama” and “bombs” because the President gave a bad speech; the algorithms are not to know that (a point the sketch below makes concrete).

What about the people behind the app? Ask people if they use the popular app Viber, which provides free communication. Point out that Viber is owned by a former Israeli intelligence operative, that it was funded by “family and friends” (but we do not know who those friends are) and that, while the company emphasises that it is not incorporated in Israel, its research and development arm is based there. That is not to say that Viber is doing anything inappropriate, but point this out to anyone in an Arab country and the next question is how to delete it. It should give anyone in security-related fields pause for thought, too.

Again, people argue that even if intelligence agencies do have access to their kitten videos and buying habits, so what? From a surveillance point of view, that data may tell a lot about you, your character and your habits. Moreover, advances in the likes of facial recognition raise the question of where future field operatives will come from in a generation that has grown up as digital natives in a sharing world. More importantly, data analysis is a relatively new field, and how it may be used in the future lies in the realm of unknown unknowns.
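The sketch below is a deliberately naive keyword flag – for illustration only, not any agency’s actual method – and it treats a speechwriter’s research and a genuine threat identically:

```python
# A deliberately naive co-occurrence flag. It has no notion of context,
# so innocent research and genuine threats look the same to it.
WATCHLIST = {"obama", "bombs"}

def flag(query: str) -> bool:
    """Flag a query if every watchlist term appears in it."""
    return WATCHLIST <= set(query.lower().split())

print(flag("why did obama mention bombs in his speech"))  # True - innocent
print(flag("obama gave a bad speech yesterday"))          # False
print(flag("cheap fertiliser in bulk, no questions"))     # False - missed
```

Real systems are far more sophisticated, but the underlying limitation scales with them: data stripped of context invites both false positives and false negatives.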

There is no doubt that social media and other Web 2.0 developments have changed, and will continue to change, the way in which we communicate and work, with the result that ambiguity is rising to unprecedented levels. Organisations may be good at solving complicated problems, but they tend to be so when all the parameters are in clear view; not so much when the only variable you can rely on is uncertainty. This will mean learning to tolerate ambiguity and taking security beyond the realms of traditional risk management.

As Rolf Dobelli writes in The Art of Thinking Clearly, “Risk means that the probabilities are known. Uncertainty means the probabilities are unknown. On the basis of risk, you can decide whether or not to take a gamble. In the realms of uncertainty, though, it is much harder to make decisions.”
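In practical terms, the difference can be shown with made-up numbers. Under risk, a simple expected-value calculation is possible; under uncertainty, the same calculation cannot even be started, because the key probability is unknown:

```python
# Risk: the probability is known (an assumed, purely illustrative figure),
# so an expected-value decision is straightforward.
p_breach = 0.02                          # known annual probability
loss = 1_000_000                         # cost if the breach occurs
safeguard_cost = 15_000                  # cost of the countermeasure

expected_loss = p_breach * loss          # 20,000
print(safeguard_cost < expected_loss)    # True: the safeguard pays for itself

# Uncertainty: for a novel social-media-driven threat, p_breach is simply
# unknown, so the calculation above cannot be run at all.
```

The figures are invented, but the asymmetry is the point: much of what social media throws at organisations sits on the uncertainty side of that line.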

Feifer, our technology writer, may have been miffed that “virtually all the discussion was about risk” at the policy retreat with Congressional and Obama staffers, but somehow security will need to become an integral element in developing and adopting new technologies.

Who will be making the hard decisions? The IT department? Corporate communications people? Or a new breed of security managers oriented toward the future, prepared to continually adapt and learn new concepts and skills, rather than trying to apply yesterday’s solutions to today’s problems?

Or, security managers prepared to hire such people.

Rod Cowan
Rod Cowan has contributed for over 30 years to security around the world through his writing, teaching, speaking at industry conferences and public events, as well as assisting in various Government investigations and corporate research. Cowan is a Research Fellow with the Research Network for a Secure Australia (http://rnsa.org.au) and was convener of its Safeguarding Australia Annual Summit 2017 in Canberra. He is also a Strategic Advisor to the Dubai-based Emirates Group Security/Edith Cowan University Centre of Aviation and Security Studies (CASS).