Is society becoming too accepting of data breaches? Do we claim to want more privacy but then continue to treat our own data in a cavalier fashion?
A recently leaked internal memo from Facebook revealed the company’s plans to normalize data scraping leaks and change the way the public views such incidents.
We spoke to Derek Taylor, lead principal security consultant at Trustwave, about why we shouldn’t be encouraged to accept data breaches as the norm, and how the user’s privacy calculus around data disclosure decisions (known as the privacy paradox) can easily be manipulated.
BN: How can users’ decisions around their data be manipulated?
DT: Various tactics are deployed by organizations to influence user behavior. The first of these would be automated decision making: technologies such as artificial intelligence, deep learning, context-aware computing, that type of thing. Large elements of data use, like consent, affect automated decision making.
The second would be tracking and surveillance: the creation of shadow profiles, which can be used in a variety of contexts. It could just be advertising, or it could be cyberbullying or social engineering.
BN: Is there an element of risk in anthropomorphism — making computers seem like humans?
DT: There are things like natural language processing, chatbots, robots, etc. These can be trained on volumes of data to impersonate individuals. This data is likely to be useful to IoT-type applications, vehicles, wearables and so on. If you’ve got lots of people’s data you could actually manipulate their physical behaviors as well. There’s a whole range of technology challenges, many of which are legitimate, but if someone has access to large volumes of non-consented real data, your ability to represent yourself could be compromised through manipulation.
BN: Is there a lack of awareness among people of just how valuable their data can be?
DT: I think there’s a broader challenge in society. Some companies, Apple for example, are certainly marketing themselves as being security and privacy aware; if you buy an Apple product there’s an expectation of security and privacy. Large elements of Android, it could be argued, are different from that: you are accessing free services, you’re signing up to Ts and Cs, and you’re not reading those Ts and Cs. Your data is then being used by organizations, effectively subsidizing the services.
There is an academic argument that, because the average price point for Android is lower, privacy becomes a privilege of the wealthy, who can afford it. In a less developed country, chances are you also have a demographic that may only be able to afford a low-end smartphone, and that will not have the same levels of privacy associated with it. We’re kind of sleepwalking into a commercial model in the West where, maybe, we’re forcing others to a point where privacy is for those that can afford it, unfortunately.
BN: But don’t you also have companies incentivizing you to give away information by offering discounts or other offers?
DT: Yes, I think fundamentally, for people in general, data privacy is not something that falls into Maslow’s hierarchy of needs. But it is hugely valuable: a multi-billion dollar industry is, in effect, profiting from people’s apathy. And when you do knowingly consent you think you’re just giving away some data, but you don’t necessarily read the small print. So often there’s overreach, which isn’t necessarily in line with privacy best practices in terms of industry norms, or at least policy norms, or indeed elements of some of the more progressive regulations.
People don’t read the small print and appreciate how valuable their data is, but importantly they also don’t know how to monetize it themselves. You know, maybe I’m getting slightly cheaper car insurance that is personalized to me, and that’s absolutely fine so long as I consent. The issue is that people don’t necessarily knowingly do that. There are whole reams of privacy professionals trying to create privacy policies that are much easier to read, but it still feels like tinkering around the edges. Fundamentally it boils down to this: if you offer something cheaper that does virtually the same thing, then giving away some data is a small decision in the individual’s head but very profitable for the businesses.
BN: If people realized what their data was worth would they demand more in exchange for it?
DT: Again it boils down to personalization: the more data you give, the more personal a particular service might be, which can be exactly what you want. There is a range of privacy-enhancing technologies designed to enable privacy in relation to whatever service is being provided, and to do so in a manner that is helpful to the end user. That might mean minimizing what’s collected, abstracting data, or de-identification.
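The techniques Taylor names can be illustrated with a small sketch. This is a hypothetical example, not anything Trustwave or Taylor describes: `pseudonymize` stands in for de-identification (a salted hash replacing a direct identifier) and `abstract_mileage` for abstraction (coarsening an exact value into a band); the field names and salt handling are illustrative only.

```python
import hashlib

# Illustrative only: in practice the salt would be a secret managed
# per deployment, not a hard-coded constant.
SALT = b"example-salt"

def pseudonymize(identifier: str) -> str:
    """De-identification: replace a direct identifier with a salted hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

def abstract_mileage(miles: int) -> str:
    """Abstraction: generalize exact annual mileage into a coarse band."""
    if miles < 5000:
        return "under 5k"
    if miles < 10000:
        return "5k-10k"
    return "over 10k"

# Minimization happens implicitly: only the two derived fields are kept,
# not the raw record.
record = {"email": "driver@example.com", "annual_miles": 8200}
safe_record = {
    "driver_id": pseudonymize(record["email"]),
    "mileage_band": abstract_mileage(record["annual_miles"]),
}
```

The service still gets enough signal to personalize a quote, while the raw identifier and exact figure never leave the device.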
It’s privacy by design and default: turning on what you are sharing in small, incremental, bite-sized chunks, rather than the default being ‘everything is shared and you can’t turn it off’. That makes a large difference, particularly if the user is repeatedly prompted by regular reminders to review those same settings and to reconfirm that they still want to share a piece of information. So, going back to the car insurance policy, that might be your mileage, your usage or whatever.
BN: Do you think people will be willing to pay more for services that protect their data?
DT: Most people now have a smartphone; most of the revenue in the mobile phone market comes from Android, but most of the profit goes to Apple. I think that tells you two things: you can make money either way, but also, if you can correctly position a high-end product or service that’s more expensive at the point of purchase and double down on privacy as your market differentiator, you can make more money than any single competitor.
BN: Does there need to be more openness surrounding the disclosure of data leaks?
DT: You have legislation like GDPR that means organizations have to make certain disclosures. And when a breach occurs it does hit a company’s stock price. But reputations recover over a relatively modest period of time: Equifax is an example of a data company that suffered a significant breach but now makes more money selling PII insurance. This is where there’s a privacy paradox. People claim to care, but individually, and quite often collectively, the long-term consequences for businesses that breach data are not severe.