The Great Hack Explained: Why marketers need to talk about ethical data management

If you haven’t seen The Great Hack on Netflix yet, then clear your calendar. It’s 2 hours and 19 minutes of discomfort. The documentary takes us through the Cambridge Analytica scandal, unpacking how algorithms and data mining were used to manipulate and distort information.

Lifting data from Facebook, Cambridge Analytica’s data scientists created detailed voter profiles and turned them into a communications arsenal like nothing we’ve seen before. With this arsenal, Cambridge Analytica helped elect Trump and bring us Brexit. Thanks in no small part to Facebook.

We can’t overstate the significance of this. We’re talking about democracy, civil liberties, and outcomes that change the world. It’s propaganda on steroids, so targeted that it changes how we think. Cambridge Analytica’s ex-Business Development Director, Brittany Kaiser, considers it weapons-grade technology. 

Yet, after all of this, very few of us are willing to actually quit social media. As communications professionals, this should both excite and terrify us.

But even though we keep scrolling, consumers are waking up to just how much data companies hold, and are debating the ethics of data management. And while very few brands hold the volume of data that Facebook does, these conversations about the power of consumer data and the ethics of using it impact us all.

Should consumers be worried? Yes, and no.

Facebook has tens of thousands of data points on each of us. That’s to say nothing of Google, Microsoft, ISPs, or any other company’s data. And data scientists are creating ever more powerful analysis techniques, building algorithms that predict and influence our behaviour. But this power can be used for good as much as it can be used for evil. So whether or not we should be concerned is very much a case-by-case consideration.

Personalisation like Netflix recommendations and Spotify’s Discover Weekly is great. Fake news used to change voting decisions? Not so much. Then there’s the grey, the maybe-ok. Optimising your social network to increase the time users spend on your app? Great for shareholders, but maybe not for your users.

How do we decide what’s acceptable and what isn’t? 

It’s a difficult one to navigate. Technology moves faster than legislation, and we’re playing catch-up. In the absence of firm rules or guidelines, we’re left to make these calls for ourselves. Does the fact that we can do something mean we should?

And as marketers, we need to know what we’re leveraging. We need to consider what’s appropriate for our products, and what’s reasonable for our customers to assume we’re doing with their data.

Algorithms that know whom to target wedding dresses to are harmless enough, but other products? Not so much. What about deploying those same algorithms to sell vaping gear? Or to deliver anti-dairy messaging?

There is no easy answer here. But as marketers, it is our responsibility to ask these questions and form our own guidelines, not only to protect consumers but our brands, too.

The Great Hack explains why we’re unprepared for the challenges we’re facing.

There’s no need to panic. We just need to ask better questions, and not just at an operational level. Governance structures need to play a role in this, and they need access to the right experts.

Aggregated, anonymised data is a powerful tool in PR and marketing. For starters, it allows us to genuinely personalise our offerings, or upgrade products and services based on consumer behaviour. It’s also a vital tool we often leverage in media relations. But it does need to be managed responsibly.

If your company hasn’t had a conversation about what ethical data management and re-targeting looks like, your reputational armour is lacking. 

If you are sitting agency-side and don’t know where your clients sit in terms of data management, you have a huge blind spot. When red flags appear, you need to be able to raise them with your client, just as you would any other reputational or brand risk.

For me, this conversation needs to centre on being honest and transparent with customers about what we’re doing, communicating our commitment to security and privacy at all times, and ensuring our data practices don’t override human will.
