Wednesday, March 21, 2018

What have we learned about Facebook that we didn’t already know?

This week’s backlash against Facebook’s practices was as predictable as it was unavoidable. The smoking gun is in the hands of Cambridge Analytica, which is either the epitome of corruption or simply the poor kid who happened to be in the orchard when the farmer walked by, even though every other child had been stealing apples with impunity.

Let’s pick this apart. The backlash was predictable — why, yes. We have been handing over our data in the vain hope that some random collection of profit-driven third parties might nonetheless act in our best interests. How very naive of us all, but we did so apparently eyes-open: if you are getting something for free, then you are the product, goes the adage.

What we perhaps didn’t realise was just how literally this might be taken. We, or our virtual representations, have been bundled into containers and sold to whoever took an interest, feeding algorithms like mincing machines that have, in turn, fed highly targeted and manipulative campaigns. Keep in mind that the perpetrators maintain that they have not broken any rules.

And indeed, maybe they haven’t. The ‘crime’, if there is one, revolves around the potential that Facebook’s data (well, it’s ours really, but let’s come back to that) was in some way ‘breached’. If that is the case, at least privacy law has something to go on. Let’s be realistic, however: this is trying to fit an ethical square peg into a legal round hole.

Meanwhile, the practice of Facebook data harvesting via quizzes etcetera carries on. You don’t have to be as underhand as this: simply have a page of puppy pictures, get lots of people to like it and then sell it on. We already know what is going on, deep down, but we live in hope that the worst will not happen. Optimism is a positive trait, but blind optimism less so.

When we are told exactly what is going on with our data, when it is presented in black and white, it is like a veil is removed from our eyes. People are angry, and perhaps rightly so, when they realise the depths to which organisations will stoop even if "they have not broken any rules." It's the same shocked reaction as when someone finds out exactly where their meat comes from and swears to become vegetarian on the spot.

Equally, the backlash was unavoidable as we arrive at a point of realisation of, and therefore responsibility for, what our new, data-oriented powers give us. Back in 2014 I wrote that we were heading towards a Magna Carta moment; more recently I noted that GDPR, good as it is, doesn't deal with the kinds of challenges we now face. Rest assured that I'm not going to say it all again, but the points still stand.

What about Cambridge Analytica — should the company be hauled over the coals? It's not as simple as that, as the organisation happened to be in the right place, or the wrong place, at the right time. The chances are that its algorithms are not all that smart: they probably depend on machine learning or Bayesian techniques that are now well understood. This also means the organisation will not be a lone wolf: how many more have reaped the harvest of this 'data wants to be free' wave?
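
To give a sense of how well understood these techniques are, here is a minimal, purely illustrative sketch of the sort of thing that can be done with nothing more than a stock naive Bayes classifier: guessing a trait from which pages someone has liked. The pages, labels and data below are all invented for illustration; this is not a description of any company's actual system.

```python
# Hypothetical sketch: inferring a trait from page likes with off-the-shelf naive Bayes.
from sklearn.naive_bayes import BernoulliNB

# Each row is a user; columns are binary "did they like page A / B / C" signals.
likes = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
]
# Invented labels for the trait being inferred (e.g. a political leaning).
trait = ["left", "left", "right", "right"]

model = BernoulliNB()
model.fit(likes, trait)

# Predict the trait of a new user from their likes alone.
print(model.predict([[1, 0, 0]]))        # a single guessed label
print(model.predict_proba([[1, 0, 0]]))  # and the probabilities behind it
```

The point is not the toy numbers but the accessibility: with enough harvested likes, anyone with a few lines of standard tooling can build this kind of profile, which is why Cambridge Analytica is unlikely to be alone.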

At the same time, doing wrong just because it is possible, or because everyone else is doing it, or because the law isn’t clear, is still doing wrong. We need the debate about digital ethics to become a top governmental priority, and we all need to adjust our consciousnesses, and our consciences, to what is now possible.
