In a recent TV interview, I discussed why news organizations are being held accountable for defamatory comments on their Facebook pages, particularly when they inflame such arguments with rabble-rousing click-bait reporting.
The Australian High Court has ruled that some of the country’s biggest media groups can be held responsible for defamatory third-party remarks published on their social media pages.
The issue was triggered by Facebook comments about Dylan Voller, a former detainee in an Australian youth detention centre. He argued that hostile comments posted under a picture of him shackled and hooded were defamatory. Prominent news outlets, including the Sydney Morning Herald and Sky News, appealed an earlier ruling, arguing that they could not be held accountable for third-party remarks made on their social media posts, but the High Court upheld the decision.
In the court’s view, by facilitating and encouraging the posting of comments by third-party Facebook users, the appellants became publishers of those remarks. The appellants’ attempts to portray themselves as passive and unwitting victims of Facebook’s functionality have an air of unreality.
Having taken steps to secure the commercial benefit of Facebook’s functionality, the appellants must bear the legal consequences. Facebook is yet to comment on the court’s determination. Still, it is possible that Australian news media will now disable the comments feature or post fewer stories on social media – something they argue restricts freedom of speech.
This decision will discourage the open exchange of opinions unless Facebook enables page owners to turn off the ability to comment. I urge Australia’s Attorneys-General to address this anomaly and bring Australian law into line with that of comparable Western democracies.
Let’s talk about the decision now with a panel of experts. Joining me on the show are Chadwick Moore (journalist and commentator), Jennifer Dempster (legal analyst), and Bill Mew (privacy campaigner and technology expert). Thank you all; I’m sure you each hold strong opinions about this. Chadwick, I’ll start with you. What are your thoughts on this as a journalist? Is it fair to hold the media accountable for third-party posts?
Chadwick Moore: “Interestingly, it does appear you know that it’s going to stifle speech. Interestingly, the media would be held accountable for a Facebook page, which you know they don’t. They may post content on that page, but they don’t own it. Facebook owns everything that goes on to its platform. That’s very clear. This is very different from a media company being held accountable for comments on their own articles on their own websites. So here Facebook is once again sort of reiterating the lie that they’re a neutral platform and that the onus is all on people who post it when Facebook is obviously not a neutral platform. They’re a company with an editorial voice. They’re a publisher. They just get free content from news organizations, so it’s interesting that it almost says that these pages are owned by these news organizations in which they’re not. Another thing is that it supposes that people actually take comments sections seriously. It kind of speaks very lowly of the court’s opinion of the public, that people are reading comments as though they have gone through an editorial process through the publication of which these people are commenting when you know most people read comments to be entertained. They know it’s maybe a bunch of yahoos, and most people are basically able to do their own research and look into things that people are claiming in these sections. It’s kind of a stretch to hold a publisher accountable for defamation, or what have you, based on just what someone on the internet wants to say on their Facebook feed.”
Jennifer, it seems this could have massive ramifications – you tell me. Does this stop with Facebook? Does it extend to all social media firms? Does it mean comments will have to be monitored on every website worldwide?
Jennifer Dempster: “Oh I think it will, you know. Anytime there’s a groundbreaking case like this, other courts and other potential litigants are going to start to see that sort of opportunity to go forward. So I think this could, in essence, spread to other companies. I agree completely with what Chadwick was saying about how these tech giants, these social media companies – Facebook, Twitter, others – are able to skirt around their own liability for setting up these platforms as publishers themselves, because we all know that they absolutely have the ability to go in and take out comments. And where this puts the liability on the actual users, I think that again allows Facebook to skirt some of this liability, but it also could open the door to more responsibility for these social media giants, because we all know now that the Communications Decency Act in the US, which gives these tech companies carte blanche to allow whatever they want and to take off and remove whatever they want, does make them publishers. And some of these news outlets, I mean, they have had lots of troubling articles. There is a lack of trust and faith in these major news corporations, especially in recent years. And people are starting to see that, so comments that continue to perpetuate that, or that might cause some kind of defamation, need to be at least looked at – especially when these are the people that have much more of a platform. But again, that is a platform provided by Facebook, Twitter and others that gets it out to a much bigger audience. The information channels have completely changed – that’s why we’re going to see continuing litigation in this field, because people are going to address this.
And I think even in the coming months and years, we’re going to continue to see [a] backlash against not only news outlets but also social media companies that have allowed some of this stuff to perpetuate, which has hurt people and also hurt even public health.”
Bill, comment sections on any website seem like something of a double-edged sword. Companies appear to relish them. Perhaps the engagement has appeal – they generate a buzz around a story or a particular topic – but they also come with that dangerous side: people will say a lot of nasty, potentially defamatory things. Is what’s been proposed the best solution, or is there a better way to go?
Bill Mew: “There are a large number of responsible publishers who are reporting the news in a very responsible manner, but there are also [at] the other end of the spectrum a number of tabloid-like publications that are publishing rabble-rousing content in sort of click-bait format, which is designed to drive attention and have as much comment and it’s like lighting the fire to the (sort of) rabble-rousing extremist defamation that they just stand back and watch with glee because they’re driving up readership, they’re driving up advertising clicks, and they stand to gain from it. And they seek to wash their hands by standing back and claiming, ‘Oh but we’re just publishers’, but you need to differentiate between the publishers that are actually doing a thoroughly professional job and those that are publishing this sort of rabble-rousing, click-bait, and I think we need some accountability here. There needs to be accountability directly on those posting the comments, and if you want to pursue those people – you can, but at the same time, if the media companies are encouraging this by skewing what they publish on Facebook and other platforms to generate interest – to have sort of rabble-rousing, click-bait – then they need to share some of the blame here.”
I want to bring it back to Jennifer, just on the legal aspect. Jennifer, I can imagine certain tabloid websites, of the kind Bill was referring to, covering a story about a celebrity. I can think of a couple of recent footballer scandals, in one of which the footballer was alleged to have been involved in a child sex abuse case. Their identity wasn’t disclosed for fear that it would expose the victim’s identity.
Nevertheless, you go into the comment sections and the identity quickly emerges, because they get doxed by the commenters. It’s a ‘Wild West’ where anything goes. So would it be a good thing if you could shut that sort of thing down?
Jennifer Dempster: “You know, it is a double-edged sword. This is a tough issue because, on the one hand, we should take personal responsibility. We should be allowed freedom of speech, regardless of what that is, and hope that people have the wherewithal to know what to follow and what to ignore. But on the other hand, I think what the Australian court was saying here is that social media has become so large – the audience is so large. Before Facebook, a tabloid or any kind of news outlet that published a story about somebody – say this particular football player, or anybody else – whether it was true or false, would reach whoever their subscribers were: maybe ten thousand, maybe a couple of hundred thousand. Now, with Facebook, that outlet has an audience of millions and millions of people – 1.2 billion people around the world – and so the audience is much, much larger. You’re looking at a lot more chances for damage or harm to one’s reputation, and that’s why I think Facebook at the very least should have been mentioned as complicit; part of that is having these large audiences, because we know that they can also choose what is said and what goes in – especially in comments. But what the court was saying here was that when an article is published, even if the outlet tries not to mention the name, the comments still become almost part of that article, because you chose to post it. You’re encouraging people to comment. You’re encouraging people to say things – whatever that might cause to a third party or anybody else. And what they’re looking at is that a lot more people are going to read comments from, say, a John Doe with 10 followers if they appear on a major tabloid article or any major news outlet’s page.”
Chadwick, do you think the media bears some responsibility here? In the sense that there is now a battle among media organizations for attention, and they use the comment area to drum up interest in a story. They must be well aware of the incendiary remarks made there, many of which are virulently anti-Semitic and defamatory on all sorts of topics, but it suits their purposes. So should they take on more accountability for this?
Chadwick Moore: “Well, media organizations already [are] held to extremely high standards. [It] varies obviously in which country you’re in. It’s much more lax here in the United States in terms of getting sued for defamation, and you know if they do not do their due diligence – if they’re not careful in their words and how they present facts – of course they can and will get sued, and they will suffer greatly for that. So the media organizations are covered there and, you know, when it comes to tabloid and salacious things – I mean, when has that ever not been the case? That’s been the way journalism …”
What do you think about the comment section, Chadwick?
That’s the bit I’m talking about, because the journalists may be quite careful, but then anything goes in the comments section. A lot of people now will go in there just for the salacious comments rather than the article… right?

Chadwick Moore: “Well, most news organizations do monitor their comments, especially on their own websites. They have full-time people to get rid of extremely hateful language – if somebody is using racial slurs or something, they will get rid of that – so they do have employees doing that. I think the difference here is that you’re talking about Facebook, and it’s a Facebook property, you know. We always talk about how social media is the new public square. Well, obviously the social media companies like to have it both ways. This is the new world we live in. This is the digital world. You put information out there, and it spreads faster and more widely than if you’re yelling from a soapbox in front of the local post office. But if this is the new town square, this is what we have to get used to. And we have to start training people. People need to be smarter about how they take in information and what they read online. Most intelligent, rational people know when they’re being misled. They know when something is click-bait. They know when a comment section may be full of it, or when they need to look into something more deeply. You know, as a free speech absolutist, I think that if you want to go down this route, the onus needs to fall on the owner of the property, which is Facebook. The New York Times, say, has a Facebook page and someone’s commenting – the New York Times does not own that page.
Facebook does, and that’s very clear in the terms of service – Facebook owns everything you put on its site. So doesn’t it seem suspicious that this comes down on the publisher – the publisher being the news organization – and not on Facebook itself? And I would still say: let people talk, because the truth is going to get out there. Of course, there are certain things that most people agree they don’t want to see online, even in comment sections. Things like that …”
My apologies, Chadwick. My apologies for interrupting you; we’re just a bit pressed for time. I want to quickly go back to Bill to give him another chance to comment. And Bill, I’d also like to raise one simple point: what about the cost of moderating? For a giant like Facebook, it might not be a big deal to hire people to moderate, but could this raise costs to the point that some media organizations simply rush to shut down their comments?
Bill Mew: “I think media organizations are crying foul when actually there are two very sensible means that they have at their disposal to address this. First of all, within Facebook, they can shut off comments if they’re not particularly worried about having or driving up a big debate or a big furore, then they can simply turn off comments. And secondly, they can’t be sued directly if a comment appears in … against one of their articles. They have to be given the opportunity to take it down, and therefore if somebody points out that a defamatory comment has been published and the publisher has a chance to take it down, then the publisher should do so. If the publisher ignores this opportunity and doesn’t take it down, [then] very rightly the publisher should face the music. It isn’t going to take an enormous amount of resources for them to take action and just go and have a look at comments that have actually been flagged up to them. And if they choose to remove them, then that’s one thing, or if they think that they are not particularly derogatory or inflammatory, then they can choose to keep them up, or they can just disable comments entirely, so I think it’s a little bit overblown by the publishers to complain.”
I hate doing this, and I certainly don’t want to cut the discussion short, but we’re definitely out of time. In fact, we’re over. I really appreciate your time – Chadwick Moore, Jennifer Dempster, Bill Mew. Thanks so much.
GDPR at 4: the Good, the Bad and the Ugly
I have written several recent opinion pieces reflecting on the fourth anniversary of the European General Data Protection Regulation (GDPR). I wanted to summarise them here for the readers of Elnion.
Much of the commentary from me and others has been somewhat negative, pointing out what has not worked – and there is plenty that has not. However, we should remember that prior to GDPR there was limited general awareness of the importance of privacy, little recognition by organisations that they needed to take it seriously and an assumption by many that privacy was simply too complicated or intangible to be regulated at all.
With hindsight, it is obvious that we have come a long way: there is now broad awareness of privacy, almost all organisations take it seriously, and GDPR has not only been in force for four years but has also spawned many other privacy regulations elsewhere – from CCPA in California and POPI in South Africa to LGPD in Brazil and countless further regulations in other nations and US states. This is no small achievement.
Unfortunately, GDPR has come in for much criticism. This has either been a reaction to the cost and inconvenience of compliance, or it has been frustration at the way that it has been applied or enforced.
Many commentators, myself included, have railed against the cost and inconvenience of GDPR compliance. My personal mantra has always been to seek to strike the right balance between meaningful protection (digital ethics, privacy and cybersecurity) and the maximisation of economic and social value (cloud, digital transformation and innovation).
It can be argued however that we currently have the worst of both worlds – there is little in the way of meaningful protection, given the lack of enforcement (which I will come on to). And at the same time, we are inhibiting innovation, with many startups opting either to base themselves outside the EU in order to avoid the overhead that GDPR represents or struggling to thrive within the EU while at a disadvantage to overseas rivals.
To a great extent, such complaints can be overstated. ALL organisations that value their customers and their own reputation should not only have adequate data management processes in place, but should also have an organisation-wide culture of respecting both privacy and cyber hygiene (and with it cybersecurity). It is only those organisations that lack a ‘privsec’ culture that incur what they would see as ‘extra’ cost when it comes to compliance. That is not to say that the burden could not be eased for start-ups to help foster innovation – with the clear proviso that they should be getting their act together anyway, because the rules will apply to them in full at some point if they succeed in growing at all.
GDPR’s greatest failing has not been down to the regulations themselves, but to their enforcement. As I explain in detail in my articles for Accounting Web and for Commvault, most of the responsibility for regulating the tech giants has fallen to the Irish regulator – the Irish Data Protection Commission (DPC). This is because most of the large tech firms, attracted by the country’s low corporation taxes, have chosen to base their European headquarters in Ireland.
Reluctant to rock the boat, the Irish DPC has almost entirely failed to enforce GDPR on firms like Google and Facebook that not only have business models focused on exploiting data, but that have also been accused of being among the most flagrant abusers of people’s privacy. Whatever the merits of such accusations, it is the DPC’s job to investigate and where necessary to take action.
Even in major instances where the highest European courts have ruled against such firms, as was the case almost two years ago for Facebook in the Schrems II ruling, the Irish DPC has yet to enforce such rulings. Indeed, such is the DPC’s failure to enforce GDPR that it has even been sanctioned by the European Parliament – in a vote of 541 to 1.
It is notable that in the latest European regulation on content moderation, there has been a move towards central enforcement, to avoid the scenario where a local enforcement organisation such as the Irish DPC is ineffective.
The Irish DPC’s complaint that it is under-resourced rings hollow. It may well have a far lower budget than Facebook or Google spend on their lobbying or legal activities, but this has not prevented other local Data Protection Authorities (DPAs) from ruling against these giants.
Indeed, such has been Facebook’s success in holding off enforcement, even of the Schrems II ruling, that some firms now see an actual business case for non-compliance. The Irish DPC has been accused not only of being complicit, but even of being potentially corrupt in the way that it has failed to act.
While the EU could improve GDPR by changing the regulations themselves to encourage innovation, or the enforcement regime to hold Big Tech to account, there is one major issue that is entirely beyond its control – the schism between the EU and US.
An ideological gulf exists between the EU’s prioritization of privacy as a human right and the US’s prioritization of surveillance for national security. It has already led to the demise of both Safe Harbor and Privacy Shield (note the Schrems II ruling) and will dog attempts to implement any replacement.
The recent announcement of a new transatlantic agreement lacked much in the way of substance or legal merit. Its claim that the agreement would be supported by presidential executive orders is of great concern, as these are easily reversed and have little legal foundation. Introducing real measures for adequate judicial supervision in the US would require legislation. Unfortunately, complete gridlock in Congress has made it impossible to introduce any federal privacy law, and adding the extra measures needed to keep the EU happy would make any such legislation even harder to pass.
We shan’t be holding our breaths for any federal privacy law, let alone one that might resolve the concerns on this side of the Atlantic.
The most that we can probably do is seek as much harmony as possible on either side of the Atlantic divide, with as much alignment as possible between the EU and UK versions of GDPR and with regulation at the federal level in the US at some point in the future to align the proliferation of state by state privacy laws.
I am an optimist – hence my ability to recognise the ‘Good’ where many others have not. I also believe that the ‘Bad’ can be addressed eventually if the Irish DPC can be forced to act (or failing that can be bypassed). I have less confidence that the ‘Ugly’ will be addressed any time soon.
How to fight for your privacy
In my last two blogs I’ve looked at what privacy is all about in a digital sense, and how creeping surveillance affects us all. In 2022, we’re all just a product to the ad industry.
In this blog, I’m going to provide some tips on how you can protect your privacy with some basic steps – and I’ll keep to a minimum any tech jargon that can’t be simply explained.
Check your phone and tablet’s privacy controls
It might surprise many that in your device’s settings there are several ways to limit what’s collected by your handset or tablet manufacturer1. Despite their reputation, Google does provide a lot of controls to limit tracking, though older Android handsets that can’t run newer versions of Android will miss out on recent changes. And don’t get smug, Apple users – I’m talking to you too; the default iPhone settings send plenty of ad-related info to Apple.
One thing that’s a little less obvious is your device’s network options. Turning off Wi-Fi and Bluetooth when you go out shopping will stop you connecting to shop beacons, which track you in order to target you with ads.
Snooping on you 24/7? There’s an app for that
More apps than not, in fact. I’m not kidding. Even if you flick every privacy switch you can on your devices, the second you install an app you might be turning your device back into a digital spy. And again, Apple users, this applies to you too, even if you choose ‘ask apps not to track’ in iOS.
Apps might ask for access to your location, contacts, photos, microphone, camera… do you really know what they’re doing with it all? Could your apps access these things without asking? Once you’ve given them permission, they can (largely) access them whenever they like. So you should only install apps from companies that you trust, that have good privacy controls, or that you’re happy to have collect data about you. In most cases, services like Facebook, Twitter or LinkedIn can be used much more privately in a browser than in the app. Sure, they’ll bug you to install their apps, but that’s only because they want more data to monetize you!
Even apps that don’t come from big tech giants like Facebook and Google might still use their services, especially where ‘app measurement’ is concerned. Technically, this should be data sent back to developers to tell them about the use of the app – performance, crashes etc. – but a lot of metadata about you can also be sent, and aggregated with other data on you to build a more granular picture. It’s possible to limit this too – see the section below on Firewalls.
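To make that aggregation point concrete, here’s a toy Python sketch of how a data broker might merge ‘harmless’ measurement scraps from several apps into one profile. Every app name, field and device ID here is invented purely for illustration:

```python
# Hypothetical sketch: merging "innocent" per-app metadata into one profile.
# All app names, fields and the device ID below are invented for illustration.
from collections import defaultdict

def build_profiles(reports):
    """Merge separate per-app reports into one profile per device ID."""
    profiles = defaultdict(lambda: {"apps": []})
    for report in reports:
        profile = profiles[report["device_id"]]
        profile["apps"].append(report["app"])
        for key, value in report.items():
            if key not in ("device_id", "app"):
                profile[key] = value  # each harmless-looking field enriches the pot
    return dict(profiles)

# Three apps, each reporting only one scrap of metadata...
reports = [
    {"device_id": "abc-123", "app": "weather", "location": "Bristol"},
    {"device_id": "abc-123", "app": "fitness", "wake_time": "06:30"},
    {"device_id": "abc-123", "app": "news", "interest": "health"},
]

# ...which combine into where you live, when you wake and what you worry about.
profile = build_profiles(reports)["abc-123"]
print(profile)
```

Each scrap looks innocent on its own; it’s the combination, keyed on a shared identifier, that builds the granular picture.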
Many other types of apps – especially games, productivity, photo and messaging apps – won’t have the browser option, so you either don’t install them or let them suck your data. Don’t like WhatsApp because it’s owned by Facebook? Try Signal instead; you might be surprised how many of your friends use it. By the way, there are genuine apps for lots of stuff that don’t play the surveillance game, so seek those out.
Browsers and the web
There is good news about browsers: there are a good number of privacy-friendly alternatives that can do a lot to shield you from data harvesting. Personally, I use DuckDuckGo, Firefox and Brave (a privacy-focused take on Chrome). Each is different, but all do a good job of blocking attempts to track you. Just using multiple browsers is a good thing too, and don’t be afraid to clean out the cache regularly – it helps a lot with privacy (normally under Settings > Privacy or Settings > Data Management).
This is more of a desktop issue, but a word of caution on browser plugins. Brave is built on Chromium (the open-source core of Google Chrome) and made to be more private, so you can install Chrome plugins. But if you start with a privacy-focused browser and then add data-vampire plugins, you’re no better off – choose your plugins carefully. You should also weigh up the privacy implications of ‘Sign in with’ tools from the big tech companies – each is different, and certainly don’t use any of them without two-factor authentication.
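Footnote 2 below covers browser fingerprinting, and it’s worth seeing how little it takes. This toy Python sketch (the attribute list is illustrative – real trackers collect far more) hashes together things your browser freely reveals, producing a near-unique ID that needs no cookies at all:

```python
# Toy browser-fingerprint sketch. The attributes are illustrative only.
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash browser-reported attributes into a stable identifier."""
    # Sort the keys so the same attributes always produce the same hash.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 ... Firefox/125.0",
    "screen": "2560x1440",
    "timezone": "Europe/London",
    "fonts": "Arial,Helvetica,Comic Sans",
}

fp = fingerprint(browser)
# The same browser yields the same ID on every site that computes it...
assert fp == fingerprint(dict(browser))
# ...while changing even one attribute (say, installing a font) changes it.
assert fingerprint({**browser, "fonts": "Arial"}) != fp
```

This is why privacy browsers try to make your reported attributes as generic as possible: the less unique the inputs, the less unique the hash.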
Next steps: VPNs and Firewalls
A VPN (Virtual Private Network) hides your IP address – the ID assigned to your device for network access – by sending all your network traffic through an encrypted tunnel to a datacenter somewhere. This has several advantages. It means your telco can’t monetise your browsing or network habits, and it also means you can connect to services back home while you’re travelling3. You can travel from Europe to the US, for example, and still get the local-to-home experience, because all of your network traffic will appear to come from your home country. The VPNs to look for commit to no activity logging, but again, you need to choose carefully and look at reviews from independent experts in this area. Also remember that you have to pay for a good VPN – if it’s free, it is invariably just another data vampire.
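The ‘encrypted tunnel’ idea can be sketched in a few lines. This toy Python example uses a simple XOR scrambler purely for illustration – real VPNs use strong ciphers such as AES – but it shows the key point: anyone sitting between you and the VPN server (your telco, a coffee-shop hotspot) sees only scrambled bytes addressed to the VPN server, never your real destination.

```python
# Toy VPN-tunnel sketch. XOR is NOT secure; real VPNs use ciphers like AES.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric scrambler: XOR each byte against a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-session-key"                   # agreed during the VPN handshake
request = b"GET https://example.com/private"  # your real traffic

# Client side: wrap the real request before it leaves your device.
tunneled = xor_cipher(request, key)

# What an observer on the path sees: opaque bytes, not your destination.
assert tunneled != request

# VPN server side: unwrap and forward; replies come back the same way.
assert xor_cipher(tunneled, key) == request
```

Because the observer only ever sees traffic between you and the VPN server, your browsing habits leak to the VPN provider instead – which is exactly why the no-logging commitment matters.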
Firewalls are another useful tool in your armoury3. I use a firewall that blocks tracking via app-measurement tools and lots of known malware; while it’s free, the developers really want you to use their paid VPN service. Mine does break a couple of apps, but it’s something I can manage. If you run one of these and an app stops working, switch off the firewall and try again.
When I first installed my firewall and saw just how much traffic was blocked (much of it at night), I was amazed. I’ve set my phone up to be ‘private’, and it has still blocked 89K attempts.
Just as with browsers, the search engine news is also good. While Google is by far the runaway leader in search, there are other options that deliver great results. Startpage and DuckDuckGo both offer comparable search tools, though I would recommend manually letting DuckDuckGo know your location – before I did this, I was unhappy with the results. Now I use them all the time. Other privacy-focused search engines are available.
Cars. TVs. Speakers. TV dongles and streaming boxes. Smoke detectors and heating thermostats. Water and energy meters, plugs and lights – the list goes on. All are now smarter than they were, and all are now capable of surveillance, so don’t ignore them. My own bugbear is smart TVs. One of mine has no privacy controls at all; the other won’t let me upgrade to the new OS without turning off the privacy controls. Neither is legal under GDPR or the UK’s Data Protection Act, but they get away with it. Whatever you have, check what privacy controls are available, and use them. Popular devices like Amazon Alexa and Ring, Google Nest and other smart devices may have more controls than they used to, but you should still read their privacy policies – you might be surprised what you find.
The privacy arms race
Why bother with all this? I point you back to the first blog in this series. Surveillance is rife, and it’s hidden from you. The free stuff you get is trumpeted from the rooftops, while the sleazy snooping is quietly swept under the carpet… and put to very good use, productizing you and your life. And not in a good way. I’ll warn Apple users again too. While many Android users are aware of what they’re dealing with, many iOS users are falling for the privacy ads from Apple, who have been ramping up their own advertising revenue very nicely, thank you. Don’t be complacent with either platform.
Where will tracking go next – time-based pricing for energy or water use? Car and health insurance, perhaps? It’s already happening. Right now, early adopters are taking these things up because it suits them or there’s a financial advantage. What about when it’s the norm and you’re on the wrong side of the system, or in a marginalized group?
Also consider this. Should sensitive data about you be hacked, things could escalate quickly and you could end up feeling like you’re in an episode of Black Mirror. Remember, if you don’t look after your privacy no one will do it for you. It’s time to tool-up4.
1: Wired has tips for Android here and iOS here. For iOS, ignore the advice about Protect Mail Activity – it’s actually bad advice. Instead, turn Protect Mail Activity OFF and new switches appear; turn both of those ON for the best protection
2: Browser Fingerprinting: What Is It And What Should You Do About It?, PixelPrivacy, July 2021
4: Privacy myths busted: Protecting your mobile privacy is even harder than you think, CNET, Jan 2022, https://www.cnet.com/tech/services-and-software/privacy-myths-busted-protecting-your-mobile-privacy-is-harder-than-you-think/
The problem of creeping commercial surveillance
Not long ago, if you went to McDonald’s for a coffee, it came with a sticker on the cup. If you saved up six on a little piece of card (also on the cup), your 7th coffee was free. A simple system, and very private.
That sticker system was scrapped in 2020 in favour of an app1. Suddenly, McDonald’s went from knowing nothing about you to collecting your financial info and your location, and even looking at your search and browsing history. And for your <cough> convenience, you can sign in with Facebook, Google or Apple.
OK, so I work in business too, and it’s imperative that you use data to understand customer behaviour and market to them. But do you really need this level of information? This is just one example of what I refer to as ‘creeping surveillance’.
How data about you is monetized
I can only guess at what all that data is used for in the example above2, but it’s typical of many apps that give you free stuff in exchange for data. I also don’t want to beat up on McDonalds – as a reputable business they do provide opt-outs and are doing a lot to transform their business into a more sustainable one, which is great news.
Much more problematic are apps whose ONLY purpose is to siphon your data – the user-facing functions are merely a ruse designed to get you to install them. From games to widgets, productivity apps, and even apps that claim to protect your privacy: whatever function you need, you can be pretty sure there’s a ‘data vampire’ app for it.
But why? Well, data about you is a lucrative business.
We value your privacy
It may be the biggest lie ever told. They sure do value it though. They value it because your privacy is up for sale all over the place – and mostly, WITH your consent3. Cookie walls – you hate them, so you click OK. Privacy policies – you hate them too, so you don’t read them. Data collection opt-outs are hidden or long-winded. It’s all by design. It’s called obfuscation, and they want you to give in. Even user ‘privacy control panels’ are mostly designed to mislead you – all of which, incidentally, is not technically allowed under GDPR.
Data about you4 makes $billions for big tech, but smaller companies can still make huge profits from personal data collected from websites and apps. Depending on the level of detail, the data from just a few thousand individuals could net thousands of dollars for a small company, or even a lone developer. It all goes into a pot that profiles you to a scary level of detail – believe me when I say they can predict your actions better than you can. That’s why real-time-bidding ad clicks can go for anything from a few cents to $2 and up. Remember, that’s just for a single user click on an ad.
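To make those economics concrete, here’s a back-of-the-envelope sketch. The figures are purely illustrative assumptions on my part (a small pool of profiled users, one ad click each per month), not sourced numbers:

```python
# Back-of-the-envelope maths on what profiled ad clicks can be worth.
# All figures below are illustrative assumptions, not real market data.

def rtb_revenue(clicks: int, price_per_click: float) -> float:
    """Revenue from real-time-bidding ad clicks at a given clearing price."""
    return clicks * price_per_click

# Hypothetical small publisher: 10,000 profiled users, one click each per month.
low = rtb_revenue(10_000, 0.05)   # a few cents per poorly-profiled click
high = rtb_revenue(10_000, 2.00)  # a well-profiled, high-intent click

print(f"Monthly range: ${low:,.2f} - ${high:,.2f}")
```

Even at the low end, detailed profiles turn a modest user base into a steady revenue stream – which is exactly why the pot keeps getting filled.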
It’s time for business to believe in choice
Creeping surveillance has become normalized when it really should not be – it’s dysfunctional. Many people believe privacy to be a human right. Even Mark Zuckerberg purchased the properties surrounding his own5 to create a privacy buffer for his family (which you contributed to – I’m sure he’s grateful). It’s not the only example – Larry Page, the co-founder of Google, is a famously private person. The fact is that the people who want to trade on your privacy crave it for themselves.
This is why data collection should be a choice. Just because you can, doesn’t mean you should. Let’s not forget that many people are happy with data collection – they want the most integrated and tailored experience possible. And they want it even though they know the ramifications; I’ve met ‘privacy professionals’ who think this way. Personally, I do my best to avoid tracking, except with the companies I trust – with them, I play the game. Surely an engaged person like that is exactly the hot prospect you’re looking for?
Talking from the business side for a moment, there are plenty of ways to combine tactics – tokenisation, anonymisation, differential marketing (and more), in addition to what I’m sad to say has become ‘traditional’ data collection. Let people choose; it fosters trust. Make it simple to select what level of tracking people are comfortable with, including ‘none’. This layered approach will make your prospects and customers happy – if they trust you. It shouldn’t really trouble you unless you are Meta, Google, or a swathe of other big names, of course.
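As a sketch of what this layered, let-people-choose approach could look like in code – the level names and the simple salted-hash tokenisation here are my own illustration, not any standard API:

```python
import hashlib
from enum import Enum
from typing import Optional

class TrackingLevel(Enum):
    NONE = 0          # collect nothing at all
    ANONYMOUS = 1     # aggregate stats only, no identifier
    PSEUDONYMOUS = 2  # tokenised ID, no raw personal data
    FULL = 3          # the 'traditional' everything-on option

def tokenise(user_id: str, salt: str) -> str:
    """Replace a raw identifier with a salted one-way token."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def collect(event: dict, user_id: str,
            level: TrackingLevel, salt: str) -> Optional[dict]:
    """Record only what the user's chosen tracking level allows."""
    if level is TrackingLevel.NONE:
        return None                       # respect 'none': record nothing
    if level is TrackingLevel.ANONYMOUS:
        return {"event": event["name"]}   # no identifier of any kind
    if level is TrackingLevel.PSEUDONYMOUS:
        return {"event": event["name"], "user": tokenise(user_id, salt)}
    return {"event": event["name"], "user": user_id,
            **event.get("detail", {})}    # FULL: the works
```

The point of the design is that ‘none’ is a first-class option enforced in code, not a buried toggle – the collection pipeline simply produces nothing for those users.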
The problem is that marketing teams are under so much pressure to get lead numbers up that lead quality suffers. This is often driven by sales management demanding more leads, only to complain about lead quality afterwards, at which point they blame marketing.

So, whether you’re in sales and marketing or just reading this with a casual interest in privacy, consider this. When the starting gun was fired on GDPR in May 2018, many US websites, worried about the legal consequences, blocked European visitors – but the New York Times took a different approach. They dropped real-time bidding and behavioural ads and focused on contextual and regional ads instead. The NYT ad business continued to grow ‘nicely’ as they put it6, even without all that creepy targeting.
In my next blog I’ll be looking at what you can do to preserve your privacy. When you lose ground to this sort of creeping erosion of your rights, it can be very hard to reclaim them, so it’s important that we all keep fighting.
1: This blog refers to the UK app; the screenshot shows the privacy information from Apple’s UK App Store. Data collection may vary by country or region
2: I have asked McDonalds why they need my browsing history on several occasions, but so far, they have not responded
3: Don’t get hung up on consent, folks – it’s not needed if a business has what’s called a ‘legitimate interest’, which can be a rather stretchy, elastic term in the hands of less reputable businesses
4: Note that I don’t refer to it as ‘your data’ – that’s because it isn’t
5: Why Mark Zuckerberg buys up properties that surround his 10 homes:
6: After GDPR, The New York Times cut off ad exchanges in Europe — and kept growing ad revenue, Digiday, Jan 2019: