
S1E5: Why does data matter to corporate reputation?

In this episode we look at data use, misuse and abuse. Is data morally neutral? Can there be such a thing as too much data? I’ll answer these questions, explore how data capitalism is shaping our world – for better and worse – then take you on a brief gallop through the politics of algorithms and data breaches.


To tell you more, I'm joined by three excellent guests:

  • Hellen Beveridge, Privacy Lead at Data Oversight

  • Rachel Williams, Research Director at Populus, and

  • Line Kristensen, Community Strategist at NationBuilder

Find the episode here:

Subscribe on all good platforms (find links HERE)


Or read the transcript here:


Daisy Powell-Chandler: [00:00:00] Welcome to Why Everybody Hates You, an audio support group for reputation professionals. If you have any responsibility for how people talk, think, and feel about your organisation, then you are in the right place. My name is Daisy Powell-Chandler and today I'm talking about data.

[00:00:30] We love data these days. Everybody seems to be talking about data: big data, personal data, hacked data. Data is shiny and alluring. Good use of data can save us all time and money. Some applications may seem a tad trivial, but others are, frankly, life-changing. Just-in-time supply chains, often driven by [00:01:00] real-time data and AI, save money, reduce emissions from unnecessary transport and cut waste.

But what about just-in-time services? British Gas is already envisaging a day when, as well as the thermostat that keeps your house cozy, they will also have sensors inside your boiler that give them advance warning of a potential failure.

Instead of you calling them and having to take an unexpected day off work because your house is freezing, they'll be able to send you an alert and book in a convenient time for an engineer [00:01:30] to check the device before it fails. Big data is also fueling new discoveries in the health sector. For example, Linguamatics is using an untapped data source held in medical records to find new avenues of research.

Until now it's been really hard to use parts of the records that are written longhand. We all know what doctors' handwriting is like. So unless someone manages to read and decode each record and interpret that information into codes, [00:02:00] computers haven't been able to access it. This company uses natural language processing to search records for trends, such as symptoms that frequently occur together, or side effects that may previously not have been documented.

This could literally save lives. But it does involve giving more people access to your medical records. These examples show that gathering warehouses full of data and using it to predict human and nonhuman behavior can create both small and far-reaching benefits that will [00:02:30] allow us to live happier, less encumbered lives.

Inevitably, though, there are trade-offs. To measure how an individual feels about data privacy, researchers use something called the Westin Privacy Index, which uses three simple questions to understand how each individual feels about their data. I spoke to Rachel Williams, a director at research company Populus, about what the index reveals.

Rachel Williams: [00:02:55] Consumers can be grouped into three very broad categories, the first of which [00:03:00] is privacy fundamentalists. These are the people who are fundamentally distrustful of any organization that asks for their personal information. No matter how great the value that you're offering, these guys will always go: but my privacy is more important. Uh, in the last study I read, that was around 30% of the population. Much of the research on this was from the US, but it remains fairly reflective of UK attitudes as well.

So that's quite a sizeable portion of the population. The next group is the pragmatists. And in research, we see a lot of pragmatists. [00:03:30] These guys are much more about the value exchange. They really weigh up the benefits on offer in return for their data and the opportunity for them to access the things that they want to access.

They're morally against the intrusiveness of the use of personal information, but they will take a more balanced view of: okay, well, I feel like they are saying all of the right things about how they're using my data and they're being fairly transparent about it. They might browse the T's and C's, but ultimately they want the benefit from that value exchange.

So they're more likely to agree to things, but they're just going to be a bit more careful [00:04:00] about it. And that's quite a huge chunk: that's 60% of the population. And then finally, you've got this much smaller group of around 10% of the population, and they're what we call unconcerned. And they're really what it says on the tin.

They don't care. They won't read anything. They kind of just trust that all organizations, purely by existing, are hopefully playing by the rules. And they will just click yes, without looking at anything, because they just trust that it's all okay.

Daisy Powell-Chandler: [00:04:26] That means that up to 30% of the public is [00:04:30] extremely suspicious of you gathering any type of their data.

And a further 60% wants to see clear and transparent reasoning for why you are asking the questions that you are. This reluctance to hand over data shouldn't be dismissed as knee-jerk paranoia, but it also isn't necessarily linked to the big scandals, such as the Cambridge Analytica debacle. Instead, consumers cite real-life examples where they felt uncomfortable about how their data was used.

Rachel Williams: [00:04:59] Nowadays, people [00:05:00] really pick up on targeted ads and the fact that they follow you through from website to website and people constantly bring this up.

Um, they notice it and it is a little bit creepy. And this is partly because consumers don't always connect their behavior in terms of sharing data with the results. They know that they're giving their data away somewhere, but they don't connect the specifics.

Daisy Powell-Chandler: [00:05:20] I also asked Rachel about spam. Each day, the average office worker receives well over a hundred emails to their work address.

Now consider any [00:05:30] personal email addresses, plus the phone calls, unsolicited post, the internet popups, and even approaches at the front door. So many of these are irrelevant or spam that we only open around a third of the email that we receive, and we act upon a tiny fraction of that. How do consumers feel about that?

Rachel Williams: [00:05:49] People will quite commonly tell you that they get sort of upwards of a hundred emails a day and they don't read them. It's more just an annoyance that they have to go through and empty that inbox every single day. [00:06:00] And some people even have two or three email addresses deliberately set up so that they can offer one of those email addresses online when asked for it, and they don't use it as their main communication inbox.

So it's become completely normal to have a spam inbox and a communication inbox, and maybe another one that's in between: things that I'm interested in but won't often interact with, I'll just occasionally check if there are any offers or deals that are relevant. It's become completely normal to do that.

And obviously it's not everywhere, but it does give you the impression that consumers are becoming completely desensitized to all of this, this [00:06:30] kind of, you know, mass marketing.

Daisy Powell-Chandler: [00:06:32] There are some important lessons here for communications professionals. For a start, consumers may appear to tolerate your behavior, but absolutely no branding team has ever aspired to the brand attribute of creepy.

What's more, it's clear from the workarounds that consumers are employing that eventually they will find a way to defeat your process and waste your resources. While consumers are largely worried about convenience and value, what's worrying the professionals working in data [00:07:00] is the ability of more malevolent organizations to piece together a highly detailed picture of your life, and to use that to manipulate your choices.

To give you a flavor of the data that's easily available, here is Hellen Beveridge, who is Privacy Lead at Data Oversight.

Hellen Beveridge: [00:07:17] It's too big for our brains to comprehend, um, what is actually going on with information and how people and organizations talk to one [00:07:30] another.

We think that when we sign up for something, um, we're giving our name and address. What we might not comprehend is that in the background, that name will have just been linked to our spending habits, and that data may then be attached to things from open data sources.

So it might be linked to how many prescriptions are being served in our particular [00:08:00] area, what's the average car ownership in our area, and all of these little bits and pieces are like building blocks, built one on top of the other, to form really quite an important picture about you.

And then organizations with the cash, and this isn't everybody, can then use that to, um, micro-nudge you into doing things that you might not ordinarily have done. And we like to think [00:08:30] that we are, um, our own being and that we aren't affected by anything.

But in actual fact, if we're told a little variation on the story that we think we already know, then, hey presto, we can be pushed into behaving in different ways.

Daisy Powell-Chandler: [00:08:47] This process is made even easier for organizations that already know a lot about you. For example, the millions of individuals all over the world who've been giving Facebook thousands of personal data points for over a decade.

[00:09:00] Once you have enough data about an individual, you can begin to build a model that will predict their behavior and suggest ways in which to alter that behavior. In her book The Age of Surveillance Capitalism, academic Shoshana Zuboff makes several compelling and slightly disturbing points about the resulting system of economic incentives surrounding our data.

Firstly, it makes the link between consumers and companies less important. Many of the organizations gathering your data [00:09:30] don't even have to sell you anything, and that gives them fewer reasons to keep you happy. Second, the companies that control these vast swathes of behavioral data employ a remarkably small number of people, meaning that power is concentrated in the hands of very few.

And perhaps inevitably, given the first two points, Zuboff makes the case that those powerful few have strong incentives to undermine any obstruction of their access to personal data. But why should you care about this if you don't work at [00:10:00] Google or Amazon or Facebook? To start with, there's an overarching concern about how democracy can truly function in a society where a small number of exceedingly wealthy individuals possess enough knowledge about the population to manipulate their behavior.

But this podcast is about corporate reputation. So why should you care?

Rachel Williams: [00:10:21] The worry that I have, having sat back and thought about this and all the conversations that I've had with consumers, is that generally consumers don't realize the actual [00:10:30] value of their data. So potentially, in their minds, the value exchange could be a lot lower than it might be perceived by a brand.

And that means that right now we can offer them so much less because they don't realize how much their data is worth. But the danger is that they will eventually realize, and the day is coming really quickly. And it's important that brands don't underestimate how angry people might be when they start to realize that, especially now that there are increasing numbers of brands and products that are specifically designed for keeping your data safe, and they [00:11:00] have an interest in telling people how important their data is.

And also, as a brand, you might be thinking: oh well, it doesn't matter if people hate us a little bit, because you know what, the bottom line's still really good, and it's actually growing. But if the general population increasingly see that whatever it is that you're doing is bad, then the next time you want to diversify or expand your product portfolio, it's going to be so much harder for you to do that.

You're going to have to spend so much more money in order to make it a success, or it may not even be possible for it to be a success anymore, because of the [00:11:30] damage that has been done by the perceived lack of care and attention paid to data collection and privacy.

Daisy Powell-Chandler: [00:11:37] For those of us interested in reputation and in doing the right thing, the question then is how do we gain the startling benefits of data while staying on the right side of ethics and public opinion?

I'm going to suggest five key rules to keep you on the straight and narrow. Rule number one: don't gather too much data. [00:12:00] Imagine this common scenario: a company wants to be able to prove to investors that customer satisfaction with their product is really high. So they start writing a survey. To begin with, it's just one question: how satisfied are you with our product?

But then Ed speaks up: if they give us a bad mark, how will we know how to improve? So they add ten questions to tease out which aspects of the product the consumer likes or doesn't like. Then Tina asks: [00:12:30] well, surely satisfaction varies across demographics? So they add questions on age, ethnicity, gender, marital status, income, health, family size.

Now the marketing team decides that they should check whether anyone noticed the latest TV ads. And so on. Soon enough, the survey is 30 minutes long, and fewer and fewer consumers are going to finish the thing, so the sample becomes less representative. Plus, the data that the team gets back is full of extremely personal details that need to be held securely.

None of this was necessary to [00:13:00] meet their original goal. That's not to say that you shouldn't explore in greater detail what your customers want and need, but it is worth being mindful about the sheer volume of data we record, because there are trade-offs.

Gathering data and then processing and storing it uses resources: time, money, energy. Therefore, recording and processing the smallest possible amount of data is sensible in lots of ways. Plus, too often, when companies gather large amounts of data, they discover one of three things. [00:13:30] First, they may have no capacity to use most of it, so it sits around unused.

This happens to nearly every organization I've ever worked with. This is a waste of the initial effort of recording the data, and a waste of storage space: imagine the carbon footprint of your servers. Second, if and when they do finally work out how to use it, often the data isn't good enough quality, or suited to what they finally decide they need.

This is what American Express found when they got serious about analyzing all of the data they had [00:14:00] amassed: before they could do anything, they had to completely restructure their data storage to eliminate duplicate data points. This took them years. Or, the worst-case scenario: someone gets tempted to use the data for something they probably shouldn't, opening you up to untold trouble.

Therefore, as a general rule of thumb, don't gather data that you don't already have a planned use for. In fact, a useful guideline is to gather less data, but to do so at highly [00:14:30] relevant moments. This will make sure that your data is more accurate and your purposes more transparent. For example, a week after someone moves into their new home might be a good time to ask a simple question about how good the removal service was.

Checking in with consumers at multiple touch points means you can be less obtrusive, ask fewer questions each time, and get better data. Rule two is to gather good data and keep it that way. Here's Hellen again, to tell you what happens when your databases are full of bad data.

Hellen Beveridge: [00:15:01] I had one client who, we looked at all their data, and they just, they couldn't quite let go.

And I said, well, why don't you do a test? Before the regulations come in, why don't you do a refresh campaign and see if you can re-engage some of that data that is going in the bin, rather than just keeping it? And do it in stages, so you can see what's going on, because regardless of what people [00:15:30] think, emails do cost money.

And so they split the database. They started off with a first sample, I think they did a one-in-five sample, so every fifth record. And they called me up ten days after they'd sent them, and they said: what kind of response rate do you think we got? And I said, I think maybe you got 0.01% response. And they said: we got nothing, [00:16:00] no one even opened the email, nothing.

So we've deleted all that data, because it was useless. But having it in our database made us think that it was valuable, and made us think that that was the universe that we had, and it was stopping us from looking at what was actually the engine of the business.

Daisy Powell-Chandler: [00:16:29] And there is more [00:16:30] than one way in which poor quality data can lead you down the wrong path.

For example, the second most common way that data scientists try to assuage public fears about data (the first is not telling them, obviously) is by singing the praises of large anonymized data sets. Your privacy is safe, they argue, because the researchers are using 30 million pieces of anonymous data. But the question they should be answering is: how representative are those large data sets? For an example of this problem at work, let's consider [00:17:00] voice recognition software.

The programming of voice recognition software is so skewed that car manufacturers actually acknowledge that the reason you're struggling to talk to your car is probably that you sound female or foreign. One helpful white male VP of a car supplier suggested that many issues with women's voices could be fixed

if female drivers were willing to sit through lengthy training: women could be taught to speak louder and direct their [00:17:30] voices towards the microphone. Google is one of the leading performers on speech recognition and predicts that up to 50% of searches may happen via voice command as soon as this year.

But even Google detects male speech with 30% more accuracy than female. This seems at worst a bit frustrating, until you realize how many assessments are moving to similar technology, including language proficiency tests used by some immigration services. Stories are starting to [00:18:00] surface of female native speakers being failed because the software struggled to recognize their voice patterns.

It has also recently come to light that, despite a massive effort over two decades to sequence the human genome, genetic material from people of African descent makes up just 2% of the current data collection. That omission is a massive obstacle to understanding how bodies and diseases function: African genomes are our species' oldest and also the most diverse.

And [00:18:30] yet we are now two decades behind in the race to unleash that scientific potential. In summary, large databases full of records are alluring. They have the potential to unlock great discoveries. Speech recognition is still cool and very useful, for example.

But bad data is not the friend it can seem, and dabbling in poor quality data is full of reputational tripwires. The third rule [00:19:00] is to conduct gold standard analysis. Unfortunately, even if you have gathered and stored your data sensibly, you cannot now forget about your reputation and rely on the cool, calm, moral neutrality of maths in the analysis phase.

Alas, no: your reputation is still at risk when you get to the analysis, and that risk stems from two main problems. The first is that the algorithms are written by humans, and the second is that they are often self-reinforcing. Just because an [00:19:30] algorithm is expressed in numbers or code doesn't make it neutral, but it does make it opaque and difficult to understand. And an algorithm that decides who should be granted a credit card or mortgage or school place holds great power.

That code was created by human hands and may well encode the beliefs or assumptions of its programmer. Moreover, it's very hard for individuals whose lives are affected by such an algorithm to understand it and, if they need to, appeal the outcome. A great [00:20:00] example of this comes from the company ZestFinance.

They were profiled by Cathy O'Neil in her chilling but fascinating book Weapons of Math Destruction. ZestFinance was set up by a former chief operating officer of Google, and it was trying to tackle the lack of credit available to individuals with limited credit history, a really important topic. The idea was to use a wide range of data to build a picture of the applicant, thereby allowing the company to get a better idea of the risk involved in lending, and to be able to [00:20:30] offer loans at significantly lower interest rates than payday lending. Wider access to finance is a great thing for vulnerable families.

But what does this wider range of data include? Thousands of data points, not least of which are the spelling and capitalization on your application form and how long you spent reading the terms and conditions. Their theory is that rule followers are more likely to repay their loans, but it also means that lower-education [00:21:00] applicants will be offered higher interest rates. That carries a pretty major reputation risk.
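To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python. Every feature name, weight and threshold is invented for illustration: it shows the shape of a "rule-follower" pricing heuristic like the one described, not ZestFinance's actual model.

# Hypothetical "rule-follower" loan pricing. All numbers are invented;
# the point is that both inputs are proxies that track education level.
def quote_interest_rate(typos_per_100_words: float,
                        seconds_reading_terms: float) -> float:
    """Return an annual interest rate (%) from two behavioural proxies."""
    rate = 20.0                                   # invented base rate
    rate += 2.0 * typos_per_100_words             # penalise sloppy spelling
    rate -= min(seconds_reading_terms / 60, 5.0)  # reward reading the T&Cs
    return max(rate, 5.0)

# Two applicants with identical finances get very different quotes:
print(quote_interest_rate(0, 300))  # careful reader of the T&Cs: 15.0
print(quote_interest_rate(4, 30))   # rushed applicant with typos: 27.5

Nothing in that function mentions education or income, yet it reliably charges less-educated applicants more, which is exactly the kind of encoded assumption O'Neil warns about.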

Another all-too-common side effect of this kind of data analysis is the creation of self-fulfilling cycles. For example, consider an algorithm that uses the current characteristics of a workforce to create a picture of the ideal new employee and then assesses new applicants against it. It would make it very difficult to increase the diversity of new hires. [00:21:30] Or, in a more domestic example, consider Amazon: log in and you will immediately be offered a sample of products that were bought by other customers with tastes similar to yours.

What could possibly be wrong with this? The first problem is the creation of echo chambers, populated with the citizens, authors and even fashions that make us comfortable. If I continually order books on international development or women's rights, I am deeply unlikely to be offered Ayn Rand's Atlas [00:22:00] Shrugged or Rand Paul's The Case Against Socialism. That seems like a good thing: it saves me time. But it also entrenches our very particular worldviews by convincing us that few people disagree and there is little opposition to our perceived mainstream. The second problem is that customers aren't offered the best books, just the lucky ones.

Books that get attention early on are recommended more to other customers, and therefore they're bought more, which means they're recommended more. [00:22:30] A book that was slow out of the blocks has no chance of catching up, even if it's better quality or a better fit for you. Again, it's clear that judging data and analysis is a matter for more than just mathematical or legal interpretation.

By one assessment, Amazon's "also liked" algorithms are exceedingly successful: every minute they are convincing consumers to buy books that they would not have seen otherwise. Yet it's also possible that they [00:23:00] disrupt our ability to identify the best books. They may reward authors based on luck rather than merit. And they may even emphasize societal divides. And that's just the algorithms offering us books.
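If you want to see how quickly that feedback loop locks in, here is a tiny simulation: a Pólya-urn-style sketch with invented numbers, not Amazon's actual recommender.

import random

# Each purchase goes to a book with probability proportional to its
# existing sales, so early attention compounds regardless of quality.
random.seed(0)
sales = {"early_hit": 10, "slow_starter": 1}  # suppose slow_starter is the better book

for _ in range(10_000):
    titles = list(sales)
    pick = random.choices(titles, weights=[sales[t] for t in titles])[0]
    sales[pick] += 1  # every sale raises tomorrow's recommendation odds

print(sales)  # the early hit typically keeps the lion's share of sales

In most runs the better book never catches up, not because readers prefer the other one, but because the starting weights, pure luck, decide the outcome.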

Rule number four is to invest in good data security.

Hellen Beveridge: [00:23:17] So companies that don't have a really strong procurement strategy, they are the ones that have got themselves into more trouble. Because, um, and I'm going to be facile here, I am an ex-marketeer, [00:23:30] so I can say what I like about marketing: marketeers are notorious for going off and buying the latest shiny thing.

And the technology has moved at such a pace that you don't necessarily know how things fit together. It's like going and buying a food mixer because you want to whisk something, but it comes with lots of other stuff. Well, in the [00:24:00] real world, that other stuff just sits in the cupboard.

Well, that might actually be working in the background without you realizing. And so when things go right wrong, they go really wrong because you may collecting data. You didn't realize you were collecting you. Didn't. Buy it with with good knowledge. So you buy extra technology. And, [00:24:30] um, the case in point is, um, Ticketmaster.

Hellen Beveridge: [00:24:34] They bought a chatbot, and the chatbot was provided for them by a company, perfectly legitimately. It did what it was supposed to do. The problem was that they put it on the same page as people were entering their credit card details. So that opened the door to the hackers, to be able to scrape people's data, their credit card details, and some 40,000 people in the UK [00:25:00] alone lost their credit card details on that.

Daisy Powell-Chandler: [00:25:04] As Hellen's anecdote demonstrates, good data security isn't always about servers and passwords. It can be about procurement or finance or even marketing. One thing is for certain: losing your customers' data is not a good move for your reputation.

That would be an immediate loss of faith for the 30% of the population that are data fundamentalists, and it would also undermine the value proposition that you offer to the data [00:25:30] pragmatists who make up the next 60% of the population. And that brings us to rule five: do all of this transparently and with consent.

Rachel Williams: [00:25:43] What makes pragmatists bristle is when they feel like things are becoming, um, or being deliberately hidden from them.

Uh, they're likely to say yes, if you provide them with the right kind of information and the right explanation as to why you need it. So something like, uh, we need your home [00:26:00] address because then we can give you offers for the stores in your area. It can be as simple as that. And if you feel like people won't like the reason you're asking for the data, it's probably because they won't. And so that's not going to be a good thing.

Daisy Powell-Chandler: [00:26:12] Interestingly, on this topic, the European Union has got our back. If your company collects or holds any type of personal data in the European Union, then you should have heard of the General Data Protection Regulation. And you've probably been trained to think of it as a major pain in the behind.

[00:26:30] Suddenly you can't use that list of email addresses you had before. Or maybe you can, but they need to be saved in a different place, or you have to ask the list if they still want your emails. But in the context of your corporate reputation, GDPR is actually your friend.

Not only can compliance save you a hefty fine and poor press coverage; it's also trying to guide you towards a relationship with customers that is built on consent and engagement. The principles of the law say that you should collect the minimum amount of accurate data [00:27:00] required, use it in a legal manner for the purpose you told subjects you were gathering it for, and not for other uses, and then delete it

when you are finished. All of these things also help you to build a better relationship with your consumers. I asked campaigns expert Line Kristensen to explain GDPR in a nutshell.

Line Kristensen: [00:27:21] GDPR is a piece of legislation that was put into effect by the European Parliament to give consumers a greater say in how their data is handled. [00:27:30] It's basically all about consent: how you'd like to be contacted, when you'd like to be contacted, and about what.

Daisy Powell-Chandler: [00:27:38] Line works at NationBuilder, a company that provides software to help organizations campaign. And what she has noticed is that rather than being an obstacle to campaigning, GDPR is actually making campaigns smarter.

Line Kristensen: [00:27:51] We're bombarded constantly with messages, whether it's about different products that companies are trying to sell to us, charities seeking out donations, or political parties [00:28:00] who want our vote. And the reality is, there's a lot out there and we have limited time. GDPR forces companies and campaigns to be much more selective about how they communicate with individuals, and really meet them where they're at.

So I think the great thing about GDPR is that we have gone from companies and campaigns just telling you what they want to, when they want to, to really putting you in the driver's seat. So now it's about your preferences: what are you interested in? And that obviously means that [00:28:30] people pay more attention and are more engaged, because they wind up being contacted about things they actually said: yeah, I'm interested, I want to hear about this.

It's also about muscle memory as well. What I mean is, if you are a political campaign, for example, continuously talking to your supporters about the things they care about, well, don't be surprised if they continue to, um, you know, open your emails and really trust what you have to say, [00:29:00] because you're respecting, um, you know, their interests and what they want to hear about.

I did some analysis looking at pre- and post-GDPR data from different parts of the world. And what we really found was that in Europe, there were more emails being sent, but more people were opening them; more people were actually engaging with those emails.

Whereas if you looked at the US, Australia, Asia, places that didn't have similar legislation passed, actually both the opening rates and [00:29:30] engagement rates were lower than what we saw in Europe.

Daisy Powell-Chandler: [00:29:34] What is more, organizations that don't follow best practice on this kind of data use risk ruining their email reputation, which can mean consumers never see your communications at all.

Line Kristensen: [00:29:46] Yes, if your email engagement declines, then that can impact your email deliverability. The greater the portion of recipients that open your emails and engage with them, the more likely you are to end up in people's inboxes. That gives you a great [00:30:00] incentive to send people stuff they actually want to read. Whereas if you don't get a lot of engagement, if people aren't opening your emails, then in the worst case, that actually means that those emails won't even end up in the spam folder. They just won't get delivered.
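As a rough illustration of the mechanism Line describes, here is a toy sketch. Real mailbox providers use far more signals and publish none of their thresholds; every number below is invented.

# Toy engagement-based mail routing. Thresholds are invented for illustration.
def delivery_outcome(sent: int, opened: int, marked_spam: int) -> str:
    engagement = opened / sent
    complaints = marked_spam / sent
    if complaints > 0.01 or engagement < 0.02:
        return "blocked"  # worst case: not even the spam folder
    if engagement < 0.10:
        return "spam folder"
    return "inbox"

print(delivery_outcome(sent=10_000, opened=3_500, marked_spam=5))  # inbox
print(delivery_outcome(sent=10_000, opened=150, marked_spam=120))  # blocked

The incentive is visible in the code: the only route into the "inbox" branch is sending mail that recipients actually open.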

Daisy Powell-Chandler: [00:30:24] There is a lot of information in this episode. So here's a quick reminder of the [00:30:30] five rules for safeguarding your organization's reputation around data:

1. Don't gather too much data,

2. Make sure that the data you do gather is great quality,

3. Conduct gold-standard analysis,

4. Invest in great data security, and

5. Do all of this transparently and with consent.

That's everything from us. A big thank you to [00:31:00] my guests, Hellen Beveridge of Data Oversight, Rachel Williams of Populus and Line Kristensen from NationBuilder, for helping me to show a glimpse of the reputation risks and opportunities of data.

If you've enjoyed this episode, I hope you'll join me in two weeks' time for the next one.

To make that easier, please do find us at whyeverybodyhatesyou.co.uk and click "subscribe" on your favorite podcasting app. I would also be very grateful if you would [00:31:30] leave us a review if you get the chance, as reviews help new listeners to find the show. Thank you for listening to Why Everybody Hates You.

And remember... You are not alone!

Copyright Meyland Strategy Ltd 2020