Daisy Powell-Chandler

Why everybody hates you: Data – part 2 of 2

Reputation is formed from three main components: your own behaviour, the manner in which that behaviour is communicated, and the context in which it takes place. The first of these you can change and the second you can influence, but you need to understand the third – context – in order to do either of the others well. Many of the issues that affect your reputation are exceedingly complex. Race, gender, inequality, climate change – how can you possibly be expert enough in all of these areas, and more, to stay ahead of your critics and be (seen as) a good company?


This essay is part of a series that explains the context in which your corporate reputation is being formed, so that you can guide your own company safely through. Today we’re looking at data use, misuse and abuse. Is data morally neutral? Can there be such a thing as too much data? I’ll answer these questions and take you on a brief gallop through the politics of algorithms and data breaches.


At the end I’ll share some further resources if you want to go deeper into the topic, and start to show how these issues relate to your reputation. If you would like to read last week’s piece about how data capitalism is shaping our world – for better and worse – then you can find it here.



There is such a thing as too much data

Imagine this common scenario: a company wants to prove to investors that customer satisfaction with its product is really high, so it starts writing a survey. To begin with there is just one question: how satisfied are you with our product? But then Ed speaks up: “If they give us a bad mark, how will we know how to improve?” So they add ten questions to tease out which aspects of the product the consumer likes or dislikes. Then Tina asks: “What if satisfaction varies across demographics?” So they add questions on age, ethnicity, gender, marital status, income, health and family size. Now the marketing team decide they should check whether anyone noticed the latest TV adverts. And so on. Soon enough the survey is 30 minutes long. Fewer and fewer consumers will finish the thing, so the sample becomes less representative, and the data the team gets back is full of extremely personal details that need to be held securely – none of which was necessary to meet the original goal.



This is not to say that you shouldn’t explore in greater detail what your customers want and need, but it is worth being mindful of the sheer volume of data we record, because there are trade-offs. Gathering data and then processing and storing it uses resources: time, money, energy. Recording and processing the smallest possible amount of data is therefore sensible in lots of ways. Plus, too often when companies gather large amounts of data they discover one of three things:

  1. They have no capacity to use most of it and it sits unused – this happens to nearly every organisation I have ever worked with. This is a waste of the initial effort of recording the data and a waste of storage space – think of the carbon footprint of your servers.

  2. If and when they do finally work out how to use it, it isn’t of good enough quality or isn’t suited to what they need – this is what American Express found when they got serious about analysing all of the data they had amassed. Before they could do anything, they had to completely restructure their data storage to eliminate duplicate datapoints. This took years.

  3. Someone gets tempted to use the data for something that they probably shouldn’t, opening you up to untold trouble. See the Cambridge Analytica example last week. Huge caches of data may be too juicy a morsel for some analysts to ignore. If the threat of congressional hearings and bankruptcy seems a little far-fetched for you, how about the prospect of consumers simply deciding that you are a bit creepy, as happened to Netflix and Spotify?

Therefore, as a general rule of thumb, don’t gather data that you don’t already have a planned use for. See, didn’t I tell you that GDPR was your friend?!



There is such a thing as bad data

Even if you manage to restrain the inevitable urge to gather more data than you require, not all data is equally useful. And data certainly isn’t morally neutral. For example, the second most common way that data scientists try to assuage public fears about data (the first is not telling the public, obviously) is by singing the praises of large anonymised datasets. Your privacy is safe, they argue, because the research is using 30 million pieces of anonymous data.


Anonymising datasets is a great tool that allows researchers to use important personal data – eg health or financial records – without it being traced back to specific individuals. But just because these datasets are sometimes huge doesn’t mean they are representative. For an example of this problem at work, let us consider voice recognition software. These programs are so skewed that car manufacturers actually acknowledge that the reason you are struggling to talk to your car is probably that you sound female or foreign. One helpful (white, male) VP of a car supplier suggested that “many issues with women’s voices could be fixed if female drivers were willing to sit through lengthy training… Women could be taught to speak louder, and direct their voices towards the microphone”.


Google is one of the leading performers in speech recognition and predicts that up to 50% of searches may happen via voice command as soon as this year. But even Google detects male speech with 13% more accuracy than female speech. This seems merely a tad frustrating until you realise how many assessments are moving to similar technology. As speech technologist Joan Palmiter Bajorek explains:


Let’s consider three Americans who all speak English as a first language. Say my friend Josh and I both use Google speech recognition. He might get 92% accuracy and I would get 79% accuracy. We’re both white. If we read the same paragraph, he would need to fix about 8% of the transcription and I’d need to fix 21%. My mixed-race female friend, Jada, is likely to get 10% lower accuracy than me. So, our scorecard would look something like:


Josh (white male) = A-, 92%

Joan (white female) = C+, 79%

Jada (mixed race female) = D+, 69%


Dialects also affect accuracy. For example, Indian English has a 78% accuracy rate and Scottish English has a 53% accuracy rate… These biases have serious consequences in people’s life. For example, an Irish woman failed a spoken English proficiency test while trying to immigrate to Australia, despite being a highly-educated native speaker of English. She got a score of 74 out of 90 for oral fluency. Sounds eerily familiar, right? This score is most likely a failure of the system.

‘Voice Recognition Still Has Significant Race and Gender Biases’, Harvard Business Review, 10 May 2019


So far, so irritating – but why is this relevant to us? It is relevant because many of these flaws arise not because the engineers were all white men (although that might be a factor) but because the datasets used to create the software are themselves flawed. Take, for example, this standard speech database, which describes itself as being “designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems. TIMIT contains broadband recordings of 630 speakers of eight major dialects of American English.” What it doesn’t tell you is that of those 630 speakers, 70% are male (compared to 49.2% of the US population) and 92% are white (compared to 77%). It is no wonder that the results are so skewed when the source data is deeply unrepresentative.
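This kind of skew is cheap to catch. As a rough illustration, here is a minimal sketch in Python that compares a corpus’s demographic mix against population benchmarks using the figures quoted above; the 5% tolerance is an arbitrary assumption, not an industry standard.

```python
# Minimal sketch: flag unrepresentative source data by comparing a corpus's
# demographic shares with population benchmarks. Figures are the illustrative
# ones from the paragraph above; the tolerance is an assumed cut-off.

CORPUS_SHARES = {"male": 0.70, "white": 0.92}        # share of speakers in the corpus
POPULATION_SHARES = {"male": 0.492, "white": 0.77}   # rough US population benchmarks

def representativeness_report(corpus, population, max_gap=0.05):
    """Print each group's corpus share vs benchmark and flag large gaps."""
    for group, benchmark in population.items():
        gap = corpus[group] - benchmark
        status = "SKEWED" if abs(gap) > max_gap else "ok"
        print(f"{group:>6}: corpus {corpus[group]:.1%} vs population {benchmark:.1%} "
              f"(gap {gap:+.1%}) -> {status}")

representativeness_report(CORPUS_SHARES, POPULATION_SHARES)
```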


Even if you manage to source a representative dataset, massive lists of data can hide all kinds of errors and inaccuracies. It can take a long time to amass big lists of personal details, so records will often be out of date, or administrators will have interpreted the forms differently, or the computer will have malfunctioned, and so on. In a file of 50 million entries that are no longer tied to a specific individual, who is to know which bits are right or wrong?


In summary, large databases full of records are alluring: they have the potential to unlock great discoveries (speech recognition is still cool and very useful!) but bad data is not the friend it can seem. Deleting old, excess or unverified data may feel like a wasteful wrench, but the truth is that out-of-date email lists, for example, mean you are contacting potential customers who have long since forgotten they had any interest in you and will find your overtures intrusive. And dabbling in poor-quality data is full of reputational tripwires.



There is such a thing as bad analysis

You are smart. You have carefully considered what data you require, gathered it accurately, stored it safely and now you are ready to analyse it. Hiring a data analyst can be tricky and expensive but you have managed to do that too. Hooray! Now you can forget about your reputation and rely on the cool, calm, moral neutrality of maths. Alas, no.


Even if you have done everything right in the data-gathering stages, your reputation is still at risk when you get to analysis. These risks stem from our worship of maths as a solution, and our deification of the analyst. Broadly, the risks divide into two linked categories:


Encoding prejudice. Just because an algorithm is expressed in numbers or code doesn’t make it neutral, but it does make it opaque. This fact, combined with our faith in maths and analysts, leaves us vulnerable. An algorithm that decides who should be granted a credit card, mortgage or school place holds great power. That code was created by human hands and may well encode the beliefs or assumptions of its programmer. Moreover, it is challenging for those who are subject to its outcome to understand the process by which the decision is made. In these circumstances, appealing the outcome is near impossible.

Take, as an example, the company ZestFinance, profiled by Cathy O’Neil in her chilling book Weapons of Math Destruction. ZestFinance (now rebranded as Zest AI) was set up by a former Chief Information Officer of Google to try to tackle the lack of credit available to individuals with limited credit histories. The idea was to use a wide range of data to build a picture of the applicant, allowing the company to get a better idea of the risk involved in lending and therefore to offer loans at significantly lower interest rates than the ‘pay day lending’ sector.


Great! Wider access to finance is a great thing for vulnerable families. But what does this wider range of data include? Thousands of data points, not least the spelling and capitalisation on your application form and how long you spent reading the terms and conditions. The theory is that rule-followers are more likely to repay their loans, but it also means that applicants with less education will be offered higher interest rates. Zest AI’s latest product, ‘ZAML Fair’, allows banks to see which factors are causing the biggest inequalities between demographic groups so that they can then choose to reduce that variable’s influence on the model. Why? “Models are by nature very biased,” says Douglas Merrill, founder and Chief Executive of ZestFinance.
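To make that concrete, here is a minimal sketch of the kind of disparity check described above: score some applicants, then compare average outcomes across two demographic groups and see which input variable drives the gap. The weights, features and applicant records are invented for illustration – this is emphatically not Zest AI’s actual model.

```python
# Illustrative only: a toy scoring model and a check for which variable
# creates the biggest gap between demographic groups A and B.

WEIGHTS = {"income": 0.6, "spelling_errors": -0.3, "read_terms_seconds": 0.1}

applicants = [
    {"group": "A", "income": 0.8, "spelling_errors": 0.1, "read_terms_seconds": 0.9},
    {"group": "A", "income": 0.7, "spelling_errors": 0.2, "read_terms_seconds": 0.8},
    {"group": "B", "income": 0.7, "spelling_errors": 0.6, "read_terms_seconds": 0.3},
    {"group": "B", "income": 0.8, "spelling_errors": 0.7, "read_terms_seconds": 0.2},
]

def score(person, drop=None):
    """Weighted sum of features, optionally ignoring one variable."""
    return sum(w * person[f] for f, w in WEIGHTS.items() if f != drop)

def group_gap(drop=None):
    """Difference in mean score between groups A and B (positive = A favoured)."""
    means = {}
    for g in ("A", "B"):
        scores = [score(p, drop) for p in applicants if p["group"] == g]
        means[g] = sum(scores) / len(scores)
    return means["A"] - means["B"]

print("gap with all variables:", round(group_gap(), 3))
for feature in WEIGHTS:
    print(f"gap without {feature}:", round(group_gap(drop=feature), 3))
```

Running it shows the gap shrinking once the ‘rule-follower’ proxies are dropped – exactly the kind of trade-off a lender would then have to decide whether to accept.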


Creating a vicious cycle. Another all too common side effect of this kind of data analysis is the creation of self-fulfilling cycles. For example, an algorithm that uses the current characteristics of a workforce to create a picture of the ideal new employee, and then scores new applicants accordingly, would make it very difficult to increase the diversity of new hires. Likewise, advertising a product only to consumers who resemble your current customer base and then using data on sales to conclude that only this particular demographic likes your wares would be incredibly short-sighted.


For an example of these effects in action you need only log into Amazon, where you will immediately be offered a sample of products that were bought by other customers with tastes similar to your own. What could possibly be wrong with this? Firstly, it punishes good books for having bad luck, meaning that customers aren’t offered the best books – just the lucky ones. Mathematician David Sumpter replicated the ‘also liked’ algorithm to understand it better. He started by modelling 25 books, each written by a different author. The probability of each being bought was the same at the start of the experiment, but each time a book was bought, this counted as a ‘recommendation’ and the likelihood of that book being bought by the next customer increased ever so slightly – echoing what happens as a book moves up the Amazon rankings or is shown as ‘also liked’. This effect, occurring at random in a simulated model, meant that after 500 purchases the top five authors had more sales between them than the other 20 put together. And each time Sumpter ran the model, it produced a different outcome – entirely independent of the quality of the books.
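If you want to see this effect for yourself, the simulation takes only a few lines of code. The sketch below is my own minimal re-creation of the dynamic Sumpter describes, not his exact model; the size of the ‘recommendation boost’ is an assumption for illustration.

```python
# Re-creating the 'also liked' dynamic: 25 equally good books, and every
# purchase nudges up that book's chance of being bought next.
import random

def simulate(n_books=25, n_purchases=500, boost=1.0, seed=None):
    rng = random.Random(seed)
    weights = [1.0] * n_books          # every book starts equally likely
    sales = [0] * n_books
    for _ in range(n_purchases):
        # pick a book in proportion to its current weight (the recommendation effect)
        book = rng.choices(range(n_books), weights=weights)[0]
        sales[book] += 1
        weights[book] += boost         # each sale makes the next sale more likely
    return sales

sales = simulate(seed=42)
top5 = sum(sorted(sales, reverse=True)[:5])
print(f"top 5 books: {top5} sales; other 20 books: {sum(sales) - top5} sales")
```

Run it a few times with different seeds and a different handful of ‘bestsellers’ emerges each time, despite every book being identical at the start.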


The second problem is the creation of echo chambers, populated with the citizens, authors, even fashions that make us comfortable. If I continually order books on international development and women’s rights, I am unlikely to then be offered Ayn Rand’s Atlas Shrugged or Rand Paul’s The Case Against Socialism. This seems like a good thing if it saves me time, but it also entrenches a particular worldview by convincing us that few people disagree with us and that there is little opposition to our perceived mainstream.


Again it is clear that judging data and analysis is a matter for more than just legal interpretation. By one assessment, Amazon’s ‘also liked’ algorithms are exceedingly successful: every minute they are convincing consumers to buy books that they would not otherwise have seen. Yet they may also disrupt our ability to identify the best books, reward authors based on luck rather than merit, and emphasise societal divides. And that is just the algorithms offering us books!



Data breaches

Data breaches pose a clear enough reputation risk that the principle of security nearly sells itself. Of course, the more data an organisation holds, the greater the risk if it falls into the wrong hands. You may not care if an online shop’s security fails and your email address leaks, but what if the data links your email address to your home address? What if your bank details or photographs are also included? The doomsday scenarios are worth considering. If your organisation gets caught up in this type of event, there are four primary effects to be concerned about:


Customer reactions. Of course, one possible impact is a loss of business, but other types of recourse are available to disgruntled customers, including (but not limited to) legal action and taking to social or mainstream media. None of these are good for you.


Media scrutiny. Once news of the data breach is out – perhaps via your customers’ social media or else via some kind of required disclosure to either your customers or the markets – media scrutiny is likely to follow. Coverage tends to focus on the size of the leak and on assigning blame, so make sure you are clear on these details. Accuracy sometimes falls by the wayside here, and companies need to have a strategy in place for correcting mistaken reports.


Regulator and parliamentary scrutiny. If you make it out the other side of the public and media frenzy, you may find a cross regulator waiting for you. Parliamentarians and civil servants don’t like to look as if they did not do enough, so you may now be hauled over the coals by a select committee, or even face greater regulation or a fine.


Police scrutiny. As if this process hadn’t already been unpleasant enough, all of this public airing of grievances increases the likelihood that some form of law enforcement will feel compelled to investigate further. Of course, you may have already alerted the authorities at the start of the process but, if not, make sure to act in a manner you would be happy to explain later.



What to do about it? Three key points to consider

In future articles I’ll talk far more about how organisations can navigate these tricky issues (subscribe here if you want to be sent my articles as they appear each Wednesday) but for the time being, these are my three key takeaways on this issue:

  • Ask for less data but ask at highly relevant moments. This will make sure that your data is more accurate and your purposes more transparent. For example, a week after someone moves into their new home might be a good time to ask a simple question about how good the removal service was. Checking in with consumers at multiple touchpoints means you can be less obtrusive, ask fewer questions each time and get more representative data.

  • Don’t treat data or analysis as morally neutral. As an easy morality check, ask yourself what would happen if one entire demographic group was excluded by the algorithm you are creating. What would the lasting impact be? If the answer to that question is at all negative, you need to put safeguards in place that check your analysis for built-in prejudice and (if necessary) work to rectify the problem.

  • Being legal is not enough. Treat GDPR as an opportunity to ‘sniff test’ your own activities and consider mapping the impact of your activities so that your whole team can have those impacts in mind as they work.


Resources

This essay is a starting point that will equip you to understand the debate about data strategies and the role they play in your corporate reputation. There is much, much more to this conversation. What have you read that casts light on this? How have you tackled this problem in your organisation? I’d love to hear more from you and will add reading suggestions to this resources list so it can improve over time.


Cathy O’Neil: Weapons of Math Destruction

Shoshana Zuboff: The Age of Surveillance Capitalism

Caroline Criado-Perez: Invisible Women

David Sumpter: Outnumbered

How to map your impact

Copyright Meyland Strategy Ltd 2020