The curse of "neutrality"

Here's a situation: we're at work and we've spotted something that seems off, say a process that frequently disadvantages a group of people, and we want to do something about it, to fix it. We need to collect evidence, so we run a survey across a much wider group than we can easily observe. We suspect this process isn't the only area of work in which this group may be disadvantaged, so our survey is quite broad in the questions it poses too.


Some time passes...


That was quite an undertaking! We've put in a lot of effort to get permission to run the survey, write it, circulate it, collect and collate responses, compile the data, cross-check it and analyse it. We want to tell people about this—they need to know about our work, and we should be thorough. We get everything together: tables, charts, some information about what we did and how we did it. Some supporting notes from senior people. We don't want to offend anyone so we stick to the point—we explain what we've done, using technical language so no one can poke holes in it.


We've got a fine-looking report on our hands! But, was that what we wanted?


Let's rewind a little. We wanted to change a process that was disadvantaging a group of people. We saw a human experience that wasn't the best, and we wanted to rectify it. The data was to validate whether our observations were also true for a larger group. But now, somehow, it's the main event, and the people we wanted to help haven't seen anything change.


It's easily done once the cogs are in motion. Numbers lend a sense of legitimacy that text alone may not, and with that comes responsibility. We, quite rightly, want to do the right thing. We're aware of the power of numbers and want to display them in the correct way. Often that means barely touching them and leaving others to work out whether or not they're important. We describe them in language comparable to that of a legal document, so as not to misrepresent them. It's admirable, but it adds a lot to any reader's plate and it really buries our message. So how might we approach this differently?


Most importantly, we need to accept that these numbers aren't neutral. We had an agenda going into this—we wanted to change something for a specific group of people. This informed the way that we pitched the idea in the first place, the way we wrote the survey, the way we circulated it, the groups of people we cared most about responding to it and so on.


But we weren't the only ones.


The people who agreed to let us run the survey will have had an agenda of their own, and may have pushed it on us to a degree throughout the process. The people who completed the survey will have had their own agendas too, and their answers will have been shaped by the experiences they had that day rather than by their experience on average. In short, the survey data is as good an indication as we're likely to get, but it isn't perfect.


It's also good to remember that just because we have data on a topic, it doesn't mean we need to use it. We chose to ask the questions we asked; we can equally easily choose not to highlight the responses to those questions if they're not useful to us, or to others. Clearly there's an ethical line here: if the data directly contradicts our assertion then we probably should include it—I'm not suggesting ignoring data we don't like. But sometimes there simply isn't much to say about the data we collect, and that's OK—we don't need to find a story where there isn't one, or shoehorn in something unrelated to our message purely because we have data available.


So let's allow our data to tell the story. Come down on a side. Draw focus to one element over another. We're not out to misrepresent the situation, but let's not force people to work out for themselves what we're trying to tell them either. State our case, in words and with data. We wanted to change a situation for people—let's not leave it up to chance for the sake of appearing neutral.