Cherry picking metrics in newsrooms

In his daily newsletter, Adam included a link to a Times article looking at the impact of their decision to limit commenting to subscribers using their real names. The Times reported that they've seen a 40% daily decrease in 'toxic' comments (as flagged by their moderation software).

The Times tames trolls with the power of naming
The results of The Times’s experiment in enforcing real names in comments look promising, and the EU starts looking warily at TikTok.

On the face of it, this seems pretty conclusive: you can reduce toxicity by enforcing real names.

Now, there is plenty of discussion and evidence that real names are not the problem, and that such policies can exclude people who have personal or professional sensitivities around their identity.

But what interests me is the stat. As someone who cares about how newsrooms use data to inform decision making (both for our journalism and our product experience), there is crucial context missing.

By sharing the total number of toxic comments each day before and after the policy change, we can see it's 'worked' - there are fewer toxic comments. However, what I'm interested in is the proportion, or ratio. It's possible (even likely) that the total number of daily comments has decreased, and as such the number of toxic comments is lower in absolute terms but might still be similar as a proportion of total comments.
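A quick sketch makes the distinction concrete. The figures below are entirely invented for illustration (The Times has not published its comment volumes); they simply show how a 40% drop in absolute toxic comments can coexist with a toxicity *rate* that hasn't moved at all:

```python
# Hypothetical figures only - invented to illustrate the absolute-vs-proportional point.
before_total = 10_000   # daily comments before the policy change
before_toxic = 500      # comments flagged as toxic

after_total = 6_000     # overall commenting also fell after the change
after_toxic = 300       # a 40% absolute drop in toxic comments

absolute_drop = 1 - after_toxic / before_toxic
rate_before = before_toxic / before_total
rate_after = after_toxic / after_total

print(f"Absolute toxic comments fell by {absolute_drop:.0%}")   # 40%
print(f"Toxicity rate before: {rate_before:.1%}")               # 5.0%
print(f"Toxicity rate after:  {rate_after:.1%}")                # 5.0%
```

The headline stat ("40% fewer toxic comments") is true in both framings, but only the second tells you whether the conversation actually got healthier per comment.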

This speaks to the slightly unmeasurable question of 'what contributions do we lose by locking some people out of the conversation?'. If commenting has decreased since the change, it's likely some non-toxic voices have also been lost.

Mainly, though, it's a great example of how cherry picking data and communicating it without context can completely alter its meaning.

I could be barking up the wrong tree entirely and all the data could be showing a net gain for civility on The Times. But without being given more context, it's actually impossible to know.

We owe ourselves and our readers more when it comes to our application of data-based insights.