Over on BoingBoing, Cory Doctorow has had some interesting articles over the last couple of months about how a number of jurisdictions in the United States have been using computers to model crime patterns in a city based on the police activity record, which the computer then uses to predict future crime patterns and policing needs. It sounds very neutral, objective and bias-free – a computer can’t have a bias, right? Except that when your police force is highly racist (for example, stopping and searching black people at much higher rates than white people, and routinely charging black people with offenses for which white people are let go with a warning), then the data is flawed, and the computers can’t help but create a flawed, racist model of crime and policing needs. But because this model is being generated by a computer instead of a person, the powers-that-be have convinced themselves that there is nothing biased or racist about the model, and are trying hard to convince everyone else of the same thing.
Even more interestingly, Doctorow points out that Donald Trump suffers from much the same problem. He bases what he says on the reactions of the rabid fans at his rallies, but the rabid fans who go to Trump rallies are not an accurate picture of the actual views and reactions of Americans, just as the police record is not an accurate picture of who is actually committing crimes in a given city. The result is an extremely biased model of America and what Americans want in Trump’s head, for the same reasons that the biased police data gives a biased model of a city’s crime.
I think Doctorow is right about Trump working from flawed data, but I think it’s more than that: Trump is also working with flawed logic.
When any human or machine is doing an analysis and coming up with conclusions, there are two main ways it can go wrong – the conclusions can be wrong because of flawed data, or the conclusions can be wrong because of flawed logic. Or both.
In the case of the computer analysis of crime data, the computer inherently has sound logic – that’s one of the reasons we use computers for this sort of analysis; it’s their main strength. But a computer analysis can still go wrong if the computer is working from flawed data, and as Doctorow points out, that’s exactly what’s happening with the crime data.
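The feedback loop Doctorow describes can be sketched in a few lines of code. This is a hypothetical simulation – all the numbers and names are invented for illustration – in which two neighborhoods have identical true crime rates, but the historical patrol data is biased toward one of them. The model’s logic is perfectly sound (allocate patrols in proportion to recorded arrests), yet it still perpetuates the bias, because the arrest record reflects where the police were looking, not where crime was happening.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

# Hypothetical setup: both neighborhoods have the SAME true crime rate.
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}

# But the historical patrol record is biased: B gets 4x the patrols.
patrols = {"A": 100, "B": 400}

def simulate_arrests(patrols):
    """Recorded arrests scale with where police patrol, not with true crime."""
    return {
        hood: sum(random.random() < TRUE_CRIME_RATE[hood] for _ in range(n))
        for hood, n in patrols.items()
    }

def allocate(arrests, total=500):
    """A 'predictive policing' rule with sound logic but flawed input:
    distribute next round's patrols in proportion to recorded arrests."""
    total_arrests = sum(arrests.values())
    return {hood: round(total * a / total_arrests) for hood, a in arrests.items()}

# Run a few rounds of the feedback loop: biased data in, biased plan out.
for round_number in range(5):
    arrests = simulate_arrests(patrols)
    patrols = allocate(arrests)
    print(round_number, patrols)
```

Even though the logic is internally consistent, neighborhood B keeps receiving the bulk of the patrols round after round – the model simply launders the bias in its input data into an apparently objective output.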
In Trump’s case, he’s not only working with flawed data – the reactions of a small, specific segment of the population – but he’s also using flawed logic, like the notion that a wall along the Mexican border will make any difference to illegal immigration (the idea that illegal immigration from Mexico is a significant threat in the first place is more flawed data), or that getting rid of Muslims in America would actually make anyone any safer.
But this isn’t a political blog, I’m not here to analyze or criticize Trump or Clinton – there are enough people doing that quite well already. What I would like to point out is that Trump’s blindness to both his flawed logic and his flawed data is quite common – most of us don’t do it quite so egregiously or in such a public forum, but we fail to examine the data we’re working from, or we fail to examine our logic, on a regular basis.
I discussed previously the common problem of confirmation bias: people don’t see, and don’t look for, data or analyses that contradict the ideas or opinions they already have. Not checking for flawed data or flawed logic is one of the ways that many people protect their ideas and opinions and maintain their confirmation bias – if flawed data gives you the conclusions you want, there’s no reason to check whether the data is flawed. And confirmation bias means that you actively don’t see, don’t check, and don’t want to know that the data is flawed. The two feed into each other.
Which is why it’s important to apply some scientific rigor to your thinking processes, at least occasionally. As I also discussed previously, trying to prove yourself wrong is one of the best ways of catching yourself in confirmation bias and getting yourself out of the feedback loop of flawed data and flawed conclusions protected and perpetuated by confirmation bias. And as the Dunning-Kruger Effect has shown, the more confident you are that you’re right, the more likely you are to be wrong.
But that, I think, is something that Remarkable People do; they are constantly double-checking themselves by trying to prove themselves wrong, and checking for flawed data or flawed logic. It’s not comfortable; I know I don’t especially like doing it. But then I look at an example of someone like Donald Trump, who, in my opinion, is the antithesis of a Remarkable Person, and I go double-check myself anyway.