Category Archives: Science

Flawed Conclusions

Over on BoingBoing, Cory Doctorow has had some interesting articles in the last couple of months about how a number of jurisdictions in the United States have been using computers to model crime patterns in a city based on police activity records, which the computer then uses to predict future crime patterns and policing needs. It sounds very neutral, objective and bias-free – a computer can’t have a bias, right? Except that when your police force is highly racist (for example, stopping and searching black people at much higher rates than white people, and routinely charging black people for offenses for which white people are let go with a warning), then the data is flawed, and the computers can’t help but create a flawed, racist model of crime and policing needs. But because this model is being generated by a computer instead of a person, the powers-that-be have convinced themselves that there is nothing biased or racist about the model, and are trying hard to convince everyone else of the same thing.
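The feedback loop here can be sketched in a few lines. This is a toy model with entirely hypothetical numbers, not anything drawn from a real predictive-policing system: two neighborhoods have exactly the same true crime rate, but one starts out patrolled twice as heavily, and a “neutral” model that allocates next year’s patrols according to this year’s arrest records simply preserves the initial bias forever.

```python
# Toy sketch (all numbers hypothetical): two neighborhoods with the SAME
# true crime rate, but B is patrolled twice as heavily, so it generates
# twice the arrest records.
TRUE_CRIME_RATE = 0.10                     # identical in both neighborhoods
patrol_share = {"A": 1 / 3, "B": 2 / 3}    # historical (biased) deployment

for year in range(5):
    # Recorded arrests scale with where police look, not with true crime.
    arrests = {hood: 1000 * patrol_share[hood] * TRUE_CRIME_RATE
               for hood in patrol_share}
    # The "objective" model: send next year's patrols where the arrests were.
    total = sum(arrests.values())
    patrol_share = {hood: arrests[hood] / total for hood in arrests}

print(patrol_share)  # the initial bias is preserved, year after year
```

Garbage in, garbage out: the model’s logic is perfectly sound, but it faithfully reproduces whatever bias is baked into the records it was fed.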

Even more interestingly, Doctorow points out that Donald Trump suffers from much the same problem. He bases what he says on the reaction of his rabid fans at Trump rallies, but the rabid fans that go to Trump rallies are not an accurate picture of the actual views and reactions of Americans, just like the police record is not an accurate picture of who is actually committing crimes in a given city. This results in an extremely biased model of America and what Americans want in Trump’s head, for the same reasons that the biased police data gives a biased model of a city’s crime.

I think Doctorow is right about Trump working from flawed data, but I think it’s more than that: Trump is also working with flawed logic.

When any human or machine is doing an analysis and coming up with conclusions, there are two main ways it can go wrong – the conclusions can be wrong because of flawed data, or the conclusions can be wrong because of flawed logic. Or both.

In the case of the computer analysis of crime data, the computer inherently has sound logic – that’s one of the reasons why we use computers for this sort of analysis; it’s their main strength. But a computer analysis can still go wrong if the computer is working from flawed data, and as Doctorow points out, that’s exactly what’s happening with the crime data.

In Trump’s case, he’s not only working with flawed data – the reactions of a small, specific segment of the population – but he’s also using flawed logic, like the logic that a wall along the Mexican border would make any difference to illegal immigration (the idea that illegal immigration from Mexico is a significant threat in the first place is more flawed data), or that getting rid of Muslims in America would actually make anyone any safer.

But this isn’t a political blog, I’m not here to analyze or criticize Trump or Clinton – there are enough people doing that quite well already. What I would like to point out is that Trump’s blindness to both his flawed logic and his flawed data is quite common – most of us don’t do it quite so egregiously or in such a public forum, but we fail to examine the data we’re working from, or we fail to examine our logic, on a regular basis.

I discussed previously the common problem of confirmation bias; people don’t see and don’t look for data or analyses that contradict the ideas or opinions they already have. Not checking for flawed data or flawed logic is one of the ways that many people protect their ideas and opinions and maintain their confirmation bias – if flawed data gives you the conclusions you want, there’s no reason to check whether the data is flawed. And confirmation bias means that you actively don’t see or check or want to know that the data is flawed. They all feed into each other.

Which is why it’s important to apply some scientific rigor to your thinking processes, at least occasionally. As I also discussed previously, trying to prove yourself wrong is one of the best ways of catching yourself in confirmation bias and getting yourself out of the feedback loop of flawed data and flawed conclusions protected and perpetuated by confirmation bias. And as the Dunning-Kruger Effect has shown, the people who are most confident that they’re right are often the ones most likely to be wrong.

But that, I think, is something that Remarkable People do; they are constantly double-checking themselves by trying to prove themselves wrong, and checking for flawed data or flawed logic. It’s not comfortable; I know I don’t especially like doing it. But then I look at an example of someone like Donald Trump, who, in my opinion, is the antithesis of a Remarkable Person, and I go double-check myself anyway.


Illusions of Competence: When you can’t see it, and others can’t tell you

After I posted last week about the problem of unknowing ignorance, a friend pointed out to me some material on the Dunning-Kruger Effect. This effect is a very common and pernicious one, in which a person who is quite incompetent at something thinks that they are very good at it; they make this highly inaccurate assessment because the skills that are necessary to judge one’s own competence are also the skills that are necessary to be good at the thing itself. Someone who lacks the skills to be competent at something, then, also lacks the skills to be a good judge of their own competence, and usually greatly overestimates it. One look on the internet at people defending discredited or outlandish ideas will show you the Dunning-Kruger Effect in action; the lack of knowledge and skill required to analyze someone else’s logic is the same lack that results in a person being unable to analyze their own ideas.

Interestingly, the Dunning-Kruger Effect seems to also result in the impostor syndrome, in which highly competent people tend to underestimate (and be very anxious about) their abilities, perhaps thinking that if they can do it well, everyone else must be able to do it at least as well. This is the Effect operating at the opposite end of the spectrum – the highly competent person becomes less confident in their abilities, as the incompetent person becomes highly confident.

It makes you think twice about judging someone by how confident they seem in their abilities, doesn’t it?

This isn’t the same thing as I was talking about previously, but it’s related, and a very interesting bit of research and analysis. The thing that I find especially interesting is the potential for the Dunning-Kruger Effect and the Confirmation Bias to meet and amplify each other.

I talked about Confirmation Bias here a little while ago, the idea that once someone thinks they know something, or that something is correct, they have a tendency to create their own little echo chamber. That is, people stop paying attention to any evidence that they’re wrong, and only see or notice evidence that they’re right.

So potentially, people may fall into the Dunning-Kruger Effect – they’re terrible at something, but they think they’re really pretty good. But because they’re convinced that they’re pretty good, the Confirmation Bias comes into play, and they refuse to see or acknowledge any evidence that they’re actually not as good as they think they are. Even people saying exactly that, to their faces.

There are quite a few classic texts, from Lao Tsu to Socrates to many other philosophers from both East and West, that express the sentiment that a wise person has some idea of how much they don’t know. That is evidence that these effects and biases have been around for a very long time, but it’s also a hopeful indication that it is possible to step out of or avoid these traps of delusion, at least to some extent.

As I discussed before, about confirmation bias, one of the best tools for avoiding that trap is to keep trying to disprove anything you think you know. For the Dunning-Kruger Effect, the best approach is likely to be humble in the first place, and to double-check with other people as to how you’re actually doing. That can be a bit painful, as you have to be willing to accept the response that you’re actually not as good as you think you are, but in the end, maintaining illusions of competence does not allow you to learn and grow.

Learning and growing is, of course, one of the key factors in maintaining creative output, growing as an artist and as a person, and living a life that is satisfying and fulfilling. And that’s what we’re all here for, isn’t it?


Confirmation Bias: We see what we believe

Confirmation Bias: it’s when you decide something is the case, then only pay attention to that which confirms what you have decided, to the point where you don’t see, don’t even notice disconfirming evidence. In extreme cases, when confronted with disconfirming evidence, you can downplay, discount, mock or simply refuse to accept said evidence.

It’s a very basic part of human psychology; no one can ever completely avoid confirmation bias, even when you know it happens and know to be on the lookout for it. But there are things that can be done to help counteract its effects.

I’ve been reading and thinking about confirmation bias quite a bit, lately. The effects are very common and very widespread, and mostly unacknowledged. It happens all the time in politics, when politicians decide how the world works based on a specific ideology, then refuse to see or consider any evidence to the contrary. It happens in the justice system, when police and prosecutors decide that they have found the guilty party in a crime, then look only for evidence that this is the case, and never pursue any evidence that it is not.

It also happens in science, when a scientist is invested in proving a pet theory, only pays attention to evidence confirming the theory, and even manipulates or ignores data to make it support rather than disconfirm the theory. This is especially disappointing and frustrating, because scientists should really know better.

And they should know better, because the whole process of science is based on methods to counteract confirmation bias. You’ve probably heard, possibly as far back as grade school, that the scientific method is to gather evidence, form a hypothesis, and do experiments that prove the hypothesis, and that with enough of those, the hypothesis graduates to a theory.

Except that isn’t actually the case. The process of science is really to gather evidence, form a hypothesis, then do experiments designed to disprove the hypothesis. If a hypothesis is not disproven, it can eventually graduate to being a theory.

That small difference in approach makes a world of difference in avoiding a confirmation bias that could seriously compromise a person’s work. If you approach the process specifically looking for disconfirming evidence for your hypothesis, you force your brain into actually paying attention to the data that it would otherwise naturally discount or ignore. It’s a psychological trick that can get around most of the problems caused by confirmation bias.
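The difference between the two approaches can be made concrete with a toy example (my illustration, not one from the research): test the hypothesis “every odd number greater than 1 is prime”. Counting confirming cases produces plenty of encouraging “evidence”; a single deliberate search for disconfirming cases kills the hypothesis immediately.

```python
# Hypothesis under test: "every odd number greater than 1 is prime".
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

odds = range(3, 100, 2)

# Confirmation-seeking: collect supporting cases and feel good about them.
confirming = [n for n in odds if is_prime(n)]
print(f"{len(confirming)} confirming cases")  # plenty of "evidence"

# Disconfirmation-seeking: hunt for a single counterexample.
counterexamples = [n for n in odds if not is_prime(n)]
print(f"first counterexample: {counterexamples[0]}")  # hypothesis dead
```

Two dozen confirming cases feel persuasive, but one counterexample (9, the very third odd number checked) settles the question. That asymmetry is why looking for disconfirmation is the stronger test.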

If you keep this trick in mind, and use it in everyday life, it gives you a very different perspective on other people’s stated facts or worldviews. Someone recommends a new diet or way of eating? Ask what was done to investigate whether the purported health benefits or weight loss came from some other factor in the lives of the people who said it worked. If a politician is trying to get you to support a new policy, ask whether they investigated the possibility that the policy won’t work, that it won’t have the intended effect.

Doing this will annoy people, I can guarantee you that. When people are good and comfortable with their views and decisions, the last thing that will make them happy is someone questioning their validity. But it needs to be done, because if we let people, especially people in power, approach the world as “This is the way I think the world should be, therefore this is the way the world is”, we will have a world that is not just, is not effective, and is not based on truth. That’s not the kind of world I want to live in, and I hope it isn’t one you want to live in either.


A painted horse or a zebra – same thing, right?

So diversity is good, right? The more diversity the better. Kind of like it’s not possible to be too polite (at least among Canadians), or eat too many vegetables.

Or is it? I’ve been thinking about diversity lately, both as a social scientist, looking at the diversity of ideas and worldviews, languages and ethnicities, and as a biologist, in terms of genetic diversity and diversity within an ecosystem.

In general more diversity really is better, but it’s not quite as simple as that. In biological systems, introducing a new species to an ecosystem does technically increase diversity – the number of species in the area has gone up, therefore diversity is increased. But if the new species is a noxious weed, for example, something that has no natural predators or diseases or other checks on its growth in the area, it throws the whole ecosystem out of balance, and the increase in diversity has done more harm than good.

It’s the same for social diversity. If, for example, a company makes a big deal about diversity in their workforce, and makes sure to hire plenty of women and people from non-European ethnicities, but hires them only for front-line positions, and the middle and upper management stays middle-aged white males, then what good has the diversity done? If anything, it has perpetuated a lack of power, money and social standing for the women and people from ethnic minorities, rather than helping to create a more equal and diverse society.

Our new prime minister, here in Canada, Justin Trudeau, named a cabinet that was 50-50 men and women. He got some flak for that, which I find infuriating, but his response to the questioning was brilliant in its simplicity – “Because it’s 2015”. Still, if all the cabinet ministers are expected to toe the party line and spout the party ideology, then what does it matter that half of them are women? What has the diversity accomplished?

In all of these cases, what has been lost, or lost sight of, is the true function and value of diversity, which is to provide adaptability and resilience. If there are a lot of species in an ecosystem, with many different ways of filling their physical needs, the system can compensate when it is stressed. For example, if an area such as Alberta, where I live, is very wet for a while, there will be lots of mosquitoes hatching in the ponds and puddles; as a result, there will be lots of frogs and dragonflies to eat the larvae and mosquitoes, and lots of birds to eat the frogs and dragonflies – the system will compensate. If any one part of the system dies or disappears, such as from human interference, the others can compensate, up to a point. But with high enough stress on the system combined with low enough diversity, the system can’t compensate, and it collapses.

The human social system parallels the biological one. Diversity at its best and strongest offers adaptability and resilience, because someone with a different background and a different approach or view of things can come up with ideas that improve how things are done, or make things more efficient or effective. That helps groups, societies and organizations change to deal better with the stresses that come with life, like smaller budgets, more demand for services or products, or the need for more value for the dollar.

If everyone thinks the same way, it limits how much any group can compensate for stresses. And more to the point, if people other than middle-aged white men are included in the group but aren’t listened to or allowed to contribute – such as being kept as front-line employees who aren’t allowed to have ideas of their own, or being made a minister who is never allowed to disagree with the party leader – then the diversity is in name only, and is equivalent to painting stripes on a horse and calling it a zebra.

So yes, more diversity is better. But only if the diversity truly does increase resiliency, if more species helps keep balance better rather than throwing it off, if ideas and views are truly listened to, considered and implemented rather than simply different faces for the camera. Or it’s just another painted horse.

Universal values and understanding people

A few years ago as I was doing my Master’s degree, I ran across the Theory of Universal Values. At the time, I found it pretty mind-blowing in its implications, and since then I’ve come to the conclusion that it is one of the most undervalued theories in psychology right now, but one of the most powerful when it comes to explaining people’s motivations and behaviors.

The theory is fairly simple in concept, but fairly complex in its applications and implications. I’m going to leave quite a bit of stuff out (if you want to read the original paper detailing the theory, it’s available free online here), but here it is in a nutshell: a researcher named Schwartz and his team went around to 67 different cultural groups and asked them what they valued. From this, they came up with 10 different values that were cross-cultural, and thus could safely be labeled human values, rather than cultural values. Upon further analysis, they found that the ten values could be sorted into four types that could be arranged in a circle; the types are self-enhancement, self-transcendence, openness to change, and conservation.

The really interesting thing about the circle is that it encodes a very specific relationship between the values – if you know what a person or group actively values, then the values on the opposite side of the circle are actively NOT valued.
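That opposition is simple enough to sketch as a little data structure. This only covers the four higher-order types named above (the ten individual values are left out, and this is my illustration of the circular layout, not code from the theory):

```python
# The four higher-order types arranged around the circle, in adjacency order.
TYPES = ["self-enhancement", "openness to change",
         "self-transcendence", "conservation"]

def opposite(value_type):
    """Return the type directly across the circle -- the one actively NOT valued."""
    i = TYPES.index(value_type)
    return TYPES[(i + len(TYPES) // 2) % len(TYPES)]

print(opposite("self-enhancement"))    # self-transcendence
print(opposite("openness to change"))  # conservation
```

Neighbors on the circle are compatible motivations; the type directly across is the one that a person or group who holds the first type actively does not value.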

I find this a very useful tool for understanding other people, and for creating believable and rounded characters, for that matter. Especially when you consider that values and motivations are so tightly linked as to be almost the same thing – if you know what motivates someone, you’ve gone a long way to understanding the person, and not incidentally, to being able to persuade or manipulate someone.

But there is a great deal of ignorance in regard to motivations. Western society has a tendency to see self-enhancement and conservation values to be the only legitimate motivations, and to ignore self-transcendence and openness to change as values and motivations, or consider them flaky, at best. For example, a retail giant I am currently working for, whenever it wants to encourage a behavior in employees, invariably sets up a contest for prizes or bragging rights – an approach that is very much centered on the self-enhancement motivations of power and achievement. It doesn’t even seem to occur to the management that there are some employees (like me, for example), who really don’t care about prizes or bragging rights, because their values and motivations are elsewhere in the circle, and management are never going to get the response from those employees that they are looking for until they broaden their approach to appeal to other values and motivations.

If you want to change someone’s opinion or behavior, you have to understand their motivations. That’s the first step in changing the world.

Homo Connectus (or, getting nerdy)

I recently finished reading the first two books in an excellent trilogy by Ramez Naam, though I haven’t yet been able to get my hands on the third one, which was published last spring. (These are books I looked up after reading Cory Doctorow’s review on BoingBoing, incidentally…) The books are titled Nexus, Crux and Apex, and they deal with a young programmer and inventor who, with his team, manages to program nanites to create a network within a person’s brain; once there, the nanites are able to communicate by radio frequency with other nanite networks in other people’s brains. He invents a way for humans to do telepathy, in other words.

The science – and the context – is a little more complicated than that, but Naam does a good job of keeping the science correct and plausible, without falling into the trap of excessive explanations. He also does a good job of predicting and story-ifying the reaction to this invention, both the good uses and the abusive ones that people come up with, and the strong negative governmental push-back against post-humans.

As a biology nerd, though, the thing I find most fascinating is how this is the logical next step in human evolution. Hundreds of millions of years ago, the aggregation of single-celled organisms into multi-cellular ones was one of the most profound things ever to happen. This didn’t happen overnight, of course, and in practice took many, many tiny steps to fully occur, but without it, specialization of cells isn’t possible, including the development of spines and brains.

Complexity, in whatever context you’re talking about, can be defined as increasing specialization combined with increasing integration. For both organisms and societies to become more complex, to become capable of more complex ideas, more complex technologies, both specialization and integration are necessary. The step from unicellular to multicellular allowed several orders of magnitude more specialization, which, combined with integration in the form of electrical and chemical communication between the cells, resulted in organisms with spines and brains, including humans.

But now, after hundreds of millions of years of evolution from the first multicellular being, humans have likely gotten about as complex as is possible for a multicellular organism. Our brains are about as large as we can afford to have them, both metabolically, and in terms of causing birth complications.

So what’s the next step? Multi-being organisms. Humans networked together into a single organism. Not necessarily becoming a physically integrated being, but a mentally integrated one, definitely. The internet is the first slow, clumsy staggering towards this, but what if, as Naam described, we could be linked telepathically to become an organism made up of many beings, like beings are made up of many cells? Computer scientists figured out some time ago that computers worked far better with many small processing units linked together, rather than one big unit. What might a group of networked humans, thinking as a single being be capable of?
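The computing point in that last paragraph – many small linked units instead of one big one – can be sketched with a trivial example. This is just my illustration of splitting one job across several cooperating workers, using Python’s standard library; the job and the numbers are arbitrary:

```python
# One job (summing a million numbers) split across four linked workers.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

numbers = list(range(1_000_000))
chunks = [numbers[i::4] for i in range(4)]  # four interleaved slices

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(numbers))  # True -- same answer, work shared out
```

No single worker holds the whole problem, yet the group produces the complete answer – a crude analogue of many small minds networked into one larger one.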

It boggles the mind.

There would, of course, be push-back, as Naam predicts. (Which makes one wonder: were there cells that were violently opposed to becoming part of a multicellular organism? I think I have too active an imagination, as I visualize bacteria marching with signs protesting the post-bacteria.) This would be far from a simple or easy process, and as he also predicts, there would be people who would use the connections for manipulation and abuse, rather than for the benefit of the people involved.

But just like the internet can no longer be contained or controlled, despite the valiant attempts of multiple governments and organizations to do so, this change, once it is started, will only be able to be slowed, not stopped. I don’t know about you, but I’m really curious to see when and how it happens.