Rationally Speaking | Official Podcast of New York City Skeptics - RS 197 - Doug Hubbard on “Why people think some things can’t be quantified (and why they’re wrong)”
"Have you heard “statistically significant sample size”?... people do use the phrase ... They'll object to a measurement, saying that's not a statistically significant sample size. Well, there is no such thing, and I explain that to people. I say, "Well there is no universal magic number of samples that you have to get to, where if you're one short of it, you can make no inferences at all, and once you reach it, all of a sudden you can start making inferences. There is no such number...
People with higher statistical literacy tended to be much more accepting of quantitative methods, and much more excited about the use of them. People who were more resistant and skeptical tended to score much lower in statistical literacy. But it was actually more specific than that. On all of the statistical literacy questions, one of the choices was “I don't know.” The people who said “I don't know” a lot weren't necessarily the ones resisting the use of quantitative methods. It was the ones who thought they did know, and were wrong...
'He surveyed a bunch of published scientists, and people who weren't published scientists, just students and so forth. And the fact is that there are profound, persistent misconceptions about how sampling actually works and what it tells us. What people do is, they kind of remember some things and they'll throw out words like, “That's not statistically significant,” and they didn't really do any math to make that claim.'
'It's just a fancy way to say, “I disbelieve that result”'...
'They'll also say something like, “Well, correlation is not evidence of causation, right?” And I'll say, well, actually that's not quite true. Correlation isn't proof of it, but I can show you a Bayesian proof that says it is evidence of it. I show that Bayesian proof in the third edition of my first book. I mean, things like that, people are just winging it all the time. They'll say, well, there's this potential bias in this survey, and because this bias exists, that means that no inference can be made...
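(To make the Bayesian point concrete, here is a toy version, not the proof from Hubbard's book: if a correlation between X and Y is more likely to show up when X really does cause Y than when it doesn't, then observing the correlation has to raise the probability of causation, even though it falls far short of proving it. All the numbers below are made up.)

# Toy Bayesian update, with invented numbers, illustrating why a correlation
# counts as evidence of causation without being proof of it.
p_cause = 0.10              # prior probability that X causes Y
p_corr_if_cause = 0.90      # correlation is very likely if the causal link exists
p_corr_if_no_cause = 0.20   # correlation can still arise by chance or confounding

p_corr = p_corr_if_cause * p_cause + p_corr_if_no_cause * (1 - p_cause)
p_cause_given_corr = p_corr_if_cause * p_cause / p_corr  # Bayes' theorem

print(f"P(causation) before seeing the correlation: {p_cause:.2f}")
print(f"P(causation) after seeing the correlation:  {p_cause_given_corr:.2f}")
# roughly 0.10 -> 0.33: the correlation is evidence, not proof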
"Are you saying that there's some variation, randomly assigned variation even, in the population, and that unless we account for all possible variations, you can't make inferences?" He said, "No, you can't."I said, "Well, then all science is wrong. Every controlled experiment in the world doesn't actually control for every varying factor. You misunderstand how it works."...
What people run into, though, is that they hear or see, or they read about, situations like that, where it was very difficult to replicate something. This actually happened once: one lab was trying to replicate the results of some study from another lab, and one of them used a different stirring method in a solution than the other, and that actually changed the result. But people conclude from that, “Therefore, unless you do all these things perfectly, which are extremely difficult to pull off, I can make no inference whatsoever from observations.”
Well, that's not how you live your life, what are you talking about? Of course you live your life making inferences from observations. If you can't make inferences using the scientific method and statistical inference, well, then how are you doing it with just your life observations? Because you're doing that with selective recall and flawed inferences, right?"
"I was once teaching a class on, it wasn't exactly calibration, it was just estimating, or trying to quantify your own uncertainty. Put a probability on your beliefs or your predictions.And someone in the class just kept insisting that you can't know what the “right” probability is. So I kept trying to get him in the mindset of how he actually makes decisions in real life. I'd be like, "Well, let's say you buy a sandwich and you eat the sandwich. If you eat it, that implies that you probably put a very low probability on it being poisoned." His response was, "No, no... I'm not worried about it being poisoned, but there's no way to know the probability of it being poisoned.
"And I've seen stuff like this many times, he's just one example. It suggests that people have this compartment that they put anything “quantitative” in, where there's a super high standard, and you're not allow to make any estimate unless it's completely rock solid." Whereas, in your day-to-day life, you just do whatever seems sensible to you. And that's just a different magisterium or something.'...
'The VSL, the Value of a Statistical Life series of surveys, shows that most people behave as if they value their lives at somewhere around 11 million dollars or so. We usually put a range of two million to 20 million on it. Of course, it varies from person to person, but averaged across many people, it looks like it's about 11 million dollars... it feels vulgar [to say that] -- until you realize that people have to make practical decisions about the allocation of limited resources, because we could all save more lives right now by doubling our taxes.
We could pay twice as much in taxes and fund more basic research on fighting disease, et cetera. Right? People are only willing to do so much of that. They've already behaved in a way that puts a limit on the value of a human life. They do not behave in a way that indicates that they believe life is priceless or infinitely priced. Right? As soon as someone says life is priceless, they immediately become hypocritical by virtue of their daily activities'...
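(For the curious, the arithmetic behind figures like that is a revealed-preference calculation: divide what people will pay, or demand to be paid, for a small change in their risk of death by the size of that risk change. The numbers below are hypothetical, chosen only so the result lands near the 11 million dollar figure Hubbard mentions.)

# Hypothetical revealed-preference arithmetic behind a VSL-style estimate.
willingness_to_pay = 1_100     # dollars paid (or demanded) for a small risk change
risk_reduction = 1 / 10_000    # change in annual probability of death

implied_vsl = willingness_to_pay / risk_reduction
print(f"Implied value of a statistical life: ${implied_vsl:,.0f}")  # $11,000,000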
'I think the problem is that somehow, people have this negative connotation to just quantifying things to begin with... we abstract our environment all the time. We reduce things to words. We reduce profound experiences to words. We reduce them to pictures. We reduce them to our emotions. Our emotions are abstractions...
If you look at John Allen Paulos's book, Innumeracy, which has been out for ... it's getting close to 30 years now. He talked about how this attitude is somewhat peculiar to certain Western cultures, and especially the United States. You don't hear these objections to quantifying things in, say, India or China; it's less common there. It's perceived differently, it's perceived as a natural human expression. Right? Here, somebody will say, "Well, I'm more of a people person. I'm not a numbers person," as if they were mutually exclusive. Right? In India, and this is John Allen Paulos's book saying this, that might be perceived more as, "I'm a people person, not a literate person."'
Tuesday, June 25, 2019
Labels: quoting, statistics