This is a great list for people who want to smugly say "Um, actually" a lot in conversation.
Based on my brief stint doing data work in psychology research, amongst many other problems they are AWFUL at stats. And it isn't a skill issue as much as a cultural one. They teach it wrong and have a "well, everybody else does it" attitude towards p-hacking and other statistical malpractice.
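To make the p-hacking point concrete, here's a toy simulation of my own (not from any of the research being discussed): run a study with no real effect at all, but measure 20 different outcomes, and you'll usually find at least one "significant" result just by chance.

```python
import random
import statistics

random.seed(42)

def significant(a, b, crit=1.984):
    """Pooled two-sample t-test; |t| > crit approximates p < .05 for df ~= 98."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1/na + 1/nb)) ** 0.5
    return abs(t) > crit

def run_study(n_outcomes=20, n=50):
    """One 'study': both groups drawn from the SAME distribution (no real
    effect), but 20 separate outcome measures are each tested at p < .05."""
    hits = 0
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if significant(a, b):
            hits += 1
    return hits

# With 20 independent tests at alpha = .05, the chance of at least one
# false positive per study is roughly 1 - 0.95**20, about 64%.
studies = 500
false_positive_studies = sum(run_study() > 0 for _ in range(studies))
rate = false_positive_studies / studies
print(f"Studies with >=1 'significant' result on pure noise: {rate:.0%}")
```

Report only the outcomes that "worked" and the published literature fills up with noise, which is exactly the cultural problem above.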
SF author Michael Flynn was a process control engineer as his day job; he wrote about how designing statistically valid experiments is incredibly difficult, and the potential for fooling yourself is high, even when you really do know what you are doing and you have nearly perfect control over the measurement setup.
And on top of it you're trying to measure the behavior of people not widgets; and people change their behavior based on the context and what they think you're measuring.
There was a lab set up to do "experimental economics" at Caltech back in the late 80's/early 90's. Trouble is, people make different economic decisions when they are working with play money rather than real money.
Experimental design is one of the big four academic subjects within statistics. The math is complex even before you factor in the effects of the experimental situation itself.
As someone who's part of a startup (hrpotentials.com) trying to bring truly scientifically valid psychological testing into HR processes .... yeah. We've been at it for almost 7 years, and we're finally at a point where we can say we have something that actually makes scientific sense - and we're not inventing anything new, just commercializing the science! It took an electrical engineer (not me) with a strong grasp of statistics working for years alongside a competent professor of psychology to separate the wheat from the chaff. There's some good science there, it's just ... not used much.
How are you going to get around Griggs v. Duke Power Co.? AFAIK, personality tests have not (yet) been given the regulatory eye, but testing cognitive ability has.
I was very surprised at how many statistical methods are taught in undergraduate psychology. Far more statistics than I ever touched in engineering for sure. Yet the undergrads really treated statistics as a cookbook, where they just wanted to be told the recipe and they'd follow it. Honestly they'd have been better off just eyeballing data and collaborating with statisticians for the analysis.
>meta-analyses and systematic reviews have shown significant evidence for the effects of stereotype threat, though the phenomenon defies over-simplistic characterization.[22][23][24][25][26][27][28][9]
Failing to reproduce an effect doesn't prove it isn't real. Mythbusters would do this all the time.
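The statistical version of this point is power: a replication with a typical lab-sized sample will usually fail to detect a real but small effect. A quick simulation of my own (the effect size and sample size here are illustrative assumptions, not from any particular study):

```python
import random
import statistics

random.seed(1)

def significant(a, b, crit=2.02):
    """Pooled two-sample t-test; |t| > crit approximates p < .05 for df ~= 38."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1/na + 1/nb)) ** 0.5
    return abs(t) > crit

def replication(n=20, d=0.3):
    """One replication attempt: the effect is REAL (Cohen's d = 0.3),
    but each group has only n = 20 participants."""
    treat = [random.gauss(d, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return significant(treat, control)

trials = 2000
successes = sum(replication() for _ in range(trials))
power = successes / trials
print(f"Replications detecting the (real) effect: {power:.0%}")
```

Power here works out to roughly 15-20%, so a "failed" replication is the expected outcome most of the time even though the effect genuinely exists. Absence of evidence is weak evidence of absence when the study is underpowered.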
On the other hand, some empires are built on publication malpractice.
One of the worst that I know is John Gottman. Marriage counselling based on 'thin slicing'/microexpressions/'Horsemen of the Apocalypse'. His studies had been exposed as fundamentally flawed, and training based on his principles performed worse than prior offerings, before he was further popularized by Malcolm Gladwell in Blink.
This type of intellectual dishonesty underlies both of their careers.
Um, actually I’d say it is the responsibility of all scientists, both professional and amateur, to point out falsehoods when they’re uttered, and not an act of smugness.
[um] has many contexts, but it is usually a cue that something unexpected, something off the average, is about to be said.
[actually] is a neutral declaration that some cognitive structure has been presented but is at odds with physically observable fact, which will now be laid out for you.