I’ve recently done some reading on the replicability crisis in social psychology. Highlights include Andrew Gelman’s timeline of changes in the field; Schimmack, Heene, and Kesavan’s overview of problems with priming research, focusing on studies selected for Kahneman’s Thinking, Fast and Slow (which is still worth reading, despite Kahneman’s acceptance of the article’s conclusions); and various Data Colada posts, including a recent one that demonstrates how even meta-analyses can go wrong.
It can be discouraging to know that so many peer-reviewed studies are invalid, but it’s great to see that researchers are moving toward more rigorous testing of previously accepted findings. While all people should avoid jumping to conclusions from insufficient evidence, this is especially relevant to data practitioners — we must be extra careful, because it’s very easy to unintentionally mislead others with data.
Why can’t you divide by zero? Why does the factorial of 0 equal 1? What’s so special about the number 78557? If you, your brother, sister, or grand-uncle love numbers but lack a formal math degree, show them the wonderful YouTube channel Numberphile, which hosts short, professionally made “videos about numbers.”
Why does adding more evidence for a particular case decrease our confidence in that case? In the 2016 paper “Too good to be true: when overwhelming evidence fails to convince,” Lachlan J. Gunn and his co-authors begin by citing an ancient Talmudic law under which one cannot be unanimously convicted of a capital crime. They then use Bayesian statistics to show how unanimous evidence should actually reduce our confidence and raise the suspicion of systemic bias.
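The intuition can be sketched with a toy Bayesian model (the parameter values below are my own illustrative assumptions, not figures from the paper): each witness is independently correct with probability `p`, but with some small probability `eps` the identification process is systemically biased, so every witness agrees regardless of the truth.

```python
def posterior_guilt(n, p=0.9, eps=0.01, prior=0.5):
    """Posterior probability of guilt after n unanimous 'guilty' identifications.

    Toy model: with probability eps the process is biased and all witnesses
    agree no matter what; otherwise each witness is independently correct
    with probability p. All parameter values are illustrative assumptions.
    """
    like_guilty = (1 - eps) * p ** n + eps            # P(n unanimous | guilty)
    like_innocent = (1 - eps) * (1 - p) ** n + eps    # P(n unanimous | innocent)
    num = prior * like_guilty
    return num / (num + (1 - prior) * like_innocent)

for n in (1, 3, 10, 50):
    print(n, round(posterior_guilt(n), 3))
```

With these numbers, posterior confidence peaks after a handful of unanimous witnesses and then falls back toward the prior: past a certain point, perfect unanimity is better evidence of a biased process than of guilt.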
I’ve been reading about the continued efforts to subvert democracy through fake pro-repeal Net Neutrality comments. We need to start thinking about how we can verify that someone is a real person online. I’ve been following Civic, a startup using blockchain to control and protect online identities.
I was also reading about how artificial intelligence (A.I.) systems pretending to be female are often subjected to the same sorts of online harassment as women.
And, of course, I was reading more about AlphaZero’s foray into chess and how flabbergasted grandmasters around the world are reacting to its peculiar genius. Quote from the videos: “Any A.I. smart enough to pass Alan Turing’s test would be smart enough to fail it.”
I love it when A.I. gets creative. Last week Botnik released some Harry Potter fan fiction generated by a predictive text algorithm. It’s pretty fantastic! Check out Harry Potter and the Portrait of What Looked Like a Large Pile of Ash.
There are plenty of fantastic one-liners, such as:
Ron’s Ron shirt was just as bad as Ron himself
The password was “BEEF WOMEN,” Hermione cried.
But I am particularly impressed by the continuity across pairs of sentences. For example,
Harry tore his eyes from his head and threw them into the forest. Voldemort raised his eyebrows at Harry, who could not see anything at the moment.
To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.
Have you read something great in the data science field recently? Be sure to share your links with us in the comments.