This Week in Data Reading: Experimentation, Tech and the Humanities, and Eliminating Bias in Testing

Carly Stambaugh

Several months ago, Andrea Burbank gave a fantastic talk about building a culture of experimentation at Pinterest. Over the last year or so, we've been actively focusing on nurturing a healthy data culture here at A8c. This video has been making the rounds again lately because it's such a great example of the level of commitment this takes from the company as a whole, and of the rewards that commitment brings. Andrea also highlights that this is an ongoing effort, as we are always striving for better products and more efficient processes.

Demet Dagdelen

Karen Hao's article "There's no such thing as a 'tech person' in the age of AI" succinctly lays out the arguments for a much closer, stronger interaction between tech and the humanities. As someone with a background in both the natural sciences and the humanities (which is exactly what drove me to data science work), I think both the tech and humanities communities make this sort of collaboration more difficult than it needs to be.

“It’s not just thinking about how you learn computation, (…) but it’s also students having an awareness of the larger political, social context in which we’re all living.”

The article also reminded me of a friend in academia who started off as a mathematician and then moved on to the philosophy of mathematics. Academics usually do not enjoy teaching introductory classes to undergrads, and he thought he wouldn't either, but he ended up really enjoying it. What changed his mind, he said, was that teaching compulsory Introduction to Philosophy or Ethics in AI classes to Computer Science undergrads at an elite US university known for its CS department turned out to be the one part of his job where he felt he was making a truly valuable impact. He is now thinking of offering similar classes to engineers working in industry.

Lately I've been seeing many more of these efforts, calls to action, and rallies for cooperation between the humanities and tech. It feels like it is all finally coming together, now that we have seen, however briefly, what happens when we ignore one in favor of the other, and I really welcome these initiatives.

Charles Earl

A few months ago, Ben Hutchinson, Margaret Mitchell, and Shira Mitchell gave the thought-provoking talk "A History of Quantitative Fairness in Testing" at the Fairness, Accountability, and Transparency Conference.

It was a look back at the efforts, beginning in the 1950s, to create fair and unbiased standardized tests. The presenters argued that current efforts to build unbiased AI systems are in profound ways similar to the approaches taken by educational researchers of the past. Like the machine learning and AI developers of today, those researchers had lofty goals: they wanted to develop quantitative methodologies that could correct for deeply ingrained gender, racial, and class disparities.

The persistent inequities evidenced by the ongoing U.S. college admissions scandal are a stark reminder of the failings of the quantitative fairness effort in standardized testing. I don't know whether we have learned enough not to repeat the egregious mistakes of the past, but clearly we need to.
