Charles Earl
I’m reading “Artificial Intelligence — The Revolution Hasn’t Happened Yet” by Berkeley computer science professor Michael I. Jordan.
It is a quick, light read, but thought-provoking. His point is that once the dust settles after the hype around ubiquitous Artificial Intelligence (AI), we’ll realize that what is being called AI is less general-purpose intelligence than a collection of powerful tools for augmenting human intelligence (e.g., Siri), which he calls intelligence augmentation (IA), and the intelligent infrastructure (II) that makes these tools possible. There are also some good historical anecdotes in the article.
Whether or not you agree with Jordan’s point in this piece, I’d still encourage you to check out the reading list he has long suggested to his postdocs and grad students; it is a gem.
Boris Gorelik
If data visualization isn’t just a tool for you, I strongly suggest reading “Dataviz as history: the traveller’s guide to Madeira and the West Indies (1815).” (Although I would call that post “Dataviz as a Story.”) In this post, Michael Sandberg shows a fascinating nineteenth-century travel journal that was set up as a series of visuals.
In an opinion post, “Are Your Data Scientists Knowledge Alchemists?” Tom Breur asks the question that many data scientists and their managers ask themselves again and again: “Is data science a real thing?” Unsurprisingly, Tom’s answer is “Yes, most of the time.” Or, in his own words: “If your data science efforts seem like ‘Alchemy,’ something is going terribly wrong: I have never seen responsible governance for magic.”
Nabeel Sulieman
I started reading Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems. I’m only about 20% through, but so far I’m really enjoying it. It starts out with the basics of how databases efficiently store and index data on disk, but also gives a great overview of traditional relational databases, data warehousing, document databases, graph databases, and more.
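As a taste of those early chapters, here is a minimal sketch of my own (not code from the book) of the kind of toy storage engine Kleppmann starts from: an append-only log on disk plus an in-memory hash index that maps each key to the byte offset of its latest record.

```python
# Minimal sketch (my own, not from the book): an append-only log with an
# in-memory hash index, the simplest storage engine idea the book opens with.
import os

class LogKVStore:
    def __init__(self, path):
        self.path = path
        self.index = {}              # key -> byte offset of its latest record
        open(path, "ab").close()     # make sure the log file exists

    def set(self, key, value):
        offset = os.path.getsize(self.path)       # records are only ever appended
        with open(self.path, "ab") as f:
            f.write(f"{key},{value}\n".encode("utf-8"))
        self.index[key] = offset                   # the latest write wins

    def get(self, key):
        offset = self.index.get(key)
        if offset is None:
            return None
        with open(self.path, "rb") as f:
            f.seek(offset)                         # jump straight to the record
            record = f.readline().decode("utf-8").rstrip("\n")
        return record.split(",", 1)[1]

store = LogKVStore("toy.log")
store.set("greeting", "hello")
store.set("greeting", "hello again")
print(store.get("greeting"))                       # -> "hello again"
```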
Panos Kountanis
I was reading the paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” which was suggested reading in a deep learning course called Sequence Models, the final (and, in my humble opinion, most interesting) part of the Deep Learning Specialization on Coursera. The authors demonstrate that word embeddings can reflect undesirable biases present in the text used to train them, and they propose a method to “debias” those embeddings. I find it interesting from both a technical and a social standpoint.
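For a sense of the core idea, here is a minimal sketch of my own (a simplification, not the authors’ code) of the paper’s neutralize step: estimate a bias direction from word vectors and project it out of a gender-neutral word’s embedding. The vectors below are made up for illustration.

```python
# Simplified sketch of the "neutralize" step: remove the component of a
# word vector that lies along an estimated bias direction.
import numpy as np

def neutralize(word_vec, bias_direction):
    """Project out the bias direction so the word vector is orthogonal to it."""
    b = bias_direction / np.linalg.norm(bias_direction)
    projection = np.dot(word_vec, b) * b      # component along the bias axis
    return word_vec - projection              # what remains carries no bias component

# Toy 4-d embeddings (real embeddings are typically 50-300 dimensional).
she = np.array([0.6, 0.1, 0.2, 0.0])
he = np.array([-0.6, 0.1, 0.2, 0.0])
gender_direction = she - he                   # crude one-pair estimate of the bias axis

programmer = np.array([0.3, 0.5, -0.2, 0.4])  # hypothetical biased embedding
debiased = neutralize(programmer, gender_direction)
print(np.dot(debiased, gender_direction))     # ~0: no gender component left
```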
Martin Remy
I listened to the latest episode of the Talking Machines podcast this week. Neil Lawrence talks about AI and religion, and Katherine Gorman interviews Been Kim about interpretability and explainability for neural networks.
The interview with Been Kim starts at 20:30 in the episode, and it’s fascinating. We’re at a confluence of increased scrutiny of companies’ privacy practices (e.g., GDPR), concerns about bias in deployed machine learning (ML) models (which my teammate Charles has spoken and written about), and general anxiety and mistrust of AI as a black box, because of its inability or unwillingness to explain itself and its decisions.
These things make interpretability and explainability in ML an important and timely research area, which Been is exploring at Google Brain. She talks about local and global explanations (why was a specific person rejected for a loan, and what kinds of things drive a loan approval classifier in general?), the truthfulness of explanations (can an explanation from an ML model be both easily understood by humans and actually true?), and using high-level concepts like race and gender as building blocks to craft human-friendly explanations of a neural network. She emphasizes the importance of evaluation for interpretability methods, and of always keeping the end task in mind, whether it’s a content recommender or a medical diagnosis.
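To make the local/global distinction concrete, here is a toy sketch of my own (not something from the episode): with a linear loan-approval classifier, a local explanation is the per-applicant feature contributions, while a global explanation is the learned weights themselves. The feature names and weights below are hypothetical.

```python
# Toy sketch: local vs. global explanations of a linear loan-approval model.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments"]  # hypothetical features
weights = np.array([0.8, -1.2, -0.9])                       # hypothetical learned weights

def local_explanation(applicant):
    """Per-applicant: how much each feature pushed this particular decision."""
    contributions = weights * applicant
    return dict(zip(feature_names, contributions))

def global_explanation():
    """Model-wide: which features drive approvals in general."""
    return dict(zip(feature_names, weights))

applicant = np.array([0.4, 0.9, 1.0])    # standardized feature values
print(local_explanation(applicant))       # why *this* person was rejected
print(global_explanation())               # what drives the classifier overall
```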
Sirin Odrowski
User experiences that are built on ML or other data-intensive algorithms are no longer a niche; ML is “a powerful tool for creating personalized and dynamic experiences.” Recommendation engines help us navigate massive catalogs of content like songs and movies, and automatic translation helps us understand foreign languages. Some companies use data for product discovery, and self-driving cars ask us to trust our safety to algorithms. To build smart products that people enjoy interacting with, we need a holistic understanding of the algorithms, the business opportunity, and the user. “Human-Centered Machine Learning” approaches this from a UX designer’s point of view. For data scientists, it makes for an interesting read and a radical change of perspective.
Carly Stambaugh
Long Short-Term Memory (LSTM) networks are really effective at solving problems involving sequences. But training an LSTM model is computationally expensive (and can be quite monetarily expensive too, if you’re paying for cloud computing services!) “The unreasonable effectiveness of the forget gate” introduces JANET (Just Another NETwork), which is derived from the principles of an LSTM but has only a single gate, the forget gate. This single-gate network achieves slightly higher classification accuracy on standard datasets like MNIST, and requires less computational power! So, it’s a win-win. I can’t wait to try this model for my poetry bot! Stay tuned for a post about the results!
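For the curious, here is a rough sketch of my reading of the JANET update (a NumPy simplification, not the authors’ code): a single forget gate blends the previous hidden state with a candidate, in place of the LSTM’s three gates. The sizes and weights below are made up for illustration.

```python
# Simplified sketch of a JANET-style recurrent cell: one forget gate decides
# how much of the old state to keep versus the new candidate content.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def janet_step(x_t, h_prev, params):
    """One timestep of the single-gate update."""
    Wf, Uf, bf, Wc, Uc, bc = params
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)   # forget gate, values in (0, 1)
    c = np.tanh(Wc @ x_t + Uc @ h_prev + bc)   # candidate new content
    return f * h_prev + (1.0 - f) * c          # blend old state with candidate

# Tiny example with hypothetical sizes: 3-d inputs, 4-d hidden state.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
params = (
    rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)),
    np.ones(n_hid),                            # positive forget bias: a common trick to favour remembering
    rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)),
    np.zeros(n_hid),
)

h = np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):         # run over a length-5 sequence
    h = janet_step(x_t, h, params)
print(h)
```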
Your turn!
What interesting reads have you discovered in the field of data science? Be sure to share them in the comments!
I keep this open on my laptop so I can sneak a peek at it every now and then when I’m not coding:
These two I sometimes carry with me when I want to wake my inner scientist (grin):
“The Way We Think: Conceptual Blending And The Mind’s Hidden Complexities” by Gilles Fauconnier. Recommended by one of my favourite scientists, Martin Doerr, whom I had the privilege to meet a while back.
“Causality” by Judea Pearl. I have read Judea Pearl’s first book (“Probabilistic Reasoning in Intelligent Systems”) and was inspired by both technique and overall approach, so this was a natural next purchase. I have not read much further than the first couple of chapters yet.