We’re Reading About Bias in AI, SpaceX, and More

Yanir Seroussi

I have two recent reads I’d like to share. First, “Abandon statistical significance,” by McShane et al., is a well-written argument against the cult of p-values. As anyone who has seriously looked into the matter knows, a result can be statistically significant yet practically meaningless, or practically important yet fail to reach significance. It’s time to weigh the full body of evidence rather than fixate on arbitrary thresholds.
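To make that point concrete, here’s a minimal simulation (toy data, not from the paper) showing how a negligible effect can clear the 0.05 threshold with a large enough sample, while a substantial effect can miss it with a small one:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # A negligible effect (0.01 standard deviations) with a huge sample...
    big_a = rng.normal(0.00, 1.0, size=1_000_000)
    big_b = rng.normal(0.01, 1.0, size=1_000_000)
    _, p_big = stats.ttest_ind(big_a, big_b)

    # ...versus a substantial effect (0.8 standard deviations) with a tiny sample.
    small_a = rng.normal(0.0, 1.0, size=10)
    small_b = rng.normal(0.8, 1.0, size=10)
    _, p_small = stats.ttest_ind(small_a, small_b)

    print(f"negligible effect, n = 1,000,000 per group: p = {p_big:.3g}")
    print(f"substantial effect, n = 10 per group:       p = {p_small:.3g}")

With these sample sizes, the first comparison will typically come out “significant” despite being practically meaningless, while the second will often miss the threshold despite a large true effect — exactly the kind of threshold worship the paper argues against.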

Second, “Imposter syndrome” by Brandon Rohrer discusses what it means to be a “real” data scientist. In his words: “Our goal isn’t to accumulate answers, but to ask better questions. If you are asking questions and using data to find answers, YOU ARE A DATA SCIENTIST.”

Robert Elliott

I’ve got three pieces I’d like to share. The first, “Forget Killer Robots—Bias Is the Real AI Danger,” reports that John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices. The article discusses the importance of removing bias from training data and of building less opaque, black-box models. Giannandrea also doesn’t believe any kind of superhuman computer intelligence is going to take over the world anytime soon.

The second piece, “Autonomous Precision Landing of Space Rockets,” is by Lars Blackmore, the principal rocket landing engineer at SpaceX. Blackmore describes how he and his team use CVXGEN, a fast solver for convex optimization problems, to generate customized flight code, which enables very high-speed onboard convex optimization. Learn more about it in the video below.
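For a feel of what such an onboard convex problem can look like, here is a toy one-dimensional soft-landing sketch. It uses cvxpy as a stand-in for CVXGEN, and every constant (altitude, thrust limits, horizon) is made up for illustration — this is not SpaceX’s actual formulation:

    # Toy 1-D soft landing: choose a thrust profile that brings a falling
    # "rocket" to zero altitude and zero velocity, subject to thrust limits
    # and simple double-integrator dynamics.
    import cvxpy as cp

    T, dt, g = 50, 0.2, 9.81       # time steps, step length (s), gravity (m/s^2)
    h = cp.Variable(T + 1)         # altitude (m)
    v = cp.Variable(T + 1)         # vertical velocity (m/s)
    u = cp.Variable(T)             # thrust acceleration (m/s^2)

    constraints = [
        h[0] == 100, v[0] == -10,  # hypothetical initial state
        h[T] == 0, v[T] == 0,      # soft landing: touch down at rest
        h >= 0,                    # stay above the ground
        u >= 0, u <= 25,           # thrust limits (made-up numbers)
    ]
    for t in range(T):             # discretized dynamics (explicit Euler)
        constraints += [
            v[t + 1] == v[t] + (u[t] - g) * dt,
            h[t + 1] == h[t] + v[t] * dt,
        ]

    # Penalizing squared thrust keeps the problem convex and the profile
    # smooth; real formulations minimize fuel with much richer dynamics.
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints)
    problem.solve()
    print(f"solver status: {problem.status}, objective: {problem.value:.1f}")

The appeal of posing landing as a convex problem is that solvers are guaranteed to find the global optimum quickly and reliably — which is what makes solving it on board, in real time, feasible at all.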

The third piece talks about how DeepMind’s WaveNet is now being used to generate the voice of the Google Assistant in US English and Japanese. It is becoming really tough to tell the difference between a human voice and an artificial one!

Demet Dagdelen

I enjoyed this piece by Kashmir Hill, called “How Facebook Outs Sex Workers.” The article describes how Facebook users who might want to keep their identities private can be outed by the platform’s People You May Know feature. One worrying aspect of the feature (besides its not being opt-in) is that the company does not explain what data it uses to make these suggestions. As the article describes, the ramifications can be a matter of life and death for anyone from sex workers to political activists to members of LGBT communities, especially in oppressive and conservative countries where Facebook usage is still widespread. This also potentially ties in with the EU’s new data protection regulation (the GDPR) and its “right to explanation” component.

Krista Stevens

I came across this piece on how Spotify uses machine learning to create your Discover Weekly playlist. I’ve been happily shocked by what shows up on this list from week to week, often finding myself asking, “How the heck does it know?!” Now I know. The piece looks at the different ways music services have approached curation over the years and how Spotify has built on and improved earlier methods.
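A core technique behind recommenders like this is collaborative filtering. As a minimal illustration of the idea (toy play counts, nothing like Spotify’s actual pipeline), here’s a sketch that factorizes a user-by-track play matrix into a shared “taste” space and recommends an unheard track:

    import numpy as np

    # Rows = users, columns = tracks; entries = play counts (invented).
    plays = np.array([
        [5, 3, 0, 1, 0],
        [4, 0, 0, 1, 0],
        [1, 1, 0, 5, 4],
        [0, 1, 5, 4, 0],
    ], dtype=float)

    # Low-rank factorization via truncated SVD: users and tracks get
    # embedded in the same latent space, as in matrix-factorization
    # recommenders.
    U, s, Vt = np.linalg.svd(plays, full_matrices=False)
    k = 2
    scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Recommend the unheard track with the highest reconstructed affinity.
    user = 1
    unheard = plays[user] == 0
    candidates = np.where(unheard, scores[user], -np.inf)
    print("recommended track for user 1:", int(np.argmax(candidates)))

Production systems layer many more signals on top and operate at vastly larger scale, but the low-rank “people with similar taste predict what you’ll like” intuition carries over.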
