We’re Reading About Simplifying Without Distortion and Adversarial Image Classification

Boris Gorelik

Recently, I heard an interview with Desmond Morris, the author of The Naked Ape. He reveals that the goal of his writing has always been to “simplify without distortion.” This interview reminded me of the (EXCELLENT) blog “Math with Bad Drawings” by Ben Orlin, a math teacher from Birmingham, England. On his blog, Ben Orlin does exactly that: he simplifies without distorting. I highly recommend following this blog. At the bare minimum, read the latest post, “5 Ways to Troll Your Neural Network.”

Do you know of other blogs that educate readers about mathematics, statistics, machine learning, and other related fields? Share your favorites in the comments.

Carly Stambaugh

In image classification tasks, an adversarial example is an input that has been deliberately altered, in ways imperceptible to the human eye, to “fool” the classifier into a wrong prediction. Over at labsix, they’ve generated 3D adversarial objects that are consistently misclassified from multiple angles, such as a turtle that is classified as a rifle, no matter which way you turn it. While this may seem hilarious at first, I assure you it’s actually terrifying.
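To give a flavor of how such examples are crafted, here is a minimal sketch of the fast gradient sign method (FGSM), one common recipe for adversarial perturbations. Note this is not labsix’s actual technique (their 3D objects use a more elaborate approach); the toy three-pixel “image” and logistic classifier below are illustrative assumptions, not a real model.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.05):
    """Nudge input x by eps per pixel in the direction that increases
    the cross-entropy loss of a toy logistic classifier sigmoid(w.x + b).
    This is the fast gradient sign method (FGSM) in its simplest form."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y_true) * w                      # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)               # tiny, bounded per-pixel step

# Hypothetical example: a three-pixel "image" and made-up weights.
x = np.array([0.2, 0.8, 0.5])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, w, b=0.0, y_true=1, eps=0.05)
print(np.max(np.abs(x_adv - x)))  # each pixel changes by at most eps
```

The key point is that every pixel moves by at most eps, so the change is invisible to a human, yet the step is chosen precisely to push the classifier toward the wrong answer.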
