The 2019 Fairness, Accountability, and Transparency Conference

I attended the Fairness, Accountability, and Transparency Conference in Atlanta from January 29th to 31st.

This conference brings together a diverse community — academics, lawyers, policy makers, software developers — concerned with the fairness of the socio-technical systems that control our lives.

The most visceral examples of “socio-technical” are the policing and sentencing support systems in use throughout the world. Automattic’s Grand Meetup last year took place in Orlando, Florida — a city that is debating whether to widely deploy a problematic facial recognition system developed by Amazon. There are ethical and human rights issues raised by the development and deployment of this and similar systems. The conference is one attempt to get a constructive, outcome-oriented dialog going.

Research presented at the conference revolves around a number of questions:

  1. Is a given machine learning system negatively impacting a given group?
  2. Can we develop methodologies for auditing software systems for harm?
  3. Can we develop metrics that quantify fairness? (See the sketch after this list.)
  4. Can we develop methodologies that make the internals of machine learning and A.I. more accessible to policy makers and the citizenry impacted by them?
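
To make the third question concrete, here is a minimal sketch (my own illustration, not drawn from any conference paper) of one commonly used fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 = parity).

    y_pred: binary predictions (0/1); group: group membership (0/1).
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# A predictor that approves 60% of group 0 but only 20% of group 1:
preds  = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.4
```

Metrics like this are easy to compute; the hard part, and a recurring theme of the research, is that different formalizations of fairness can be mutually incompatible.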

I think this year’s conference did not have the incisiveness of last year’s, which featured memorable and prescient talks by Latanya Sweeney and Joy Buolamwini. Nonetheless, there were gems this year.

One of the award papers truly had a positive outcome, illustrating how incorporating fairness/impact/justice assessment into system design can lead to quality-of-life improvements. Sendhil Mullainathan presented Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People. It began as an empirical study, examining how the intervention patterns of a targeted healthcare program used by several insurance companies leave out significant swaths of African American patients.

Mullainathan contacted the software provider shortly after publishing, expecting to be brushed off. Instead, the two teams were able to collaboratively adjust the thresholds for intervention, removing 84% of the bias and hopefully improving the lives of patients with chronic conditions.
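
As a toy illustration of the kind of audit involved (my own sketch on simulated data, not the paper's method or numbers), consider a program that enrolls the highest-scoring patients, and an audit that compares the actual health burden of enrolled patients across groups:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated patients: a predicted risk score and a group label.
risk_score = rng.uniform(0, 100, n)
group = rng.integers(0, 2, n)

# Simulated ground truth: group 1 carries more illness at the same score,
# mimicking a score that understates one group's actual health needs.
chronic_conditions = 0.05 * risk_score + 1.5 * group + rng.normal(0, 1, n)

# Intervention rule: enroll the top 3% by predicted risk.
threshold = np.percentile(risk_score, 97)
enrolled = risk_score > threshold

# Audit: among the enrolled, compare actual health burden by group.
for g in (0, 1):
    mask = enrolled & (group == g)
    print(f"group {g}: enrolled={mask.sum()}, "
          f"mean chronic conditions={chronic_conditions[mask].mean():.2f}")
```

A persistent gap in the audited outcome is the signal that the enrollment rule needs recalibrating, which is roughly the kind of adjustment the collaboration made.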

A couple of talks struck me as particularly relevant. Christo Wilson (Northeastern) and John Martin (now in marketing at HubSpot) gave a talk on transparency in A/B testing, studying the structure of A/B tests in the wild by analyzing publicly accessible Optimizely data. The organization that deployed the largest number of tests in the study was the New York Times. I think of the paper as a public service announcement for more transparency in A/B testing.

The second was on model cards, presented by Andrew Zaldivar of Google. This was representative of a set of talks aimed at making models and data more understandable to the people who use and are impacted by them. Zaldivar and his collaborators did a trial run, deploying model cards as a decision-support tool for comment-forum moderators: the cards were intended to provide guidance on the use of a “comment toxicity” predictor.
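
For flavor, here is a minimal sketch of what such a model card might contain, written as a Python dict. The sections loosely follow the paper's proposal, but the schema and every value below are my own invention, not Google's artifact.

```python
# Hypothetical, pared-down model card for a comment-toxicity predictor.
# Section names loosely follow the model cards proposal; values invented.
model_card = {
    "model_details": {
        "name": "toxicity-scorer (hypothetical)",
        "version": "0.1",
        "description": "Scores comments from 0 (benign) to 1 (toxic).",
    },
    "intended_use": (
        "Decision support for human forum moderators; "
        "not intended for fully automated comment removal."
    ),
    "evaluation_data": "Held-out comments labeled by multiple annotators.",
    "quantitative_analyses": {
        # Disaggregated metrics are the heart of a model card: they surface
        # the subgroups on which the model is least reliable.
        "false_positive_rate_by_subgroup": {
            "comments mentioning group A": 0.05,
            "comments mentioning group B": 0.12,
        },
    },
    "ethical_considerations": (
        "Scores inherit annotator judgments and their biases; moderators "
        "should treat high scores as flags, not verdicts."
    ),
}
```

The point is less the exact format than the discipline: whoever ships the model states, in plain language, what it is for and where it breaks.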

A couple of talks challenged the underpinnings of the whole algorithmic fairness/justice enterprise. Particularly sobering was a talk on lessons from 50 years of failed attempts to address racial, gender, and class inequities in standardized testing.

There were many other thought-provoking talks and discussions:

  1. Several toolkits for assessing the fairness of predictors and building interpretable models.
  2. Providing more transparency to those people negatively impacted by automation.
  3. Discussion of an open-source tool developed in the U.K. with strong involvement from the community, attention to human rights concerns, and continuous vetting.

The conference will be held next year in Barcelona.

Ultimately, the central issue is not a technological problem but a societal one. The long-term impact of the work lies in the degree to which it enables citizens to make more informed decisions about the role that increasingly autonomous technology should play in their societies.
