A few of my colleagues — with tenures at Automattic from just a few months to many years — share their highlights of working here.
Robert, you rejoined us a few months ago. What has been your highlight so far?
I would like to focus on one aspect of Automattic that has been dramatically different from my experience at other companies — onboarding. At other companies, the first few days of work were filled with mandatory corporate training that no one actually valued, and with filing IT support desk tickets to set up all the accounts I would need to do real work. At Automattic, the process is entirely different. Since part of the hiring process at Automattic is to carry out a project, I made my first contribution to production code for WordPress.com while I was doing my trial project. It was just a one-line change in a library, but it fixed a bug that had been known for over a year and improved search results across most of the products at WordPress.com. It feels really great to be able to make an impact quickly — in this case, before I even had my first official day.
But it doesn’t end there. The Automattic creed includes the statement “I won’t just work on things that are assigned to me.” That is a value that I see practiced nearly every day. If I find a bug in a system that I use, but don’t usually work on, and I want to try to fix it, it is perfectly fine (and even encouraged) to do so. While many companies have a culture in which certain teams own certain code and view outside collaborators as threats, the open culture at Automattic encourages collaboration. In other words, “pull requests welcome.”
Anna, what has been your highlight of working in our Analytics function?
About a year ago, our entire team undertook a comprehensive reporting and infrastructure project to overhaul an existing suite of legacy topline metrics. Our goal was to pin down canonical definitions for these key metrics and pinpoint (data) sources of truth, design a dimensional database model, build a robust, modular new system of extract, transform, and load (ETL) pipelines, and, finally, create brand new reporting in Looker to surface these metrics and track their progress against targets. The project meant about six months of work with my entire team on deck.
It was an amazing learning experience, and I think we all felt a lot of satisfaction finally tackling major legacy issues and sources of confusion systematically. Overall, I think two factors together made it a highlight of my time at Automattic: the chance — the mandate even — to focus on one huge endeavor together as a team, and the potential impact — improving the data user experience for so many colleagues. The pipelines we designed and built now deliver insights to nearly everyone at the company with a Looker account, from C-level through product squads and marketing teams. The central Executive Dashboard has several thousand views, trust in this data has increased significantly, and both maintenance and further extensions have proven to be straightforward and manageable.
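The modular ETL pattern described above can be sketched in a few lines. Everything here — the event shapes, the metric, the `fact_signups` table name — is hypothetical and stands in for whatever canonical definitions a team settles on; the point is that each stage is a small, independently testable function.

```python
# A minimal sketch of a modular extract/transform/load (ETL) step.
# All table, event, and metric names are hypothetical illustrations.

def extract(raw_events):
    """Pull only the events relevant to the metric from a source of truth."""
    return [e for e in raw_events if e.get("type") == "signup"]

def transform(events):
    """Aggregate events into a topline metric with one canonical definition."""
    by_day = {}
    for e in events:
        by_day[e["day"]] = by_day.get(e["day"], 0) + 1
    return [{"day": d, "signups": n} for d, n in sorted(by_day.items())]

def load(rows, warehouse):
    """Append the transformed rows to the reporting table a BI tool reads."""
    warehouse.setdefault("fact_signups", []).extend(rows)
    return warehouse

raw = [
    {"type": "signup",   "day": "2020-01-01"},
    {"type": "signup",   "day": "2020-01-01"},
    {"type": "pageview", "day": "2020-01-01"},
    {"type": "signup",   "day": "2020-01-02"},
]
warehouse = load(transform(extract(raw)), {})
```

Because each stage only depends on its input, stages can be swapped or extended without touching the rest of the pipeline, which is what makes maintenance and further extensions manageable.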
Greg, you’re looking back at over nine years at Automattic. Can you think of a project that sticks out to you?
It feels strange to pick a project that I completed seven years ago, but it definitely was one of the more challenging problems I have worked on and it enabled a lot of subsequent projects to succeed. In 2013 I spent most of the year working to scale our Elasticsearch infrastructure so that we could build Related Posts for all WordPress.com and Jetpack sites (for free). Today it handles 300 million requests a day querying over 7 billion posts and has been happily working for years, so it is tough to remember that the project almost failed.
We started by thinking a v1 of the project was doable in 3-4 months. It actually took over 12 months. During that time I found a lot of new and exciting ways to break WordPress.com. Scaling systems is hard, and I knew this. Honestly though, at some point it gets fairly depressing to keep solving one problem only to run into yet another problem.
I vividly remember talking to another engineer at the Grand Meetup about eight months into the project. His reflections were really helpful. Unlike most debugging problems, with a scaling problem you can’t know what all of the problems actually are. You have to solve one before you will ever be able to see the next one. So it is very hard to predict how long getting something to work will take because you don’t know what you don’t know.
That advice helped me to push through the next few months.
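Related-content features of the kind Greg describes are commonly built on Elasticsearch's `more_like_this` query, which finds documents textually similar to a given one. Below is a sketch of such a request body; the index name, field names, and ID scheme are hypothetical, and the post hasn't described Automattic's actual query, so treat this only as an illustration of the query type.

```python
# Sketch of an Elasticsearch "more_like_this" request body, the standard
# query type for related-posts features. Index, fields, and the ID scheme
# are hypothetical; a production system also handles scaling and caching.

def related_posts_query(blog_id, post_id, size=3):
    """Build a query for posts similar to the given post."""
    return {
        "size": size,
        "query": {
            "more_like_this": {
                "fields": ["title", "content"],  # text fields to compare
                "like": [
                    # Reference the source document by index and ID so
                    # Elasticsearch extracts its significant terms itself.
                    {"_index": "posts", "_id": f"{blog_id}-{post_id}"}
                ],
                "min_term_freq": 1,
                "max_query_terms": 25,
            }
        },
    }

body = related_posts_query(blog_id=42, post_id=7)
```

At hundreds of millions of requests a day, the interesting work is less the query itself than everything around it — sharding, caching, and failure handling — which is exactly where the "solve one problem to see the next one" dynamic shows up.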
Vicki, what has been the highlight of your experience at Automattic so far?
I’ve only been here a few months, but one of the most rewarding things about working at Automattic so far has been building products that build on and encourage the spread of the open web and content creation by both individuals and small businesses. (This is a long way of restating our mission statement to democratize publishing so that anyone with a story can tell it.)
Sometimes, with machine learning products for user-generated content, the challenge is in attracting users in the short term. But we’d also like to keep users discovering new stories and content throughout our platform, and to empower creators to share more of their stories. It’s been really fun to work on that as a product goal.
So far, I’ve worked on a project focused on helping our Happiness Engineers navigate live chat with machine learning, and on a project that involves surfacing better recommendations for topical content to users. The two have been radically different, yet equally interesting and rewarding, because I’m working directly with the people who benefit from better content discovery on our platform.
Demet, can you tell us about your highlight of working at Automattic as a data scientist?
I have been at Automattic for five years and my time here has been characterized by one of my favorite lines from the Automattic creed: I am in a marathon, not a sprint (but it’s still a race).
I’ve spent most of those years evangelizing a more data-driven and statistically sound approach to our marketing efforts. I’ve held countless workshops on machine learning, causal inference, and the dangers of optimizing for the wrong metrics, taught multiple classes at our all-company Grand Meetups, written a plethora of internal documents for different audiences, and had many heated arguments along the way.
The rest of the work was pure data science and engineering: we built pipe, our internal machine learning pipeline; I ran many smaller marketing campaigns and large-scale experiments; and I gathered hard evidence that moving from manual segmentation to machine learning (ML) and constant experimentation-led targeting would eventually have an overwhelmingly positive impact on our bottom line.
Recently, all of this work culminated in getting the green light and support to kick off the cross-divisional project of my dreams: optimizing marketing sales campaigns using ML and uplift modelling. These revamped campaigns are now generating an additional 1 to 1.5 million USD in annualized incremental revenue compared to the pre-ML efforts — while emailing 50% fewer users! It is cutting-edge, challenging, and exciting work with measurable impact, and I am very excited for its future. Note: If this type of work at the intersection of machine learning, causal inference, engineering, and marketing interests you, we are currently hiring into the team.
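The core idea behind uplift modelling — estimating the *incremental* effect of contacting a user rather than their raw likelihood of converting — can be sketched with a toy two-model ("T-learner") style estimate. The data and the segment feature below are invented, and real campaigns would use fitted models rather than group means; this only illustrates the concept.

```python
# A minimal sketch of uplift estimation: for each segment, compare
# conversion rates between treated (emailed) and control (not emailed)
# users from a randomized experiment. All data here is made up.

def conversion_rate(rows):
    """Fraction of users in `rows` who converted (0.0 if empty)."""
    return sum(r["converted"] for r in rows) / len(rows) if rows else 0.0

def uplift_by_segment(rows):
    """Estimate uplift = P(convert | treated) - P(convert | control)."""
    out = {}
    for seg in {r["segment"] for r in rows}:
        treated = [r for r in rows if r["segment"] == seg and r["treated"]]
        control = [r for r in rows if r["segment"] == seg and not r["treated"]]
        out[seg] = conversion_rate(treated) - conversion_rate(control)
    return out

rows = [
    {"segment": "new", "treated": True,  "converted": 1},
    {"segment": "new", "treated": True,  "converted": 1},
    {"segment": "new", "treated": False, "converted": 0},
    {"segment": "new", "treated": False, "converted": 1},
    {"segment": "old", "treated": True,  "converted": 0},
    {"segment": "old", "treated": False, "converted": 1},
]
uplift = uplift_by_segment(rows)
```

Targeting only segments with positive estimated uplift is how a campaign can email far fewer users while still increasing incremental revenue: users who would convert anyway (or who react negatively to email) are simply left alone.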