When to be data-driven, and when being data-driven just gets in the way.
I was working as a data scientist at Airbnb when Covid-19 struck. As you might expect, Covid-19 was a special kind of brutal for a business that relied on good-faith human-to-human interaction. When the world is forming insular social pods, it’s hard to get anyone to stay at a stranger’s house. And so our metrics tanked: our core metrics dropped to single-digit YoY values. No one was booking Airbnbs anymore, and sure as hell no one was looking to host new Airbnbs.
And as we faced that precipitous metrics cliff, our CEO Brian responded with admirable swiftness. While we were all setting up home offices and hoarding toilet paper and canned goods from Costco, Brian held an emergency all-hands. He told us definitively: “travel as we know it is over.” He had no clear answer to what we should do next, but there was still a lighthouse-like directive through the storm: stop everything you’re working on that isn’t critical and figure out how to survive the pandemic.
And what happened afterwards was impressive. The company effectively pivoted, which is a wild thing to be a part of at a company of that scale. We launched Airbnb online experiences in record time. With a new mantra of “near is the new far”, we curated and pushed people towards locales that were great bunker locations for the pandemic. New initiatives that clearly didn’t fit into the future were shut down (I was part of a team called “social stays”, and in spite of the heavy sunk cost, we killed the endeavor quickly). We took on new financing and restructured the company. The company made hundreds, perhaps even thousands, of decisions a day, and, as a result, managed to swim through the worst of the pandemic with as much finesse as you could possibly hope for.
That said, while the business moves were interesting, I’d actually like to spend this post talking about the role of data during this period and what learnings we can glean from that experience. My most shocking realization: data, which had until then been a key driver in almost every strategic conversation, became secondary overnight. At that time, to fight for “data-driven decision-making” would have been laughable — not because data wasn’t useful during this transitional period, but because data shouldn’t drive in a crisis. In what follows, I’ll discuss the root cause of this mindset shift: urgency. Let’s consider different decision-making circumstances, then discuss how we should be leveraging data therein. It’s time to finally talk about what “data-driven” should actually mean.
There are two axes by which you can neatly segment decision-making: urgency of the decision, and importance of the decision. Depending on where your decision resides in the Punnett square, the involvement of analytics can and should differ.
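To make the square concrete, here is a minimal sketch of those quadrants as code. The labels and recommendations are my own shorthand for the ideas in this post, not a formal framework:

```python
# A rough sketch of the urgency/importance "Punnett square" for analytics.
# The quadrant labels and postures are illustrative, not a formal framework.

def analytics_posture(urgent: bool, important: bool) -> str:
    """Suggest how deeply analytics should be involved in a decision."""
    if urgent and important:
        return "shallow: quick hypothesis, ship fast, learn from results"
    if not urgent and important:
        return "deep: past experiments, opportunity sizing, segmentation"
    if urgent and not important:
        return "minimal: decide by intuition and move on"
    return "question the work: will any action change as a result?"

# The landing-page overhaul below is the important-but-not-urgent quadrant.
print(analytics_posture(urgent=False, important=True))
```

The rest of this post walks through three of these quadrants in turn.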
On the one hand, when a decision is extremely important but not particularly urgent, we can proceed with analytics as we ideally would — iterating closely with our stakeholders to better navigate the space of possible actions. Imagine, for instance, your company’s executives want to overhaul your landing page, but they want your support on deciding what to put there. The ML SWE on your team jumps to a card sort solution, but you and your stakeholders know the more critical decision to make is whether or not you want to apply that sort of solution in the first place.
The current homepage works fine, so the desired change is not urgent, but the decision is high impact — your change will affect the experience of every single one of your visitors. And as such, analytics should be leveraged to better navigate the decision space: you can sift through past experiments and collate learnings that might inform the decision at hand; you can run small opportunity-sizing checks to see what the bounds of any change might be; you can provide demographic, channel, and other distributional data to better inform what you might best benefit from focusing on.
There is a wide range of optionality that stakeholders must wade through, and you can help them do it in a measured, hypothesis-driven way. You’re buying a car. It’s a good investment to spend some time shopping around.
On the other hand, let’s reconsider the Covid-19 Airbnb situation above. The company is in crisis mode, and leadership has already determined the best course of action forward: we need to identify some markets to push that would make appealing Covid bunkers. You could apply the same approach as in the previous example — carefully analyzing segments, sifting through past experiments, etc. But every day you delay a choice, you’re losing two things:
- Opportunity to capitalize on the new market.
- Opportunity to run a test and learn something.
Consequently, you formulate a simple hypothesis: if you choose locales that are somewhat proximate to major cities, then you’ll maximize bookings because guests will (a) feel sufficiently secluded from Covid but (b) still be close enough to return home to their friends and families in case of emergency. You get back to the executives within a few hours, they launch an initiative to push these forward, and you find that some work better than others, informing what your second batch of choices should look like.
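That kind of few-hour turnaround usually means a crude heuristic rather than a careful model. As a purely hypothetical sketch (the cities, candidate locales, and distance thresholds are all made up for illustration, not Airbnb’s actual criteria), the “secluded but reachable” filter might look like:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical sketch of the "close-but-secluded" heuristic: keep candidate
# locales whose distance to the nearest major city falls inside a band.
# All names, coordinates, and thresholds are invented for illustration.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

MAJOR_CITIES = {"NYC": (40.71, -74.01), "LA": (34.05, -118.24)}
CANDIDATES = {"Catskills": (42.07, -74.32), "Joshua Tree": (34.13, -116.31),
              "Fargo": (46.88, -96.79)}

def covid_bunker_candidates(min_km=50, max_km=250):
    """Keep locales secluded (>= min_km) yet reachable (<= max_km) from a city."""
    picks = []
    for name, (lat, lon) in CANDIDATES.items():
        nearest = min(haversine_km(lat, lon, clat, clon)
                      for clat, clon in MAJOR_CITIES.values())
        if min_km <= nearest <= max_km:
            picks.append(name)
    return picks

print(covid_bunker_candidates())  # Catskills and Joshua Tree pass; Fargo is too remote
```

It’s a blunt instrument, but that’s the point: it’s good enough to pick a first batch, and the bookings data from that batch teaches you more than another week of analysis would.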
Optimal involvement of analytics here is a bit different than in the low-urgency case — you’re still helping your stakeholders navigate the idea maze, but the decisions being made are largely intuition-driven, so your involvement is necessarily more shallow. This is not to say you should comply blindly, reinforcing a precedent of reactivity — still understand why, but accept that your involvement will be less structured, less rigorous. And as much as you could get stakeholders to a better decision given enough time, you don’t have enough time, and an 80% correct decision now is infinitely more valuable than a 90% correct decision tomorrow.
You’re in a car accident. It’s useful to get some data to evaluate your well-being, the opposing driver’s well-being, and the best route to the nearest hospital, but you probably shouldn’t spend hours reading hospital reviews.
Finally, sometimes decisions aren’t actually that important. You move a button on a user support page, the experiment doesn’t converge, but your stakeholder wants to know the truth of the result. This is where you push back — analytics can certainly provide an answer here, but what actions will change as a result? Will you learn anything? Stakeholders already know this is a better experience; they’re asking for certainty, but you know certainty at this level of experimental exposure is impossible.
If our decisions don’t change because of our data work, or at minimum, we don’t learn something from exploring our data, we probably shouldn’t be doing the work in the first place. Learn to predict what the impact of your work might be — what’s the potential lift if you help make this decision? — then act accordingly.
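One back-of-envelope way to "act accordingly" is to weigh the chance your analysis actually changes the decision against what the analysis costs. This framing and the numbers in it are my own illustration, not from the post:

```python
# Rough heuristic: an analysis is only worth doing if the chance it changes
# the decision, times the value of a changed decision, exceeds its cost.
# All numbers below are invented for illustration.

def expected_value_of_analysis(p_change: float, lift: float, cost: float) -> float:
    """Expected net value of running an analysis before deciding."""
    return p_change * lift - cost

# The button-move experiment: tiny chance the decision changes, modest lift,
# but the analyst-time cost outweighs it.
print(expected_value_of_analysis(p_change=0.05, lift=10_000, cost=5_000))  # -4500.0
```

When that number comes out negative, the right move is usually the one the post describes: push back and skip the work.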
To be clear, I’m not advocating a harsh cutoff here, only that speed and importance ought to be considered when choosing the right analysis for a task. When a decision is urgent, data should almost always take a backseat to intuition. When the decision is extremely important, data should be used more diligently to validate assumptions and keep intuition in check. When the decision isn’t important, you shouldn’t be spending a lot of time worrying about the decision anyway, and so any analytics work should be reconsidered before it’s done.