Building A Data-Driven Culture: Behavioral Foundations

I think that data-driven decision making is often confused with making decisions while standing near enormous datasets. I'm not kidding. There is something about using lots of data—whatever that means to a team—that can lull people into the complacent assumption that their behavior is data-driven, even if they're not tying their decisions back to data at all.  

For me, the simplest test of a data-driven culture happens when it's confronted with a significant decision for which no data is available. If the answer includes some flavor of gathering the required data, most likely by running an experiment, then the team is on the right track. If the answer is to move ahead with a plan based on instinct or experience alone, then the team is not data-driven, regardless of how many petabytes they sling. 

Culture is built on behaviors. To me, a data-driven culture is one that demonstrates four behaviors:  

  • Insistence on evidence
  • Willingness to test hypotheses
  • Commitment to an experimental mindset
  • Bias for data

The more an organization embraces these, the more it increases control and minimizes risk. 

Insistence on Evidence 

One of my all-time favorite colleagues earned this title in part because he was great at following evidence and pushing back on conjecture. He was a program manager who would go wherever the data led. He was full of opinions and ideas, but he had no ego whatsoever. His work was strong because he was open to changing direction on the basis of new evidence. His work was also strong because his convictions were impersonal. Everyone on the team was comfortable approaching him with ideas because we knew that as long as we had evidence, we would be heard. He was never defensive or territorial, and he knew how to use evidence to rally the team behind the best ideas. He was a joy to work with. 


Modeling a data-driven culture starts with backing your own assertions with evidence. It thrives on expecting and encouraging others to do the same. This means eliminating phrases like, "I just know it'll work," or, "Trust me." This also means accepting pushback and skepticism gracefully when you're the one with the great idea that isn't quite proven yet. If your idea is really great, then there's no harm in demonstrating this to others in a small way before asking them to join you. Data-driven cultures run on evidence, not on faith. 

For some, this transition can be difficult, especially if requests for evidence are interpreted as a lack of respect for experience or authority. In such cases, I've found it helpful to reframe the discussion. I'll ask follow-ups like, "What do you think is the biggest risk?" or, "Help me understand how you reached that conclusion instead of another," or, "How will you know when this is successful/finished?" These are really just different ways of asking for evidence, but they approach the discussion from a place of curiosity instead of judgment and without directly referencing data.

Remember, too, that an assertion that isn't backed by evidence is just an experiment waiting to happen. 

Willingness to Test Hypotheses 

I joined my last company while it was in the middle of transitioning its customers to a new product configuration. The new configurations were being manually personalized for each client by hourly contractors hired for this purpose. This process was time-consuming and expensive, and it was scheduled to continue for another nine months. It was being done this way because the project leaders were extremely risk-averse and did not want to create even a slight dip in customer satisfaction.

Every neuron in my engineering-trained brain told me that these configurations could be successfully automated. I ran an analysis to identify a large segment of our client base whose accounts could be configured algorithmically, and with little risk to client satisfaction. The project's leadership was skeptical, but after some pushing they agreed to let me auto-configure 50 client accounts as an experiment. We messaged these clients twice in the week ahead of the experiment. We lined up their client success representatives to assist with any issues on the day of transition. On that day, everyone held their breath. Experimentation was not yet part of the corporate DNA. It was tense.

Happily, not one of that first set of 50 auto-configured clients had a problem. I broadened the experiment to several hundred additional clients. Still zero problems. My automation eventually configured over ten thousand accounts. These configurations spawned virtually no negative feedback. The automation saved tens of thousands of dollars in contractor costs, and it cut months off our transition schedule.  

All of this was possible simply because we made room to test my hypothesis that the configurations could be automated. We made room for experimentation. One hallmark of a data-driven organization is that its practitioners turn to experimentation, rather than instinct or experience, when faced with a lack of evidence. Data-driven decision makers minimize the uncertainty of an instinct-based plan early through low-risk experiments. By gathering evidence, they turn an instinct-based plan into a data-backed one. They gain the ability to observe consequences and course correct as necessary. By the time the plan is executed at scale, there are few to no surprises in the outcome. 


To get there, the first behavior to cultivate is the simple habit of creating hypotheses. A good hypothesis has the form, "I believe that doing X will result in Y because Z." Sometimes, just the act of requiring someone to frame and quantify a hypothesis this way leads them to the discovery that their idea is nonsense. Many times, it leads to discussions or analyses that refine the experiment before it even begins. Stronger hypotheses lead to faster progress. 

Articulating a concrete hypothesis also helps a team understand the link between action and outcome. When expected and actual outcomes differ, there's always something to learn from investigating the gap. Exercising prediction muscles makes them stronger. 

The second behavior to cultivate is the practice of articulating a clear set of success criteria for every experiment. This ensures that there will be no after-the-fact rationalizing about the outcome. Early in my career, I worked on a team that didn't pre-establish success criteria before product releases. After each release, contributors who had conceived the updates found usage data that showed value. Skeptics focused on different data that showed problems. These situations were divisive for the team, unhelpful for the product, and impossible to resolve objectively.

Pre-establishing success criteria shifts disagreements away from the outcome and onto the framing of the experiment itself. These discussions become impersonal and up-front. They provide a framework for getting everyone into the same mindset before a specific outcome unfolds.
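
To make both habits concrete, here's a minimal sketch of how a team might capture a hypothesis and its success criteria as a single reviewable artifact before an experiment runs. The structure and field names are my own illustration, not a prescription or the format of any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """Captures 'I believe that doing X will result in Y because Z' plus success criteria."""
    action: str                # X: the change we plan to make
    expected_outcome: str      # Y: the measurable result we expect
    rationale: str             # Z: why we believe the link exists
    success_criteria: list[str] = field(default_factory=list)  # agreed on before the experiment starts

# Hypothetical example, loosely modeled on the auto-configuration story above
auto_config = Hypothesis(
    action="auto-configure 50 client accounts algorithmically",
    expected_outcome="no measurable dip in client satisfaction",
    rationale="these accounts match a simple, well-understood configuration pattern",
    success_criteria=[
        "zero configuration-related support tickets within 7 days of transition",
        "no drop in satisfaction scores for the affected accounts",
    ],
)
```

Writing it down this way forces the expected outcome to be measurable, and it makes the success criteria part of the same artifact everyone reviews before any results come in.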

This recent blog post from Twitter about their decision to increase the character limit to 280 paints a great picture of what hypothesis testing looks like in a data-driven culture. The opening paragraphs alone speak about decisions, hypotheses, and experimentation in language that takes these principles absolutely for granted (the rest of the article is interesting, but mostly technical). Common use of language like this is how you'll know you're on the right track to becoming data-driven.

I would be remiss if I didn't mention A/B testing as the most commonly used method for validating hypotheses. Indeed, in many enterprises, the term "experimentation" is synonymous with A/B testing. I believe that experimentation should extend well beyond the constraints of formal A/B testing, though. Any data-gathering exercise undertaken in good faith to test a clear hypothesis is an experiment, and can provide valuable feedback.
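
For readers who want to see the mechanics, here's a minimal sketch of the statistical check that often sits at the heart of an A/B test: comparing conversion rates between a control and a variant with a two-proportion z-test. The numbers are made up, and real experimentation platforms do much more (sample-size planning, guardrail metrics, sequential testing), but the core comparison is this small.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)           # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                   # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical counts: control converts 120/2400, variant converts 156/2400
z, p = two_proportion_ztest(successes_a=120, n_a=2400, successes_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # compare against the significance threshold agreed on up front
```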

Commitment to an Experimental Mindset 

Many organizations still use the language of failure to describe experiments that disprove a hypothesis. Failure is absolutely the wrong term, because in fact these experiments offer valuable feedback. Effective experimentation can only happen in an environment in which these "failures," or disproven hypotheses, are accepted and expected. Articulating a hypothesis requires vulnerability. Nobody is going to do it earnestly if there are negative consequences—real or perceived—to being incorrect.  

Looking back on the day I executed my first auto-configuration experiment, I recognize that the tension in the room was a failure on my part. It reflected a perception that the value of the experiment hung on it achieving a specific outcome, rather than on learning. I should have done more coaching on the mindset that experiments don’t fail; they teach. I should have laid the groundwork for a shared understanding that even massive client dissatisfaction, in this case, would still have reflected a successful experiment. It would have provided data backing up the decision to use manual configuration. It would have provided justification for the high cost and long timeline. It would have freed me up to tackle other problems.   


Modeling an experimental mindset begins with recognizing experiments as opportunities to learn. Experiments are not intended to achieve any outcome beyond learning. The hypotheses they test are absolutely focused on achieving a goal, but the experiments themselves are executed in order to learn, not to achieve.

I find that celebrating the right kinds of successes is another important part of sustaining an experimental mindset. People pay attention to what is valued in an organization and they'll align their behaviors accordingly. 

The successes to be celebrated in a data-driven culture are the use of data and experimentation to improve an outcome. This might mean recognizing a small experiment which leads to the cancellation of what was previously thought to be a great plan. It might mean recognizing someone who improved a plan/hypothesis by bringing additional data or analysis to bear. It might mean recognizing someone who consistently (and respectfully) pushes others to clarify and refine their hypotheses before they act.  

It's equally important to refrain from celebrating accidents, even when they have high business value. This is tough, but celebrating them undermines the message that surprises are failures in a data-driven environment. It drives me batty to watch people congratulate themselves on a product success that was not intended. Happy accidents like this are the equivalent of hitting the bullseye of the target next to the one you were aiming for. Don't celebrate the person who hit that bullseye when their next arrow is just as likely to land on the ground. Instead, celebrate someone who is running experiments to improve their aim.

Bias For Data  

Modeling a data-driven culture requires demonstrating a bias for data, both in data collection and consumption.  

On the consumption side, a bias toward data manifests in a technical infrastructure that makes data accessible and easy to use. Nobody can support their assertions with evidence if they don't have access to, or trust in, the relevant data. A technical environment that supports individuals in researching hypotheses, executing experiments, and examining the results is table stakes for a data-driven culture. 

On the collection side, a bias toward data means developing the habit of collecting more data than is required right now. It means accepting that the type of information you'll need tomorrow isn't something you can predict today. It means accepting that, on the whole, the cost of collecting data now is smaller than the cost of not having it later. You'll know you're on the right track when the default position is, "Why not?" rather than, "Why?"  

I have been shocked a few times by how controversial this sentiment can be. The instinct to collect data is so strong in me—and in most of my peers—that I am truly baffled when it meets resistance.  

I used to work on a product that provided customers with daily alerts about events matching their search criteria. (The model was similar to Redfin or Zillow, which alert users to new real estate listings matching their saved criteria, or to Google Alerts.) Our engineering team kept a record of which customers we alerted about which events each day. But they purged these records on a rolling 30-day basis.

I first discovered this limited history as it was hampering a historical analysis I had undertaken. I was told that the purge was meant to cut back on data bloat and storage costs. There were no privacy/security/legal concerns. I requested that we start retaining the alert histories, and I was rejected. Twice! I finally corralled an engineer and a product manager at a whiteboard, where I led us through an estimation of the total volume of the information I wanted retained. It came to...5GB per year. I outpace that with pictures of my kids, and those photos all fit on my phone. Case closed. 

Except that it wasn't. I still got pushback over concerns that we would forget to delete the data later, at which point we'd be saddled with...10 GB? 50? It didn't matter. I was baffled by the original concerns, but was patient because I was new and I trusted that my reasoning would prevail. The continued pushback, though, was completely irrational. It betrayed a deeply entrenched and deeply anti-data mindset. At this point it became clear that the culture was a bigger obstacle for me than any particular pre-existing storage concerns or purging requirements.
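
For what it's worth, the whiteboard math was nothing sophisticated. A back-of-envelope estimate like the one below (all of the inputs here are hypothetical stand-ins, not our actual numbers) is usually enough to turn a vague fear of "data bloat" into a concrete figure you can argue about.

```python
# Back-of-envelope estimate of alert-history storage; every input is a hypothetical stand-in
clients = 10_000                 # active customers receiving alerts
alerts_per_client_per_day = 5    # average daily alerts per customer
bytes_per_record = 250           # client id, event id, timestamp, a little metadata

bytes_per_year = clients * alerts_per_client_per_day * bytes_per_record * 365
print(f"{bytes_per_year / 1e9:.1f} GB per year")  # about 4.6 GB with these made-up inputs
```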


I wish I could write this paragraph about our subsequent tidy and happy journey to becoming a data-driven culture. We certainly made progress, and I'm proud of that. But by the time I left we still had work to do.  

How did we make the progress we did? I became extremely vocal, though still polite (mostly). I started painting a picture of the opportunities lost by our lack of a bias for data. Whenever an analysis was hampered by data limitations that I found unreasonable, I explained how additional data could have improved it. As my aggressive instrumentation plans for new product features came under question, I explained the types of analyses that we could do if we had the instrumentation data, and I explained the ways we'd be blind without it. 

After several months of this, two things happened. First, my insistence became normalized. People started to expect it. They got bored by it. The path of least resistance shifted, and one day it became easier for the engineers to accommodate my requests up-front rather than sit through yet another meeting in which I justified them. Over time, some of them even started anticipating what I'd want, and more importantly, why I'd want it. The initial pushback was rooted more in cultural inertia than in principle, and over time I was able to shift the inertia. 

The second thing that happened is that people started to tease me. My insistence on instrumenting every damn action in our product became a running joke. It got to the point that our weekly team meeting would screech to a halt, all eyes expectantly on me, if a product owner even hinted at de-prioritizing instrumentation. I'm not sure what they expected me to do. But the fabulous part was that, by that time, I didn't have to do anything. I'd already made my point, and it was sinking in. "Don't tell Julie you're doing that!" was a common joke, but also an effective way for allies to advocate on behalf of data collection even when I wasn't in the room, and without incurring social costs. Mission accomplished! 

Data-driven culture isn't something that is achieved and then forgotten. It's a constant process. I've never worked on a team with no room for improvement on this (if you have, tell me how!). As with any other principle, there will be constant compromise, and that's ok. Personally, I care most about whether an organization has the right mindset in place, and whether they have a desire to improve. If these are present, I'm less concerned with exactly how far along a team is on its journey to becoming data-driven.  

Aiming to develop or improve a data-driven culture means fostering the behaviors and infrastructure that support evidence-backed decision making, experimentation, and data availability. With those foundations in place, I believe any organization can inch its way toward becoming data-driven, one experiment at a time.