- 3 min read
- September 04, 2020
In the summer of 2019, Eric and I grabbed seats at Old Town Bar. Pre-COVID, we used to meet for drinks reasonably often to chat, decompress, and have a good time. This time, however, we wanted to talk to each other about starting a project. We weren't hatching a plan to disrupt an industry (and we still aren't). We knew we wanted to build something together, and that our interests, skill sets, and values meshed. It wasn't until well into the fall that we picked up the thread that would eventually lead us to where we are today.
Eric and I are both immense believers in data-driven decision making. Product teams of every size use qualitative and quantitative data to inform and validate their understanding of their users. We were interested in organizations that promote and democratize that data. Are those organizations any more effective at building products? How are they collecting that information; how are they using it; how are they building on it? Collectively, these questions are referred to as Product Analytics, and that was the thread we started pulling on.
We wanted to understand how product organizations were collecting and using their analytics data. We learned that startups begin using quantitative data to understand their users very early. Quickly, that data collection grows and becomes more sophisticated. As their pool of data increases, so does the complexity of what it enables. It's not long before data that's collected to describe how a user interacts with the product becomes business data. And business data becomes company-wide objectives and key results.
We started talking to other software engineers, product managers, and founders across the industry about how they use data to inform their decisions. We learned about their processes, how they structured their analytics, and what types of insights they were trying to gain. We learned about team sizes, structures, and approaches. What surprised us was the amount of pain organizations experienced around product experimentation. Everyone we spoke to wanted experimentation to be better, needed it to be better.
We heard about cobbled-together and custom-built solutions. We heard from companies with entire teams of data scientists validating, running, and analyzing experiments. Teams of all sizes and complexities felt that experimentation was painful, but so critical they were willing to wade through that pain. What was it that was so painful?
We expected the struggle to be around developing meaningful experiments, but what we discovered was too much uncertainty. Teams had well-thought-out hypotheses but had to take countless extra steps only to end up with an unreliable answer. Could they trust the results? How many impressions would they need? What is an impression anyway? And what would it cost to run all those experiments?
We've heard from Amazon, Netflix, and Uber about how experimentation is the heart of product organizations, yet even these organizations, with enormous resources and teams, find it painful. How can you be experiment-led and data-driven if your tools aren't working for you?
The Ah-Ha Moment
At this point, we were grasping the actual problem. It's not that these organizations don't have enough people, that the teams aren't asking the right questions, or that there isn't enough data. It's that the tools aren't working for them. The tools are the problem. That's the foundation of Context: to solve the fundamental issues that make experimentation so frustrating and painful, and to enable teams to painlessly create better experiences through data, analysis, and experimentation.
Fast integration. Early-stage startups and hundred-person teams alike spend weeks, or even months, integrating experimentation tools. We're aiming to start providing results in hours. We're doing this by integrating directly into each team's preferred product analytics tool.
Flexibility. Integrating directly means that we provide an incredible amount of flexibility. Teams can use our SDK to run experiments and then choose to use our Bayesian Statistics Engine to calculate results, use their own methodology, or combine our results with theirs.
Pricing. Finally, everyone we spoke to found impressions-based pricing incredibly confusing. Our math needed to be straightforward: the number of experiments multiplied by the number of users who see them.
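As a minimal sketch of the pricing arithmetic described above: cost scales with the number of experiments times the number of users who see them. The function name and the per-unit rate below are illustrative assumptions, not Context's actual pricing.

```python
def estimated_price(num_experiments: int, users_exposed: int,
                    rate_per_unit: float = 0.01) -> float:
    """Price = experiments x exposed users x a (hypothetical) unit rate.

    No impression counting involved: the only inputs are how many
    experiments you run and how many users see them.
    """
    return num_experiments * users_exposed * rate_per_unit


# Example: 3 experiments, each seen by 10,000 users,
# at a hypothetical rate of $0.01 per experiment-user.
print(estimated_price(3, 10_000))  # -> 300.0
```

The point of the sketch is that the bill is predictable from two numbers a team already knows, rather than from a fluctuating impression count.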
All of that brings us to today—the start of a start. Reflecting on what I've learned, what we've learned, and what we've accomplished so far, I can't hide my excitement. But this is just the beginning. The real story is what's to come.