Ask Women in Product: How do I improve my analytical skills and interpret data to build a feature?

Women in Product
Jun 22, 2020


Amy Lin offers practical advice for product managers who struggle to incorporate data analysis into their product management toolkit.

Photo by Luke Chesser via Unsplash

Answer from Amy Lin

Amy Lin works on product management and strategy at Verizon Media. You can find her on Twitter at @amywtlin.

Introduction

Data is an integral part of a product manager’s daily job. It backs you up when you suggest an approach, and it helps you prioritize by validating or invalidating a “gut feeling” so your initiatives rest on more than hunches. Data also helps you evaluate whether a feature should be rolled out to general release.

However, data can be biased and sometimes even misleading. This article aims to help you understand what to consider when you’re examining numbers with the objective of using data to arrive at fair, unbiased insights. Throughout the article, I’ll also cite a few examples that show how and where you can potentially find data (note, though, that organizations treat the usage of data differently).

How are data used in product management?

Data can be used in multiple stages in the product development cycle. Specifically:

  • Ideation: to explore unknown opportunities.
  • Prioritization: to evaluate an opportunity so that its impact becomes comparable with others; and to set a clear, measurable goal for an initiative that represents real impact.
  • Post-launch evaluation: to measure the performance of a feature, or to evaluate results from an experiment to decide on the next course of action.

Types of data & where to find them

  • Transactional data: These are data generated organically as your application functions, such as order data or user data. They are the data necessary for the application to function properly. As long as you can gain read access to these databases (or a copy of them), you will already have them.
  • Web Traffic data: These can usually be tracked automatically with tools like Google Analytics.
  • Behavioral data: These are data that may not be typically tracked, like the clicks on a specific button or the number of impressions on a certain module. Behavioral data can be tracked using a custom event server or through tools like Amplitude or Google Tag Manager. In my experience, you’ll typically need to manually specify what you want to track, though there are tools that claim to do this automatically (see the sketch after this list).
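
To make the behavioral-data point concrete, here is a minimal sketch of manually tracking a custom event in Python. The endpoint URL, field names, and the track_event helper are hypothetical; tools like Amplitude and Google Tag Manager handle this plumbing for you.

    import time
    import requests

    def track_event(user_id: str, event_name: str, properties: dict) -> None:
        """Send one behavioral event to a (hypothetical) collection endpoint."""
        event = {
            "user_id": user_id,
            "event": event_name,        # e.g., "onboarding_cta_click"
            "properties": properties,   # attributes you may want to slice by later
            "timestamp": int(time.time()),
        }
        requests.post("https://analytics.example.com/events", json=event, timeout=2)

    # Track a click on a specific button, the kind of behavioral data that
    # usually isn't collected unless you explicitly ask for it.
    track_event("user_123", "onboarding_cta_click", {"screen": "welcome"})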

Approach

This section covers two scenarios: using data to evaluate an opportunity, and using data to assess the performance of a launched feature.

1. Start with a question to answer

Start with a question that you want answered and work backward to find what data you’d need to answer your question.

Having a specific question to answer helps you stay focused. If you have ever tried to analyze data yourself, you know all too well how easy it is to get distracted, lost in the ocean of data, and fatigued. There are so many things to explore! While free-roaming is fun and sometimes insightful, it’s better to focus on one thing at a time. Here we focus on actionable insights; the goal is to help you decide whether to move forward with an action.

Let’s say you are planning for the next quarter and evaluating several potential initiatives. One of them involves an onboarding revamp that you believe will increase the number of active users, which aligns with one of the goals for the next quarter. However, you are unsure how impactful the revamp can be. You start with a general hypothesis: “A better onboarding experience will activate more new users.” Then you refine the hypothesis several times, with each round more specific than the last, until you figure out what data will help you confirm (or disprove) it.

  • “A better onboarding experience will activate more new users.”
  • “A better onboarding experience will turn more new users into active users (where active users are users with ≥10 content clicks/day).”
  • “A better onboarding experience will turn more new users with < 5 content clicks/day into users with ≥ 10 content clicks/day.”

Notice what has happened here. With each round, you asked yourself the definition of each word in your hypothesis and used that insight to arrive at a statement that is more specific and more measurable. Now you will want to pull together these data points:

  • How many new users have historically turned into active users? (A minimal sketch of this calculation follows this list.)
  • How confident are you that a new onboarding experience will successfully convert more new users into active users?
  • How many new users, at a minimum, would you need to convert to active users to achieve the goal?
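
As a concrete illustration of the first data point, here is a minimal sketch that computes the historical activation rate from daily click counts. The tables and column names are hypothetical stand-ins for your own data; the ≥10 clicks/day threshold comes from the refined hypothesis above.

    import pandas as pd

    ACTIVE_THRESHOLD = 10  # content clicks/day, per the refined hypothesis

    # Hypothetical stand-ins for your own tables.
    new_users = pd.DataFrame({"user_id": [1, 2, 3, 4]})
    daily_clicks = pd.DataFrame({          # one row per user per day
        "user_id": [1, 1, 2, 3, 3, 3, 4],
        "clicks":  [12, 3, 8, 15, 11, 2, 1],
    })

    # A new user counts as "active" if they hit the threshold on any day.
    active_ids = set(
        daily_clicks.loc[daily_clicks["clicks"] >= ACTIVE_THRESHOLD, "user_id"]
    )
    activation_rate = len(active_ids & set(new_users["user_id"])) / len(new_users)
    print(f"Historical activation rate: {activation_rate:.0%}")  # 50% in this toy data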

You’ll now have a much clearer idea of where this opportunity stands among the others, and you’ll know whether you should pursue this course of action more aggressively.

2. Make sure you get the context right

Aside from gathering the data to answer your question, you will want to get more context on these numbers. One way to gain context is to compare one number to another. I know it is tempting to compare the same metric across different components: for example, the conversion rate of different pages or the click-through rate (CTR) of different buttons. Proceed carefully whenever you do this. Ask yourself: is it fair to compare the performance of this metric in two different places?

For example, if you try to compare the conversion rate on two different pages, have you considered whether a user’s readiness to convert is the same on both? Are the two pages situated at the same point in the user journey? If a page appears near the top of the funnel, it makes sense to expect a lower conversion rate, or no conversions at all, and you would naturally expect the conversion rate to rise as you get closer to the bottom of the funnel. While the same metric (conversion rate) can be collected from different components and compared, you must make sure they are contextually similar enough to be worth comparing.
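
To illustrate, here is a minimal sketch that compares conversion rates only between pages at the same funnel stage. Page names, stages, and figures are hypothetical.

    import pandas as pd

    pages = pd.DataFrame({
        "page":         ["home", "category", "product", "checkout"],
        "funnel_stage": ["top",  "top",      "bottom",  "bottom"],
        "visits":       [10000,  6000,       2500,      900],
        "conversions":  [50,     45,         300,       450],
    })
    pages["conv_rate"] = pages["conversions"] / pages["visits"]

    # Compare like with like: pages at the same point in the user journey.
    for stage, group in pages.groupby("funnel_stage"):
        print(f"\n{stage}-of-funnel pages")
        print(group[["page", "conv_rate"]].to_string(index=False))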

3. Beware of potential external influences

Aside from accounting for internal factors (like users’ intentions on a page, as mentioned in the last section), you should also familiarize yourself with what is happening outside of your application. If you see unexpected numbers, check for external factors that might have caused this phenomenon.

For example, if a landing page is doing better than the others, is it doing well intrinsically because of what is on that page, or is it being used as the landing page for an ad campaign that your product marketing intern is running? Accounting for these possible external influences will help you stay on top of how your product is actually doing, and may even spark more collaboration with your product marketer to grow the product even further!
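
One way to check for this kind of external influence is to break a page’s traffic down by source before judging its performance. A minimal sketch, assuming hypothetical landing pages and UTM sources:

    import pandas as pd

    sessions = pd.DataFrame({
        "landing_page": ["A", "A", "A", "B", "B", "B"],
        "utm_source":   ["paid_campaign", "organic", "direct",
                         "paid_campaign", "organic", "direct"],
        "sessions":     [8000, 1200, 300, 0, 1500, 400],
    })

    breakdown = sessions.pivot_table(index="landing_page", columns="utm_source",
                                     values="sessions", aggfunc="sum", fill_value=0)
    print(breakdown)
    # If page A's traffic is dominated by paid_campaign, its "better" numbers
    # may say more about the ad campaign than about the page itself.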

4. Don’t lose sight of the human factor

As you probably have noticed, I mentioned the word intent multiple times in the above sections. I do so because behind these numbers stand real people behaving and interacting with your product. Put yourself in their shoes and empathize with your users.

While data offers a good way to capture the general behavior, you will inevitably bump into situations where you just don’t know why users behave a certain way. That’s why qualitative data are just as important as quantitative data. I will not go into depth on this topic, but an important side note: user interviews may inspire you to find the real underlying issue, but don’t fall into the trap of thinking you have to build everything your users have asked for.

Troubleshooting / Q&A

In this section, I address two common pitfalls and offer a few workarounds.

1. What if I don’t have the data?

The previous section assumed you have the data at hand to analyze. But what if there’s no data? What can you do?

  • Take a step back and look for alternatives: do you have anything close enough to what you are looking for? Sometimes behaviors manifest themselves in another form in other data sources available to you. For example, if you want the CTR of a specific button but that metric isn’t tracked, you may still be able to infer it if the page the button leads to can only be reached through that button (assuming direct traffic, i.e. visits via a typed URL, can be ignored). In that scenario, the number of pageviews of the destination page should be roughly equal to the number of clicks on the button, which is good enough for a ballpark of how the button performs (see the sketch after this list). When looking for proxies, don’t be afraid to get creative!
  • Plan for the future: if you cannot find a viable alternative, can you make an estimate now and add tracking so you can adjust it later? All too often, you find yourself wishing that a number were tracked when it is not. What you can do is evaluate whether it is worthwhile to add tracking for this component for the long-term good. Also, if your development resources are tight and you don’t mind getting your hands dirty, check out Google Tag Manager, which lets you set up tracking events through a web UI after a one-time setup (very similar to installing Google Analytics’ JavaScript tracking code) so you can see reports in Google Analytics. It does require some understanding of HTML/CSS and JavaScript, but you’ll be thrilled by how much more you get to explore with the data it collects.
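
Here is a minimal sketch of the proxy described in the first bullet: if a destination page can only be reached through one button, its pageviews approximate that button’s clicks. All names and numbers are hypothetical.

    # Pageviews of the page that hosts the button.
    source_page_views = 42_000
    # Pageviews of the page the button leads to (only reachable via the button).
    destination_page_views = 3_150
    # Visits via typed URL, assumed negligible per the caveat above.
    direct_traffic = 90

    inferred_clicks = destination_page_views - direct_traffic
    inferred_ctr = inferred_clicks / source_page_views
    print(f"Inferred button CTR: {inferred_ctr:.1%}")  # ~7.3% with these toy numbers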

2. What do I do when the data doesn’t prove (or disprove) my hypothesis?

If you are testing a new feature and evaluating if it should be rolled out to general release, you might be asking yourself: What if it is not good enough? What if the treatment and the control seem to behave the same way or achieve the same outcomes?

With every experiment, you should have at least one main metric you expect to improve, accompanied by several supportive metrics that confirm the treatment doesn’t cause unwanted behavioral changes. For example, you might not want to sacrifice overall engagement for the CTR of a single button. Success can be straightforward: you see the main metric improve. Hooray! However, there will be times when the metric doesn’t improve, or even worsens. Below are some approaches that will help you decipher the results and decide what to do in this scenario (a minimal sketch of the metric check follows the list):

  • Differentiate the seemingly identical: it is entirely possible that the change doesn’t make a difference to users. It is also possible that it does, but the difference isn’t manifested in the main metric you’ve chosen. It’s in situations like this that the supportive metrics and other behavioral data can help you see whether users secretly find the new treatment more likable. For example, if you believe the new headline copy is much clearer than the original, what behavior should you see? Does the call to action (CTA) below the headline get more clicks because users now know what they can do with your product? Do users traverse more pages in a session? Does session time lengthen? Factoring in the technical effort and debt of keeping or removing it, you can then conclude whether the change is worth keeping.
  • Find out what isn’t working and what you want to do about it: your experiment is not working. Do not despair! There is still hope. Carefully examine what went wrong: did the treatment actually prevent users from doing what they were doing? What might have led to an unwanted result? Once you have a few hypotheses about what might not work, can you iterate on them? Can you address those issues in a timely, impactful manner?
  • Avoid the same mistakes: even if you decide not to roll out the feature in the end, you still take away new learnings from the experiment. What did you think would work that turned out to be a dud? What did you learn about the users? How can you and your team avoid making the same mistake down the road? What should you do differently next time? Even if this treatment doesn’t end up serving your users, it is not in vain if you carry the lessons learned into the next one.
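
To make the main-metric and supportive-metric check concrete, here is a minimal sketch of a two-proportion z-test comparing control and treatment. The metrics and counts are hypothetical, and most experimentation platforms report this for you.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (absolute lift, two-sided p-value) for treatment vs. control."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return p_b - p_a, 2 * (1 - NormalDist().cdf(abs(z)))

    # Main metric: button CTR. Supportive metric: sessions with any engagement.
    print("CTR lift, p-value:        ", two_proportion_z_test(480, 10_000, 540, 10_000))
    print("Engagement lift, p-value: ", two_proportion_z_test(6_200, 10_000, 6_150, 10_000))
    # Ship only if the main metric improves without the supportive metric degrading.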

Conclusion

Some product managers say data analytics gives them headaches and they have no idea where to start. It is actually not that intimidating once you know what you want from the data.

If you’re lucky, the data you need is available at your fingertips. And in those instances where not all the data is available, you can usually still find a proxy metric: something close enough to support your decision. As product managers, we make the most of what we have to make the best decisions we can, don’t we? Stay curious, keep an analytical mindset, and keep calm. You will be just fine.

Do you have a question? Ask Women in Product!
