How to grow your product with Impact Analysis

Once upon a time in startup land, there was a founder who was afraid of numbers. They worked hard day in and day out, slept three hours a night, and managed to win clients for their very important and useful product. One fine day they decided to release v8.3.5 with a new, revamped app experience. But alas, something went wrong: clients seemed to disappear, and success felt so near and yet so far.

You cannot improve what you cannot measure.

Some people have a finger on their product's pulse, but that is rarely easy and often sheer luck. There is a methodology that, if followed, will help you measure the impact of your product releases: instead of releasing the new version of your product to your entire audience at once, A/B test it against a control group.

Suppose you change the deposit experience based on internal feedback. Internal users are a start, but humans are complex and a bit finicky, so wouldn't it be better to ask your real users directly how they feel about the change? For deposits, we obviously expect the conversion rate to move; whether for good or bad, we will find out. First, benchmark the previous daily ratios of initiated vs. failed and initiated vs. successful transactions so we have something to compare against. Then create two new builds of the app update: one with the new deposit flow, and a second that is a dud, a placebo with exactly the same deposit experience as the current version. Segment 2% of the audience to receive the new flow while a different 2% receives the dud, and measure both ratios over a fixed length of time.

| % change | Initiated vs Successful | Initiated vs Failed |
|---|---|---|
| Variation Group | +1.7% | -2.4% |
| Control Group | +0.4% | -0.2% |
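
To make the arithmetic concrete, here is a minimal Python sketch of the comparison above. All counts, group sizes, and names are hypothetical, so the outputs will not match the table.

```python
# Minimal sketch of the deposit-flow A/B comparison described above.
# Every number and name here is hypothetical, for illustration only.

def rate(outcomes: int, initiated: int) -> float:
    """Ratio of a transaction outcome to initiated transactions, in %."""
    return 100.0 * outcomes / initiated

def pct_change(current: float, benchmark: float) -> float:
    """Signed percent change of a rate against its pre-release benchmark."""
    return 100.0 * (current - benchmark) / benchmark

# Pre-release benchmark, measured over the whole audience.
benchmark_success = rate(8_200, 10_000)  # initiated vs successful
benchmark_failed = rate(1_100, 10_000)   # initiated vs failed

# Variation group: 2% of users, new deposit flow, fixed test window.
variation = (pct_change(rate(167, 200), benchmark_success),
             pct_change(rate(21, 200), benchmark_failed))

# Control group: 2% of users, placebo build, same window.
control = (pct_change(rate(165, 200), benchmark_success),
           pct_change(rate(22, 200), benchmark_failed))

print(f"Variation Group: {variation[0]:+.1f}% successful, {variation[1]:+.1f}% failed")
print(f"Control Group:   {control[0]:+.1f}% successful, {control[1]:+.1f}% failed")
```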

This is impact analysis. If you can release each feature separately, you can measure the impact every new release has on your metrics. In case of negative impact, you retain control: you can roll back the version for your variation group and postpone the wider rollout until you find and fix the root cause.

You can further measure the short- and long-term impact of your release by tracking it over periods of 2 days, 7 days, and 14 days. That makes it easy to see whether you are moving towards or away from your short- and long-term goals.

Product Health Metrics

The primary metrics that define the performance of the app, and on which other metrics rely, constitute the product health metrics. These need to be defined based on analysis of your own historical data and that of competitor apps in the same industry, e.g. Avg Num of Sessions, DAU, ARPU, Avg Session Duration, Avg Num of Games Played.
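
As a rough illustration, metrics like these could be derived from a raw session log as sketched below; the pandas DataFrame and its columns (user_id, session_start, duration_sec, revenue, games_played) are assumptions, not a prescribed schema.

```python
# Sketch: deriving a few health metrics from a session log held in a
# pandas DataFrame. Column names are assumed for illustration.
import pandas as pd

def health_metrics(sessions: pd.DataFrame) -> dict:
    users = sessions["user_id"].nunique()
    daily_users = sessions.groupby(sessions["session_start"].dt.date)["user_id"].nunique()
    return {
        "DAU": daily_users.mean(),                     # avg daily active users
        "Avg Num of Sessions": len(sessions) / users,  # sessions per user
        "Avg Session Duration": sessions["duration_sec"].mean(),
        "ARPU": sessions["revenue"].sum() / users,     # revenue per user
        "Avg Num of Games Played": sessions["games_played"].mean(),
    }
```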

Impact Events

The events that need to be monitored to trigger impact analysis have to be defined for every team; some examples are listed below.

| Product | Marketing | Retention | IT | Support / Customer Experience | Fraud | Account |
|---|---|---|---|---|---|---|
| App version updates | New campaigns | New campaigns | Server maintenance | New support channels | Changes in fraud detection process | Changes in KYC verification process |
| Backend git pushes | Introduction of new tools | Introduction of new tools | Infrastructure changes | Introduction of new communication formats | Introduction of new fraud rules | Introduction of new tools |
| Introduction of new tools | | | Introduction of new tools | Introduction of new tools | Introduction of new tools | |

Monitoring

After the initial event, the health metrics have to be monitored and analyzed on a timeline to identify a relationship, if any, between the event that triggered the impact analysis and the health metrics.

Event: New Version Update
Metric: Avg Session Duration

| | 4 hours | 2 days | 7 days | 14 days |
|---|---|---|---|---|
| Magnitude (% change) | +2.8% | +1.6% | +0.2% | 0% |

We can deduce that the new app update had an immediate impact on 'avg session duration' but no effect in the long term. Note: as we move further away from the point of event origin on the timeline, there is a greater chance of inaccurate data, because the metric gets affected by other events. In such cases we still have the other 'probable' events being monitored in the pipeline, while the 'most probable' event is always the one closest to the impact on the timescale.
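
A monitoring step along these lines could be sketched as follows, assuming the metric is available as a pandas Series on a sorted datetime index; the checkpoint list and helper names are illustrative, not part of any prescribed pipeline.

```python
# Sketch: percent change of a health metric at fixed checkpoints after an
# impact event, against its 30-day pre-event average. Assumes `metric` is
# a pandas Series indexed by a sorted DatetimeIndex.
import pandas as pd

CHECKPOINTS = ["4h", "2d", "7d", "14d"]

def magnitudes(metric: pd.Series, event_time: pd.Timestamp) -> dict:
    # Benchmark: average over the 30 days preceding the event.
    baseline = metric.loc[event_time - pd.Timedelta(days=30):event_time].mean()
    out = {}
    for checkpoint in CHECKPOINTS:
        at = event_time + pd.Timedelta(checkpoint)
        observed = metric.loc[:at].iloc[-1]  # latest value at the checkpoint
        out[checkpoint] = 100.0 * (observed - baseline) / baseline
    return out
```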

Magnitude of Impact

The magnitude can be objectively measured as the percent (%) change between the metric's current value and its average over the last 30 days. The sign makes it clear whether the impact was positive (+) or negative (-), and the value shows to what extent.

E.g.: assume the DAU over the last month averaged 1M (1,000,000). A new event (a retention campaign) triggers an impact analysis.

Event: Retention Campaign
Metric: DAU

| | 4 hours | 2 days | 7 days | 14 days |
|---|---|---|---|---|
| DAU | 1,001,009 | 1,011,042 | 1,013,568 | 1,015,833 |
| Magnitude (% change) | +0.10% | +1.09% | +1.33% | +1.55% |
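
For instance, the 4-hour magnitude works out to (1,001,009 - 1,000,000) / 1,000,000 × 100 ≈ +0.10%.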

In this case, we can safely conclude from the magnitude values that the 'retention campaign' event had a positive (+) impact on DAU.

Accountability

A dedicated data analytics team (assumed to be part of the product team) will be responsible for the impact analysis.

Coordination with business stakeholders

We need coordination with other teams on the two fronts below. A single point of contact (SPOC) from each team will coordinate with the others.

a. Other teams communicating impact events to the product team: each team should know its own impact events beforehand and communicate them to the product team at least 48 hours before the event, for example over email (see the sketch after this list).
b. The product team sharing impact magnitudes with management and other teams: the results of the impact analysis can be shared with the other teams at each checkpoint, or once at a weekly review meeting. If the impact is negative and requires a rollback, the analysis should be shared immediately, as soon as the product analytics team reaches that conclusion and decides to escalate.
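
One possible shape for such a notification is sketched below as a Python record; the fields mirror the 48-hour rule above but are otherwise an assumption, not a prescribed schema.

```python
# Sketch: a shared impact-event record that a team's SPOC could submit to
# the product team. Field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ImpactEvent:
    team: str                # e.g. "Marketing", "IT"
    name: str                # e.g. "New campaign", "Server maintenance"
    scheduled_at: datetime   # when the event goes live
    submitted_at: datetime   # when the SPOC notified the product team

    def meets_notice_window(self) -> bool:
        """Checks the 48-hour advance-notice rule described above."""
        return self.scheduled_at - self.submitted_at >= timedelta(hours=48)
```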

Get a pulse on your product and watch it succeed.

