Boosting Plotline's CSAT to 9.1 with a new Dashboard Experience

Team Composition

2 designers, 1 frontend developer, 1 backend developer


Project Duration

2 weeks


Categories

UX Design, UI Design, Data Visualization, User Research


Context

What is Plotline?

Plotline is a SaaS product that enables users to publish tooltips, modals, and bottom sheets (collectively known as in-app engagement) without requiring developer involvement.

What is in-app engagement?

Typically, these are campaigns for onboarding users or driving adoption of new features. A campaign could use tooltips, modals, embeds, or any combination of those.

What was my role?

Together with my fellow designer and the engineering team, I redesigned the layout and data visualization to address issues with how users comprehended and consumed campaign data.

The previous dashboard - data was harder to consume, and there was no visual preview of the campaign

The redesigned dashboard

The Problem

What was the problem?

Initially, the time we spent explaining the metrics to users on calls or over Slack.

Over time, a lack of confidence in campaign performance, since users didn't understand how the metrics were calculated.

Why was this important?

Lack of confidence in campaign performance caused a lack of confidence in the product and dipping CSAT (Customer Satisfaction) scores.

This also threatened our Net Revenue Retention (NRR) on our path to $1M in ARR.

TL;DR

Users assumed their campaigns weren't performing well when, in reality, they were, and this led to dissatisfaction with Plotline. Essentially, we did a poor job of explaining and representing the metrics.

Goal-Setting

Since difficulty in understanding the data and lack of trust in the data were the key pillars of the problem, we articulated the goal or guiding principle as follows:

How might we make the data easier to consume and inspire confidence in campaign performance for users?

Don't enjoy walls of text in case studies?

I hear you. Skip to the final output with the button below!

The Process

The process I followed is roughly summarized below. It doesn't come close to capturing the chaos and despair of having to convince the engineering team to build the feature, though.

  • Understanding the Problem Space

  • Understanding the User

  • Research of Similar Products

  • Collating Problems, Observations and Opportunities

  • Ideation and Concepts

  • Hi-Fidelity Prototypes

  • Handoff to Developers

Research and Feedback

A collection of the insights we had gathered from scattered interviews for the Dashboard overhaul

primary users

Our primary users were product managers and growth managers who:

  • were responsible for growth metrics such as feature adoption and user conversion

  • used Plotline for in-app nudge campaigns

  • looked for obvious insights in the data - wins and losses in impact metrics

  • wanted to easily share information about winning campaigns with their stakeholders

Research for the project came from continuous discovery with users rather than a dedicated research phase; for the most part, these were semi-structured interviews over Google Meet.

key observations

App Opens, Eligible Users, Target Users, Clicks and CTRs were all part of a funnel, but the existing implementation obscured this relationship.

Users had a strong preference for visual explanations over text-based ones, given the limited mental bandwidth their work allowed.

Growth managers were used to conventions such as bar charts with color coding for conversions and drop-offs - conventions our line chart violated.

Final Design Output & Decisions

1 | Redesigning Funnel Metrics, Temporal Views and Unique Numbers

observations

The existing implementation obscured the funnel relationship between App Opens, Eligible Users, Target Users, Clicks and CTRs. Also, some metrics were accessible only via CSV exports.

decisions

We used Gestalt principles to establish a visual relationship (left to right, like a timeline) and gave more prominence to Unique user counts - what users ultimately looked for - over Total counts.

Before

After
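To make the funnel relationship concrete, here is a minimal TypeScript sketch of how the metrics relate to one another - the names and the CTR definition are assumptions for illustration, not Plotline's actual data model:

```typescript
// Hypothetical shape of a campaign's funnel metrics (illustrative only).
// Each stage is a subset of the one before it, which is why a
// left-to-right layout reads like a timeline of the same funnel.
interface FunnelMetrics {
  appOpens: number;      // unique users who opened the app
  eligibleUsers: number; // unique users matching the campaign's audience filters
  targetUsers: number;   // unique eligible users who were shown the nudge
  clicks: number;        // unique users who clicked the nudge
}

// Assumed CTR definition: unique clicks over unique target users.
function ctr(m: FunnelMetrics): number {
  return m.targetUsers === 0 ? 0 : m.clicks / m.targetUsers;
}
```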

2 | Explaining Key Metrics

observations

Plotline's dashboard contained metrics that users did not fully grasp, and explanations were buried deep inside the product's help center (which would break their flow).

decisions

We moved the explanations into the product and represented each metric on a Sankey diagram of a hypothetical campaign, establishing its relationship to the other metrics.

Before

Users had to either search for the answer in the Help Center or contact us

After

The current version - explanations sit within the product so they don't break users' flow

Metric definitions - Expanded
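Since the Sankey is built from a hypothetical campaign, its underlying data can be as simple as a node/link structure. A minimal TypeScript sketch, with made-up stage names and counts:

```typescript
// Stages of the hypothetical campaign shown in the explanatory Sankey.
const nodes = ["App Opens", "Eligible Users", "Target Users", "Clicks"];

// Each link carries the number of users flowing from one stage to the next;
// the shortfall at each stage is the drop-off the diagram makes visible.
const links = [
  { source: "App Opens", target: "Eligible Users", value: 6000 },
  { source: "Eligible Users", target: "Target Users", value: 4500 },
  { source: "Target Users", target: "Clicks", value: 900 },
];
```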

3 | Linking Impact Metrics

Impact Metrics were terminal goal events used to measure the impact of a campaign by comparing control and target users at its end.

observations

Impact metrics were hard to scan in one pass. Users were also more interested in the impact percentage of test vs. control than in completion rates or journeys triggered.

decisions

The visualizations were made more cohesive and intuitive by color-encoding positive vs. negative impact and auto-sorting metrics in decreasing order of impact.

Before

After
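A TypeScript sketch of the sorting and color encoding described above - the record shape and the impact formula are assumptions for illustration, not Plotline's actual definitions:

```typescript
// Hypothetical impact-metric record: conversion rates for control and target groups.
interface ImpactMetric {
  event: string;       // e.g. "Completed Checkout"
  controlRate: number; // conversion rate among control users
  targetRate: number;  // conversion rate among target users
}

// Assumed impact definition: relative lift of target over control, as a percentage.
const impactPct = (m: ImpactMetric): number =>
  m.controlRate === 0 ? 0 : ((m.targetRate - m.controlRate) / m.controlRate) * 100;

// Auto-sort in decreasing order of impact and encode the sign as color,
// so wins and losses can be scanned in a single pass.
function toRows(metrics: ImpactMetric[]) {
  return [...metrics]
    .sort((a, b) => impactPct(b) - impactPct(a))
    .map((m) => ({
      event: m.event,
      impact: impactPct(m),
      color: impactPct(m) >= 0 ? "green" : "red", // positive vs. negative impact
    }));
}
```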

4 | Visualization of Stepwise Drop-offs

Stepwise drop-offs showed the percentage of users converting or dropping off at each step of campaigns that used multiple UI nudges to guide users.

observations

  • Users were used to bar chart representations of conversion funnels and struggled to grasp relative magnitudes in a line chart.

  • Data points were labeled with the step number, but users wanted to know which UI element was used for that particular step.

decisions

  • Drop-offs are now visualized as stacked bar charts, following a convention growth marketers were used to from tools such as Clevertap and Mixpanel.

  • Both conversion and drop-off have separate, selective details on hover, including the type of UI nudge used.

Before

The previous version of the explanations - for which users had to go digging or message us on Slack.

After
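A sketch of the data transform behind the stacked bars, in TypeScript - the step shape and field names are hypothetical:

```typescript
// Hypothetical per-step data for a multi-nudge campaign.
interface Step {
  name: string;         // e.g. "Step 2"
  nudgeType: string;    // e.g. "Tooltip", "Modal" - surfaced in the hover details
  usersReached: number; // unique users who reached this step
}

// Each bar is split into users who converted to the next step and users
// who dropped off, making relative magnitudes easy to compare at a glance.
function toStackedBars(steps: Step[]) {
  return steps.map((step, i) => {
    const next = steps[i + 1];
    const converted = next ? next.usersReached : step.usersReached; // last step: no further drop-off
    return {
      step: step.name,
      nudgeType: step.nudgeType,
      converted,
      droppedOff: step.usersReached - converted,
    };
  });
}
```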

5 | Easier Access to Campaign Preview

Campaign Previews gave users an idea of the content of the nudge campaigns and the types of UI elements that were used.

observations

Users could not check which UI nudges were used for the campaign, or for a particular step of the conversion funnel, without going into Edit mode.

decisions

The Campaign Preview was incorporated into the dashboard to prioritize recognition over recall. Users were now able to click through all the steps and check them against the conversion funnel.

Before

After

Impact of the Project

The redesign was integral to Plotline’s retention of some of its biggest clients and conversion of new ones.

I focused on keeping design fundamentals front and centre to increase trust and reduce calls asking us to explain metrics.

In summary, we cut the time it took to access a campaign's performance details and content, reduced the time Plotline spent explaining funnel metrics, and increased our users' satisfaction and trust.

Boosted Customer Satisfaction (CSAT) from 7.2 to 9.1 (arrested a downtrend and flipped it to an uptrend, with most users being promoters)

AT LEAST 5 hours per week per engineer saved in time spent explaining metrics over calls or sending CSVs to client teams

Prevented any reduction in Net Revenue Retention (NRR) or churn of our biggest clients well before the dashboard became a universal issue

A learning from the project - local components to speed up future changes

Learnings

01

More to client and user dissatisfaction than meets the eye

One of the most satisfying 'Aha!' moments for us was discovering that the problem wasn't with users' campaigns but with their understanding of the impact. Something as vague as ‘I am not happy with the performance of my campaigns’ can have much more to do with how information is presented to users than with the actual value a product provides.

02

Design is an integral part of driving business metrics

And it's great when the two can work in tandem. This project showed how much something as simple as a dashboard rethink can move metrics such as CSAT in the short term and Net Revenue Retention (NRR) over the long term.

03

It's faster (sometimes) when design can own the QA process

In a fairly fast-paced project with changes shipped over multiple iterations, designers have to don the QA hat and make sure design and tech are marching in lockstep. Taking ownership of QA during implementation allowed the team to ship this critical feature faster (with some compromises).

04

There's always scope for improvement (after you ship it)

We prioritized shipping this over numerous design reviews. Perhaps the UX copy could have been made more intuitive, to help explain vague terms such as ‘Eligible Users’. Not testing the solution with users first carried inherent risks, but in this case it didn't lead to noticeable consequences.


Aww, shucks! That's it?

For now. More case studies coming soon!