A/B Testing - Customer feedback strategy

When creating an actionable A/B test, I set out to answer the following questions:

What do I want to test?

Why do I want to test it?

How much time is it going to take me to run this test?

What am I going to do with the results? (The most important question in my opinion)

And with the above answered, I can feel confident moving forward with a solid testing strategy, knowing I'm not wasting time.

Introducing: the A/B testing customer feedback strategy.

What do I want to test?

As a Product Marketing Manager, I want to understand how people use Parcel so I can drive customer feedback into action (and if you know Avi Goldman 💌, you know he has no chill when it comes to turning feedback into product updates). Please, someone, give me feedback so we can create more cool things. This is the first role where I blink and 5 new things have been implemented. Product launching things before marketing is aware of the updates is a super unique problem, I know (🤥), but at the rate Avi ships them, and for email-specific features I love as an end user, it's kind of fun? 🤣

A problem I’m facing is that emails to customers asking for feedback, or asking to hop on a call to discover customer needs, can sometimes read like a cold sales email, which is my absolute nightmare as an email marketer (no offense, my lovely salespeople out there). This type of email also shows up regularly in my own inbox from other brands doing the same, so I know I’m not the only marketer looking for customer insights directly.

So, in my quest to gather feedback, I’m running a "Catch-all vs. Specific" A/B test to see whether an email that asks multiple questions drives more engagement than one that asks a single question. Variant A asks users several questions, with the hypothesis that at least one will resonate with them and they’ll be more inclined to answer. Variant B asks just one question, which, compared to Variant A, could be more approachable and less overwhelming to answer, but may not resonate with everyone.

Why do I want to test it?

To see if giving subscribers more opportunities to provide feedback results in a higher reply rate (i.e. more feedback!). Or does the short-and-sweet approach come off as more “approachable” to reply to?

How much time is it going to take me to run this test?

I’d like to run this test until a healthy sample size has received both variants, so I don’t have to take the results with a grain of salt. I’m going to monitor the test and check back in one month’s time.
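For anyone curious what “not taking the results with a grain of salt” could look like in practice, here’s a minimal sketch of a two-proportion z-test comparing the reply rates of the two variants. The reply counts, send counts, and 0.05 threshold are made-up assumptions for illustration, not Parcel data or the exact method I’ll use.

```python
import math

def two_proportion_z_test(replies_a, sends_a, replies_b, sends_b):
    """Compare reply rates of two email variants with a two-proportion z-test."""
    p_a = replies_a / sends_a
    p_b = replies_b / sends_b
    # Pooled reply rate under the null hypothesis (no real difference between variants)
    p_pool = (replies_a + replies_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: Variant A (catch-all) vs. Variant B (specific)
p_a, p_b, z, p_value = two_proportion_z_test(replies_a=42, sends_a=500,
                                              replies_b=28, sends_b=500)
print(f"Variant A reply rate: {p_a:.1%}, Variant B reply rate: {p_b:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
print("Likely a real difference" if p_value < 0.05 else "Could still be noise")
```

With those example numbers the difference looks promising but wouldn’t clear a 0.05 threshold yet, which is exactly why I want to let the test run for a full month before drawing conclusions.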

What am I going to do with the results?

I’m going to carry the results into the way I communicate with customers going forward, whether that means multiple generic questions to appeal to a wider audience or a single specific question that keeps the copy short and sweet!

My hypothesis

I am torn on this one. I really want to think that Variant A is going to drive higher engagement, given the opportunity to resonate with at least one of the questions.

But on the other hand, I also understand why it could come off as a lot to digest, and maybe one question is the way to go!

The results

Stay tuned! This test was launched on September 20th, 2022.
