Who remembers Marathon bars, now lovingly known as Snickers? Or Opal Fruits, renamed to Starburst? Rejected at the time, these brand names are now accepted and loved by consumers.
But it takes a lot of effort to build and win consumer trust, so it’s essential to monitor reaction. Whether it’s a name change, logo update, look and feel refresh or all of the above, we’ve worked through numerous brand updates on our customers’ conversion campaigns, testing the impact on consumer perception along the way.
Email is central to many of our campaigns, and as a direct consumer channel it's a great test bed for answering those burning rebrand questions: Will they still recognise the emails are from us? Will our new look affect how customers engage with our emails? Will we see a ripple effect on conversions?
To help you get the most out of your rebranding efforts, we thought we’d share a few of our top tips when it comes to measuring consumer perception, so you can gain the most valuable insights to build effective campaigns and maintain user trust.
Define your benchmarks
Put a stake in the ground. It sounds obvious, but without a starting point, you won’t know how far you’ve come. Typical perception barometers include metrics like email open rates, click-throughs, unsubscribes, and changes in sales patterns. Before you implement any changes and start testing, record the performance against each of your chosen metrics.
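To make the benchmarks concrete, the pre-change snapshot can be as simple as a handful of rates computed from your send logs. A minimal sketch, assuming you have raw counts per campaign (the function and field names here are illustrative, not from any particular email platform):

```python
def baseline_snapshot(sends, opens, clicks, unsubscribes):
    """Record pre-rebrand benchmarks as simple rates.

    The metrics match those named in the text (open rate,
    click-through, unsubscribes); store the result before any
    changes go out so later tests have a fixed reference point.
    """
    return {
        "open_rate": opens / sends,
        "click_through_rate": clicks / sends,
        "unsubscribe_rate": unsubscribes / sends,
    }


# Example: record the baseline for one campaign
baseline = baseline_snapshot(sends=10_000, opens=2_200,
                             clicks=500, unsubscribes=40)
```

Keeping the snapshot per campaign (rather than one global figure) makes it easier later to compare like with like, since open rates vary a lot by audience and send time.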
Make incremental changes
Focus on the journey, not just the destination. It’s only by introducing one change or variable at a time that you can truly measure the impact of each, and pinpoint the elements driving the biggest effects, positive or negative. For a clear view, map your metrics on a testing plan and introduce the changes you want to make gradually, rather than several at once. A balanced approach means you can manage the risk.
Hold back a bit
When testing two variations of one element against each other – typically in an A/B test – it’s useful to hold back a small proportion of your ‘sends’ as a ‘control’ group not exposed to any changes, similar to a holdout test we ran for EE. This means you can test the impact of both variations while keeping a protected segment from which a baseline can be established, providing a useful benchmark within the same test parameters. Say you’re testing two different versions of a new header banner in your email: you can split test these variations across 90% of your user base, and hold back 10% to receive the original.
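The 90/10 split above can be sketched as a simple randomised assignment. This is a minimal illustration, not a production send tool: the group names, the 10% holdout share, and the fixed seed are assumptions for the example.

```python
import random


def split_for_test(user_ids, holdout_share=0.10, seed=42):
    """Randomly assign users to variant A, variant B, or a held-back
    control group that keeps receiving the original design.

    A fixed seed makes the split reproducible, so the same users stay
    in the same group if the assignment is re-run.
    """
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)

    n_control = int(len(users) * holdout_share)
    control = users[:n_control]          # 10%: original header banner
    test_pool = users[n_control:]        # 90%: split between A and B

    half = len(test_pool) // 2
    return {
        "control": control,
        "variant_a": test_pool[:half],   # new banner, version A
        "variant_b": test_pool[half:],   # new banner, version B
    }


# Example: 1,000 users -> 100 control, 450 per variant
groups = split_for_test(range(1000))
```

In practice you would persist the assignment (e.g. as a field on the customer record) so the control segment stays untouched for the whole test window, not just a single send.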
Watch out for the pendulum swinging too far in the wrong direction. You may be testing your ‘from’ email address – if the segment receiving the new ‘from’ address shows a significant decrease in open rates, this suggests your customers don’t recognise you and have become less receptive.
Think about next steps: an email to customers with more information on your upcoming brand changes may help manage expectations and lead to a more positive response when you roll out. We also recommend you pinpoint the decrease and analyse the types of customers driving the open rate drop. Perhaps a specific segment needs more attention, to make them more aware of and open to the changes.
Test done. Results in. So, what next? The test is just one part of the process; interpreting the data and taking action comes next. You may have set a statistical significance target to trigger rolling out a change. Has the winning version won by enough for you to be satisfied that the change can be applied to 100% of your database? If yes, you can move on with confidence. If not, you may want to test again, or dig deeper into the data to look at the customer segments that might be swaying things before you do.
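One common way to check whether the winner has "won by enough" is a two-proportion z-test on open rates. The sketch below uses only the standard library; the opens/sends figures in the example are made up for illustration, and a real analysis might also want confidence intervals or a pre-registered minimum detectable effect.

```python
from math import erf, sqrt


def open_rate_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test comparing two variants' open rates.

    Returns (difference in open rates, two-sided p-value). A small
    p-value (e.g. below 0.05) suggests the gap between variants is
    unlikely to be down to chance alone.
    """
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # pooled rate under the null hypothesis that both variants perform equally
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value


# Example: variant A opened by 2,200 of 10,000, variant B by 2,000 of 10,000
diff, p = open_rate_z_test(2200, 10_000, 2000, 10_000)
```

If the p-value clears your threshold but the absolute difference is tiny, it may still not justify a full rollout – statistical significance and commercial significance are separate questions.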
These are just a few guiding principles that should help focus rebrand test efforts. If you’d like to discuss your tests in more detail, please feel free to get in touch with one of our analysts.