Apr 20, 2015 | 3 minute read
written by Linda Bustos
I recently had the pleasure of joining Robert Kilonzo's Ecommerce Marketing Podcast to talk about one of my favorite online marketing topics: conversion optimization.
Listen to the audio recording below, and/or read the full transcript here. And while you're at it, check out the other great ecommerce podcast recordings in the archive (or subscribe by email/iTunes).
Top takeaways
Maybe the checkout funnel isn't the best place for your business to start. I'll give you an example of a software site I ran some tests for. This is a very unique software product: once a person is sold on it, there aren't really any competitors. The checkout process committed so many usability sins against "best practice," so we whipped up a new checkout process. We thought this was going to lift business a ton -- how could it fail? It was low-hanging fruit all over the place.
We ran the test and found it wasn't a huge gain, and we realized this is a product people will bang their heads against the wall to get through checkout for, because there is no other option. It's like when tickets to your favorite concert are only available through Ticketmaster: you'll put up with a frustrating process no matter how bad it is, because you need those tickets.
So the better area for a business like that is actually to shift the testing focus up-funnel, to where they're describing the product: driving traffic to a landing page, creating demand, persuasion, value propositions, price testing -- all that kind of stuff. That's going to get more people to the checkout, and then you'll see a high percentage of them complete it.
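To put rough numbers on that reasoning: overall conversion is the product of the rate at which visitors reach checkout and the rate at which they complete it. The figures in this TypeScript sketch are purely hypothetical, not data from the episode; they just illustrate why an up-funnel win can dwarf a checkout polish when completion is already nearly forced by a lack of alternatives.

```typescript
// Hypothetical funnel math -- none of these rates come from the episode.
// Overall conversion = P(reach checkout) * P(complete checkout).
function overallConversion(reachRate: number, completeRate: number): number {
  return reachRate * completeRate;
}

const baseline = overallConversion(0.05, 0.8);          // 5% reach, 80% complete -> 4.0% overall
const smootherCheckout = overallConversion(0.05, 0.85); // polishing checkout -> 4.25% overall
const moreDemand = overallConversion(0.1, 0.8);         // doubling up-funnel reach -> 8.0% overall

console.log({ baseline, smootherCheckout, moreDemand });
```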
Something I see a lot is people saying, "Yeah, we tested that and it didn't work." But all they actually ran was a with-versus-without test: variant A had the feature, variant B didn't.
Here's another example. I was on a mobile site the other day that had an overlay: when you hit the home page, it pops up a lightbox that says, "If you want a faster experience, download our app." The two buttons are "Download the app" and "Continue."
Now, say they had tested that lightbox against having no popup at all, and the lightbox underperformed. They might conclude, "Lightboxes don't work, so let's not use them." But what they may have missed is that those calls to action are very confusing. What does "Continue" mean? Continue to the app, or continue to the mobile site? That might be exactly what was going through people's heads.
So if you only test with and without, you might conclude the lightbox is a conversion killer, when really, had they also tested a clearer button -- "Return to site" or "No thank you" instead of "Continue" -- they might have found that the lightbox actually works better.
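If you're wiring up a test like this yourself, a three-way split is barely more work than a two-way one. Here's a minimal sketch in TypeScript; the variant names, visitor ID, and hash-based bucketing are all illustrative assumptions, not the setup from the episode or any particular testing tool's API.

```typescript
// A sketch of a three-way split test for the lightbox example:
// a no-lightbox control plus two CTA wordings, not just with/without.
type Variant = "no-lightbox" | "cta-continue" | "cta-return-to-site";

const VARIANTS: Variant[] = ["no-lightbox", "cta-continue", "cta-return-to-site"];

// Deterministically bucket each visitor so they see the same variant on
// every visit. A simple string hash over the visitor ID is enough here.
function assignVariant(visitorId: string): Variant {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep the hash in uint32 range
  }
  return VARIANTS[hash % VARIANTS.length];
}

// Log the assignment alongside conversion events so each CTA wording can be
// compared against the no-lightbox control, not just "lightbox vs. none."
const variant = assignVariant("visitor-12345");
console.log(`show variant: ${variant}`);
```

The point of the third arm is that a losing "Continue" variant no longer sinks the whole lightbox idea; the "Return to site" arm can still win on its own.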