With increasing access to customer behavioral data, it is natural to look for patterns, draw conclusions, and feel confident in what the data is telling us. In fact, that is exactly the job: Marketing Analysts are valued because we test assumptions and return data that appears to support or refute them. With data on millions of individual Facebook users, for example, each with thousands of individual actions, we feel confident that trends equal facts. That may be, but are we measuring the truth about the behavior, or about the data collection process? Quantum physics holds that the act of observing a phenomenon influences the phenomenon. If a tree falls in the woods and no one is there to hear it, does it make a sound? And if the rate of falling trees during a given period increases, is it because of a measurable cause, or because we are watching? When we act on our conclusions, we forever change the landscape we are measuring.
Zeynep Tufekci, UNC professor, Princeton CITP fellow, and author of Big Data: Pitfalls, Methods and Concepts for an Emergent Field, sums this up for us. “A lot of big data research is done in an isolated, one-shot, single-method manner with no way to assess, interpret or contextualize the findings,” she said. “There is great potential for error and misunderstanding; worse, with a lot of money flowing into this space, there is a lot of pressure to produce ‘results’ and overlook the fact that methods that were not developed to study humans, and do not necessarily work the same way, are being applied widely. The online imprints that create these large, aggregate datasets are not just mere ‘mirrors’ of human activity; rather, they are partial, filtered, distorted and complex reflections.” 1
Measuring Success: E-Commerce KPIs
To test success, we measure performance by Key Performance Indicators (KPIs): easily measured events that have a strong correlation to the desired result. An example in online advertising is Clicks as an instant indicator of future revenue. We know that not all clicks are the same. Some users bounce, while others convert. Historical data may tell us that a certain percentage of clicks eventually convert, and we call this KPI the Activation Rate. But a rise or fall in Activation Rate does not tell us about the quality of each conversion; some people spend a lot, others spend the minimum they can. So we look to Average Order Value (AOV) to measure the quality of a conversion in terms of revenue. Yet one transaction is not the entire story. Our best customers repeat, and this Repeat Rate, combined with the previous KPIs, provides a more complete picture of our results.
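The three KPIs above are simple ratios, and a minimal sketch makes the definitions concrete. All the numbers below are hypothetical, and the exact definitions (e.g., which window counts as "eventually convert") vary by team:

```python
def activation_rate(clicks, conversions):
    """Share of clicks that eventually convert."""
    return conversions / clicks if clicks else 0.0

def average_order_value(revenue, orders):
    """Revenue per order, i.e., the quality of a conversion in dollars."""
    return revenue / orders if orders else 0.0

def repeat_rate(orders_per_customer):
    """Share of customers who placed more than one order."""
    if not orders_per_customer:
        return 0.0
    repeaters = sum(1 for n in orders_per_customer if n > 1)
    return repeaters / len(orders_per_customer)

# Hypothetical campaign numbers
clicks, conversions = 10_000, 250
revenue, orders = 18_750.0, 250
orders_per_customer = [1, 3, 1, 2, 1, 1, 4, 1]

print(f"Activation Rate: {activation_rate(clicks, conversions):.2%}")  # 2.50%
print(f"AOV: ${average_order_value(revenue, orders):.2f}")             # $75.00
print(f"Repeat Rate: {repeat_rate(orders_per_customer):.2%}")          # 37.50%
```

Note that each ratio answers a different question (volume, value, loyalty), which is why no single one of them tells the whole story.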
Once we know what we are looking for to measure the quality of conversions, the process of optimizing performance is full of treasure and traps. Below are some of the common traps we fall into when we place too much faith in short-term indicators.
E-Commerce Optimization Traps
1) A/B Testing – Am I Optimizing to Impulsiveness?
This form of testing attempts to optimize performance by testing slight variations of an ad or user experience (UX) on two groups of randomly selected users. Typically, one (we will call it “A”) is the current standard, and the other (“B”) is a slightly altered version. Many say the trick is to alter only one factor per test, so that any performance change can be traced to a single cause. This form of testing requires a fairly quick measurement of success in order to iterate, so we have to pick a short-term KPI like opens, clicks, or time on page. Optimizing to instant actions has one major downside: we can easily end up optimizing to people who are likely to act impulsively rather than attracting the qualified long-term participants we can take to the bank. Impulsiveness yields faster results. Faster can be better, but it may come at the expense of repeat purchases and long-term retention. Running longer tests with long-term KPIs like the 7-day, 30-day, or 90-day Activation Rate will give us a better sense of the true impact of subtle changes. Often, time is a luxury we do not have, so at the very least, hedging bets with tried and tested methods can save the day.
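The short-term vs. long-term tension can be sketched with hypothetical numbers. A common way to compare two variants is a two-proportion z-test (normal approximation); the counts below are invented to illustrate a variant that wins on clicks but loses on the 30-day Activation Rate:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-score for the difference between two rates (normal approximation).
    Positive means B's rate is higher; |z| > 1.96 is roughly p < 0.05."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: variant B "wins" the instant KPI...
clicks_a, impressions_a = 400, 10_000   # 4.0% CTR
clicks_b, impressions_b = 480, 10_000   # 4.8% CTR
z_ctr = two_proportion_z(clicks_a, impressions_a, clicks_b, impressions_b)
print(f"CTR z-score: {z_ctr:.2f}")      # significantly positive

# ...but among those clicks, the 30-day Activation Rate moves the other way.
activations_a, activations_b = 40, 34   # 10.0% vs ~7.1% of clicks
z_act = two_proportion_z(activations_a, clicks_a, activations_b, clicks_b)
print(f"30-day Activation z-score: {z_act:.2f}")  # negative
```

A test judged on the first KPI alone would ship variant B; judged on the long-term KPI, the call is far less clear.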
2) Click Tricks – Am I Optimizing to Gullibility?
I have heard a client say, “It is our job to trick people into clicking.” Overemphasizing short-term KPIs can lead us to set up experiences that trick people into acting the way we want them to, whether through misleading interfaces or hidden agendas (e.g., “Click here for a free iPad” leading to a site selling gym memberships). If a customer can be tricked into participating, we inflate our short-term KPIs and might get a pat on the back from the folks upstairs today. Next quarter, we will be explaining to the back-patters why AOV, Engagement, and Retention are down. A better experience should be the ultimate goal, making people more willing to click AND more likely to find that the destination holds the treasure they were seeking. Integrating user psychology and behavioral statistics allows us to deliberately engineer that better experience.
3) Surveys – Who has time for that?
The KPIs listed above are great for measuring value in terms of dollars, but how do we measure intangibles like how people feel when they act? Why are they purchasing? How do they feel about the purchase 30 days later? Why did they consider the brand? Marketing teams love to use lengthy surveys for this. The problem with surveys is that responders are almost never a representative sample. First, we only get responses from people who are willing to invest their time in a survey. If we add an incentive for completing the survey (like a coupon or store credit) to get more responses, then we get responses from people who are attracted to incentives. Likewise, if we email a survey to the people on our list about their experience of the emails they receive, we learn nothing from the people who already unsubscribed. Survey design is an art and a science, one that can yield amazing results as long as we are careful to weigh our results against the potential biases within the group surveyed. Often, the most valuable information is held by the people who abandon carts or unsubscribe from email lists.
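One partial correction for unrepresentative responders is to reweight answers by segments whose true shares we already know from sales data (post-stratification). This is a minimal sketch with hypothetical numbers; it corrects for over- or under-response by segment, but not for biases within a segment:

```python
# Known segment shares from our own sales data (assumed for illustration)
population_share = {"new": 0.60, "repeat": 0.40}

# Hypothetical survey results: new buyers under-respond relative to their share
responses = {
    "new":    {"count": 50,  "satisfied": 30},   # 60% satisfied
    "repeat": {"count": 150, "satisfied": 120},  # 80% satisfied
}

# Naive rate: treats every response equally, so repeat buyers dominate
raw = (sum(r["satisfied"] for r in responses.values())
       / sum(r["count"] for r in responses.values()))

# Weighted rate: each segment's satisfaction scaled by its true share
weighted = sum(
    population_share[seg] * (r["satisfied"] / r["count"])
    for seg, r in responses.items()
)

print(f"raw satisfaction:      {raw:.1%}")       # 75.0%
print(f"weighted satisfaction: {weighted:.1%}")  # 68.0%
```

The gap between the two numbers is exactly the kind of distortion a naive read-out of survey results hides, and no reweighting can recover the opinions of people who never responded at all.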