Last week was an eventful one for all of us here in the US, with perhaps the most surprising election results in the history of modern political polling.

While others who specialize in data analysis explore what went wrong, I am reminded of how similar those challenges are to the ones we face in attempting to measure the performance of digital platforms.

As our industry talks more and more about becoming data-driven, it is critical that we also recognize that data does not always equal truth. Data can be wrong. Data can flat-out lie. And on the web, it can happen without us ever realizing it.

The Myth of Measurability

Most early entrants to online marketing came from backgrounds in other forms of media, where measuring results had always been a challenge, if not a complete crapshoot. The problem is as old as advertising itself. The famous line “Half the money I spend on advertising is wasted; the trouble is I don’t know which half” is attributed to marketing pioneer John Wanamaker, who was born in 1838.

160 years later, as we entered the digital era, even the primitive tools of server log analysis and hit counters seemed scientifically accurate compared to what we’d worked with before. But from those earliest days, it has been a challenge to get from raw numbers to meaning. Was that a new visit or a page reload? Was that visitor a customer? Your employee? Yourself? And that was when we still believed all the visitors to our site were human.

As web analytics tools and practices have evolved, we’ve gotten much, much better at filtering out non-meaningful data. It’s still not perfect, and I don’t think it ever will be.
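To make that concrete, here is a minimal sketch in Python of the kind of filtering involved. Everything here is hypothetical: the record fields, the internal network ranges, the bot signatures, and the five-second reload window are invented for illustration, and real analytics pipelines use far more elaborate rules.

```python
from datetime import timedelta

# Hypothetical filter rules; real lists are much longer and change constantly.
INTERNAL_NETWORKS = ("10.", "192.168.")        # e.g. office and VPN ranges
BOT_SIGNATURES = ("bot", "spider", "crawler")  # crude user-agent matching

def is_meaningful(hit, last_hit_by_visitor):
    """Return True when a hit looks like a real, external page view.

    `hit` is assumed to be a dict with ip, user_agent, visitor_id, path,
    and timestamp (a datetime); `last_hit_by_visitor` maps each visitor_id
    to that visitor's previous hit.
    """
    if hit["ip"].startswith(INTERNAL_NETWORKS):
        return False  # likely an employee or internal system
    if any(sig in hit["user_agent"].lower() for sig in BOT_SIGNATURES):
        return False  # likely a bot, spider, or other automaton
    prev = last_hit_by_visitor.get(hit["visitor_id"])
    if (prev and prev["path"] == hit["path"]
            and hit["timestamp"] - prev["timestamp"] < timedelta(seconds=5)):
        return False  # same page within five seconds: probably a reload
    last_hit_by_visitor[hit["visitor_id"]] = hit
    return True
```

Even this toy version shows why the job is never done: every rule encodes a guess (a reload window, a bot signature) that some legitimate visitor will eventually violate.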

But even trickier is understanding how the things we can measure relate to the things we can’t. We know where users click and even hover, but we don’t know where they look on the screen. We know where they navigate but we don’t know why. We may know how they arrived at our site, but not what motivated them to click that link vs. the one next to it.

Or, as I remind my clients, analytics can tell us what users do. It cannot tell us what users think.

The Quality Challenge

The Internet is a big, busy place. All kinds of humans are sending billions of HTTP requests around the world, alongside a variety of bots, spiders, and other automatons. Employees access the company website in ways that make them indistinguishable from external visitors. Spammers inject false entries into our website data without ever visiting our actual site. Users use—and share—multiple devices, applications, and accounts, each in their own idiosyncratic ways. Analytics systems are complex and burdensome to maintain across our expanding collection of digital properties.

So you might think you’re getting a spike in traffic when you’re really getting a spike in spam. You might think you’re getting visits from 40-year-olds when you’re really getting visits from 12-year-olds using a parent’s login. Or all that traffic from Los Angeles could be remote employees of a company with a Los Angeles-based VPN. Are all those people really exiting your site, or are they navigating to another part of your digital presence where tracking is misconfigured?
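Referral spam in particular lends itself to a simple heuristic: a referrer that sends plenty of “visits” but never any engagement probably isn’t sending humans. Below is a hedged sketch of that idea; the session fields and thresholds are invented for illustration, not drawn from any particular analytics product.

```python
from collections import defaultdict

def suspicious_referrers(sessions, min_sessions=50):
    """Flag referrer domains whose many sessions show zero engagement."""
    stats = defaultdict(lambda: {"total": 0, "engaged": 0})
    for s in sessions:
        counts = stats[s["referrer_domain"]]
        counts["total"] += 1
        # Any second page view or measurable time on site counts as engagement.
        if s["pages_viewed"] > 1 or s["duration_seconds"] > 0:
            counts["engaged"] += 1
    return [domain for domain, c in stats.items()
            if c["total"] >= min_sessions and c["engaged"] == 0]
```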

And don’t assume these situations are outliers. I’ve seen setups where filtering issues created 25% discrepancies, or where half the presumed traffic wasn’t even real. The errors can be significant.

This means that if your analytics are not set up properly, a significant portion of the data will be utterly false. But even with the best possible setup, it is impossible to get it 100% right. There are just too many unknowns.

Knowing Where To Look

Data without analysis is useless. While it’s uncommon (but not unheard of) for companies to set up website analytics but never bother to read the reports, it’s a fairly common practice to put reporting on auto-pilot.

Analytics tools encourage this by allowing users to save report settings and have the results automatically emailed on a regular basis. In theory this is highly efficient. In practice, it can lead to site data being seen as a ho-hum routine, easily lost in a crowded inbox.

Here’s the rub: the secret to getting insightful data is to optimize the reports themselves. The actionable insights are in there, but you may need different metrics, filters, groupings, or segmentation to see what your data is trying to show you.

In fact, if your reports feel routine and boring, there’s a good chance you’re looking at the wrong ones.
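As a toy illustration of why re-segmenting matters, here is the same handful of visits aggregated two ways. The flat total looks unremarkable; splitting by device (a hypothetical field, with invented numbers) surfaces a pattern the routine report would hide.

```python
from collections import defaultdict

# Invented sample data: each visit records a device type and whether it converted.
visits = [
    {"device": "desktop", "converted": True},
    {"device": "desktop", "converted": True},
    {"device": "mobile",  "converted": False},
    {"device": "mobile",  "converted": False},
    {"device": "mobile",  "converted": False},
]

print("Total visits:", len(visits))  # the ho-hum, auto-emailed number

by_device = defaultdict(lambda: [0, 0])  # device -> [visits, conversions]
for v in visits:
    by_device[v["device"]][0] += 1
    by_device[v["device"]][1] += int(v["converted"])

for device, (n, conv) in sorted(by_device.items()):
    print(f"{device}: {conv}/{n} converted")  # the report that asks a question
```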

Actually, It’s Still Subjective

However accurate and scientific your processes, ultimately it is humans who decide how to interpret your data and what actions to take. This is where it can really help to look beyond pure data. Customer feedback, user research, and your own professional intuition are not as cut-and-dried as numbers and pie charts, but they are powerful sources of insight. Combine those sources with your data for more informed results.

Still, sometimes you’ll get it right, sometimes you won’t.

How will you know?

And how will you know that you know?

This is where cross-validation and checkpoints are key. Measure your online decisions against the offline results you’re trying to achieve. If leads are up but sales are down, if web traffic is up but foot traffic is down, then you might not be measuring the right things, or your data might be telling you lies.
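One simple way to operationalize that cross-check: compare the trend of an online metric against the trend of the offline outcome it is supposed to drive, and flag any divergence. The metric names, numbers, and tolerance below are all hypothetical.

```python
def trend(series):
    """Fractional change from the first half of a series to the second half."""
    half = len(series) // 2
    first = sum(series[:half]) / half
    second = sum(series[half:]) / (len(series) - half)
    return (second - first) / first

def diverging(online, offline, tolerance=0.10):
    """True when the two trends point in meaningfully different directions."""
    return abs(trend(online) - trend(offline)) > tolerance

weekly_leads = [120, 130, 150, 170]  # up and to the right
weekly_sales = [80, 78, 70, 65]      # quietly declining
if diverging(weekly_leads, weekly_sales):
    print("Online and offline metrics disagree: re-examine what you measure.")
```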

The Unknowably High Cost of Measuring Wrong

This hypothesis is difficult to prove, but I believe inaccurate measurement to be the most expensive error a digital team can make. We don’t really know what we don’t know. But the potential for mis-investment, wasted resources, and opportunity costs surely adds up. For large brands, it can cost millions.

By all means, make decisions based (in part) on data. But always remember, data does not equal truth.
