Over-Tracking and Under-Testing

 
Chances are you've heard this question in requirements-gathering meetings:

“Can’t we just track everything?”


Whether it's clicks on a page, taps in an app, interactions with a form, or mouse-overs...

¯\_(ツ)_/¯


Shawn Reed of Nabler, a digital analytics and consumer agency, has seen more than his fair share of such questions over his decade-long career in the field. We sat down with him recently to discuss two trends that often seem to go hand in hand:

Over-Tracking

"I encounter this most frequently with companies that have less maturity in their analytics organization," says Reed in reference to the Track Everything phenomenon. The question usually betrays a lack of understanding of what an analytics tool can (or should) actually do.

More importantly, such discussions speak volumes about a company's approach to analytics in general. Instead of focusing on the top three, five, or even fifteen data points that best describe the organization's key performance indicators, a cloud of uncertainty hangs over the organization's core ideas about what should be captured and which metrics really matter.

Defenders would argue that over-tracking is a safe harbor, a hedge against the unexpected. In fact, such an approach is anything but safe.

While promising only potential safety, over-tracking delivers its thousand cuts with absolute certainty.

A common symptom of the Track Everything approach is throwing everything into the data lake, on the expectation that one day it will enable some extended analysis or a richer (360-degree, at least) view of the customer. But unless there is a clear understanding of what all that data will be used for, such implementations usually end up quite messy, says Reed. Data collection should be driven by actionable questions, grounded in an understanding of the organization's business and the metrics on which the business is judged. Frivolous data collection consumes cycles of everyone's time, with no guarantee of any material benefit.
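To make the contrast concrete, a tracking plan can be expressed directly in code so that only events tied to a named KPI and measurement question are ever collected. The sketch below is hypothetical: the event names, KPIs, and the sendToAnalytics stub are placeholders for whatever vendor call an implementation actually uses.

```typescript
// Hypothetical, minimal tracking plan: every event must answer a named
// measurement question and roll up to a KPI, or it is not collected at all.
interface PlannedEvent {
  kpi: string;       // the business metric the event feeds
  question: string;  // the actionable question it helps answer
}

const trackingPlan: Record<string, PlannedEvent> = {
  checkout_completed: {
    kpi: "Conversion rate",
    question: "Which campaigns drive completed purchases?",
  },
  lead_form_submitted: {
    kpi: "Leads generated",
    question: "Which pages produce qualified leads?",
  },
};

// Stand-in for the real vendor call (Adobe, GA, etc.).
function sendToAnalytics(name: string, payload: Record<string, unknown>): void {
  console.log("track", name, payload);
}

// The gate: anything outside the plan is dropped and surfaced for review,
// instead of silently swelling the data lake.
function track(name: string, payload: Record<string, unknown> = {}): void {
  const planned = trackingPlan[name];
  if (!planned) {
    console.warn(`Unplanned event "${name}" ignored -- add it to the plan first.`);
    return;
  }
  sendToAnalytics(name, { ...payload, kpi: planned.kpi });
}

track("checkout_completed", { value: 59.99 }); // collected
track("random_mouseover", {});                 // rejected
```

The interesting part is the gate: an unplanned event is rejected and surfaced for review rather than quietly joining the pile.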

Two recent shifts in software and web technology have all but poured fuel on the fire of over-tracking. Agile development makes it easier to release code (including tracking) at a much greater frequency. But when it comes to analytics, the impact of Agile pales in comparison to the rise of Tag Management Systems (TMS). These systems have taken the Agile philosophy to an extreme, allowing marketers and technologists alike to enable tags at the click of a button, independently of any release cycle.

More tags do not necessarily mean better data, or better decisions based on that data. Data (and even site functionality) can be jeopardized when TMS access is granted to stakeholders with limited technical knowledge or weak governance. Compounding the problem are limited code reviews and approval processes that are often just a formality.
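To illustrate how low the bar has become, here is a hypothetical custom tag of the kind a TMS user can publish in a few clicks; the dataLayer shape and vendor URL are illustrative, not drawn from any particular TMS.

```typescript
// Hypothetical "custom HTML tag" as typically pasted into a TMS container:
// it loads a third-party script and pushes an event, with no application
// release in the way.
declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ event: "promo_banner_view" }); // GTM-style dataLayer push

const vendorScript = document.createElement("script");
vendorScript.async = true;
vendorScript.src = "https://tags.example-vendor.com/pixel.js"; // placeholder vendor URL
document.head.appendChild(vendorScript);

export {};
```

Nothing in that snippet passes through a release pipeline, which is exactly why the governance around it matters.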

 

Under-Testing

Three factors seem to drive the tendency to under-test:


None of these issues are particularly easy to solve. When it comes to QA-ing tags, Shawn Reed suggests that prioritization can make a big difference:

"If you have one page buried on your site that is missing a few tags, you have to ask if that page is key to conversion. Is it worth wasting your time on it, or is it better to focus on the pages that are part of the conversion funnel or generate leads or drive other KPIs?".


Building an Analytics-Minded Culture

If analytics is an afterthought, the two tendencies of over-tracking and under-testing often creep in together. The starkest consequences may include an inferior end-user experience, but the costliest may well be flat-out wrong business decisions based on bad data.

To avoid this, clearly defined measurement questions should precede every site or feature development effort. Those questions should frame the discussion of what gets tracked, and at what priority. The QA of data collection tags should then follow the very same priority order, always beginning with KPIs and key user journeys.