Over-Tracking and Under-Testing

 
Chances are you've heard this question in a requirements-gathering meeting:

“Can’t we just track everything?”


Whether it's clicks on a page, taps in an app, interactions with a form, or mouse-overs...

¯\_(ツ)_/¯


Shawn Reed of Nabler, a digital analytics and consulting agency, has seen more than his fair share of such questions over his decade-long career in digital analytics. We sat down with him recently to discuss two trends that often seem to go hand in hand:

  • extensive tracking specifications that attempt to capture as many data points as possible
  • sparse testing of those implementations.

Over-Tracking

"I encounter this most frequently with companies that have less maturity in their analytics organization," says Reed in reference to the Track Everything phenomenon. The question usually betrays a lack of understanding of what an analytics tool can (or should) actually do.

More importantly, such discussions speak volumes about a company's approach to analytics in general. Instead of focusing on the top three, five, or even fifteen data points that best describe the organization's key performance indicators, a cloud of uncertainty hangs over the organization's core ideas about what should be captured and which metrics really matter.

Defenders would argue that over-tracking is a safe harbor, a hedge against the unexpected. In fact, such an approach is anything but safe.

While promising safety, over-tracking delivers a thousand cuts with absolute certainty:

  • First come the operational costs of enabling, collecting, and maintaining additional collection calls or parameters.
  • There are also financial ramifications, as some analytics vendors charge based on the volume of tracking requests.
  • Further damage comes from lower data quality caused by human error in tag configuration. QA resources spend valuable cycles testing less important tags at the expense of the ones behind key KPIs.
  • Ultimately, over-tracking can also adversely impact the end-user experience by increasing page load times and introducing latency when interactions are made to wait for tracking calls to complete (see the sketch below).
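
The latency point can be made concrete. One common mitigation, where the analytics vendor supports it, is to send tracking calls asynchronously so that navigation never waits on them. Below is a minimal TypeScript sketch using the standard navigator.sendBeacon browser API; the /collect endpoint, payload shape, and selector are hypothetical, not any specific vendor's API.

```typescript
// Minimal sketch: fire-and-forget tracking that never blocks the user.
// The /collect endpoint and payload shape are illustrative only.
type TrackingEvent = {
  name: string;                              // e.g. "buy_now_click"
  timestamp: number;
  properties?: Record<string, string>;
};

function track(event: TrackingEvent): void {
  const payload = JSON.stringify(event);

  // sendBeacon queues the request and returns immediately, so a link click
  // or form submit does not wait for the analytics call to finish.
  if (navigator.sendBeacon && navigator.sendBeacon("/collect", payload)) {
    return;
  }

  // Fallback: fetch with keepalive lets the request outlive the page.
  void fetch("/collect", { method: "POST", body: payload, keepalive: true });
}

// Hypothetical usage: tracking piggybacks on the click without delaying it.
document.querySelector("a.buy-now")?.addEventListener("click", () => {
  track({ name: "buy_now_click", timestamp: Date.now() });
});
```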

A common symptom of the Track Everything approach is to throw everything into the data lake, expecting that one day it will enable some extended analysis or a richer (360-degree, at least) view of the customer. But unless there is a clear understanding of what all this data will be used for, such implementations usually end up quite messy, says Reed. Data collection should be driven by actionable questions, grounded in an understanding of the organization's business and the metrics on which that business is judged. Frivolous data collection ends up consuming everyone's time, with no guarantee of any material benefit.

Two recent shifts in software and internet technology have all but poured oil on the fire of over-tracking. Agile development makes it easier to release code (including tracking code) far more frequently. But when it comes to analytics, the impact of Agile pales in comparison to the evolution of Tag Management Systems (TMS). Such systems have taken the Agile philosophy to a rapid extreme, allowing marketers and technologists alike to enable tags at the click of a button, independently of any release cycle.

More tags do not necessarily mean better data, or better decisions based on that data. Data quality (and even site functionality) can be jeopardized when TMS access is granted to stakeholders with limited technical knowledge or governance experience. The usual companions are cursory code reviews and approval processes that are little more than a formality.

 

Under-Testing

Three factors seem to drive the tendency to under-test:

  • Limited resources - When a site or an app releases new features, the established process of tagging and QA-ing is generally observed, and at least initially the quality of the data may be adequate. However, few companies have the resources or processes in place to regression-test tags through sprints or TMS configuration changes. The lack of continuous testing often leads to gaps in the data or wild fluctuations that jeopardize not just specific reports but the overall credibility of the analytics solution. Oftentimes, tag QA turns into a dreaded hot potato nobody wants to end up with: developers await very specific instructions on what tags to implement, product stakeholders are primarily interested in the final reports and data, dedicated QA resources lack the knowledge to properly QA analytics tags, and analytics professionals are overwhelmed with analysis or other specialized tasks.
  • Pace of change - With tracking changing at breakneck speed, there is a clear risk of jeopardizing data quality through basic lack of understanding. It is not uncommon for a TMS to populate a tag value that looks fine on the surface and does not affect site functionality, but is the wrong value 90% of the time, or the correct value always sent at the wrong time. Similarly, competing data layer standards and evolving vendor recommendations can add complexity to an existing implementation, opening the door to data capture deficiencies.
  • Communication gaps - A product team or a development team may introduce a site change (tweak a particular flow, add a new page, create a set of new fields in a form) without communicating the change to the analytics stakeholders. As a result, a set of tags may get dropped or altered, introducing inaccuracies into the data. Such defects can go unnoticed for weeks, sometimes even months; a small automated regression check, sketched after this list, can catch many of them early. Some analytics solutions have alerting capabilities, but they often require additional configuration, and even then such alerts can trigger false positives or cause alert fatigue by crying wolf too often.
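
One way to close the regression gap without a heavy QA process is a small automated check that loads a key page and asserts that the expected analytics event was pushed with the expected values. The sketch below uses Playwright in TypeScript and reads window.dataLayer, the common Google Tag Manager convention; the URL, event name, and page_type value are hypothetical and would need to match your own implementation.

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical URL and event names, for illustration only.
const PAGE_URL = "https://www.example.com/lead-form";

test("lead form page pushes its page_view event with the right values", async ({ page }) => {
  await page.goto(PAGE_URL);

  // Read the data layer (window.dataLayer is the common GTM convention).
  const events = await page.evaluate(
    () => ((window as any).dataLayer ?? []) as Array<Record<string, unknown>>
  );

  const pageView = events.find((e) => e.event === "page_view");
  expect(pageView).toBeDefined();

  // Catch the "looks OK but wrong value" failure mode: assert the actual
  // value the report depends on, not just that an event was pushed.
  expect(pageView?.page_type).toBe("lead_form");
});
```

Run against the pages that matter most (the conversion funnel, lead forms) on every release, a handful of checks like this can surface dropped or altered tags long before they show up as gaps in a report.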


None of these issues are particularly easy to solve. When it comes to QA-ing tags, Shawn Reed suggests that prioritization can make a big difference:

"If you have one page buried on your site that is missing a few tags, you have to ask if that page is key to conversion. Is it worth wasting your time on it, or is it better to focus on the pages that are part of the conversion funnel or generate leads or drive other KPIs?".


Building an Analytics-Minded Culture

If analytics is an afterthought, the twin tendencies of over-tracking and under-testing tend to creep in together. The most visible consequence may be an inferior end-user experience, but the costliest one may well be flat-out wrong business decisions based on bad data.

To avoid this, clearly defined measurement questions should precede each site or feature development effort. Those questions should frame the discussion of what gets tracked, and at what priority. The QA of data collection tags should then emerge organically from the exact same priority order, always beginning with KPIs and key user journeys.
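
One lightweight way to keep tracking and testing on the same priority list is a measurement plan that ties every data point to the question it answers, an owner, and a priority; the QA order then falls out of the same structure. A minimal sketch in TypeScript, with illustrative names and entries:

```typescript
// Minimal sketch of a measurement plan: every tracked data point is tied to
// the business question it answers, an owner, and a priority. The same
// priority drives the QA order. Names and entries are illustrative.
type Priority = "P1-KPI" | "P2-journey" | "P3-diagnostic";

interface MeasurementPlanItem {
  question: string;   // the business question this data point answers
  event: string;      // tag / event name in the TMS
  owner: string;      // who is accountable for its accuracy
  priority: Priority;
}

const plan: MeasurementPlanItem[] = [
  { question: "How many qualified leads does the form generate?", event: "lead_submit", owner: "Marketing", priority: "P1-KPI" },
  { question: "Where do users drop out of checkout?", event: "checkout_step", owner: "Product", priority: "P2-journey" },
  { question: "Do users hover over the new tooltip?", event: "tooltip_hover", owner: "UX", priority: "P3-diagnostic" },
];

// QA emerges organically from the same ordering: P1 items are tested on
// every release, P3 items only when there is time left over.
const qaOrder = [...plan].sort((a, b) => a.priority.localeCompare(b.priority));
console.log(qaOrder.map((item) => item.event)); // ["lead_submit", "checkout_step", "tooltip_hover"]
```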
