Lesson Objective
Understand how to move from coherent cross-platform measurement to more integrated measurement by author, format, series, and product, incorporating criteria that better align the newsroom, the audience, and the business.

By the end of this lesson, the participant will be able to:

  • understand how to analyze performance by author, series, section, or format
  • recognize the value and risk of measuring editorial performance at a granular level
  • distinguish between quantity metrics and quality metrics
  • understand the need for governance and more robust definitions when data is used for recurring decisions

From cross-channel comparison to granular measurement

In Module 2, the focus was on organizing cross-platform measurement and understanding what could be compared across channels. At this level, the next step is to go deeper and analyze performance within the editorial product:

  • by author
  • by series or editorial franchise
  • by section
  • by format
  • by product or owned channel

This granularity can be very powerful, but also more delicate. The more the analysis is broken down, the more important it is to have consistent definitions and sufficient context.

Measuring by author, format, or series

Analyzing by author or format can help detect:

  • engagement patterns
  • differences in recurrence
  • thematic specialization
  • contribution to registration or loyalty

However, it can also lead to poor interpretations if we forget that not all journalists cover equivalent topics, nor do all formats serve the same function. For this reason, granular data must be read with editorial context.

Quantity metrics vs. quality metrics

One of the risks at this level is using indicators that reward only volume. To avoid this, it is important to distinguish between:

  • quantity metrics: publications, users, views, production frequency
  • quality or value metrics: reading time, recurrence, conversion, satisfaction, depth of consumption, retention

A mature Data Driven culture needs both dimensions, but must avoid letting the former override the latter.
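As an illustrative sketch of keeping both dimensions in view (the records, field names, and values below are all hypothetical), quantity and quality indicators can be computed side by side so that neither is ever reported alone:

```python
# Hypothetical article-level records; all field names and values are illustrative.
articles = [
    {"author": "A", "views": 4200,  "read_seconds": 65,  "converted": 0},
    {"author": "A", "views": 3900,  "read_seconds": 70,  "converted": 1},
    {"author": "B", "views": 18400, "read_seconds": 465, "converted": 3},
    {"author": "B", "views": 17100, "read_seconds": 480, "converted": 2},
]

def summarize(rows):
    """Report a quantity metric and a quality metric together, per author."""
    summary = {}
    for row in rows:
        s = summary.setdefault(row["author"],
                               {"articles": 0, "read_seconds": 0, "conversions": 0})
        s["articles"] += 1                        # quantity: output volume
        s["read_seconds"] += row["read_seconds"]  # quality input: attention
        s["conversions"] += row["converted"]      # quality: value created
    for s in summary.values():
        s["avg_read_seconds"] = s["read_seconds"] / s["articles"]
    return summary

print(summarize(articles))
```

Pairing the columns in one table makes it harder for volume alone to drive the conversation.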

Guardrails and incentives

When an organization measures more granularly, it must also consider what behaviors it may be incentivizing. If only volume is rewarded, short-term logic, clickbait, or unbalanced production may be reinforced.

For this reason, it is useful to introduce guardrails—metrics or balancing criteria that prevent simplistic interpretations. For example:

  • not evaluating only by traffic, but also by recurrence or conversion
  • not comparing authors without considering the type of coverage
  • not treating all formats as equivalent
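As a minimal sketch of the first guardrail (the thresholds and field names are assumptions, not standards), a reporting script can flag authors whose traffic is high but whose recurrence is low, so the dashboard never shows traffic without that context:

```python
# Hypothetical per-author monthly figures; thresholds are illustrative.
stats = {
    "Writer A": {"views": 120_000, "returning_share": 0.08},
    "Writer B": {"views": 160_000, "returning_share": 0.34},
}

HIGH_TRAFFIC = 100_000   # assumed threshold for "high traffic"
LOW_RECURRENCE = 0.15    # assumed threshold for "low recurrence"

def traffic_guardrail(stats):
    """Flag authors for whom volume alone would paint a misleading picture."""
    flags = []
    for author, s in stats.items():
        if s["views"] >= HIGH_TRAFFIC and s["returning_share"] < LOW_RECURRENCE:
            flags.append(f"{author}: high traffic but only "
                         f"{s['returning_share']:.0%} returning readers")
    return flags

print(traffic_guardrail(stats))
```

The same pattern extends to the other guardrails, such as grouping authors by coverage type before any comparison is drawn.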

Minimum data governance for recurring decisions

As data becomes part of stable routines, the need for governance grows:

  • shared definitions
  • more carefully designed taxonomies
  • minimal but useful documentation
  • responsibility for data quality
  • clear rules for interpreting sensitive comparisons

At a Data Driven level, measurement stops being an optional layer and becomes shared infrastructure.
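Minimal governance can start as a shared, versioned metrics dictionary. The sketch below is one possible shape (every name and definition in it is illustrative, not a prescribed schema): each metric records its definition, an owner responsible for its quality, and whether comparisons based on it are sensitive.

```python
# A minimal shared metrics dictionary; all entries are illustrative.
METRIC_DEFINITIONS = {
    "recurrence": {
        "definition": "Share of readers with 2+ visits in a 30-day window",
        "owner": "audience team",       # responsibility for data quality
        "sensitive_comparison": False,
    },
    "author_conversion": {
        "definition": "Subscriptions attributed to an author's articles",
        "owner": "data team",
        "sensitive_comparison": True,   # requires editorial context to compare
    },
}

def lookup(metric):
    """Fail loudly when a metric is used without a shared definition."""
    if metric not in METRIC_DEFINITIONS:
        raise KeyError(f"'{metric}' has no shared definition; "
                       "add one before reporting it")
    return METRIC_DEFINITIONS[metric]
```

Even a small file like this gives recurring reports a single source of truth to cite.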


Try it yourself

Your newsroom is rolling out author-level performance tracking for the first time. The data team shares this table with editorial leadership:

Author     Articles/month   Avg. page views   Avg. reading time   Recirculation rate   Subscriber conversions
Writer A   28               4,200             1:05                8%                   2
Writer B   9                18,400            7:45                34%                  19
Writer C   22               6,100             3:20                18%                  8
Writer D   6                2,800             9:10                41%                  14

The editor-in-chief glances at the first two columns and says: “Writer A is our most productive journalist. Writer D needs to step it up.”

Consider:

  1. What is she reading correctly — and what is she missing entirely?
  2. If your goal is subscription growth, which writer is contributing most? Which single metric makes that clearest?
  3. What perverse incentive could author-level traffic measurement create for Writer A — and how might it affect editorial quality over time?
  4. Propose two guardrails you would add before sharing this dashboard with the full team. What would they be, and why?

The more data is used to evaluate people, the more carefully it needs to be framed.
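To check your reasoning on question 2, a quick sketch can compare total and per-article conversions (figures copied from the table above; normalizing by articles published is just one possible choice, and it carries its own assumptions):

```python
# Figures from the table above: (articles per month, subscriber conversions).
writers = {
    "Writer A": (28, 2),
    "Writer B": (9, 19),
    "Writer C": (22, 8),
    "Writer D": (6, 14),
}

for name, (articles, conversions) in writers.items():
    per_article = conversions / articles
    print(f"{name}: {conversions} conversions total, {per_article:.2f} per article")
```

Note how the two normalizations can point at different writers, which is exactly why a single metric should not settle the question.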

Lesson Conclusion

At this stage, more detailed measurement requires greater care in interpretation. By combining granular data with clear definitions and guardrails, teams can avoid misleading comparisons and support better, more balanced decisions.