Lesson Objective
To introduce basic comparison and experimentation techniques that allow teams to move from descriptive reading to a more analytical and hypothesis-driven approach.

Learning outcomes

By the end of this module, participants will be able to:

  • formulate more precise analytical questions
  • propose simple, testable hypotheses
  • understand the basic logic of A/B testing
  • recognize common limitations in interpreting results
  • use segment and time-period comparisons more rigorously

From describing to asking better questions

In Module 1 the emphasis was on describing data correctly. In this module, the next step is learning to formulate better questions.

Some differences illustrate this shift:

Weak question:
“Did this content perform well?”

More analytical question:
“Did this content perform better among returning users than among new users?”

More mature question:
“What characteristics do the pieces that generate more recirculation among returning readers share?”

The quality of analysis depends largely on the quality of the question.

Hypotheses and comparison

A hypothesis is a provisional explanation that can be tested. In an editorial environment, hypotheses may relate to headlines, formats, content placement, recommendation modules, newsletters, or paywalls.

Examples:

  • Long explanatory articles generate more recirculation among returning users than short breaking-news pieces.
  • A recommendation module placed at the end of the article increases consumption depth.
  • Users arriving through newsletters show a higher propensity to register than those arriving from social media.
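To see how one of these hypotheses can be confronted with data, the sketch below compares registration propensity across two acquisition sources. It is a minimal illustration only: the table, column names, and numbers are invented, and pandas is one convenient option rather than a required tool.

    import pandas as pd

    # Hypothetical visit-level data: one row per visit, with the traffic source
    # it arrived from and whether the visit ended in a registration.
    visits = pd.DataFrame({
        "source":     ["newsletter", "newsletter", "newsletter", "social", "social", "social"],
        "registered": [1,            0,            1,            0,        0,        1],
    })

    # Registration propensity per source: a direct comparison of the two
    # segments named in the hypothesis.
    by_source = (
        visits.groupby("source")["registered"]
              .agg(visits="count", registrations="sum", rate="mean")
    )
    print(by_source)

A gap between the two rates is a reason to look further, not a verdict: the sections below on experimentation and interpretation deal with how much weight such a difference can bear.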

Introduction to A/B testing

An A/B test compares two versions of the same element to observe which performs better according to a defined objective.

In a newsroom environment it can be applied to:

  • headlines
  • newsletter subject lines
  • module placement
  • registration or subscription messages
  • homepage formats

What matters is not only launching the test but clearly defining:

  • what improvement is being sought
  • which metric will be observed
  • how long the measurement will last
  • which population or segment is being analyzed
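Those four decisions can be written down before anything is launched. The sketch below is a minimal illustration of that discipline, assuming a hypothetical per-send log of which variant each recipient received and whether they opened it; every name and number here is invented.

    import hashlib

    # The test plan, fixed in advance.
    TEST_PLAN = {
        "goal": "increase newsletter open rate",   # what improvement is being sought
        "primary_metric": "open_rate",             # which metric will be observed
        "duration_days": 7,                        # how long the measurement will last
        "segment": "all newsletter subscribers",   # which population is being analyzed
    }

    def assign_variant(user_id: str) -> str:
        """Stable 50/50 split: the same recipient always gets the same version."""
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    def open_rate(sends: list[dict], variant: str) -> float:
        """Primary metric: share of sends in this variant that were opened."""
        group = [s for s in sends if s["variant"] == variant]
        return sum(s["opened"] for s in group) / len(group) if group else 0.0

    # Invented log entries; a real test would read these from the email platform.
    sends = [
        {"variant": "A", "opened": True},
        {"variant": "A", "opened": False},
        {"variant": "B", "opened": True},
        {"variant": "B", "opened": True},
    ]
    print("reader-123 sees version", assign_variant("reader-123"))
    for v in ("A", "B"):
        print(v, TEST_PLAN["primary_metric"], open_rate(sends, v))

The values are placeholders; the point is that the goal, metric, duration, and segment exist as explicit decisions rather than being reconstructed after the fact.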

Common mistakes in experimentation

Some of the most common mistakes include:

  • stopping the test too early
  • changing more than one variable at the same time
  • failing to define a main metric
  • interpreting small differences as if they were conclusive (see the sketch at the end of this section)
  • ignoring the editorial context of the moment

A mature analytical culture does not confuse experimentation with improvisation.
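A simple guard against the fourth mistake listed above, reading a small difference as conclusive, is to ask how much of the gap chance alone could produce. The sketch below computes a standard two-proportion z statistic by hand; the counts are invented, and in practice any statistics library offers an equivalent test.

    from math import sqrt

    def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
        """z statistic for the gap between two conversion rates."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)           # rate if there were no real difference
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # expected size of chance fluctuations
        return (p_a - p_b) / se

    # Invented counts: version A converted 60 of 2,000 users, version B 72 of 2,000.
    z = two_proportion_z(60, 2000, 72, 2000)
    print(round(z, 2))  # about -1.06: well inside ordinary noise, not a conclusive win for B

As a rule of thumb, an absolute z value below roughly 2 means the observed gap could easily be noise. The check does not replace judgment about editorial context, but it stops a fraction-of-a-point difference from being declared a winner.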

Careful interpretation of results

Not every result leads to a strong conclusion. Sometimes a test shows no clear difference. In other cases a change works only in certain segments. Occasionally an apparent improvement comes with side effects that outweigh it.

For this reason, a useful practice is to combine three questions:

  • What happened?
  • What could explain it?
  • What should we test next?

Try it yourself

Your team runs an A/B test on a newsletter. Everything is identical except the subject line:

Version A: “The data behind Europe’s declining newsrooms”
Version B: “Why are newsrooms closing? The numbers explained”

Results after 7 days (12,000 recipients, 50/50 split):

                                       Version A    Version B
Open rate                              21.4%        28.1%
Click rate                             3.2%         3.0%
Avg. reading time (clicked users)      5:40         2:10
New registrations from this send       8            6

Your colleague who ran the test concludes: “Version B wins — higher opens, let’s always write subject lines this way.”

Consider:

  1. Is that conclusion valid? What does the full picture actually say when you look at all four metrics together?
  2. What might explain the gap between open rate and reading time across the two versions?
  3. Name at least one methodological question you’d want answered before treating this result as conclusive. (Hint: think about the test conditions, not just the numbers.)
  4. Based on what this test revealed, what hypothesis would you design the next test around?

(There is no single correct interpretation — and that’s precisely the point.)

A test with a clean winner teaches you something. A test with mixed results teaches you more.

Lesson Conclusion
At this stage, analysis is no longer limited to observing results, but focuses on testing ideas and interpreting outcomes carefully. By asking better questions and applying simple experimentation methods, teams can make more informed and reliable decisions.