Winning teams missed 32% more tackles than losing teams. That's the kind of stat that stops you. It took me a while to understand why.
Last season I worked with the Kintetsu Liners through a 14-match run: 10 wins, 3 losses, and a draw, a 71.4% win rate that sat at the top of Division 2. I had access to full match data, GPS files, and the tape. So I went looking for what actually separated wins from losses, not what conventional wisdom said should separate them.
The findings broke a few things I thought I knew. Some of them broke things almost everyone in rugby thinks they know.
This is the first post in a series I'm calling Rugby Moneyball: applying the same kind of "measure what matters, not what's traditional" thinking that Billy Beane brought to baseball, to professional rugby data. Later posts cover the machine-learning treatment of the same dataset, an interactive win-probability tool, and how the findings translate to player recruitment. This one is the foundations: what the data actually says.
What I had to work with
Fourteen matches, both sides of the ball, around 270 distinct metrics per match. Possession, territory, carry stats, breakdown, set piece, discipline, tackle data, GPS load files, line breaks, defenders beaten. The standard professional dataset, plus our own training-side numbers.
I split the matches into wins and losses (the draw sits in its own bucket; with one observation it doesn't tell you much). Then I ran every metric through the same comparison: average in wins, average in losses, difference, statistical significance.
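The comparison loop itself is simple. A minimal sketch, assuming each metric arrives as one value per match alongside a W/L/D result list (the significance testing would sit on top of this):

```python
# A minimal sketch of the wins-vs-losses comparison. Data layout is
# assumed: one value per match, with a parallel list of W/L/D results.
from statistics import mean

def compare_metric(values, results):
    """Average a metric in wins and in losses, plus the difference."""
    in_wins = [v for v, r in zip(values, results) if r == "W"]
    in_losses = [v for v, r in zip(values, results) if r == "L"]
    # The draw is excluded, as in the analysis above.
    return mean(in_wins), mean(in_losses), mean(in_wins) - mean(in_losses)

# Toy example: yellow cards across a fictional five-match run.
w, l, diff = compare_metric([0, 0, 1, 1, 0], ["W", "W", "L", "L", "D"])
# w == 0, l == 1, diff == -1
```

Run once per metric and you have the whole wins/losses/difference table; the statistical-significance column is a separate test layered on top.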
A note on sample. Fourteen matches isn't a season-long career study; it's a single-team season. The findings here are stable within Kintetsu's 2025 run, but I wouldn't extrapolate them to "rugby" until they reproduce on other teams' data, which is what part 5 of this series is for. Read what follows as "what it took to win, this team, this season."
The single biggest differentiator was discipline
If I had to point at one number, it's the yellow card rate.
In wins, Kintetsu averaged 0.10 yellow cards per match. In losses, 0.67. That's an 85% reduction.
To put that in context: a yellow card costs you 10 minutes with 14 players against a structured XV who came expecting that opportunity. The opposition usually puts up between 7 and 14 points in those windows. A yellow card is, on average, a one-score swing, and it tends to happen at moments when momentum is already brittle.
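To put a rough number on that, using only the figures quoted above (taking the midpoint of the 7-14 range is my own simplification):

```python
# Back-of-envelope cost of a yellow card, using the 7-14 point range
# quoted above. Illustrative arithmetic, not a model.
expected_points_per_card = (7 + 14) / 2            # midpoint: 10.5

cost_in_losses = 0.67 * expected_points_per_card   # ~7.0 points per match
cost_in_wins = 0.10 * expected_points_per_card     # ~1.1 points per match
gap = cost_in_losses - cost_in_wins                # ~6.0 points per match
```

On these numbers, discipline alone accounts for roughly a one-score gap per match between the win and loss profiles.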
Penalties conceded showed the same direction but smaller (9.8 in wins vs 11.3 in losses, a 13% reduction). The yellow card is the thing.
What does this mean for how you coach? Discipline isn't a soft skill that lives in team-meeting platitudes. It's a measurable variable with a quantifiable cost. The team that beats itself the least wins. In the Kintetsu data, no other single metric came close to yellow cards as a wins-vs-losses separator.
The turnover differential is the second lever
Possession was almost identical in wins and losses: 46.3% vs 46.1%. Territory was within 1.2%. The myth that you win by holding the ball didn't hold up at all.
What did hold up was what each team did when they had it, and how often they got it back.
| | Wins | Losses |
|---|---|---|
| Turnovers won | 6.4 | 5.0 |
| Turnovers conceded | 12.2 | 14.3 |
| Differential | −5.8 | −9.3 |
A 3.5 swing in the turnover battle. Each turnover, in modern professional rugby, has an expected-points value somewhere between 7 and 10 depending on field position. A 3.5-turnover swing across a season is genuinely material.
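The differential arithmetic, spelled out (the match dict and field names below are hypothetical placeholders for whatever the data feed uses):

```python
# Turnover differential per match, reproducing the table above.
# The dict layout and field names are hypothetical.
def turnover_differential(match):
    return match["turnovers_won"] - match["turnovers_conceded"]

win_avg = turnover_differential(
    {"turnovers_won": 6.4, "turnovers_conceded": 12.2})   # -5.8
loss_avg = turnover_differential(
    {"turnovers_won": 5.0, "turnovers_conceded": 14.3})   # -9.3
swing = win_avg - loss_avg                                # 3.5 per match
```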
Side point: jackal counts were actually slightly higher in losses (3.3 vs 2.8). The turnover battle wasn't won by sending more players to the breakdown. It was won by securing your own ball more often (fewer concessions), forcing errors through defensive pressure, and capitalising when you got it back.
If I were prioritising training time on the back of this number, ball security work would sit ahead of jackal technique, and counter-attack patterns from turnover would sit ahead of generic phase-play.
Volume is misleading. Conversion is what scores.
This is where the data gets interesting, and where the tackle paradox starts to make sense.
Look at the carry stats:
| | Wins | Losses |
|---|---|---|
| Points per carry | 0.46 | 0.24 |
| Metres per carry | 6.59 | 7.14 |
| Dominant carries | 24.6 | 37.0 |
| Post-contact metres | 239.5 | 280.1 |
In losses, Kintetsu generated more dominant carries, more metres per carry, and more post-contact metres. By the eye test, those are the games that "looked good." But the points-per-carry number flips the picture: in wins, every carry produced almost double the points (0.46 vs 0.24).
The takeaway isn't that dominance doesn't matter. It's that dominance without conversion is a particular failure mode of professional teams: bashing through contact without finishing. Wins came from quicker recycle, better support, smarter decisions in the 22.
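Points per carry is just points over carries, but the round numbers below show how two identical carry counts can land at opposite ends of the table above (the per-match inputs are illustrative, not from the real dataset):

```python
# Conversion vs volume: same carry count, very different return.
# Inputs are illustrative round numbers consistent with the table above.
def points_per_carry(points, carries):
    return points / carries if carries else 0.0

dominant = points_per_carry(12, 50)    # 0.24, the losing average
efficient = points_per_carry(23, 50)   # 0.46, the winning average
```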
The line-break conversion supports the same point. Wins had 7.7 line breaks against 6.0 in losses, and the ratio of line breaks to defenders beaten (a proxy for how clean the breaks were) was 40% better.
So why did winners miss more tackles?
This is the stat I led with. Wins had 26.9 missed tackles per match against 20.3 in losses, a 32% increase. Same number of total tackle attempts, lower success rate.
Once you put it next to the rest of the data, the explanation isn't tactical, it's situational. Possession was near-identical either way, but in winning matches the opposition was forced into desperate, broken attacking play with the ball it had. Broken-field tackles (second efforts, tackles after offloads, scrambles) are harder to complete than structured ones. The missed tackle count goes up precisely because the team is winning the rest of the contest hard enough that the opposition can't run set play at them.
Reported as a count, it looks like a defensive failure. Reported as a rate against the kind of attack faced, it disappears.
And this is why the missed-tackle headline number, used in isolation, is a misleading coaching metric. The metrics to use are missed tackles per minute defending, and missed tackles in own 22 vs opposition 22.
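A sketch of what those two rate metrics look like in code (the match dict and field names are hypothetical):

```python
# Missed tackles as rates rather than a raw count. All field names
# below are hypothetical placeholders for whatever the data feed uses.
def missed_tackle_rates(match):
    per_minute = match["missed_tackles"] / match["minutes_defending"]
    by_zone = (match["missed_own_22"], match["missed_opp_22"])
    return per_minute, by_zone

m = {"missed_tackles": 27, "minutes_defending": 30,
     "missed_own_22": 4, "missed_opp_22": 9}
rate, zones = missed_tackle_rates(m)  # rate == 0.9 per minute defending
```

The zone split matters because a miss in your own 22 and a miss chasing a breakout on halfway are very different events that a raw count treats identically.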
What I'd actually measure
Out of 270 metrics, the ones I'd put on a single-page coaching dashboard for a Division 2 squad, based on the wins-vs-losses signal, are these.
Stop tracking as primary indicators:
- Possession % (no signal)
- Dominant carry count (inverse signal)
- Missed tackle count (raw)
- Metres gained
- Territory %
Start tracking:
- Yellow card rate (target: <0.15 per match)
- Turnover differential (target: better than −6)
- Points per ball carry (target: >0.40)
- Penalties conceded in own half (target: <5)
- Line breaks per defender beaten (target: >0.25)
- Conversion rate in opposition 22 (target: >65%)
- Missed tackles per minute defending (rate, not count)
That's the dashboard.
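One way to operationalise it is a per-match target check. The thresholds are the ones listed above; the field names and dict layout are hypothetical, and the missed-tackle rate is tracked but omitted here because it has no fixed target:

```python
# Dashboard targets encoded as (threshold, direction) pairs.
# Thresholds are the ones listed above; field names are hypothetical.
DASHBOARD = {
    "yellow_cards":      (0.15, "below"),   # per match
    "turnover_diff":     (-6.0, "above"),
    "points_per_carry":  (0.40, "above"),
    "pens_own_half":     (5.0, "below"),
    "breaks_per_beaten": (0.25, "above"),
    "opp22_conversion":  (0.65, "above"),   # as a fraction, not a %
}

def check(match_stats):
    """Return the dashboard metrics that missed their target."""
    misses = []
    for name, (target, direction) in DASHBOARD.items():
        value = match_stats[name]
        hit = value < target if direction == "below" else value > target
        if not hit:
            misses.append(name)
    return misses
```

A clean sheet returns an empty list; anything else is the short list for the Monday review.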
What I'm holding back on
A few honest caveats before I overstate things.
This is one team's season. The findings are stable within it but not generalisable until they reproduce on other teams' data. That's part 5 of this series.
Fourteen matches is a small sample by any statistical standard. The 85% yellow card reduction is striking, but the confidence interval around it is wide. I'd want triple the data before I bet on the exact magnitude.
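To show how wide, here's a quick bootstrap. Note the raw card counts are themselves back-calculated from the averages above (10 wins at 0.10 per match is roughly 1 card; 3 losses at 0.67 per match is roughly 2), which is an assumption:

```python
# Bootstrap interval for the wins-vs-losses yellow-card gap. The raw
# counts are reconstructed from the reported averages (an assumption).
import random

win_cards = [1] + [0] * 9   # 10 wins, ~1 yellow card total
loss_cards = [1, 1, 0]      # 3 losses, ~2 yellow cards total

def bootstrap_diff_ci(a, b, n=10_000, seed=0):
    """95% bootstrap interval for mean(b) - mean(a)."""
    rng = random.Random(seed)
    diffs = sorted(
        sum(rng.choices(b, k=len(b))) / len(b)
        - sum(rng.choices(a, k=len(a))) / len(a)
        for _ in range(n)
    )
    return diffs[int(0.025 * n)], diffs[int(0.975 * n)]

lo, hi = bootstrap_diff_ci(win_cards, loss_cards)
# The interval runs from near zero to around a full card per match:
# consistent with a real effect, but far too wide to trust the exact 85%.
```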
And I haven't done the multivariate work in this post; I've reported pairwise differences. The next post in the series uses logistic regression and gradient-boosted trees against the same dataset, with proper cross-validation, to ask which features survive when you put them all in the model together. The eye test has held up so far in my work; the ML check is what tells you it's not coincidence.
The principle
Rugby has a long-standing love affair with possession-based, dominance-based metrics, partly because they're easy to film and partly because they make highlight reels. The Kintetsu 2025 data, for what it's worth, suggests those aren't the metrics that decide matches. Discipline, turnover security, and conversion efficiency are. The team that gives up fewer points and converts more chances wins, even when the eye-test stats favour the opposition.
If you're coaching, that probably reorders some of your training priorities. If you're analysing, it should reorder your post-match dashboard. If you're recruiting, it changes what kind of player you'd pay for, which is what part 4 of this series gets into.
Coming next: machine learning on the same data, and what the model tells you that pairwise comparison misses.