The Brutal Math Behind Open Source PR Backlogs | AI Bytes
A viral blog post applies queuing theory to Jellyfin's 200-PR backlog, showing that review wait times grow exponentially as utilization increases. The math explains why your contribution sat ignored for months.
April 14, 2026
6 min read
Updated April 15, 2026
Jellyfin, the popular open-source media server, currently has around 200 open pull requests on its web client repository. It has roughly one active code reviewer. And a new analysis shows this isn't just a "we need more maintainers" problem. It's a math problem. The math isn't kind.
This caught our eye. Why do open source pull requests take so long to merge? Because most projects have no flow controls, and without them, wait times don't grow linearly. They grow exponentially.
The situation at Jellyfin Web is a textbook case. One reviewer. Around 200 open PRs. A monthly merge rate of 20 to 35 code PRs. Thirty PRs sitting with merge conflicts. Another 31 labeled stale.
But Jellyfin isn't an outlier. At the time of the analysis, CPython carried 2,200+ open PRs despite having roughly 30 core developers. Vue.js, with about three full-time contributors, had over 100 open PRs. Even Angular, backed by Google's engineering resources, showed a similar pattern. The open source backlog problem is everywhere, and raw headcount clearly isn't solving it.
Queuing Theory Explains the Exponential Curve
The core insight comes from the M/M/1 queue model, a standard tool in operations research. The relationship between reviewer utilization and wait time follows an exponential curve, not a straight line.
The numbers tell the story:

| Reviewer utilization | Wait time multiplier |
|----------------------|----------------------|
| 50%                  | 1x                   |
| 80%                  | 4x                   |
| 90%                  | 9x                   |
| 95%                  | 19x                  |
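Those multipliers fall straight out of the M/M/1 waiting-time formula, Wq = ρ / (μ(1 − ρ)). A minimal sketch, with the review (service) rate μ normalized to one PR per unit time:

```python
# M/M/1 queue: expected wait in queue Wq = rho / (mu * (1 - rho)),
# where rho is reviewer utilization and mu is the review (service) rate.
def mm1_wait(rho, mu=1.0):
    if not 0 <= rho < 1:
        raise ValueError("utilization must be below 100% or the queue diverges")
    return rho / (mu * (1 - rho))

baseline = mm1_wait(0.50)                 # the table's 1x reference point
for rho in (0.50, 0.80, 0.90, 0.95):
    print(f"{rho:.0%} utilization -> {mm1_wait(rho) / baseline:.0f}x wait")
```

Note how the denominator (1 − ρ) does the damage: as utilization approaches 100%, the wait diverges toward infinity.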
For a solo reviewer trying to keep pace with incoming PRs, operating at 90%+ utilization isn't heroic dedication. It's a mathematical trap. Every percentage point closer to 100% doesn't add a little more delay; it multiplies the wait for everyone in the queue.
Using Little's Law, the blog post calculates Jellyfin Web's average PR cycle time at 6.7 months. Submit a PR today, expect it merged sometime around late fall. The author experienced this personally: three PRs for a subtitle offset feature sat for over a year. Two received approvals that never led to merges. The smallest (49 lines of code) never got a single human review.
That's not a review problem. That's a systems failure.
The Batch Size Death Spiral Makes It Worse
The queuing math alone is bad enough. But a compounding feedback loop makes everything worse.
When review times stretch from days to months, contributors adapt by bundling changes. Why submit three small PRs if each one takes half a year? Better to combine them into one larger PR. But larger PRs take longer to review. Which creates more backlog. Which pushes wait times higher. Which encourages even larger PRs.
This cycle is vicious and self-reinforcing. A study of 265,000+ GitHub PRs (cited in the original post) found that contributors abandoned their work due to process obstacles and reviewer bottlenecks rather than outright rejection. People don't rage-quit open source because their code was bad. They leave because the system ground them down.
A 2024 Tidelift survey found that nearly 60% of open source maintainers have quit or considered quitting. A Ford Foundation report showed the vast majority of projects depend on one or two people. So the individuals most likely to burn out are the exact same ones shouldering the entire review burden.
Why "Just Add More Reviewers" Misses the Point
The reflexive answer to backlog problems is always "recruit more reviewers." More capacity helps, sure. But the blog post makes a stronger argument: flow controls matter more than raw throughput.
The post's central claim is worth quoting directly:
"The bottleneck isn't that there's one person, it's that 200 PRs of different quality and size all funnel through one point with no flow controls."
Without queue management, adding a second reviewer might double throughput temporarily. But if the incoming rate keeps climbing (and in growing projects, it always does), you're just pushing utilization from 95% back down to 90%. You've bought yourself time before the exponential math catches up again.
6 Workflow Fixes That Actually Address the Math
The post proposes specific changes borrowed from lean manufacturing and software delivery practices:
- Gate quality before review so maintainers don't waste cycles on PRs with failing CI or missing descriptions.
- Limit work-in-progress to 5-10 concurrent reviews, forcing completion before new items enter the queue.
- Prioritize by value using Weighted Shortest Job First, dividing cost of delay by effort required.
- Set a predictable cadence with weekly triage blocks for new PRs and stale cleanup.
- Build a reviewer tier granting repeat contributors approval rights without merge access.
- Require feature proposals before code hits the review queue, similar to Rust's RFC process.
None of these require hiring. They require discipline. And they directly target the use rate driving those exponential wait times.
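The WSJF prioritization in the list above is simple enough to sketch. The PRs and scores below are hypothetical examples, not real Jellyfin PRs; the scoring rule (cost of delay divided by effort) is the one the post describes:

```python
# Weighted Shortest Job First: score = cost of delay / effort.
# The queue entries below are hypothetical, for illustration only.
def wsjf(cost_of_delay, effort):
    return cost_of_delay / effort

queue = [
    {"pr": "theme refactor",    "cost_of_delay": 3, "effort": 8},
    {"pr": "crash fix on seek", "cost_of_delay": 9, "effort": 1},
    {"pr": "subtitle offset",   "cost_of_delay": 8, "effort": 5},
]

# Review highest-scoring PRs first: small, high-impact work jumps the queue.
for pr in sorted(queue, key=lambda p: wsjf(p["cost_of_delay"], p["effort"]), reverse=True):
    print(f'{wsjf(pr["cost_of_delay"], pr["effort"]):5.2f}  {pr["pr"]}')
```

The point of the division: a cheap, urgent fix outranks an expensive, moderately valuable one, which directly counters the batch-size death spiral.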
What This Means for the Broader Ecosystem
This analysis matters far beyond Jellyfin. The entire open source ecosystem runs on volunteer labor with almost zero process engineering. (For context on how open source projects compete with commercial offerings, see our coverage of GLM-5.1 hitting 95% of Claude's coding score and Google's $12.5M open source security push.) Most projects adopt GitHub's default PR workflow and never question whether it scales.
It doesn't.
Feature PRs made up only 21% of Jellyfin Web's merged contributions. The rest were bug fixes and maintenance. That ratio actively discourages the exact kind of contributor who adds the most long-term value: the person building something new. When your feature PR disappears into a six-month queue while small bug fixes sail through, you learn fast where to direct your energy. Or you stop contributing altogether.
The post is a wake-up call for maintainers who believe their backlog is a people problem. The queuing theory says it's a systems problem. And systems problems have systems solutions, if you're willing to implement the constraints that make the math work in your favor instead of against you.
Open source has spent years talking about maintainer burnout as a human issue. Maybe it's time to treat it as an engineering one.
Does queuing theory apply to projects with multiple reviewers?
Yes, but the model shifts from M/M/1 to M/M/c (multi-server queue). The exponential blowup still occurs; it just kicks in at higher total utilization. A project with three reviewers at 95% combined utilization will still see dramatically longer wait times than three reviewers at 80%. The fix remains the same: reduce utilization through flow controls rather than just adding more people.
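The multi-reviewer wait can be computed with the Erlang C formula. A sketch, assuming each reviewer handles one PR per day (μ = 1); with c = 1 it reduces to the single-reviewer M/M/1 case:

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Expected queue wait for an M/M/c queue, via the Erlang C formula.
    lam = arrival rate, mu = per-reviewer service rate, c = reviewer count."""
    rho = lam / (c * mu)                     # per-reviewer utilization
    if not 0 <= rho < 1:
        raise ValueError("unstable queue: utilization must be below 100%")
    a = lam / mu                             # offered load in Erlangs
    blocked = a**c / (factorial(c) * (1 - rho))
    p_wait = blocked / (sum(a**k / factorial(k) for k in range(c)) + blocked)
    return p_wait / (c * mu - lam)           # expected wait before review starts

# Hypothetical team: three reviewers, each reviewing one PR per day (mu = 1).
print(mmc_wait(lam=2.85, mu=1.0, c=3))   # 95% combined utilization
print(mmc_wait(lam=2.40, mu=1.0, c=3))   # 80% combined utilization
```

Running the two cases shows the 95%-utilization team waiting several times longer than the 80% team, despite identical headcount.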
What is a good PR size limit for open source projects?
The blog post recommends 300 lines for feature PRs. That aligns with industry research (including a widely cited SmartBear study) suggesting review effectiveness drops sharply above 200-400 lines, and with Google's published engineering practices, which recommend around 100 lines as a reasonable CL size. Kubernetes uses automated size labels (XS through XXL) to flag oversized PRs before review. Bug fixes and documentation can be more flexible since they're typically faster to review.
How do you calculate average PR cycle time using Little's Law?
Little's Law states: average items in system = arrival rate × average time in system. Rearranged: cycle time = work in progress / throughput. For Jellyfin Web with 200 open PRs and 30 merges per month, cycle time = 200/30 = 6.7 months. You can apply this to any project using its open PR count and monthly merge rate from GitHub Insights.
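In code, the calculation is one line. Using the Jellyfin Web figures cited in the article:

```python
def cycle_time_months(open_prs, merges_per_month):
    # Little's Law rearranged: time in system = work in progress / throughput
    return open_prs / merges_per_month

# Article's figures for Jellyfin Web: ~200 open PRs, ~30 merges per month.
print(round(cycle_time_months(200, 30), 1))  # -> 6.7
```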
Can GitHub Actions automate PR quality gates before review?
Absolutely. You can configure GitHub Actions to auto-label PRs as 'not ready for review' when CI fails, add size labels based on line count, check for missing PR descriptions, and flag merge conflicts. The probot/stale bot can auto-close PRs after a set period of inactivity. These automations reduce the reviewer's triage burden significantly without requiring any human intervention.
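The size-labeling step is a pure function a bot can apply before any human looks at the PR. The thresholds below are illustrative, loosely modeled on Kubernetes-style size labels rather than any project's exact configuration:

```python
# Illustrative auto-labeling logic a CI bot could run before human review.
# Thresholds are hypothetical; real projects tune them to their own review data.
def size_label(lines_changed):
    for limit, label in [(10, "XS"), (30, "S"), (100, "M"), (500, "L"), (1000, "XL")]:
        if lines_changed < limit:
            return f"size/{label}"
    return "size/XXL"

print(size_label(49))    # the article's 49-line PR lands in a small-size bucket
```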
What open source projects handle PR backlogs well?
Rust is often cited for its RFC process that filters feature proposals before code is written. Kubernetes uses automated size labels, required approvals from code owners, and a well-defined contributor ladder. The Linux kernel uses a mailing-list review model with subsystem maintainers that distributes review load. Each of these projects enforces structural constraints rather than relying on reviewer willpower alone.