Productivity in Product Development: Diving Deep into Reinertsen's Principles of Product Development Flow
This publication is broken up into three sections:
TL;DR - For those wanting a quick take
Summary - For those wanting a bit more context and high level points
Article - The main body of work, containing the fully detailed article and explanations, which you may want to consume over several readings
TL;DR
Productivity in product development is fundamentally different from manufacturing productivity. You are not reproducing a known recipe at scale. You are developing the recipe itself—under conditions of uncertainty, ambiguity, and variability. Applying manufacturing productivity logic to product development is one of the most common and costly mistakes organisations make.
Donald G. Reinertsen’s Principles of Product Development Flow provides a rigorous, economics-based framework built on 175 principles across eight domains: economic decision-making, queue management, batch size reduction, WIP constraints, cadence and synchronisation, flow control, fast feedback, and decentralised control.
Invisible queues are the single largest source of waste in product development. Reinertsen reports that approximately 85% of product managers cannot quantify the Cost of Delay for their work. Meanwhile, most product development teams operate at above 95% capacity utilisation—a level that causes queue sizes to explode exponentially.
A synthesised view on productivity integrates Reinertsen’s economic lens with the Logic Model Framework and Mik Kersten’s Flow Framework to move organisations from measuring busyness to measuring economic throughput—the rate at which you convert investment into customer and business value.
The path from Insight to Action to Impact requires you to quantify the economics of your product development system, make queues visible, reduce batch sizes, constrain WIP, accelerate feedback, and decentralise decisions to the people closest to the information.
Summary
Product development deals in unknowns, not repetition. Unlike manufacturing—which reproduces a proven recipe at scale—product development is the process of discovering and creating those recipes. This means variability is inherent and sometimes valuable, not always something to be eliminated.
Reinertsen’s core argument is that the dominant paradigm for managing product development is wrong. He asserts that organisations fail because they do not quantify economics, they are blind to queues, they worship efficiency over effectiveness, they are hostile to variability, and they work in dangerously large batch sizes.
Cost of Delay (CoD) is the “golden key” that unlocks better economic decision-making. It quantifies what it costs your organisation in lost value for every unit of time a feature, product, or decision is delayed. Most teams have never calculated this number—and when they do, they discover that intuitive estimates across a team can differ by a factor of 50 to 1.
Weighted Shortest Job First (WSJF) is a prioritisation method derived from Reinertsen’s work that sequences work based on the ratio of Cost of Delay to job duration. It has been adopted by SAFe and many scaled agile frameworks as a practical application of flow economics.
Queue theory reveals a non-linear relationship between capacity utilisation and cycle time. Moving from 80% to 90% utilisation doubles queue size. Moving from 90% to 95% doubles it again. This exponential curve explains why seemingly small increases in workload create disproportionate delays.
Reducing batch size is the single most powerful lever for improving flow. Smaller batches reduce cycle time, accelerate feedback, lower risk, and reduce variability. The optimal batch size is an economic trade-off between transaction cost and holding cost.
WIP constraints function like traffic metering on a highway. By controlling the rate at which new work enters the system, you prevent the congestion that destroys flow. Little’s Law (Cycle Time = WIP / Throughput) provides the mathematical foundation for why limiting WIP reduces lead times.
Decentralised control is essential because product development decisions must be made rapidly by the people closest to the information. Centralised decision-making introduces queue time that destroys economic value. Like the Internet’s packet-routing or military manoeuvre warfare doctrine, effective product development pushes authority to the edges of the organisation.
A synthesised productivity framework combines the Logic Model’s structured planning (Inputs → Activities → Outputs → Outcomes → Impact), Reinertsen’s economic flow principles, and Kersten’s Flow Framework metrics (flow velocity, flow time, flow load, flow efficiency) into a coherent system for measuring and improving what matters.
The key mindset shift: stop measuring how busy people are and start measuring how quickly value flows through the system. Productivity in product development is not about utilisation. It is about the rate of value creation, delivery, and capture relative to the investment made.
Article
Introduction — Why Productivity in Product Development Requires Different Thinking
With all the discussion about efficiency, velocity, and output metrics in product development organisations, it is easy to lose sight of a more fundamental question: what does productivity actually mean when your job is to create something that has never existed before?
Peter Drucker once wrote that it is the customer who determines what a business is—that what the customer thinks they are buying, what they consider ‘value’, is decisive. It determines what a business is, what it produces, and whether it will prosper. This framing is critical because it reminds us that productivity in product development cannot be divorced from the concept of value. Being productive is not about producing more—it is about producing more of what matters to customers and the business.
In my prior article on Productivity in Product Development, I outlined a theoretical foundation built on three pillars: the Logic Model Framework (popularised by the W.K. Kellogg Foundation) for structured planning, Donald G. Reinertsen’s economic principles for flow optimisation, and Mik Kersten’s Flow Framework for transitioning from project-based to product-oriented thinking. In this article, I want to go significantly deeper on Reinertsen’s contributions and synthesise how a Product Manager can practically incorporate his insights into a coherent view of productivity.
The reality is that the dominant approach to measuring and improving productivity in product development is borrowed wholesale from manufacturing. This is fundamentally misguided. Manufacturing is about reproducing a known recipe at scale with minimal variability (recall Panasonic’s ‘zero defects’ tagline and its use of Six Sigma approaches). Product development, by contrast, is about discovering those recipes under conditions of uncertainty, risk, and ambiguity. Reinertsen understood this distinction more deeply than almost anyone, and his work provides what I would argue is the most rigorous economics-based framework for thinking about flow and productivity in knowledge-based product work.
The Economic View — Reinertsen’s First Principle
“Why do most product development organisations make poor economic decisions, and what can we do about it?”
The root cause is that most organisations lack a shared economic framework for making trade-off decisions. Reinertsen argues that every product development decision involves multiple competing objectives: speed, cost, quality, scope, and risk. Without a common unit of measure—typically expressed in terms of lifecycle profit or economic value—teams resort to gut feeling, politics, or simplistic proxy metrics that frequently lead to suboptimal outcomes.
Consider a common scenario: your testing process is running at 80% capacity utilisation with a 2-week queue, and someone proposes increasing it to 90% utilisation, which would create a 4-week queue. Which is better? You are comparing two weeks of cycle time against 10 percentage points of utilisation—two different units of measure. Without an economic framework that converts both into a common unit (such as the cost of delay in monetary terms), you simply cannot make a rational decision.
This is where Reinertsen’s concept of Cost of Delay (CoD) becomes what he calls “the golden key that unlocks many doors.” Cost of Delay quantifies the economic impact of delaying a product, feature, or decision by a unit of time. It answers a deceptively simple question: what would it cost your organisation if this was delayed by one month?
Reinertsen reports that approximately 85% of product managers cannot answer this question. When teams do attempt to estimate it, individual estimates within the same team typically differ by a factor of 50 to 1. This is a staggering level of misalignment. And yet, organisations make prioritisation, resource allocation, and sequencing decisions every day without this critical economic input.
What You Can Do: Practical Steps for Establishing an Economic Framework
Calculate Cost of Delay for your top 10 backlog items. Start simple. For each item, estimate the monthly revenue impact, cost savings, or strategic value that would be lost or deferred if the item ships one month late. You do not need precision—Reinertsen emphasises that the U-curve of economic trade-offs has a “long flat bottom,” meaning rough estimates still dramatically improve decisions over having no estimate at all.
Adopt Weighted Shortest Job First (WSJF) for prioritisation. WSJF sequences work based on the ratio of Cost of Delay to job duration (CoD / Duration). This ensures you are maximising the rate of economic value delivery, not just working on the highest-value items in absolute terms. A £50,000/month feature that takes two weeks to build should be prioritised over a £100,000/month feature that takes three months.
Create a shared “economic scorecard” that your team references in every planning session. This should translate all key trade-offs into the same economic unit. When someone proposes cutting scope to hit a deadline, or adding headcount to reduce cycle time, the economic impact of each option should be quantifiable.
Use the Sunk Cost Principle ruthlessly. Reinertsen is explicit: past investment should not influence future decisions. If a project has consumed six months of investment but the remaining work no longer justifies the expected return, stop. WSJF naturally supports this because it only considers remaining duration, not sunk effort.
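The WSJF calculation from the steps above is simple enough to sketch in a few lines of Python. The backlog items and monetary figures here are the illustrative ones from the text, not a recommended scoring scheme:

```python
def wsjf(cost_of_delay_per_month: float, duration_months: float) -> float:
    """Weighted Shortest Job First score: Cost of Delay divided by job duration."""
    return cost_of_delay_per_month / duration_months

# Hypothetical backlog items: (name, CoD in £/month, duration in months)
backlog = [
    ("Feature A", 50_000, 0.5),   # £50k/month, two weeks of work
    ("Feature B", 100_000, 3.0),  # £100k/month, three months of work
]

# Sequence highest WSJF first: the smaller, faster-to-ship item wins
ranked = sorted(backlog, key=lambda item: wsjf(item[1], item[2]), reverse=True)
for name, cod, dur in ranked:
    print(f"{name}: WSJF = {wsjf(cod, dur):,.0f}")
# -> Feature A: WSJF = 100,000
# -> Feature B: WSJF = 33,333
```

Note that duration here is remaining duration, which is how WSJF naturally enforces the Sunk Cost Principle: effort already spent never appears in the score.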
"If you only quantify one thing, quantify the Cost of Delay." — Donald G. Reinertsen
The Hidden Enemy — Queues and Their Exponential Cost
“What is the single largest source of waste in product development that most organisations completely ignore?”
Queues. Invisible, unmeasured, unmanaged queues. Reinertsen makes the compelling case that queues are the underlying root cause of poor product development performance. They increase cycle time, inflate costs, amplify variability, increase risk, slow feedback, degrade quality, and demotivate people.
Yet 98% of product developers do not know the size of the queues in their development processes. This should not surprise us—unlike manufacturing inventory which sits on a warehouse floor, product development inventory is information, and information is invisible both physically and financially.
The mathematics behind this are grounded in queueing theory, a field that originated with mathematician Agner Krarup Erlang’s work on telephone network congestion in 1909. The core insight is devastatingly simple: the relationship between capacity utilisation and queue size is exponential, not linear.
What this means practically is that increasing capacity utilisation from 80% to 90% does not increase queue size by 12.5%. It roughly doubles it. Moving from 90% to 95% doubles it again. At 95% utilisation, the queue size at the 95th percentile can reach 58 items, compared to just 10 items at 75% utilisation. The economic impact at 95% utilisation can surge to over ten times the cost at 75% utilisation.
And here is the uncomfortable truth Reinertsen reveals: most product development organisations operate at above 95% capacity utilisation. Some are above 98%. They load every person, every sprint, and every team to near-maximum capacity because idle workers look expensive. But this is a local optimisation that only appears rational because organisations are blind to the invisible, catastrophic cost of the queues they are creating.
Think of it like highway traffic. A four-lane motorway at rush hour is running at near 100% capacity. Now remove one lane—a 25% reduction in capacity. Does this produce a 25% increase in average travel time? No. It doubles or triples it. Do the same thing at 3am and there is no impact whatsoever. The non-linearity depends entirely on the existing capacity utilisation. This is precisely what happens in your product development process when you overload teams.
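The non-linearity above falls straight out of the standard M/M/1 queueing result, where the expected number of items in the system is ρ/(1−ρ) at utilisation ρ. This is a textbook sketch, not a model of any specific team, but it reproduces the doubling behaviour described in the text:

```python
def items_in_system(utilisation: float) -> float:
    """Expected number of items in an M/M/1 system at a given utilisation.

    Classic queueing-theory result: L = rho / (1 - rho). The curve looks
    flat at low utilisation and explodes as utilisation approaches 100%.
    """
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return utilisation / (1 - utilisation)

for rho in (0.70, 0.80, 0.90, 0.95, 0.98):
    print(f"{rho:.0%} utilisation -> {items_in_system(rho):5.1f} items")
# 80% -> 4.0 items; 90% -> 9.0 (roughly double); 95% -> 19.0 (double again)
```

Going from 80% to 90% utilisation roughly doubles the expected queue, and 90% to 95% doubles it again, exactly the pattern the motorway analogy describes.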
What You Can Do: Making Queues Visible and Manageable
Measure your queues today. For every stage in your development process—requirements, design, development, testing, review, deployment—count the number of items waiting to be worked on versus items actively being worked on. If you have never done this, the results will likely shock you.
Control queue size, not capacity utilisation. This is one of Reinertsen’s core principles. Capacity utilisation is a powerful predictor of queue behaviour, but it is a poor control lever because you cannot accurately estimate demand and capacity in product development. Queue size, however, is directly observable and controllable. Because the utilisation-to-queue relationship is so steep, even a wide control band on queue size forces the system into a tight range of capacity utilisation automatically.
Implement visual boards that show WIP and queue states. Kanban boards, cumulative flow diagrams, and queue size charts make the invisible visible. When your team can see 47 items sitting in a ‘ready for development’ queue, it changes the conversation from “are people busy enough?” to “why is so much work stuck?”
Target 70–80% capacity utilisation for response-critical work. If fast cycle time and responsiveness matter to your business—and they almost always do in product development—you need slack in the system. This is not waste. This is the strategic capacity that enables flow. Like a hospital emergency department, the value of having available capacity when it is needed far outweighs the apparent cost of idle time.
Batch Size — The Most Underrated Lever in Product Development
“If there is one operational change that could have the biggest impact on our productivity, what would it be?”
Reduce your batch size. Reinertsen treats batch size as an economic trade-off between transaction cost (the cost of processing each batch) and holding cost (the cost of delay while work waits in the batch). The optimal batch size sits at the minimum point of the total cost U-curve, where marginal transaction savings equal marginal holding costs.
This is a critical departure from first-generation lean thinking, which treated “one-piece flow” as an article of faith. Reinertsen’s second-generation approach says: it depends on the economics. Use smaller batch sizes when holding costs are high (high Cost of Delay), and accept larger batch sizes when transaction costs dominate. The practical implication is to invest in reducing transaction costs so that smaller batches become economically viable.
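The U-curve trade-off can be sketched with an economic-order-quantity-style model: per-item transaction cost falls as batches grow, while average holding (delay) cost rises with them. The figures below are purely illustrative assumptions, not a prescription:

```python
import math

def total_cost(batch_size: float, transaction_cost: float,
               holding_cost_per_item: float) -> float:
    """U-curve: amortised transaction cost falls with batch size,
    while the average holding cost of waiting work rises with it."""
    return transaction_cost / batch_size + holding_cost_per_item * batch_size / 2

def optimal_batch(transaction_cost: float, holding_cost_per_item: float) -> float:
    """Minimum of the U-curve (EOQ-style): B* = sqrt(2F / h)."""
    return math.sqrt(2 * transaction_cost / holding_cost_per_item)

# Hypothetical: £400 to run a release, £2 per item per period in delay cost
print(optimal_batch(400, 2))  # -> 20.0 items per batch
# Halve the transaction cost (say, via CI/CD investment) and the optimal
# batch shrinks to sqrt(2 * 200 / 2) ≈ 14.1 — smaller batches become rational
```

The second comment is the key practical point: you do not argue teams into smaller batches, you change the economics so that smaller batches win.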
We see this principle at work in modern software development. Automated testing, continuous integration, and continuous deployment have dramatically reduced the transaction cost of releasing software. This has enabled the shift from quarterly or monthly releases to daily or even continuous deployment. Companies like Amazon and Google deploy code thousands of times per day—not because of ideology, but because the economics of small batches have become compelling through investment in automation infrastructure.
The benefits of smaller batches cascade through the entire system. Smaller batches reduce cycle time because they move through queues faster. They accelerate feedback because you learn whether something works sooner. They reduce risk because each increment represents a smaller bet. And they reduce variability in the system because smaller items are more predictable than large ones.
What You Can Do: Reducing Batch Size Systematically
Audit your current batch sizes across every stage. How many items go into a sprint? How large is a typical release? How many requirements are gathered before design begins? Identify where work is being batched up unnecessarily.
Invest in reducing transaction costs. If the reason you deploy monthly instead of weekly is that deployments are painful and error-prone, the answer is not to accept monthly batches—it is to automate and streamline the deployment process until it becomes trivial. Every investment in CI/CD, test automation, and infrastructure-as-code is an investment in enabling smaller batch sizes.
Decompose features into thinner vertical slices. Instead of building a complete feature across all layers of the stack before shipping, build a thin end-to-end slice that delivers a small but complete unit of user value. This is one of the most practical applications of batch size reduction for Product Managers.
Shorten your planning horizons. Reinertsen makes an insightful observation: short planning horizons produce more stable requirements. As he notes, “we developed the product so fast that marketing didn’t have time to change their mind.” The shorter the horizon between commitment and delivery, the less churn you experience.
Exploiting Variability — The Counterintuitive Insight
“Should product development teams strive to eliminate all variability from their processes like manufacturing does with Six Sigma?”
No. This is one of the most important and counterintuitive insights in Reinertsen’s work. In manufacturing, variability is almost always the enemy. Six Sigma and traditional lean thinking are designed to stamp it out. But product development is a fundamentally different domain. Variability in product development is not always bad—it is sometimes the very source of the value you are trying to create.
Think about it this way: if there were no variability in the outcomes of product development efforts, there would be no possibility of breakthrough innovation. The same uncertainty that creates risk also creates opportunity. Reinertsen uses the analogy of options pricing from financial theory—specifically the Black-Scholes model—to illustrate that variability has asymmetric payoff potential. When the downside is bounded (you can kill a failing experiment early) but the upside is unbounded (a breakthrough feature can generate massive returns), variability is your friend.
The practical implication is twofold. First, only eliminate variability that is economically harmful (what Reinertsen calls “bad variability”). This includes unnecessary rework from unclear requirements, avoidable defects, and process inconsistencies that create unpredictable delays. Second, when you cannot eliminate variability, minimise its cost by reducing batch sizes, lowering capacity utilisation, and accelerating feedback. The cost of variability is heavily influenced by the state of your queues and WIP.
One of the failure modes I have seen:
Organisations that adopt Six Sigma or heavy process standardisation in product development often end up optimising for predictability at the expense of innovation. They create rigid stage-gate processes that smooth out variability but also smooth out the creative exploration that generates breakthrough ideas. Reinertsen’s framework helps you distinguish between process variability (which should be managed) and outcome variability (which should be embraced and exploited).
WIP Constraints and Cadence — Controlling the System
“How do we prevent our product development system from becoming overloaded without micromanaging every team?”
Through Work-in-Progress (WIP) constraints and cadence. These are two of Reinertsen’s most practically applicable principles, and they work in concert to create flow without requiring centralised control over every detail.
WIP constraints function like the ramp meters you see on motorway on-ramps in cities like Los Angeles. By controlling the rate at which new vehicles (work items) enter the motorway (the development process), you prevent the congestion that would otherwise destroy flow for everyone already on the road. The mathematical foundation comes from Little’s Law: Cycle Time = WIP / Throughput. If you hold throughput constant, reducing WIP directly reduces cycle time. If your team has a throughput of 10 items per sprint and you currently have 30 items in progress, your average cycle time is 3 sprints. Reduce WIP to 15, and cycle time drops to 1.5 sprints—without changing anything about how fast people work.
Cadence creates predictable rhythms in the development process that reduce transaction costs and coordination overhead. Sprint ceremonies, regular release trains, weekly architecture reviews—these are all cadence mechanisms. Reinertsen distinguishes cadence (doing things on a regular schedule) from synchronisation (coordinating events at the same time). Both reduce the overhead of coordination, but cadence has the added benefit of creating predictable planning horizons.
I would suggest the following practical approach to implementing WIP constraints, based on both Reinertsen’s principles and what I have seen work in practice:
Start by making WIP visible. Before setting limits, simply count how many items are in progress at each stage. Most teams are shocked to discover they have 3–5x more work in progress than they have capacity to actively work on.
Set initial WIP limits slightly below current WIP. Do not cut WIP in half overnight. Start by constraining it to 80–90% of current levels and observe the impact on flow and cycle time. Tighten gradually as the team adapts.
Use WIP limits as policy, not punishment. When a WIP limit is reached, it should trigger a conversation: “We need to finish something before starting something new.” It is a signal to swarm on blocked items, help a colleague, or address a bottleneck—not a reason to blame someone.
Establish cadence for activities with high transaction costs. If coordinating a release requires significant effort, do it on a regular schedule. If stakeholder reviews create bottlenecks, schedule them at fixed intervals. Cadence converts irregular, high-overhead events into predictable, lower-overhead ones.
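The WIP-limit policy in the steps above amounts to a simple gate at each stage: no new work is pulled in while the stage is at its limit. The stage names and limits below are hypothetical:

```python
def can_start_new_item(wip_by_stage: dict, limits: dict, stage: str) -> bool:
    """Policy check: only pull new work into a stage while it is under
    its WIP limit. Hitting the limit is a signal to finish or swarm,
    not a reason to blame anyone."""
    return wip_by_stage.get(stage, 0) < limits[stage]

limits = {"development": 5, "review": 3}       # hypothetical limits
wip = {"development": 5, "review": 2}          # current counts

print(can_start_new_item(wip, limits, "development"))  # False: finish something first
print(can_start_new_item(wip, limits, "review"))       # True: capacity available
```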
Accelerating Feedback — The Engine of Learning
“Why is fast feedback so critical to product development productivity, and how is it different from manufacturing feedback?”
In manufacturing, feedback confirms whether you are executing the recipe correctly. In product development, feedback tells you whether you even have the right recipe. This is a fundamental difference. The value of feedback in product development is not about quality control—it is about learning velocity. The faster you learn whether your hypotheses about customer value, technical feasibility, and business viability are correct, the faster you can converge on solutions that generate real value.
Reinertsen frames this through the lens of information economics. Every experiment, prototype, and user test generates information that reduces uncertainty. The economic value of that information depends on how quickly you receive it, because delay allows you to continue investing in potentially wrong directions. This is why Reinertsen advocates aggressively for fast, frequent feedback loops at every level of the system: from unit tests that provide feedback in seconds, to sprint reviews that provide feedback in weeks, to market-level signals that provide feedback in months.
One thing I have realised in my time developing product is that the most effective teams I have worked with share a common trait: they are obsessive about closing feedback loops quickly. They deploy to production early and often. They watch user behaviour in real-time. They talk to customers weekly, not quarterly. They instrument everything so they know what is happening with their product. As I have written before in Data Data Data: “if you do not instrument to track user behaviour, you will not know what is happening with your product or systems. If you do not know what is going on, you cannot possibly be a great operator.”
Building Faster Feedback into Your Product Development Process
Reduce the time between writing code and getting production user feedback to the absolute minimum. Every day between “code complete” and “user is using it” is a day of delayed learning. Invest in deployment automation, feature flags, and canary releases.
Build “fast failure” into your discovery process. Use prototypes, Wizard-of-Oz tests, concierge MVPs, and smoke tests to validate demand and usability before committing to full development. The cheapest and fastest feedback comes from testing ideas before building them.
Implement automated testing at every level. Unit tests, integration tests, end-to-end tests, and performance tests should run automatically on every commit. This is feedback on technical quality that costs almost nothing once the infrastructure is in place.
Create a cadence of customer contact. Weekly or bi-weekly customer conversations, usage analytics reviews, and NPS/satisfaction surveys create a steady stream of market-level feedback that prevents the dangerous drift between what you are building and what customers actually need.
Decentralised Control — Pushing Decisions to the Edge
One of the most powerful—and for many organisations, most challenging—of Reinertsen’s principles is the case for decentralised decision-making. His argument draws on an unexpected range of analogies: the Internet’s packet-routing architecture, computer operating systems, military manoeuvre warfare doctrine, and transportation network design. The common thread is that all of these systems have learned to manage flow in the presence of variability by pushing decisions to the point closest to the information.
In product development, centralised decision-making introduces queue time. Every time a team needs to escalate a decision to a committee, a steering group, or a senior leader, work sits idle waiting for approval. This is directly analogous to a traffic system that routes every navigation decision through a central control tower instead of letting individual drivers make local choices. The central system cannot process information fast enough to keep up with real-time conditions, and congestion is the inevitable result.
Reinertsen does not advocate for no control—he advocates for the right kind of control. Like a military commander who sets the mission objective and rules of engagement but lets field commanders make tactical decisions in real-time, product development leaders should set the strategic intent, the economic framework, and the constraints—then trust teams to make the day-to-day decisions about how to achieve the objectives. This is context-dependent and requires investment in developing the judgement and economic literacy of every team member.
The Synthesised Productivity Framework — Bringing It All Together
“How do we systematically build a productivity measurement and improvement framework that integrates these principles into a coherent whole?”
The challenge today is that we have a wealth of valuable material on product development productivity—from Reinertsen’s flow principles, to Kersten’s Flow Framework metrics, to the Logic Model’s structured planning approach—but this content does not explicitly show how it all connects into a unified organisational perspective. What follows is my synthesis of how these three pillars integrate into a single, actionable framework.
Pillar 1: The Logic Model — Structured Alignment
The Logic Model provides the structural backbone. Adapted from the W.K. Kellogg Foundation’s programme evaluation framework, it defines a clear chain: Inputs → Activities → Outputs → Outcomes → Impact. For product development, this means aligning your resources (people, tools, budget), your activities (discovery, design, development, delivery), your outputs (features, products, services), your outcomes (user adoption, satisfaction, business metrics), and your impact (strategic objectives, market position, revenue growth) into a coherent chain of reasoning.
The Logic Model’s value is that it forces you to be explicit about how you believe your activities create value. It is, in essence, a theory of change for your product development organisation. Without it, you risk measuring outputs (how many features did we ship?) without connecting them to outcomes (did those features create user value?) or impact (did that user value translate to business results?).
Pillar 2: Reinertsen’s Flow Principles — Economic Optimisation
Reinertsen’s principles provide the economic engine of the framework. They tell you how to optimise the activities in your Logic Model for maximum economic throughput. The key principles, as we have explored, include: establishing an economic framework (Cost of Delay, WSJF), making queues visible and managing them actively, reducing batch sizes through investment in transaction cost reduction, constraining WIP to enable flow, exploiting beneficial variability while mitigating harmful variability, accelerating feedback to increase learning velocity, and decentralising control to reduce decision-making queue time.
Pillar 3: Kersten’s Flow Framework — Product-Centric Measurement
Mik Kersten’s Flow Framework provides the measurement system. His four key flow metrics—flow velocity (how many items completed per time period), flow time (how long from start to finish), flow load (how many items in progress), and flow efficiency (percentage of time spent in active work versus waiting)—map directly onto the principles Reinertsen describes. Flow velocity relates to throughput, flow time to cycle time and queue time, flow load to WIP, and flow efficiency to the ratio of value-adding time to total time.
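A minimal sketch of how these four metrics fall out of work-item records. The data structure and numbers here are hypothetical assumptions for illustration, not Kersten’s tooling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkItem:
    started: int             # day work began
    finished: Optional[int]  # day completed (None = still in progress)
    active_days: int         # days of hands-on work, as opposed to waiting

# Hypothetical value-stream snapshot over a reporting window
items = [
    WorkItem(started=0, finished=10, active_days=4),
    WorkItem(started=2, finished=20, active_days=6),
    WorkItem(started=5, finished=None, active_days=3),
]

done = [i for i in items if i.finished is not None]

flow_velocity = len(done)                                # items completed in window
flow_time = sum(i.finished - i.started for i in done) / len(done)
flow_load = sum(1 for i in items if i.finished is None)  # items still in progress
flow_efficiency = (sum(i.active_days for i in done)
                   / sum(i.finished - i.started for i in done))  # active / elapsed

print(flow_velocity, flow_time, flow_load, round(flow_efficiency, 2))
# -> 2 14.0 1 0.36
```

Even on this toy data the diagnostic reading is visible: items spend roughly two-thirds of their elapsed time waiting rather than being worked on, which is a queue problem, not an effort problem.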
The Integrated View
When you combine these three pillars, you get a framework that answers the three essential questions of product development productivity:
Are we building the right things? (Logic Model: are our activities connected to meaningful outcomes and impact?)
Are we building them efficiently? (Reinertsen: are we optimising flow economics, managing queues, and reducing waste?)
Can we see and measure what matters? (Kersten: are we tracking flow velocity, flow time, flow load, and flow efficiency across our value streams?)
Failure Modes — Where This Goes Wrong
Despite the power of these ideas, I have seen several recurring failure modes when organisations attempt to implement flow-based thinking in product development. Some things to watch out for:
Treating metrics as targets rather than diagnostic tools. As soon as you make flow velocity a target, teams will game it by reducing the size of items or cherry-picking easy work. Metrics are for learning, not for performance management. Reinertsen himself warns that metrics are open to abuse and misuse if not developed in a balanced way—recall the Wells Fargo performance measurement scandal as a cautionary tale.
Implementing WIP limits without addressing the root causes of overload. WIP limits reveal problems—they do not solve them. If your organisation keeps starting new initiatives without finishing existing ones, a WIP limit will create visible friction. The failure mode is to then remove the WIP limit rather than addressing the underlying prioritisation and governance problems.
Trying to eliminate all variability. Product development teams that import Six Sigma thinking wholesale risk creating processes that are highly predictable but produce mediocre outcomes. The goal is not zero variability—it is economically optimal variability management.
Confusing busyness with productivity. The most dangerous failure mode of all. An organisation where every person is 100% utilised and every sprint is packed to capacity will feel productive. But if cycle times are long, queues are growing, feedback is delayed, and value delivery is slow, you have a system that is optimised for the appearance of productivity while destroying actual economic throughput.
Adopting the tools without the thinking. Kanban boards, WSJF scoring, and flow metrics are tools. Without the underlying economic thinking that Reinertsen provides—without understanding why these tools work—they become bureaucratic ceremony rather than genuine productivity improvements.
Closing Remarks
If there is one thing that you remember from this article, let it be this: “productivity in product development is not about how busy people are. It is about how quickly and reliably your organisation converts investment into customer and business value.”
Reinertsen’s work provides the most rigorous economic foundation I have encountered for understanding why traditional approaches to productivity fail in product development and what to do instead. His 175 principles across eight domains (economics, queues, batch size, WIP constraints, cadence, flow control, feedback, and decentralised control) offer a comprehensive toolkit for any Product Manager who wants to move beyond intuition and ideology toward evidence-based, economically grounded decision-making.
The path from Insight to Action to Impact in the context of productivity requires you to do the following: quantify the economic impact of your decisions (starting with Cost of Delay), make your queues visible and manage them aggressively, reduce batch sizes by investing in the infrastructure that lowers transaction costs, constrain WIP to enable flow, embrace beneficial variability while mitigating its costs, build fast feedback loops at every level, and push decision-making authority to the people closest to the information.
As Reinertsen himself observes, these methods have produced 5x to 10x improvements even in mature product development processes. The principles are sound, the mathematics are proven, and the practical applications are well-documented. What remains is the organisational will to adopt them. I would argue that for Product Managers seeking to build genuinely productive product development organisations, there is no more important body of work to master.
"The dominant paradigm for managing product development is wrong. Not just a little wrong, but wrong to its very core." — Donald G. Reinertsen
Postscript — Additional Considerations
On the relationship between Reinertsen and Agile/Scrum: Reinertsen’s work underpins many of the practices that Agile and Scrum have popularised—sprints are a batch size mechanism, retrospectives are a feedback loop, the product backlog is a queue, and story points attempt to quantify effort for economic trade-offs. Understanding Reinertsen’s principles helps you understand why these practices work (when they do) and why they sometimes fail (when the underlying economic conditions don’t support them). His work also explains concepts like WSJF that have been incorporated into scaled frameworks like SAFe.
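As a sketch of the WSJF idea mentioned above: prioritise by Cost of Delay divided by job duration, so small, urgent jobs outrank large ones of similar value. The job names and figures below are invented for illustration, and SAFe typically uses relative estimates rather than currency:

```python
# WSJF = Cost of Delay / duration (Reinertsen's "CD3" scheduling rule).
def wsjf(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    return cost_of_delay_per_week / duration_weeks

# Made-up backlog items with assumed economics.
jobs = {
    "checkout revamp":  wsjf(cost_of_delay_per_week=40_000, duration_weeks=8),  # 5,000/wk
    "pricing fix":      wsjf(cost_of_delay_per_week=15_000, duration_weeks=1),  # 15,000/wk
    "reporting module": wsjf(cost_of_delay_per_week=30_000, duration_weeks=5),  # 6,000/wk
}

# Schedule highest WSJF first: the small, urgent pricing fix jumps the queue
# even though the checkout revamp has the largest total Cost of Delay.
ranked = sorted(jobs, key=jobs.get, reverse=True)
```

Note how the ranking differs from a naive "biggest Cost of Delay first" ordering; dividing by duration is what encodes the economics of queueing.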
On counter metrics: As I have emphasised in previous articles, metrics, however important, are open to abuse and misuse if not developed in a balanced way. Reinertsen’s framework does not exempt you from the need for counter metrics. If you optimise aggressively for flow velocity, you may sacrifice quality. If you minimise cycle time at all costs, you may ship incomplete work. Every metric needs a counter metric to ensure you have not over-optimised your north star metric to the detriment of your customers and your business.
On context dependence: Reinertsen himself emphasises that his principles must be applied contextually. A hardware product development team will face different economic trade-offs than a SaaS team. An enterprise selling to regulated industries will have different batch size and feedback loop constraints than a consumer app startup. The principles are universal; their application is always context dependent.
The key take-away from this article is:
“Stop measuring how busy your people are. Start measuring how quickly value flows through your system. Quantify the economics, make the queues visible, reduce batch sizes, constrain WIP, accelerate feedback, and push decisions to the edge. That is what productivity in product development actually looks like.”
Resources
Reinertsen, Donald G. The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing, 2009.
Reinertsen, Donald G. Managing the Design Factory: A Product Developer’s Toolkit. Free Press, 1997.
Kersten, Mik. Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework. IT Revolution Press, 2018.
W.K. Kellogg Foundation. Logic Model Development Guide. W.K. Kellogg Foundation, 2004.
Drucker, Peter F. The Practice of Management. Harper Business, 1954.
Scaled Agile Framework (SAFe). Weighted Shortest Job First (WSJF) — framework.scaledagile.com/wsjf
Adventures with Agile. Interview with Don Reinertsen on Queues, Measuring Agility, and Variability.
Lean Magazine. Cost of Delay — Interview with Don Reinertsen.
Machele, Tshepo. Productivity in Product Development. Prod Dev (Substack), September 2023.
Machele, Tshepo. Measure Twice and Cut Once Products. Prod Dev (Substack), November 2023.
Machele, Tshepo. Data Data Data. Prod Dev (Substack), May 2022.

