Operations Management

Why This Matters

Operations management is the engine room of every business. While strategy decides what to do and finance decides how to fund it, operations decides how to actually deliver it -- reliably, efficiently, and at scale. Every euro of revenue passes through an operations system before it becomes profit. A restaurant with brilliant marketing still fails if the kitchen cannot serve food on time. A factory with cutting-edge products still bleeds cash if its inventory ties up working capital needlessly.

This course builds the quantitative toolkit to diagnose, measure, and improve any operations system -- whether it is a hospital emergency room, a pizza delivery chain, a luxury goods manufacturer, or an airline. The core question is always: How do we deliver the right product/service, at the right time, at the right cost, at the right quality?

How It All Connects

Operations Management sits at the intersection of every other MBA discipline:

  • Financial Accounting -- Inventory appears on the balance sheet as a current asset; WIP and finished goods directly affect working capital and cash flow ratios
  • Managerial Accounting -- Cost systems (variable vs. absorption), make-vs-buy decisions, and variance analysis all depend on understanding process capacity and utilization
  • Marketing Management -- Demand forecasts feed capacity planning; service quality (wait times, stockouts) shapes customer satisfaction and retention
  • Corporate Finance -- Capacity investments are capital budgeting decisions (NPV of a new production line); inventory financing affects WACC through working capital needs
  • Operations Strategy (S3) -- This course provides the analytical foundation; Operations Strategy builds on it to address competitive positioning through operations (trade-offs, focus, operational excellence)

The Big Picture Flow:

Demand Forecast --> Capacity Planning --> Process Design --> Flow Management
     (Marketing)      (CapEx/Finance)     (This Course)    (This Course)
                                               |
                                    +----------+----------+
                                    |          |          |
                              Throughput   Inventory   Queues
                              Analysis     Management  Analysis

Lesson 1: Introduction -- Efficiency vs. Effectiveness

Case: Benihana

Core Concept

Operations management is NOT just about cutting costs. It is about aligning the operations system with the business strategy. Two fundamental dimensions:

  • Efficiency = doing things right (low cost, high utilization, minimal waste)
  • Effectiveness = doing the right things (meeting customer needs, quality, speed)

The tension between these two drives most operational decisions.

The Operations Pipeline Analogy

Think of any operations system as a pipeline through which items (products, customers, information) flow. The pipeline has three elements:

| Element | Definition | Example (Benihana) |
| --- | --- | --- |
| Items | Things being processed | Customers/diners |
| Activities | Basic units of work that add value | Seating, cooking, serving, payment |
| Processors/Servers | Resources that perform activities | Chefs, tables, bar area |

Five Competitive Dimensions of Operations

| Dimension | What It Measures | Key Parameters |
| --- | --- | --- |
| Capacity | Revenue-generating ability | Throughput, max throughput |
| Flexibility | Variety of items + ability to scale up/down | Product mix impact, idle capacity |
| Agility | Speed of response | Throughput time |
| Efficiency | Investment and cost per unit of output | WIP, utilization, labor efficiency |
| Quality | Ability to meet specifications | Defect rate, rework cost |

Benihana Insight

Benihana brilliantly aligns efficiency with effectiveness: the teppanyaki grill is both entertainment (effectiveness -- the customer experience) and an operations system that maximizes table turns, minimizes food waste, and reduces kitchen space (efficiency). The chef IS the server AND the show.

Cross-reference: This efficiency-vs-effectiveness tension reappears in Operations Strategy (S3) as the productivity frontier and strategic trade-offs.


Lesson 2: Process Analysis -- Introduction

Case: Pizzas Dani

Core Formulas

Throughput (T): The quantity of items actually processed per unit of time.

T = items completed / time period

Cycle Time (CT): The average interval between successive completions (inverse of throughput).

CT = 1 / T

If a barber serves 3 customers/hour, CT = 20 minutes.

Maximum Throughput = Capacity:

Capacity = Processor Availability (hours/day) / Processor Consumption per item (hours/item)

Throughput Time (TT): Total time an item spends in the system (entry to exit), including all waits, processing, and transport.

Work-in-Process (WIP): Items that have entered but not yet exited the system.

Little's Law -- The Universal Operations Equation

WIP = T × TT

| Variable | Meaning | Units |
| --- | --- | --- |
| WIP | Average work-in-process | items |
| T (or TH) | Average throughput | items/time |
| TT | Average throughput time | time |

Intuition: Imagine a highway. If 60 cars enter per hour (T) and each takes 0.5 hours to cross (TT), there are always 30 cars on the highway (WIP). More cars on the road (higher WIP) at the same entry rate means each car takes longer (higher TT).

Back-of-Napkin: Little's Law

  • If 10 people are in a system and throughput is 2/hour, average time = 10/2 = 5 hours
  • A restaurant has 60 seats, average meal = 1.5 hours --> max throughput = 60/1.5 = 40 diners/hour
  • A factory has 500 units of WIP and produces 100 units/day --> average throughput time = 5 days
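These back-of-napkin checks are one-line divisions; a minimal Python sketch, using the same numbers as the examples above:

```python
# Little's Law: WIP = T x TT. Knowing any two of the three gives the third.

def throughput_time(wip, throughput):
    """Average time in system (TT) from WIP and throughput (T)."""
    return wip / throughput

def throughput(wip, tt):
    """Average throughput (T) from WIP and throughput time (TT)."""
    return wip / tt

print(throughput_time(10, 2))     # 10 people, 2/hour -> 5.0 hours
print(throughput(60, 1.5))        # 60 seats, 1.5 h/meal -> 40.0 diners/hour
print(throughput_time(500, 100))  # 500 units WIP, 100/day -> 5.0 days
```

Because Little's Law holds for any stable system, the same two functions apply equally to factories, restaurants, and highways.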

Variable Definitions

| Variable | Name | Units |
| --- | --- | --- |
| T (TH) | Throughput | items/time |
| CT | Cycle time | time/item |
| TT | Throughput time | time |
| WIP | Work-in-process | items |

Work Content and Labor Efficiency

Work content = total processor time actually spent producing one item (the real value-added time).

Labor content = work content attributed to human labor only (excludes machine-only time).

Labor Efficiency = Total labor content of items produced / Total labor availability (hours paid)

Quick Mental Math: If 2 hairdressers work 8 hours each (960 min total) and serve 20 customers at 27 min labor content each (540 min), labor efficiency = 540/960 = 56.25%. The rest is idle time or non-value-added work.


Lesson 3: Process Analysis -- Product Mix

Case: Arlanzones

The Product Mix Problem

When a system processes multiple types of items, capacity depends on the mix. A hairdresser takes 20 min for men, 30 min for women. With a 50/50 mix:

Average processing time = (20 + 30) / 2 = 25 min = 0.417 hours

Max throughput = 8 hours / 0.417 hours = 19.2 customers/day

Capacity Analysis: The 3-Step Method

| Step | Action | Formula |
| --- | --- | --- |
| 1 | Consumption: define item mix, determine processor time per item | Weighted average if multiple products |
| 2 | Availability: hours available per processor × number of processors | Total availability = individual × count |
| 3 | Capacity: divide availability by consumption | Capacity = Availability / Consumption |
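The three steps can be sketched in a few lines of Python, reusing the hairdresser mix above (50% men at 20 min, 50% women at 30 min, one 8-hour day):

```python
# 3-step capacity method for a product mix (illustrative numbers).

mix = {"men": 0.5, "women": 0.5}             # item mix (fractions sum to 1)
minutes_per_item = {"men": 20, "women": 30}  # processor consumption per item

# Step 1: consumption = weighted-average processing time per item
consumption = sum(mix[k] * minutes_per_item[k] for k in mix)  # 25 min/item

# Step 2: availability = minutes per processor x number of processors
availability = 8 * 60 * 1                    # 480 min/day

# Step 3: capacity = availability / consumption
capacity = availability / consumption
print(round(capacity, 1))                    # 19.2 customers/day
```

With several processors, repeat steps 1-3 for each; the one with the lowest capacity is the bottleneck.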

Bottleneck Principle

For a given mix of items, the maximum throughput of a system is constrained by the processor with the lowest capacity. That processor is the bottleneck.

The bottleneck is always a processor, not an activity. One processor may perform multiple activities.

Utilization

Utilization = Actual Throughput / Processor Capacity = (Consumption × Demand) / Processor Availability

Key insight: In a multi-processor system, the bottleneck runs at 100% utilization. All other processors are underutilized. If ALL processors ran at 100%, items would pile up infinitely in front of the bottleneck.

Product Mix Shifting the Bottleneck

The bottleneck can change when the product mix changes. A processor that is the bottleneck for one mix may NOT be the bottleneck for a different mix.

Implication for managers: Before investing in capacity (buying machines, hiring workers), always check whether the bottleneck will remain the same under realistic demand scenarios.

Cross-reference: Product mix decisions connect to Managerial Accounting (contribution margin per unit of bottleneck resource) and Marketing Management (which products to promote).


Lesson 4: Process Analysis -- Process Improvement

Case: Privalia

Removing Bottlenecks

Once the bottleneck is identified, there are several levers:

  1. Add capacity at the bottleneck (more processors of same type)
  2. Reduce processing time at the bottleneck (training, technology, process redesign)
  3. Offload activities from the bottleneck to non-bottleneck processors (task reallocation)
  4. Reduce variability in processing times
  5. Reduce setup/changeover time (increases effective capacity)

Crucial Insight: Whack-a-Mole

As soon as one bottleneck is removed, a new bottleneck appears elsewhere in the system. Capacity improvement is an ongoing process of eliminating bottlenecks, one after another.

Example: Adding a second assembly worker (bottleneck at 16 chairs/day) increases assembly capacity to 32/day, but the SYSTEM capacity only rises to 20/day because varnishing (20/day) becomes the new bottleneck. Doubling bottleneck capacity does NOT double system capacity.

Effect of Batch Size on Capacity

When activities have setup time (preparation independent of batch size):

Capacity = M / (p + s/Q)

Where M = available time, p = unit processing time, s = setup time, Q = batch size. As Q increases, the setup time per unit (s/Q) shrinks, and capacity approaches M/p asymptotically.

Minimum batch size to achieve required capacity C:

Qmin = C / [(M − C × p) / s]

If Qmin is negative, the required capacity is unachievable regardless of batch size.

Back-of-Napkin: Batch Sizing

  • Setup = 60 min, processing = 1 min/unit, 8-hour day (480 min)
  • Batch of 1: capacity = 480/61 = ~8 units/day
  • Batch of 60: capacity = 480/(1 + 60/60) = 480/2 = 240 units/day
  • Batch of 480: capacity = 480/(1 + 60/480) = ~427 units/day (approaching max of 480)
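A sketch of the batch-size formulas above in Python (same numbers: M = 480 min/day, p = 1 min/unit, s = 60 min):

```python
# Capacity as a function of batch size Q, with setup time s.

M, p, s = 480, 1, 60   # available time, unit processing time, setup time

def capacity(q):
    return M / (p + s / q)

def q_min(c):
    """Minimum batch size to reach required capacity c (None if unachievable)."""
    slack = M - c * p
    return None if slack <= 0 else c * s / slack

print(round(capacity(1), 1))    # ~7.9 units/day
print(round(capacity(60), 1))   # 240.0 units/day
print(round(capacity(480), 1))  # ~426.7 units/day, approaching M/p = 480
print(q_min(240))               # 60.0: a batch of 60 is exactly enough for 240/day
```

Note that `q_min` returns `None` when M − C × p ≤ 0, the "unachievable regardless of batch size" case described above.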

Cross-reference: Process improvement connects to Corporate Finance -- capacity investments are NPV/IRR decisions. Every bottleneck removal has a cost-benefit trade-off.


Lesson 5: Input/Output Analysis -- Introduction

From Static to Dynamic Analysis

Process analysis tells us whether capacity fits demand on average. But even when average capacity > average demand, predictable fluctuations (time-of-day, day-of-week, seasonality) create temporary imbalances that cause waiting.

The Input/Output Curve

Place two observers: Observer A counts cumulative arrivals over time. Observer B counts cumulative departures (completions). Plot both on the same time axis.

Reading the I/O curve:

  • Vertical gap at any time = number of items waiting (queue length / inventory)
  • Horizontal gap at any cumulative count = waiting time for that item
  • Area between curves = total waiting done by all items (units: item-hours)

Key KPIs from I/O Analysis

Avg. queue length = Area between curves / Total time period observed

Avg. waiting time = Area between curves / Total number of items

Worked Example: Ski Lift

Lift capacity: 3,000 skiers/hour. Arrivals vary:

  • Morning (9:00-12:30): 3,600/hr --> excess of 600/hr
  • Lunch (12:30-14:00): 1,600/hr --> surplus capacity of 1,400/hr
  • Afternoon (14:00-18:00): 3,200/hr --> excess of 200/hr

Queue builds during morning at 600 skiers/hr for 3.5 hours = 2,100 skiers max queue at 12:30.

Maximum waiting time (for the skier arriving at 12:30):

  • Cumulative arrivals by 12:30 = 3,600 × 3.5 = 12,600
  • Time for the lift to process 12,600 skiers = 12,600 / 3,000 = 4.2 hours
  • Wait = 4.2 − 3.5 = 0.7 hours = 42 minutes

Total waiting (area of triangle) = 1/2 × 2,100 × (3.5 + 1.5) = 5,250 skier-hours
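The ski-lift arithmetic above, reproduced as a short Python sketch:

```python
# I/O analysis of the ski-lift example: max queue, max wait, total waiting.
capacity = 3000  # skiers/hour

# Morning: 3,600/hr arrivals for 3.5 h -> queue grows at 600/hr
max_queue = (3600 - capacity) * 3.5
print(max_queue)         # 2100.0 skiers in line at 12:30

# Skier arriving at 12:30: the lift needs 12,600/3,000 = 4.2 h to serve
# everyone who arrived before them
wait = 3600 * 3.5 / capacity - 3.5
print(round(wait * 60))  # 42 minutes

# Lunch: queue drains at 3,000 - 1,600 = 1,400/hr -> clears in 1.5 h
drain_hours = max_queue / (capacity - 1600)
total_waiting = 0.5 * max_queue * (3.5 + drain_hours)  # triangle area
print(total_waiting)     # 5250.0 skier-hours
```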

Intuition: The Bathtub

Think of a bathtub: water flows in (arrivals) and drains out (processing). If inflow > outflow, the water level (queue) rises. Even if total daily inflow = total daily outflow, timing mismatches create temporary buildup.

Why This Matters

I/O curves let managers prototype operational changes before committing resources: test different staffing levels, operating hours, pricing policies, or capacity upgrades by simply redrawing the curves.


Lesson 6: Input/Output -- Services

Case: Vall d'Hebron Hospital

Service-Specific Challenges

Services differ from manufacturing in I/O analysis:

  • Customers ARE the items -- they experience the wait directly
  • Cannot inventory service -- unused capacity in one period is lost forever
  • Simultaneity -- production and consumption happen at the same time

Managing Predictable Demand Peaks

| Strategy | Mechanism | Example |
| --- | --- | --- |
| Shift demand | Appointments, dynamic pricing, off-peak incentives | Hospital scheduling elective surgeries |
| Flex capacity | Part-time staff, overtime, cross-training | Hospital calling in extra nurses for Monday mornings |
| Absorb with buffer | Waiting rooms, virtual queues | Emergency room triage |

Hospital Application

Emergency rooms face both predictable patterns (Monday mornings, Friday nights) and unpredictable surges. I/O analysis handles the predictable part; queueing theory (Lessons 11-12) handles the random part.


Lesson 7: Input/Output -- Manufacturing

Case: Poma de Tuixent

Manufacturing I/O Differences

  • Items can be inventoried (buffer between input and output)
  • Setup times and batch sizes create lumpy output patterns
  • Multiple product types create sequencing decisions

Production Smoothing

When demand is seasonal but capacity is fixed, managers face a trade-off:

  • Chase strategy: adjust capacity to match demand (hire/fire, overtime) -- higher labor costs, lower inventory
  • Level strategy: produce at a constant rate, build inventory during slack periods -- higher inventory costs, stable workforce

This directly sets up Lesson 8 on Aggregate Planning.


Lesson 8: Aggregate Planning

Case: Athena Luxury Purses

The Aggregate Planning Problem

Given a demand forecast over multiple periods, decide how much to produce each period by choosing among:

| Decision Variable | Cost Driver |
| --- | --- |
| Regular production | Labor cost per unit |
| Overtime production | Premium labor cost (typically 1.5x) |
| Subcontracting | Higher unit cost but no fixed overhead |
| Hiring workers | Hiring/training cost |
| Firing/laying off workers | Severance, morale, legal costs |
| Building inventory | Holding cost (storage, capital, obsolescence) |
| Backorders/lost sales | Penalty cost, customer dissatisfaction |

Cost Trade-offs Framework

The objective is to minimize total cost across all periods subject to meeting demand. This is fundamentally a trade-off between:

Capacity adjustment costs (hiring, firing, overtime) vs. Inventory holding costs vs. Stockout/backorder costs

Solving Approaches

  1. Spreadsheet enumeration: Try different strategies (pure chase, pure level, mixed), calculate total cost for each
  2. Linear programming: Formalize as optimization problem (often too complex for exam, but conceptually important)

Key Intuition

  • If hiring/firing is cheap relative to holding: use chase strategy
  • If holding is cheap relative to hiring/firing: use level strategy
  • If overtime premium is small: use overtime before hiring
  • Subcontracting makes sense when demand spikes are temporary and unpredictable

Back-of-Napkin: Aggregate Planning

Ask three questions:

  1. What is total demand over the planning horizon?
  2. What is total regular-time capacity?
  3. What is the gap? --> This must be filled by overtime, subcontracting, inventory buildup, or lost sales
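A minimal sketch of the three questions in Python -- the demand figures below are hypothetical, not from the case:

```python
# Back-of-napkin aggregate planning: total demand vs. regular-time capacity
# over a 4-period horizon (all numbers made up for illustration).

demand = [900, 1100, 1400, 1000]   # forecast units per period
regular_capacity = 1050            # units per period at regular time

total_demand = sum(demand)                       # Q1: total demand
total_capacity = regular_capacity * len(demand)  # Q2: total regular capacity
gap = total_demand - total_capacity              # Q3: to be filled by overtime,
                                                 # subcontracting, inventory, or lost sales
print(total_demand, total_capacity, gap)         # 4400 4200 200
```

Even when the total gap is zero, period-by-period mismatches (here, the 1,400-unit peak) still force inventory buildup, overtime, or lost sales -- which is why the spreadsheet enumeration works period by period.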

Cross-reference: Aggregate planning connects to Corporate Finance (working capital management) and Managerial Accounting (cost behavior -- fixed vs. variable, relevant costs for decisions).


Lesson 9: Productivity Management

Case: Surgikos

Productivity = Output / Input

Productivity = Throughput (units or revenue) / Resources consumed (labor, capital, materials)

Partial vs. Total Factor Productivity

| Measure | Formula | Use |
| --- | --- | --- |
| Labor productivity | Output / Labor hours | Compare across shifts, plants |
| Capital productivity | Output / Capital employed | Assess equipment ROI |
| Total factor productivity | Output / (weighted sum of all inputs) | Overall efficiency benchmark |

Productivity Improvement Levers

  1. Process improvement -- reduce waste, eliminate non-value-added steps
  2. Technology -- automate, better equipment
  3. Workforce management -- training, incentives, job design
  4. Quality improvement -- fewer defects = less rework = more effective output
  5. Capacity utilization -- spreading fixed costs over more units

The Surgikos Insight

Productivity is not just about working harder or faster. It is about working smarter -- redesigning processes, removing bottlenecks, and ensuring that every unit of input creates maximum output.


Lesson 10: Midterm Exam -- covers Lessons 1-9


Lesson 11: Queue Management -- Introduction

Why Queues Form Even When Capacity > Demand

Process analysis and I/O curves handle predictable imbalances. But even when average capacity exceeds average demand, random variability in arrivals or service times creates queues. Five customers might arrive in the first 5 minutes of an hour, then none for 10 minutes.

Anatomy of a Queueing System

Three defining elements:

1. Arrivals:

| Parameter | Symbol | Meaning |
| --- | --- | --- |
| Mean inter-arrival time | tₐ | Average time between arrivals |
| Arrival rate | lambda (λ) | λ = 1/tₐ (items/time) |
| Std. dev. of inter-arrival time | σₐ | Variability of arrivals |
| Coefficient of variation | CVₐ | CVₐ = σₐ / tₐ |

2. Service:

| Parameter | Symbol | Meaning |
| --- | --- | --- |
| Mean service time | tₛ | Average time to serve one item |
| Service rate | mu (μ) | μ = 1/tₛ (items/time per server) |
| Std. dev. of service time | σₛ | Variability of service |
| Coefficient of variation | CVₛ | CVₛ = σₛ / tₛ |

3. Design:

| Parameter | Symbol | Meaning |
| --- | --- | --- |
| Number of servers | S | Parallel identical servers |
| Queue discipline | -- | FIFO, priority, etc. |
| Queue structure | -- | Pooled (single line) vs. separate lines |

Utilization

ρ = λ / (S × μ)

| Variable | Meaning |
| --- | --- |
| ρ (rho) | Utilization (fraction of time servers are busy) |
| λ (lambda) | Arrival rate |
| μ (mu) | Service rate per server |
| S | Number of servers |

Critical thresholds:

  • ρ < 1: System is stable; queues form but eventually clear
  • ρ = 1: No slack; any fluctuation causes infinite queue buildup
  • ρ > 1: System is fundamentally under-capacity; the queue grows without bound

Coefficient of Variation (CV) -- Measuring Variability

| CV Value | Interpretation |
| --- | --- |
| CV = 0 | Deterministic (perfectly predictable, like a metronome) |
| CV = 1 | Typical of random/Poisson processes (customers arriving independently) |
| CV > 1 | Highly variable, chaotic, unpredictable |

Four Key KPIs

| KPI | Symbol | What It Measures |
| --- | --- | --- |
| Avg. number waiting in queue | Lq | Queue length |
| Avg. waiting time in queue | Wq | Time before service starts |
| Avg. number in system | L | Queue + being served |
| Avg. total time in system | W | Waiting + service time |

Little's Law in Queueing Notation

L = λ × W (system level)

Lq = λ × Wq (queue level)

Relationships Between KPIs

W = Wq + tₛ (total time = wait + service)

L = Lq + Lₛ (total in system = in queue + in service)

Lₛ = S × ρ = λ × tₛ (avg. number being served)

Once you know ANY ONE of {Lq, Wq, L, W}, you can derive ALL the others.

The Sakasegawa Approximation

Lq ≈ [ρ^√(2(S+1)) / (1 − ρ)] × [(CVₐ² + CVₛ²) / 2]

| Component | What It Does |
| --- | --- |
| ρ^√(2(S+1)) / (1 − ρ) | Captures the utilization and server-count effect |
| (CVₐ² + CVₛ²) / 2 | Captures the variability effect |

Simplified Special Case (S=1, CVₐ = CVₛ = 1)

Lq = ρ² / (1 − ρ)

The Utilization Trap -- Back-of-Napkin

| Utilization (ρ) | Lq (S=1, CV=1) | Interpretation |
| --- | --- | --- |
| 50% | 0.5 | Short queue |
| 70% | 1.6 | Manageable |
| 80% | 3.2 | Getting long |
| 85% | 4.8 | Expect queues to explode past here |
| 90% | 8.1 | Very long waits |
| 95% | 18.1 | Unacceptable |
| 99% | 98.0 | System is broken |

Rule of thumb: If utilization > 85%, expect queues to explode. The relationship is NONLINEAR -- going from 90% to 95% roughly doubles the queue.
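The Sakasegawa approximation and the Little's Law relationships are short enough to script; a sketch (the λ and tₛ values at the end are illustrative):

```python
import math

# Sakasegawa approximation for the average queue length Lq, then the other
# KPIs via Little's Law. An approximation, not an exact M/M/S result.

def lq_sakasegawa(rho, s=1, cva=1.0, cvs=1.0):
    util_term = rho ** math.sqrt(2 * (s + 1)) / (1 - rho)
    var_term = (cva ** 2 + cvs ** 2) / 2
    return util_term * var_term

# Reproduce rows of the utilization-trap table (S=1, CV=1 -> rho^2/(1-rho))
for rho in (0.5, 0.8, 0.9):
    print(rho, round(lq_sakasegawa(rho), 1))   # Lq = 0.5, 3.2, 8.1

# Knowing Lq, Little's Law yields everything else (illustrative rates):
lam, ts = 10, 0.09      # 10 arrivals/hour, 0.09 h service -> rho = 0.9
lq = lq_sakasegawa(0.9)
wq = lq / lam           # avg wait in queue (Lq = lambda x Wq)
w = wq + ts             # avg time in system
l = lam * w             # avg number in system (L = lambda x W)
print(round(wq, 2), round(w, 2), round(l, 2))  # 0.81 0.9 9.0
```

Note that L = 9.0 equals Lq + S × ρ = 8.1 + 0.9, confirming the KPI relationships above.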

Intuition: The Traffic Jam

A highway at 50% capacity flows freely. At 80% capacity, any small disruption (a lane change, a slow driver) causes a ripple. At 95% capacity, the highway locks up. The same physics governs any queueing system.


Lesson 12: Queue Management -- Pooling

Case: Etihad Airways

Three Managerial Levers

1. Add capacity (increase S or μ)

  • Directly reduces ρ
  • Cost: more servers, equipment, labor

2. Reduce variability (lower CVₐ or CVₛ)

  • Arrival variability: appointments, reservations, demand smoothing, metered entry
  • Service variability: standardize procedures, train for consistency, segment customers by complexity
  • Effect: as powerful as adding capacity, often cheaper

3. Pool resources (combine separate queues into one)

  • Instead of 2 separate queues with 1 server each, use 1 pooled queue with 2 servers
  • Variability evens out across the larger system
  • Dramatically reduces the probability that one server starves while another is overwhelmed

Pooling Effect -- Why It Works

With separate queues: if one server finishes and has no customers, that capacity is wasted even though the other queue may be long.

With a pooled queue: idle servers automatically take the next customer from the shared queue. No capacity is wasted.

Quantitative impact: Pooling reduces Lq through BOTH the utilization term (same ρ but more servers) and the √(2(S+1)) exponent in the Sakasegawa formula.
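A sketch comparing the two designs with the Sakasegawa approximation, at the same per-server utilization (ρ = 0.9 chosen for illustration):

```python
import math

# Pooled vs. separate queues at equal per-server utilization, using the
# Sakasegawa approximation from Lesson 11.

def lq(rho, s, cva=1.0, cvs=1.0):
    return rho ** math.sqrt(2 * (s + 1)) / (1 - rho) * (cva**2 + cvs**2) / 2

rho = 0.9

separate = 2 * lq(rho, s=1)   # two independent single-server queues
pooled = lq(rho, s=2)         # one shared line feeding two servers
print(round(separate, 1), round(pooled, 1))   # pooling cuts the total queue roughly in half
```

Same total capacity, same total demand -- the only change is the queue structure, yet the pooled design waits far less.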

Trade-offs of Pooling

  • May feel less personal (one long line vs. "your" dedicated server)
  • Requires cross-trained staff (broader skills, possibly higher wages)
  • Perception management: customers may perceive one long line as worse than multiple short lines, even if the math says otherwise

Etihad Airways Application

Airlines pool check-in counters (economy, business, first) vs. dedicate them. Pooling improves throughput and reduces worst-case waits, but business/first-class passengers expect dedicated service. The solution: partial pooling -- dedicated lines for premium, pooled for economy.


Lesson 13: Production Planning

Case: Edentel

From Aggregate to Detailed Planning

Aggregate planning (Lesson 8) decides how much to produce each period. Production planning decides what specific products to produce, when, and in what sequence.

Key Decisions

  1. Lot sizing: How much of each product per production run? (connects to EOQ in Lesson 14)
  2. Sequencing: In what order to produce different products? (minimize total setup time)
  3. Scheduling: When exactly to start and finish each job? (Gantt charts)

MRP Logic (Material Requirements Planning)

Net requirement = Gross requirement − On-hand inventory − Scheduled receipts

Work backward from finished goods delivery dates to determine when to start each activity, accounting for lead times at each stage.
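The netting logic can be sketched in a few lines; the gross requirements, receipts, and opening inventory below are made up for illustration, and lead-time offsetting is omitted:

```python
# MRP netting sketch: net requirement per period, carrying projected
# on-hand inventory forward. All numbers are hypothetical.

gross = [40, 60, 80, 50]     # gross requirements per period
receipts = [0, 50, 0, 0]     # scheduled receipts
on_hand = 70                 # opening inventory

nets = []
for g, r in zip(gross, receipts):
    available = on_hand + r
    nets.append(max(0, g - available))   # net requirement to plan for
    on_hand = max(0, available - g)      # leftover carries forward
print(nets)   # [0, 0, 60, 50]
```

In a full MRP run, each net requirement would then be offset backward by the item's lead time to produce a planned order release date.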


Lesson 14: Inventory -- Batching (EOQ)

Case: MORSA

Why Batch? Three Effects

  1. Increases capacity (fewer setups = more production time)
  2. Decreases unit cost (fixed setup cost spread over more units)
  3. Increases inventory (larger batches = more average stock)

The EOQ Model

Setup: Demand is stable at D units/year. Each order costs S to place. Holding one unit for one year costs H = v × i (unit value × carrying cost rate).

Two competing costs:

Ordering cost/year = S × D/Q (decreases with batch size Q)

Holding cost/year = H × Q/2 (increases with batch size Q)

The optimal batch size (Economic Order Quantity):

EOQ = √(2 × D × S / H)

| Variable | Name | Units |
| --- | --- | --- |
| D | Annual demand | units/year |
| S | Setup/ordering cost | $/order |
| H | Holding cost per unit per year | $/unit/year |
| v | Unit value | $/unit |
| i | Carrying cost rate | %/year |
| Q | Order quantity (batch size) | units |

At EOQ, ordering cost = holding cost (the two curves cross at the minimum of total cost).

Back-of-Napkin: EOQ Square Root Relationship

The EOQ formula has a square root, which makes it extremely robust:

  • EOQ doubles when demand quadruples (because √4 = 2)
  • A 40% error in demand estimation causes only ~18% error in EOQ
  • A 7% rounding of EOQ (e.g., to box sizes) causes < 1% increase in total cost

This is the most important practical insight about EOQ: Don't obsess over getting perfect data. The formula is forgiving.
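The robustness claims can be verified directly; D, S, and H below are hypothetical:

```python
import math

# EOQ and its square-root robustness (illustrative cost and demand figures).

def eoq(d, s, h):
    return math.sqrt(2 * d * s / h)

def annual_cost(q, d, s, h):
    return s * d / q + h * q / 2    # ordering cost + holding cost

d, s, h = 10_000, 50, 2
q_star = eoq(d, s, h)
print(round(q_star))                              # 707 units

# A 40% demand overestimate moves EOQ by only ~18% (sqrt(1.4) ~ 1.18)
print(round(eoq(1.4 * d, s, h) / q_star, 2))      # 1.18

# Ordering 20% above EOQ raises total relevant cost by under 2%
print(round(annual_cost(1.2 * q_star, d, s, h)
            / annual_cost(q_star, d, s, h), 3))   # 1.017
```

The flat cost curve around the optimum is why rounding to pallet or box sizes is almost always safe.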

When EOQ Breaks Down

The simple EOQ formula assumes:

  • Only ordering and holding costs matter
  • No quantity discounts
  • Constant, known demand
  • No capacity constraints

When other costs exist (quantity discounts, free shipping above a threshold), use a spreadsheet approach: enumerate total costs for candidate batch sizes and pick the minimum.

Batch Size and Capacity

Capacity = M / (p + s/Q)

Where M = available time, p = unit processing time, s = setup time, Q = batch size.

Minimum batch for required capacity C:

Qmin = C / [(M − C × p) / s]

Cross-reference: Inventory on the balance sheet (Financial Accounting) -- larger batches mean higher average inventory, higher current assets, lower inventory turnover ratio. The CFO cares about EOQ because it directly affects working capital.


Lesson 15: Inventory -- Safety Stock

Case: MORSA (continued)

Why Safety Stock?

EOQ assumes demand is perfectly known. In reality, demand fluctuates. Safety stock is the buffer inventory kept to protect against stockouts caused by demand uncertainty.

The Safety Stock Formula

SS = z × σ_d × √VP

| Variable | Name | Meaning |
| --- | --- | --- |
| SS | Safety stock | Extra units held as buffer |
| z | Safety factor | From the normal distribution table (higher z = lower stockout risk) |
| σ_d | Std. dev. of demand per period | Measures demand uncertainty |
| VP | Vulnerable period | Number of periods the safety stock must cover |

Vulnerable Period

VP = LT + RP

| Component | Meaning |
| --- | --- |
| LT (Lead Time) | Time from placing the order to receiving goods |
| RP (Review Period) | How often inventory is checked (0 if continuous review) |

Common z Values

| Service Level | Stockout Risk | z Value |
| --- | --- | --- |
| 84.1% | 15.9% | 1.00 |
| 90.0% | 10.0% | 1.28 |
| 95.0% | 5.0% | 1.64 |
| 97.7% | 2.3% | 2.00 |
| 99.0% | 1.0% | 2.33 |
| 99.9% | 0.1% | 3.09 |

Key Intuition: The Square Root Effect

Safety stock grows with the square root of the vulnerable period, NOT linearly.

  • Doubling the lead time does NOT double the safety stock
  • Safety stock for 4 periods = 2x safety stock for 1 period (because √4 = 2)
  • Safety stock for 9 periods = 3x safety stock for 1 period

Why? Because demand fluctuations in different periods are independent and partially cancel out. A high-demand day is likely followed by a normal or low-demand day.

Reorder Point

ROP = (Average demand per period × VP) + SS

When inventory drops to ROP, place an order.

Back-of-Napkin: Safety Stock

  • Demand: μ = 100/day, σ = 20/day, lead time = 9 days, continuous review (RP=0)
  • For 95% service level: z = 1.64
  • SS = 1.64 × 20 × √9 = 1.64 × 20 × 3 = 98.4 units
  • ROP = 100 × 9 + 98.4 = 998.4 units --> order when inventory hits ~1,000 units
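The same back-of-napkin numbers in a short Python sketch:

```python
import math

# Safety stock and reorder point, reproducing the example above.

mu_d = 100       # average demand per day
sigma_d = 20     # std. dev. of demand per day
lead_time = 9    # days
review = 0       # continuous review
z = 1.64         # ~95% service level

vp = lead_time + review           # vulnerable period
ss = z * sigma_d * math.sqrt(vp)  # safety stock
rop = mu_d * vp + ss              # reorder point
print(round(ss, 1), round(rop, 1))   # 98.4 998.4
```

Note the square-root effect in action: a 9-day vulnerable period needs only 3x (not 9x) the one-day safety stock.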

Important Nuance: Use Forecast Error, Not Demand Variability

If you have a good forecasting model, the safety stock should protect against forecast error (the residual uncertainty), not total demand variability. Better forecasts --> smaller σ --> less safety stock needed.

Cross-reference: Safety stock ties to Financial Accounting (inventory valuation methods -- FIFO, LIFO, weighted average) and to Marketing Management (service level targets are ultimately customer-facing promises).


Lesson 16: Inventory -- Perishable Goods (Critical Fractile)

Case: Fiore di Zucca

The Newsvendor Problem

For products with a short life cycle (fashion, perishable food, newspapers, seasonal toys), unsold inventory has little or no value after the selling period. You must decide how much to produce/order before knowing demand.

Two Key Costs

| Cost | Symbol | Definition |
| --- | --- | --- |
| Cost of underage | Cᵤ (or G) | Profit lost per unit if demand > supply (you could have sold it) |
| Cost of overage | Cₒ (or L) | Loss per unit if supply > demand (unsold unit you must dispose of) |

The Critical Fractile Formula

Fc = Cᵤ / (Cᵤ + Cₒ) = G / (G + L)

Decision rule: Produce quantity q* such that:

P(Demand ≤ q*) = Fc

Graphically: compute Fc, draw a horizontal line at that level on the cumulative demand distribution, and read off q* where it intersects.

How to Apply

  1. Calculate G (cost of underage): how much more you earn if you produce one more unit and sell it -- G = selling price − unit cost (the incremental margin)
  2. Calculate L (cost of overage): how much you lose if you produce one more unit and DON'T sell it -- L = unit cost − salvage value
  3. Compute Fc = G / (G + L)
  4. Read q* from the CDF of demand at probability Fc

Worked Example: Book Publisher

  • Cost to produce: 9 euro/book
  • Selling price: 30 euro/book
  • G = 30 - 9 = 21 euro
  • L = 9 - 0 = 9 euro (no salvage)
  • Fc = 21 / (21 + 9) = 0.70
  • From the CDF: P(D <= 1,100) = 0.70
  • Optimal print run: 1,100 books

Worked Example: Airline Revenue Management

  • Business class ticket: 2,100 euro; Economy: 700 euro
  • Unlimited economy demand
  • G = 2,100 - 700 = 1,400 (extra revenue from business vs. economy)
  • L = 700 (empty seat that could have been economy)
  • Fc = 1,400 / (1,400 + 700) = 0.667
  • From CDF of business demand: optimal seats to protect = 27

Key Intuition

  • High margin, low disposal cost (G >> L) --> Fc close to 1 --> produce aggressively (you make a lot per sale, lose little per unsold unit)
  • Low margin, high disposal cost (G << L) --> Fc close to 0 --> produce conservatively (each unsold unit is very costly)
  • Equal costs (G = L) --> Fc = 0.5 --> produce the median demand

Back-of-Napkin: Critical Fractile

  • If you make 3x more selling a unit than you lose not selling it: Fc = 3/(3+1) = 75% --> produce the 75th percentile of demand
  • If you make equal amounts: Fc = 50% --> produce the median
  • If you lose 2x more from overproduction than you gain from a sale: Fc = 1/(1+2) = 33% --> produce conservatively

Computing Expected Profit

The critical fractile tells you the optimal quantity but NOT the expected profit. To find expected profit:

  1. Discretize the demand distribution into branches (e.g., 5 equal-probability branches using quantiles)
  2. Calculate the profit for each demand branch given the optimal production quantity
  3. Weight by probability and sum
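A sketch of the procedure using the book-publisher numbers above; the five demand branches are hypothetical quantiles chosen so that P(D ≤ 1,100) ≈ 0.70:

```python
# Newsvendor: critical fractile, then expected profit via discretized demand.
# Book example: cost 9, price 30, no salvage. Demand branches are made up.

g = 30 - 9          # underage cost: margin lost per missed sale
l = 9 - 0           # overage cost: loss per unsold unit
fc = g / (g + l)
print(round(fc, 2))   # 0.7

# Five equal-probability branches at the 10/30/50/70/90 demand quantiles
branches = [500, 800, 1_000, 1_100, 1_400]
q = 1_100             # optimal run: the ~70th percentile of demand

def profit(q, demand):
    sold = min(q, demand)             # can't sell more than demand or stock
    return 30 * sold - 9 * q          # revenue minus production cost

expected = sum(profit(q, d) for d in branches) / len(branches)
print(expected)       # 17100.0
```

The expected profit (17,100) is well below the best-case profit of 21 × 1,100 = 23,100, because the optimal quantity deliberately accepts some unsold units in low-demand branches.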


Lesson 17: Supply Chain Coordination

The Bullwhip Effect

Small fluctuations in consumer demand get amplified as they propagate upstream through the supply chain. A 5% increase in retail demand might translate to a 40% spike in orders at the manufacturer.

Causes

  1. Demand signal processing: Each level forecasts based on orders received (not actual consumer demand)
  2. Order batching: EOQ logic creates lumpy orders
  3. Price fluctuations: Forward-buying during promotions
  4. Rationing/shortage gaming: Over-ordering when supply is scarce

Mitigation Strategies

  • Information sharing: Share point-of-sale data upstream (e.g., Walmart-P&G partnership)
  • Vendor-managed inventory (VMI): Supplier manages retailer's inventory
  • Everyday low pricing (EDLP): Reduces forward-buying
  • Smaller, more frequent batches: Reduces order lumpiness (requires reducing setup costs)
  • Collaborative forecasting: Joint demand planning across supply chain tiers

Cross-reference: Supply chain design connects to Operations Strategy (S3 -- "What Is the Right Supply Chain for Your Product?" by Fisher) and Marketing Management (distribution channel strategy).


Lessons 18-20: Operations Day (simulation) -- applies all concepts from the course in a competitive team simulation


Lesson 21: Integration -- Organizational Issues

Case: Bioco

Operations and Organization Design

Operations decisions are not made in a vacuum. They interact with:

  • Incentive systems -- workers optimizing their own metrics (utilization) may hurt system performance (throughput time)
  • Organizational silos -- marketing promises delivery dates that operations cannot meet
  • Culture -- continuous improvement requires psychological safety to report problems

Key Tensions

| Tension | Operations View | Other Department View |
| --- | --- | --- |
| Utilization vs. responsiveness | Keep machines busy | Deliver fast |
| Batch size | Large batches = efficient | Small batches = flexible |
| Inventory | Costly buffer | Sales safety net |
| Quality | Process control | Design innovation |

Lesson 22: Integration -- Service Excellence

Case: Etnia Barcelona

Service Operations Framework

Service excellence requires aligning:

  1. Service concept -- what value are you delivering?
  2. Service delivery system -- the operations pipeline for services
  3. Demand management -- matching variable demand to fixed capacity
  4. Customer management -- shaping expectations, managing waits

Perceived vs. Actual Waiting Time

Research shows that perceived wait time matters more than actual wait time. Occupied time feels shorter than unoccupied time. Uncertain waits feel longer than known waits. Unexplained waits feel longer than explained waits.

Operations as Competitive Advantage

When operations and strategy are perfectly aligned, operations becomes the source of sustained competitive advantage -- not just a cost center. This is the bridge to Operations Strategy (S3).


Lesson 23: Final Exam -- comprehensive, covers all course material


Quick Reference

Master Formula Sheet

| # | Formula | Variables | When to Use |
|---|---|---|---|
| 1 | CT = 1/T | CT = cycle time, T = throughput | Convert between throughput and cycle time |
| 2 | WIP = T × TT | WIP = work-in-process, TT = throughput time | Little's Law -- universal relationship |
| 3 | Capacity = Availability / Consumption | hours/day divided by hours/item | Capacity of a single processor |
| 4 | ρ = λ / (S × μ) | ρ = utilization, λ = arrival rate, μ = service rate, S = servers | Server utilization in queueing |
| 5 | Lq ≈ [ρ^√(2(S+1)) / (1 − ρ)] × [(CVₐ² + CVₛ²)/2] | Sakasegawa approximation | Average queue length |
| 6 | L = λ × W | L = avg in system, W = avg time in system | Little's Law for queues |
| 7 | Lq = λ × Wq | Lq = avg in queue, Wq = avg wait in queue | Little's Law for the queue portion |
| 8 | W = Wq + tₛ | total time = wait + service time | Linking queue and system times |
| 9 | Lₛ = S × ρ | Lₛ = avg number in service | From utilization |
| 10 | EOQ = √(2DS/H) | D = demand, S = setup cost, H = holding cost | Optimal batch/order size |
| 11 | SS = z × σ_d × √VP | z = service factor, σ_d = demand std dev, VP = vulnerable period | Safety stock calculation |
| 12 | VP = LT + RP | LT = lead time, RP = review period | Vulnerable period |
| 13 | ROP = (μ_d × VP) + SS | μ_d = avg demand/period | When to reorder |
| 14 | Fc = Cᵤ / (Cᵤ + Cₒ) = G / (G + L) | Cᵤ = underage cost, Cₒ = overage cost | Critical fractile -- newsvendor |
| 15 | Capacity (with batches) = M / (p + s/Q) | M = available time, p = unit time, s = setup, Q = batch size | Capacity with setup times |
| 16 | Qmin = s × C / (M − C × p) | C = required capacity | Minimum batch for target capacity |
| 17 | Avg wait (I/O) = Area / Number of items | area between cumulative curves | Input/output analysis |
| 18 | Avg queue (I/O) = Area / Time period | area between cumulative curves | Input/output analysis |
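Several of these formulas can be checked quickly in code. A minimal sketch of formulas 4-5, 7, 10-12, and 14 (the function names and example numbers are illustrative, not from the course):

```python
import math

def utilization(lam, mu, S):
    """Formula 4: rho = lambda / (S * mu)."""
    return lam / (S * mu)

def sakasegawa_Lq(lam, mu, S, cv_a=1.0, cv_s=1.0):
    """Formula 5: approximate average queue length."""
    rho = utilization(lam, mu, S)
    return (rho ** math.sqrt(2 * (S + 1)) / (1 - rho)) * (cv_a**2 + cv_s**2) / 2

def eoq(D, S_cost, H):
    """Formula 10: economic order quantity."""
    return math.sqrt(2 * D * S_cost / H)

def safety_stock(z, sigma_d, LT, RP=0.0):
    """Formulas 11-12: SS = z * sigma_d * sqrt(VP), with VP = LT + RP."""
    return z * sigma_d * math.sqrt(LT + RP)

def critical_fractile(Cu, Co):
    """Formula 14: optimal in-stock probability for the newsvendor."""
    return Cu / (Cu + Co)

# Example (illustrative numbers): 5 servers, 9 arrivals/hr, 2 services/hr each
rho = utilization(9, 2, 5)      # heavily loaded system
Lq = sakasegawa_Lq(9, 2, 5)     # expected number waiting in queue
Wq = Lq / 9                     # formula 7 rearranged: Wq = Lq / lambda
print(f"rho = {rho:.2f}, Lq = {Lq:.2f}, Wq = {Wq * 60:.0f} min")
print(f"EOQ(D=1000, S=50, H=2) = {eoq(1000, 50, 2):.0f} units")
```

Note how at ρ = 0.9 the queue is already several items long even with only moderate variability (CV = 1), consistent with the nonlinearity warning in the shortcuts below.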

Quick Mental Math Shortcuts

| Situation | Shortcut |
|---|---|
| Little's Law | WIP / Throughput = Time. Always. |
| Utilization > 85% | Queues will explode nonlinearly |
| EOQ sensitivity | Demand ×4 --> EOQ ×2 (square root) |
| Safety stock scaling | Lead time ×4 --> SS ×2 (square root) |
| Critical fractile = 0.5 | Produce the median demand |
| Fc > 0.5 | High-margin product -- produce above median |
| Fc < 0.5 | Low-margin/high-disposal-cost -- produce below median |
| Bottleneck removed | A new one always appears elsewhere |
| Capacity = Availability/Consumption | Use weighted avg consumption for a product mix |
| I/O area = triangle | 1/2 × base × height for cumulative curve gaps |
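The "utilization > 85%" shortcut follows from formula 5: for a single server with CVₐ = CVₛ = 1, Sakasegawa's expression reduces to Lq = ρ²/(1 − ρ). A minimal sketch (ρ values chosen for illustration):

```python
# Queue length vs. utilization for one server with CV_a = CV_s = 1:
# Sakasegawa's formula reduces to Lq = rho^2 / (1 - rho).
# Going from 50% to 90% utilization multiplies the queue by ~16x.
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    Lq = rho**2 / (1 - rho)
    print(f"rho = {rho:.2f}  ->  Lq = {Lq:6.1f}")
```

The last doubling of load (from 0.90 to 0.99) costs far more than the first (from 0.50 to 0.80): the denominator 1 − ρ drives the explosion.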

Glossary

| Term | Definition |
|---|---|
| Aggregate Planning | Deciding production levels across multiple periods to balance capacity costs, inventory costs, and demand |
| Availability | Total time a processor is available for work in a given period |
| Batch | A group of items processed together, sharing a single setup |
| Bottleneck | The processor with the lowest capacity in a system; determines system capacity |
| Bullwhip Effect | Amplification of demand variability as it moves upstream in a supply chain |
| Capacity | Maximum throughput of a processor or system (items/time) |
| Chase Strategy | Adjusting the production rate each period to match demand (variable workforce) |
| Coefficient of Variation (CV) | Standard deviation divided by mean; a unitless measure of relative variability |
| Critical Fractile (Fc) | The ratio Cᵤ/(Cᵤ + Cₒ) that determines the optimal production quantity for perishable goods |
| Cycle Time (CT) | Average time between successive completions; CT = 1/Throughput |
| EOQ | Economic Order Quantity; the batch size that minimizes total ordering + holding cost |
| Holding Cost (H) | Cost of keeping one unit in inventory for one year (includes capital, storage, obsolescence) |
| Input/Output Curve | Cumulative arrival and departure functions plotted on the same time axis |
| Items | Things that flow through and are processed by the operations system |
| Labor Efficiency | Ratio of labor content of output to total labor hours available |
| Lead Time (LT) | Time from placing an order to receiving the goods |
| Level Strategy | Producing at a constant rate regardless of demand fluctuations |
| Little's Law | WIP = Throughput × Throughput Time; holds for any stable system |
| Newsvendor Problem | Single-period inventory decision for perishable/seasonal goods |
| Pooling | Combining separate queues/resources to share variability and improve performance |
| Processors/Servers | Resources (people, machines, equipment) that perform activities on items |
| Reorder Point (ROP) | Inventory level at which a new order should be placed |
| Review Period (RP) | Time between consecutive inventory checks |
| Safety Stock (SS) | Extra inventory held to protect against demand uncertainty during the vulnerable period |
| Sakasegawa Approximation | Formula estimating average queue length as a function of utilization, servers, and variability |
| Setup Time | Time to prepare a processor for a new batch (independent of batch size) |
| Throughput (T) | Actual rate of items processed per unit of time |
| Throughput Time (TT) | Total time an item spends in the system from entry to exit |
| Utilization (ρ) | Fraction of available capacity actually used; ρ = λ / (S × μ) |
| Vulnerable Period (VP) | Time window that safety stock must cover: VP = LT + RP |
| WIP (Work-in-Process) | Items that have entered but not yet exited the system |
| Work Content | Total processor time consumed in producing one item |