Analysis of Business Problems (ABP)¶
Professors: Liinus Hietaniemi & Luis Palencia | ECTS: 4 | Evaluation: Reports 50%, Class participation 50%
Why This Matters¶
Every MBA course teaches you something about business. Corporate Finance teaches you to value cash flows. Marketing teaches you to segment customers. Operations teaches you to manage throughput. But none of those courses tells you what to do when you walk into work on a Monday morning and your boss says, "We have a problem -- fix it," and nobody can agree on what the problem actually is. That is the domain of Analysis of Business Problems, and it is arguably the most important course in the first year.
ABP is the operating system on which every other MBA course runs. It does not teach you a functional discipline; it teaches you how to think. Specifically, it teaches a six-step framework for solving "non-operational" problems -- the messy, ambiguous, high-stakes decisions that define managerial life. These are problems where there is no algorithm, no standard operating procedure, no textbook answer. They require judgment, and judgment can be trained. ABP is that training.
The course uses approximately twenty cases, one per session, but the cases are not the point. The point is the framework. Every case is an exercise in the same six steps: diagnose the problem, establish criteria, generate alternatives, analyze them, synthesize a decision, and build an action plan. Students apply this progression to increasingly complex situations -- from straightforward plant-location decisions in early sessions to ambiguous, multi-stakeholder dilemmas by the end. Through repetition, the framework becomes instinct. By the time you graduate, "What is the real problem here?" should be the first question that enters your mind in any professional situation.
How It All Connects¶
The six steps form a complete system, not a checklist. They are designed to be worked through in sequence, but the process is explicitly iterative: insights gained during analysis may force you back to redefine the problem; generating alternatives may reveal criteria you had not considered; and designing an action plan may expose that your chosen alternative is not implementable, sending you back to synthesis. The framework's power lies not in rigid adherence to a linear sequence but in the discipline of ensuring that every step receives deliberate attention before a decision is finalized.
The steps connect as follows. Diagnosis defines the question you are trying to answer. Criteria establish what "good" looks like -- the standards against which any solution must be measured. Alternatives are the possible answers to your question. Analysis is the bridge between raw data and informed judgment: it evaluates each alternative against each criterion, using both quantitative and qualitative methods. Synthesis is the decision itself -- not a mechanical calculation, but an exercise of judgment informed by everything that came before. And the Action Plan is where strategy meets reality: it translates your decision into a sequence of concrete steps with owners, deadlines, and measurements. Without the action plan, the decision is just a good intention.
Critically, ABP connects to every other course in the MBA. When you do a net present value (NPV) calculation in Corporate Finance, you are performing Step 4 (Analysis) for a financial criterion identified in Step 2. When you build a segmentation strategy in Marketing, you are generating alternatives (Step 3) evaluated against criteria like market attractiveness and competitive position (Step 2). When you assess capacity constraints in Operations, you are analyzing feasibility -- a criterion. When you wrestle with stakeholder interests in Managing People in Organizations, you are navigating the qualitative criteria and synthesis trade-offs that ABP formalizes. The six-step framework is the connective tissue of the MBA.
Lesson 1: Structured vs Unstructured Problems¶
Every manager solves problems. What distinguishes the problems that ABP addresses from the hundreds of routine decisions made every day is the degree of structure. Understanding this distinction is the entry point to the entire course.
An operational problem (also called a structured problem) is one that can be solved by following a known procedure. Replacing a burned-out light bulb, processing a standard purchase order, running a monthly payroll -- these are operational problems. They have clear inputs, established procedures, and predictable outputs. You can write a manual for them, and someone who follows the manual will arrive at the correct solution every time.
A non-operational problem (an unstructured problem) is fundamentally different. It cannot be solved by following a tutorial. It requires careful judgment and a solution that balances competing criteria. Non-operational problems arise in human-centered contexts where people, markets, and organizations are constantly changing. Their solutions may need to be revised over time as new information becomes available. There is no manual, no algorithm, and no guarantee that the "right" answer even exists in any absolute sense.
Non-operational problems share several characteristics that make them harder to solve. First, they are often not clearly defined -- what appears to be the problem on the surface frequently masks a deeper issue that only emerges through analysis. Second, they are multidimensional, with multiple interdependent facets. A decision to slash the price of a premium product, for example, affects sales volumes, financial performance, customer behavior, commercial strategy, and competitive dynamics simultaneously. Third, the consequences of these interdependencies unfold across different time horizons: a shift in customer behavior may happen immediately, while the impact of a triggered price war could persist for years.
The IESE technical notes distinguish three levels of non-operational problems based on their nature. Tactical problems relate to implementing strategy and are typically made by middle management -- for example, choosing a discount policy for a product line. Strategic problems relate to the overall direction of the organization; they are uncertain, structurally significant, and fall within the purview of senior management -- for example, deciding to diversify into a new business area. Adaptive problems have no clear solution and require continuous learning; they are faced by leaders operating in highly uncertain and volatile environments -- for example, determining the most effective digital marketing model for a retailer in a rapidly evolving landscape.
The six-step framework is designed specifically for non-operational problems. It provides the structure that these problems inherently lack. And the first step -- diagnosis -- is where everything begins.
Common mistakes at this stage: Students often assume they already know what kind of problem they are facing. The most dangerous error is treating a non-operational problem as an operational one -- reaching for a standard solution when the situation demands original thinking. Conversely, over-complicating a genuinely straightforward decision wastes time and resources.
Cross-reference: The distinction between operational and non-operational problems maps directly onto Operations Management's concept of routine vs. non-routine processes. It also connects to Competitive Strategy's treatment of strategic vs. tactical decisions.
Lesson 2: Diagnosis -- What is the Problem?¶
Defining the problem is the first step of the six-step framework, and it is the step most often done poorly. Getting the diagnosis wrong means that every subsequent step -- no matter how rigorous -- will produce the wrong answer. As the technical note puts it: "incorrect problem definitions can lead to wasted effort and unsatisfactory outcomes." You can build a flawless financial model, generate creative alternatives, and design an elegant action plan, but if you are solving the wrong problem, none of it matters.
What Is a Problem?¶
A problem is a situation, question, or condition that requires a solution, decision, or action. It implies the need or desire to change something. Before we recognize a problem directly, we often encounter symptoms -- unexpected signals that capture our attention and trigger concern. Perhaps a key customer stops reordering, or employee turnover spikes in one department. These signals act as warnings. When enough of them accumulate, we begin to identify the existence of a problem.
The critical discipline at this stage is distinguishing the problem from its symptoms. Symptoms are surface-level indicators. Addressing them will not resolve the underlying issue. If sales are declining (symptom) because the product portfolio is outdated (problem), launching a promotional campaign addresses the symptom but leaves the root cause untouched. The decline will continue.
How to Define the Problem¶
The technical notes offer six practical guidelines for defining problems accurately:
Write it down. Putting the problem into words forces you to visualize it and gain perspective. You may have several versions or even multiple problems -- write them all down. Some may be different expressions of the same issue; others may be distinct problems that require prioritization.
Differentiate problems from facts and symptoms. "100 grams of dark chocolate contain 600 kilocalories" is a fact, not a problem. "Hunger in the world" is also a fact, but framed this broadly, it is unmanageable. To be useful, problems must be framed in actionable terms. And symptoms -- like declining revenue or low morale -- are not problems themselves but indicators of deeper issues.
Ask whose problem it is. Problems are not abstract. They affect specific people. The same situation can be a problem for one person but not for another, depending on their role, authority, or perspective. A clear definition ties the problem to a specific person or group who has the power and responsibility to act.
Frame the problem as a question. "The car does not work" is a statement. "How do we fix the car for Julia?" is a problem. Context matters enormously: whether Julia is on vacation (urgent) or planning to sell the car (perhaps not urgent) changes both the definition and the appropriate solution.
Take a step back. Sometimes we define problems too narrowly. Examining the broader context can offer new insights and better definitions. A complaint about office temperature might actually be about inadequate workspace design, or about employees feeling unheard.
Make it iterative. Defining the problem may require multiple revisions. Ask: Why do we have this problem? Whose problem is it? As new information becomes available -- especially during the analysis step -- update your definition accordingly.
The 5 Whys Method¶
The 5 Whys is a practical technique for drilling past symptoms to root causes. You ask "Why?" repeatedly -- five times, or as many as necessary -- until you reach the fundamental issue.
Consider this example from the technical note, traced level by level:
- Why #1: Lack of sales growth. Why? Because the product portfolio is outdated.
- Why #2: The product portfolio is outdated. Why? Because we lack new products.
- Why #3: We lack new products. Why? Because our research and development (R&D) department is underperforming.
- Why #4: The R&D department is underperforming. Why? Because the research director lacks key capabilities.
- Root problem: How do we develop or replace the research director?
This is a cascade of problems, where each level is deeper than the last. The goal is to keep digging until you reach the cause you can actually act on. Notice how different the root problem ("How do we develop or replace the R&D director?") is from the surface symptom ("Sales are not growing"). If you stop at Level 1, you might restructure the sales force -- an expensive intervention that would not solve anything.
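The cascade can also be pictured as a simple chain of cause-and-effect statements, where each entry answers "Why?" for the one before it. The sketch below is a toy illustration only -- the statements and the helper function are invented for this example, not part of the technical note:

```python
# Toy illustration of a 5 Whys cascade. Each entry is the answer to
# "Why?" asked of the entry before it; all content is hypothetical.
cascade = [
    "Lack of sales growth",
    "The product portfolio is outdated",
    "We lack new products",
    "The R&D department is underperforming",
    "The research director lacks key capabilities",
]

def walk_whys(cascade):
    """Print each why-level and return the deepest (actionable) cause."""
    for level, (symptom, cause) in enumerate(zip(cascade, cascade[1:]), start=1):
        print(f"Why #{level}: {symptom}? -- Because: {cause}")
    return cascade[-1]

root = walk_whys(cascade)
print(f"Root problem to act on: {root}")
```

The useful property of the chain is that only the last element is actionable: stopping at any intermediate level produces an expensive intervention aimed at a symptom.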
Types of Problem Definitions to Avoid¶
The technical notes identify several categories of poorly defined problems:
- A symptom masquerading as a problem -- addressing it will not resolve the underlying issue.
- An ill-defined problem -- one that fails to capture the core issue, often leading to solutions that solve the wrong thing.
- An aspirational problem -- a goal not grounded in actionable terms. "We need to be the best company in our industry" is an aspiration, not a problem definition.
- A solution in search of a problem -- an attractive idea that does not align with the firm's actual needs. ("We should implement blockchain" is a solution, not a problem statement.)
Who Should Define the Problem?¶
An important consideration is who should be involved in defining the problem. Should you do it alone, or is collaboration needed? Reflect honestly on your capabilities and knowledge: Are you the right person to define the problem accurately? Could partnering with others improve the outcome? The technical note emphasizes that answering these questions honestly requires humility -- a quality that separates effective managers from those who compound problems by refusing to acknowledge the limits of their own perspective.
Common mistakes at this stage: Jumping straight to solutions without spending adequate time on diagnosis. Confusing symptoms with root causes. Framing the problem so broadly that it becomes unmanageable, or so narrowly that the real issue is excluded. Failing to ask "Whose problem is this?" and therefore proposing solutions that the decision-maker has no authority to implement.
Practical tip: In case discussions, spend the first few minutes of your preparation writing down the problem as a question. If you cannot phrase it as a clear question addressed to a specific decision-maker, you have not finished diagnosing.
Cross-reference: Diagnosis connects directly to Leadership (understanding stakeholder perspectives), to Business Ethics (whose problem matters morally), and to Competitive Strategy (distinguishing strategic problems from tactical ones).
Lesson 3: Criteria -- What Matters?¶
Once the problem is defined, the next question is: What standards will we use to evaluate possible solutions? These standards are called criteria, and selecting the right ones is what separates a disciplined decision from a gut feeling.
What Is a Criterion?¶
A criterion is a standard or principle used to evaluate, compare, or distinguish between alternatives when making a decision. It defines what is important in assessing options and serves as a guiding factor in the decision-making process. Criteria can be understood as constraints that imply different consequences for different alternatives.
This last point is essential and frequently misunderstood: a criterion must behave differently toward the alternatives. If a factor affects all alternatives equally, it is not a criterion -- it is a contextual feature. For example, the current economic situation of a country is a contextual feature because it affects all alternatives within that country in roughly the same way. Similarly, a deadline is typically contextual because it is invariant across alternatives. However, if the contemplated alternatives require substantially different implementation times, then "time to implement" becomes a relevant criterion.
The advantage of distinguishing criteria from contextual features is practical: context can be taken as given, allowing decision-makers to focus their energy on the factors that actually differentiate the alternatives.
The notes also draw an important distinction between criteria and elements of analysis. "Competition" is generally not a criterion but rather an element of analysis -- a lens through which you examine criteria like profitability, growth, or risk. However, boundaries are not always clean: "competition" could be a criterion if the decision is specifically about whether to ally with or compete against a rival. The technical note advises pragmatism: make the distinction that serves the analysis, and do not let taxonomic debates become an end in themselves.
Criteria vs. Alternatives: Which Comes First?¶
In theory, criteria should be established before alternatives, because this keeps the decision-making process objective. If alternatives are chosen first, there is a natural tendency to justify them retroactively by selecting favorable criteria, leading to a biased process. Setting criteria first ensures that priorities drive the decision rather than subjective preferences.
In practice, alternatives and criteria are often generated iteratively. Developing criteria can lead to the discovery of new alternatives, and discussing alternatives can reveal criteria that had not been considered. This iterative dynamic is not a flaw -- it is how the framework is designed to work.
There are legitimate exceptions where alternatives should come first. In creative and uncertain situations, defining criteria too early may restrict innovation -- when you do not yet know what feasible alternatives exist, a stringent set of criteria could be a limiting factor. In urgent or constrained decisions requiring quick action, listing feasible alternatives first allows for rapid response. And when proven solutions already exist, starting with alternatives allows for adaptation before setting selection criteria.
Criteria Should Be Objective¶
All decision-makers develop mental models to interpret reality, and these models naturally guide them toward specific criteria. There is a trap here: selecting criteria based on individual preferences or pre-existing inclinations creates a false sense of objectivity. Weighting criteria by personal importance provides an illusion of rigor while actually embedding bias into the process.
The antidote is awareness and collaboration. Teamwork helps mitigate the risk of overlooking crucial factors. The technical note also suggests that large language models (LLMs) and artificial intelligence (AI) can be used to identify criteria that you might not have considered on your own, helping uncover blind spots and implicit groupings in your thinking.
Identifying Relevant Criteria¶
Certain common criteria underlie most business decisions. Economic criteria -- cost, return on investment, financial feasibility -- are almost always relevant (though they may not always be the most important). Criteria related to technical knowledge and skills assess feasibility. Social and relationship considerations -- customer satisfaction, employee morale, stakeholder impact -- must be weighed. Ethical criteria -- fairness, responsibility -- ensure that decisions align with broader values. And risk considerations cut across all other categories, as uncertainties in financial, technical, social, and ethical dimensions must be carefully assessed.
A useful organizing framework for generating criteria is the three-question test:
- Is it a good business? This triggers economic and strategic criteria -- profitability, growth potential, market attractiveness.
- Can we do it? This uncovers capacity-related criteria -- physical assets, financial resources, talent, experience, and operational bottlenecks.
- Do we want to do it? This surfaces relationship, ethical, and personal considerations -- alignment with values, stakeholder impact, and the decision-maker's own aspirations.
Keeping the List Manageable¶
There is a tension between completeness and usability. The technical note is emphatic: good practice suggests a maximum of four or five criteria. Increasing this number does not make the analysis more complete; it makes it more laborious and harder to synthesize. The goal is not "criteria mining" -- making criteria generation an end in itself. It is about being relevant, not exhaustive.
The recommended approach: Brainstorm broadly. Categorize and group similar factors to reduce redundancy. Apply the Pareto principle -- focus on the 20% of criteria likely to drive 80% of the decision's impact. Eliminate criteria that give very similar answers for each alternative (these are contextual features in disguise). Validate with stakeholders to ensure no key factor has been overlooked.
Quantitative vs. Qualitative Criteria¶
Decisions almost always involve both. Quantitative criteria can be measured numerically -- cost, profit, efficiency -- and allow precise comparisons using models such as NPV or cost-benefit analysis. Qualitative criteria require more judgment -- brand reputation, ethical considerations, cultural fit -- and are harder to analyze but no less relevant. The technical note stresses that the difficulty of measuring qualitative criteria does not diminish their importance. Both types must be incorporated into the decision-making framework.
Common mistakes at this stage: Generating too many criteria and getting lost in analysis. Confusing contextual features with criteria. Weighting criteria by personal preference and mistaking this for objectivity. Neglecting qualitative criteria because they are harder to quantify. Failing to distinguish criteria from elements of analysis.
Practical tip: After generating your criteria list, apply this test to each one: "Does this criterion give a meaningfully different answer for Alternative A versus Alternative B?" If not, it is context, not a criterion. Remove it and simplify your analysis.
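The test in the tip can be automated whenever criteria have been scored against the alternatives. In the minimal sketch below, all criterion names and scores are hypothetical; any criterion whose scores are (near-)identical across alternatives is flagged as a contextual feature and dropped:

```python
# Drop criteria that score (near-)identically across all alternatives --
# by the note's definition, those are contextual features, not criteria.
# All names and scores below are hypothetical.
scores = {
    "profitability":     {"A": 8, "B": 3, "C": 6},
    "time to implement": {"A": 5, "B": 5, "C": 5},  # invariant -> context
    "cultural fit":      {"A": 7, "B": 2, "C": 7},
}

def real_criteria(scores, tolerance=0):
    """Keep only criteria whose scores actually differ across alternatives."""
    kept = {}
    for criterion, by_alternative in scores.items():
        values = list(by_alternative.values())
        if max(values) - min(values) > tolerance:
            kept[criterion] = by_alternative
    return kept

print(list(real_criteria(scores)))  # "time to implement" is filtered out
```

Raising `tolerance` above zero also catches the note's weaker case: criteria that give "very similar answers for each alternative."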
Cross-reference: The three-question framework ("Is it a good business? Can we do it? Do we want to do it?") maps directly onto Corporate Finance (valuation and returns), Operations Management (capacity and feasibility), and Business Ethics (should we do it). Criteria selection is also central to Decision Analysis, where probability-weighted outcomes formalize the quantitative criteria.
Lesson 4: Alternatives -- What Can We Do?¶
An alternative is one of the possible solutions to a problem. If diagnosis is the foundation and criteria are the yardstick, alternatives are the creative engine of the framework. You cannot choose a good solution if you have not generated one. There is no repository of alternatives sitting on a shelf -- each set of alternatives must be custom-built for the specific problem at hand. This requires deep understanding, broad knowledge, and genuine creativity.
Preparing the Search¶
Before jumping into generating alternatives, ask yourself whether you are well equipped for the search. Do you have enough information? The technical note uses the analogy of a football coach preparing for a match: regardless of how good his team is, if he lacks information about the rival team, the field, and the weather, the result will very likely be unfavorable. The quality of alternatives depends directly on the knowledge accumulated about the problem's landscape.
Several questions should be addressed before beginning:
Who is going to generate the alternatives? Should it be you alone, or a team? Who brings the skills and knowledge needed? Acknowledging that other employees or third parties may contribute significantly requires recognizing the value of other people's skills and having the humility to accept their input.
How many alternatives? After thoroughly studying the problem, having fewer than three alternatives could imply that there is still work to do. Three is a practical minimum that forces you to think beyond the obvious binary of "do it" or "don't do it."
When should you stop searching? As in any decision process, timing is part of the decision. Excessive time spent searching for alternatives has a cost, and insufficient time risks missing the best option. Weigh the cost of continued search against the expected benefit of finding a better alternative.
Be prepared to iterate. After analyzing the alternatives (Step 4), new or refined alternatives may emerge. The process is iterative by design.
Characteristics of Good Alternatives¶
A good alternative has four essential characteristics. It must be effective -- it actually solves the problem. It must be efficient -- it solves the problem without excessive waste of resources or collateral damage. It must be well-defined -- clear enough that people can understand and evaluate it. And it must be realistic and workable -- actionable in the real world, not just elegant on paper.
How to Design Alternatives¶
There is no single method. It depends on the type of problem, the available resources, and the people involved. The technical note offers practical advice:
Write them down. Even if they are undeveloped or seem inconsistent. Writing forces you to articulate ideas in words, and in this process you often discover shortcomings and generate new approaches. Without writing, it is very difficult to materialize a good set of alternatives.
Take your time. This is a holistic process. Your mind needs time to process inputs, elaborate, imagine, and discard. Starting the process several times can be productive.
Write as many as possible. Some will not survive detailed analysis, but they can enrich the surviving ones. The process is generative.
Consider inaction. "Do nothing" is always an alternative, and sometimes it is the right one. But it should be a deliberate choice, not a default.
Study precedents. Search for and analyze how people have solved similar problems. This is not about copying solutions but about expanding your thinking.
Consider delay. Can some alternatives be postponed? Deferring a decision sometimes creates a genuinely new option.
Frame and reframe. Revisit and reformulate alternatives as many times as needed.
Creativity Methodologies¶
The technical note presents several structured approaches to fostering creativity:
Divergent thinking: Set a rule -- generate at least ten solutions before evaluating any of them. Deliberately include strange or unrealistic ideas.
Provocative questions (lateral thinking, after Edward de Bono): Challenge assumptions. "Do we really need X for this to work?"
Decomposition: Break the problem into pieces and design alternatives for each piece.
Work the rules: List all the constraints that define the problem. Can you find ways around them? If so, what new solutions emerge?
Role-storming: Instead of brainstorming as yourself, adopt the perspective of different stakeholders -- customer, supplier, regulator -- and think as they would.
Time travel thinking: Imagine solving the problem in a different place, country, or era, using different technologies.
The technical note also describes four formal methodologies:
- Mind mapping: for problems with very little structure and no previous experience.
- Brainstorming: similarly suited to unstructured problems.
- SCAMPER -- Substitute, Combine, Adapt, Modify, Put to Another Use, Eliminate, Reverse: for remodeling and reshaping existing solutions.
- Alternative problem-solving techniques: reframing the problem, bypassing it entirely, turning it into an advantage, applying solutions from unrelated fields, or eliminating the need for a solution altogether.
Common mistakes at this stage: Settling for only two alternatives (usually "do it" or "don't"). Generating alternatives before adequately understanding the problem. Letting feasibility concerns kill creative thinking too early -- evaluation comes in the next step, not this one. Failing to consider "do nothing" as a legitimate alternative.
Practical tip: In case preparation, force yourself to articulate at least three distinct alternatives before analyzing any of them. If two of your alternatives are really variations of the same idea, you do not have three yet.
Cross-reference: Alternative generation draws on Marketing Management's concept of segmentation (different solutions for different customer groups), Operations Management's concept of process redesign, and Competitive Strategy's notion of strategic options. The creativity methodologies also connect to the innovation literature covered in Entrepreneurship 1.
Lesson 5: Analysis -- What Do the Numbers Say?¶
Analysis is Step 4 of the framework, and it serves as the bridge between data and decision. Its purpose is to reduce uncertainty, clarify the consequences of each alternative, and provide a solid basis for comparison. Without analysis, the choice among alternatives will remain, at best, intuitive or superficial. With analysis, the decision-maker's judgment is enriched with evidence and structure.
What Analysis Means in This Context¶
"Analysis" in ABP should not be understood merely as numerical calculations. It encompasses both quantitative and qualitative elements aligned with the criteria defined in Step 2. The analyst's work is translated into arguments, figures, and assessments that support a well-founded recommendation -- whether to a committee, a superior, or to oneself.
Analyzing alternatives means examining the characteristics of each one and understanding their implications: how they affect profitability, employee well-being, customer satisfaction, environmental impact, or any other criterion. The dimensions used to characterize and evaluate an alternative are precisely the criteria established earlier.
Types of Analysis¶
The technical note catalogs analyses commonly used in business situations:
Quantitative analysis includes preparation of actual or projected financial statements (income statements, cash flow statements, balance sheets), financial analysis (break-even analysis, marginal analysis, solvency and capital structure ratios, economic valuations), and operations analysis (maximum capacity, bottlenecks, required inventory).
Qualitative analysis includes impact on employees (employment, motivation, interdepartmental relations, trust), strategic analysis (alignment with vision, mission, values, and strategy), commercial analysis, and ethical analysis (legitimacy of a decision from a moral standpoint).
Both quantitative and qualitative includes risk assessment (scenario development, sensitivity analysis, probability-impact matrices), feasibility assessment (organizational capacity to implement an alternative -- operations, technology, culture), and business impact analysis (impact on customers, reputation, suppliers, and other stakeholders).
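Of the quantitative tools listed, break-even analysis is among the simplest to sketch: the break-even volume is fixed costs divided by the unit contribution margin (price minus unit variable cost). The figures below are hypothetical, chosen only to make the mechanics concrete:

```python
def break_even_units(fixed_costs, price, variable_cost):
    """Units at which total contribution exactly covers fixed costs."""
    contribution = price - variable_cost  # margin earned per unit sold
    if contribution <= 0:
        raise ValueError("price must exceed unit variable cost")
    return fixed_costs / contribution

# Hypothetical figures: EUR 120,000 fixed costs, EUR 50 price, EUR 30 unit cost
units = break_even_units(120_000, price=50, variable_cost=30)
print(units)  # 6000.0 -- sell 6,000 units to break even
```

As Step 4's quality criteria below stress, a figure like this is only useful if it is tied to a criterion established in Step 2 (say, financial feasibility) and compared across alternatives.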
Guiding Criteria for Sound Analysis¶
Regardless of the specific type, sound analysis must satisfy five quality criteria:
Relevance. Each piece of analysis must be directly related to the established evaluation criteria. Irrelevant analysis confuses and distracts.
Technical rigor. Calculations must be correct and inferences logically sound. However, "correct" should not be confused with "precise." In many cases, only a reasonable quantitative approximation is possible. The resulting figures may lack precision but still be sufficiently relevant to differentiate between alternatives. The technical note gives an illustrative example: a business owner receives a EUR 750,000 offer for his company. An analysis showing that the company's value falls between EUR 350,000 and EUR 500,000 is imprecise -- but it is more than sufficient for the owner to conclude that the offer is highly attractive from an economic standpoint.
Transparency. The analysis must be easy to follow, understand, and replicate. Clarity is key.
Comprehensiveness. It must cover all the essential criteria bearing on the decision.
Balance. Each alternative must be analyzed with comparable depth. Analyzing one option in detail while treating another superficially undermines fair comparison. Imbalance may be intentional (when the analyst subjectively favors one alternative) or unintentional (due to lack of diligence or cognitive biases).
The Quantifiable Bias¶
A common and dangerous mistake at this stage is focusing on aspects that are easy to quantify while leaving out those that, although harder to measure, are strategically critical. It is common to dismiss conclusions based on non-quantifiable factors as "unprofessional" or "unsubstantiated." This bias leads to incomplete analyses.
The technical note provides a vivid example: in an internationalization decision, the financial analysis may be detailed and conclusive, but if the company's ability to manage a business abroad or the institutional stability of the target country is not taken into account, the analysis is incomplete and carries a high risk of leading to poorly grounded decisions.
The recommended approach is to begin with quantitative criteria -- the numbers, the financials, the projections -- and then move to qualitative criteria. This sequence leverages the precision of numbers while ensuring that judgment-dependent factors receive their due attention.
Limitations and Sensitivity¶
All analyses involve assumptions. Good practice demands making these assumptions explicit and, for quantitative data, conducting sensitivity analysis to see how conclusions change in response to reasonable variations in key variables. If a business depends on volatile raw material costs, for example, a sensitivity analysis can assess how price fluctuations affect profitability and help identify the available margin for maneuver.
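As a concrete illustration, a sensitivity analysis of the raw-material example can be sketched in a few lines of Python. All figures here (price, volume, unit costs) are invented for illustration and are not taken from the technical note:

```python
# Illustrative sensitivity analysis: how does annual profit respond to
# swings in a volatile raw-material cost? All figures are hypothetical.

def annual_profit(material_cost_per_unit, price=50.0, units=10_000,
                  other_variable_cost=15.0, fixed_costs=150_000.0):
    """Simple contribution-margin profit model."""
    contribution = price - material_cost_per_unit - other_variable_cost
    return contribution * units - fixed_costs

base_cost = 20.0
for swing in (-0.20, -0.10, 0.0, 0.10, 0.20):   # +/- 20% cost scenarios
    cost = base_cost * (1 + swing)
    print(f"material cost {cost:6.2f}  ->  profit {annual_profit(cost):10,.0f}")
```

With these made-up assumptions the business breaks even at the base cost, and a 20% cost swing moves profit by 40,000 in either direction. That range is the "margin for maneuver" the note refers to: the decision-maker sees immediately which cost levels the business can survive.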
For qualitative factors, defining assumptions is harder, but constructing hypothetical scenarios is possible. The technical note suggests modeling alternative scenarios involving different degrees of, say, talent loss, and estimating their impact on operational capacity.
Comparative Analysis¶
A fundamental principle: the analysis should not examine each alternative in isolation but rather compare them against one another based on the defined criteria. This requires a structured approach. Each alternative is evaluated against each criterion, and then the interactions among criteria are analyzed.
SWOT analysis (strengths, weaknesses, opportunities, threats) is a commonly used structuring tool. Another is the weighted evaluation matrix, which assigns a weight to each criterion and scores each alternative. The technical note presents an example but immediately cautions: the results of a weighted matrix should not be taken as a definitive answer. The matrix risks oversimplifying how criteria interact and creates a false sense of certainty. It is meant to support the decision-maker's critical judgment -- not replace it.
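A weighted matrix is easy to build, and seeing one makes the caution concrete. The sketch below uses invented criteria, weights, alternatives, and scores:

```python
# Weighted evaluation matrix (illustrative data). Scores are 1-10; weights
# sum to 1. The totals are meant to support judgment, not replace it.

criteria_weights = {"economics": 0.4, "feasibility": 0.3,
                    "strategic fit": 0.2, "ethics": 0.1}

scores = {  # alternative -> score per criterion (hypothetical)
    "expand locally":   {"economics": 7, "feasibility": 9, "strategic fit": 5, "ethics": 8},
    "enter new market": {"economics": 8, "feasibility": 5, "strategic fit": 9, "ethics": 7},
    "do nothing":       {"economics": 4, "feasibility": 10, "strategic fit": 3, "ethics": 8},
}

def weighted_total(alt_scores):
    return sum(criteria_weights[c] * s for c, s in alt_scores.items())

for alt in sorted(scores, key=lambda a: -weighted_total(scores[a])):
    print(f"{alt:18s} {weighted_total(scores[alt]):.2f}")
```

Note how the top two totals land within a tenth of a point of each other: a small shift in weights flips the ranking, which is exactly why the technical note warns against treating the matrix as a definitive answer.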
Effectiveness and Efficiency¶
Two metrics that can be applied across all criteria are effectiveness (the degree to which an alternative achieves the goal of solving the problem) and efficiency (the level of resources required to implement the solution). These concepts provide a consistent lens for comparing alternatives on any criterion.
The Role of the Team¶
The analysis team -- the individuals who contribute to the analysis and, in some cases, determine the best alternative -- is an important factor. Assembling the right team is essential to making a sound decision. Diverse expertise catches blind spots that individual analysts miss.
Common mistakes at this stage: Starting with conclusions and working backward to justify them. Deciding on a solution before conducting the analysis (gut-feeling bias). Giving disproportionate weight to a single criterion without realizing it. Analyzing one alternative deeply while treating others superficially. Ignoring qualitative criteria because they are harder to model.
Practical tip: At Step 3, resist the urge to jump to solutions. Force yourself to quantify. Build the income statement. Calculate the break-even point. Run the sensitivity analysis. Then -- and only then -- layer in the qualitative assessment. This discipline prevents the common pattern of rationalizing a predetermined conclusion.
Cross-reference: Analysis is where ABP draws most directly on other courses. Financial statement analysis comes from Financial Accounting. NPV, break-even, and valuation come from Corporate Finance. Capacity and bottleneck analysis come from Operations Management. Market and competitive analysis come from Marketing Management and Competitive Strategy. Ethical analysis comes from Leadership (the ethical lenses). Decision Analysis contributes probability frameworks and expected-value calculations.
Lesson 6: Synthesis -- What Should We Do?¶
Synthesis is the moment of decision. After analyzing all alternatives according to the criteria, you produce an overall evaluation and select a solution to the problem. This step is distinct from analysis: in analysis, you examine each option individually to understand what it achieves and at what cost. In synthesis, your objective is to compare the alternatives and select the best one for addressing the problem at hand.
The Nature of Synthesis¶
Most of the time, the synthesis comes naturally after a thorough analysis. The analysis presents the pros and cons of each alternative, and prior work (Steps 1 through 4) helps discard some options, narrowing the decision down to two or at most three alternatives. If the earlier steps have been done well, one of these remaining alternatives will likely represent a good decision.
But "good" does not mean "perfect." A perfect fit between the problem and the decision rarely exists. This is a crucial insight that the technical note emphasizes repeatedly. The chosen solution will have drawbacks. It will not score highest on every criterion. The action plan (Step 6) exists precisely to smooth the negative sides of the chosen alternative and enhance its positive aspects.
Boundary Conditions¶
When choosing an alternative, the decision must fulfill certain minimum requirements. These are called boundary conditions -- the baseline conditions that make a solution workable. Peter Drucker's formulation is cited in the technical note: "To make effective decisions, you need to know the boundaries. You need to know what is good in terms of a continuum -- from the minimal solution to the ideal. Most importantly, you should not depend on a decision that requires everything to go right."
Once you have identified all alternatives that fulfill the boundary conditions, the selection process focuses on finding the alternative that does reasonably well across all criteria and excels on the most relevant ones. This is not about optimization in a mathematical sense; it is about finding a robust, defensible choice.
Why Weighted Scoring Is Insufficient¶
If you come from an engineering or quantitative background, you may be tempted to assign each criterion a weight, score each alternative, multiply, and choose the highest total. The technical note warns explicitly against relying on this approach. The reason: some variables cannot be easily measured, and even when they can, they cannot simply be added to other variables. Weighting provides a false sense of objectivity by reducing a multidimensional judgment to a single number, obscuring the interactions and trade-offs among criteria.
The concept of "qualculation" -- a term coined by sociologist Michel Callon -- captures what synthesis actually requires. Qualculation is decision-making that blends both quantitative (numerical, measurable) and qualitative (judgmental, value-based) elements. Pure calculation -- choosing the supplier with the lowest unit cost, or approving a loan based solely on credit score -- is appropriate for operational problems. Non-operational problems demand qualculation: a luxury brand prices a handbag at $2,000 not just because of margins but because of perceived exclusivity and brand storytelling. A company retains a low-performing employee because that person boosts team morale and mentors others. A firm acquires a competitor not for the highest EBITDA but for strategic alignment and innovation potential.
Who Makes the Decision?¶
Synthesis also requires deciding who decides. Individual decision-makers are best suited when speed is essential, accountability is clearly defined, or the decision falls within one person's scope of expertise. Collaborative decisions are better when the problem is complex or cross-functional, when buy-in from multiple stakeholders is needed, or when diverse perspectives genuinely improve the quality of the outcome.
The technical note introduces the RACI framework as a useful tool: Responsible (who carries out the work), Accountable (who owns the outcome and has the final say), Consulted (who contributes input), and Informed (who needs to be updated).
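A RACI assignment can be kept as simple structured data and sanity-checked against the usual convention that every action has at least one Responsible party and exactly one Accountable owner. The actions and role names below are hypothetical:

```python
# Illustrative RACI assignment for a hypothetical rollout plan.
# Convention assumed here: one Accountable owner per action, at least
# one Responsible party, any number Consulted or Informed.

raci = {
    "draft communication plan": {"R": ["comms lead"], "A": "project sponsor",
                                 "C": ["HR", "legal"], "I": ["all staff"]},
    "pilot in one region":      {"R": ["regional manager"], "A": "COO",
                                 "C": ["finance"], "I": ["board"]},
}

def check_raci(assignments):
    """Flag actions missing a Responsible party or a single Accountable owner."""
    problems = []
    for action, roles in assignments.items():
        if not roles.get("R"):
            problems.append(f"{action}: no one Responsible")
        if not isinstance(roles.get("A"), str):
            problems.append(f"{action}: needs exactly one Accountable owner")
    return problems

print(check_raci(raci))  # an empty list means the assignment is well-formed
```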
Evaluating Your Recommendation¶
Before finalizing, test your decision against these questions:
Does the recommended solution actually address the problem as diagnosed? Is there data or experience to support the solution? Does it incorporate the main criteria used in the analysis? Have you developed a plan to reduce the risks arising from the selected alternative? Have you considered who will implement it? Have you factored financial criteria into the analysis? Have you considered the different parties affected -- employees, shareholders, customers, suppliers? Have you worked on a viable action plan?
Qualities of a Good Decision¶
The technical note identifies five dimensions of a sound decision. It should be useful (it largely solves the problem), consistent (it aligns with the style, policies, and goals of the decision-maker and organization), reasonable (it applies good judgment), feasible (it can be implemented), and convincing (its rationale can be accepted and understood by the organization).
Common mistakes at this stage: Over-relying on weighted scoring matrices. Seeking the "perfect" answer instead of a robust one. Ignoring boundary conditions. Making the decision before completing the analysis (confirmation bias). Failing to consider implementation feasibility. Treating synthesis as a mathematical exercise rather than an act of judgment.
Practical tip: After arriving at your recommendation, apply a simple stress test: "If this decision goes wrong, what is the most likely reason?" If you cannot answer that question, you have not thought hard enough about the risks. Your action plan should address that most-likely failure mode directly.
Cross-reference: Synthesis connects to Decision Analysis (expected values and decision trees as inputs to judgment), Corporate Finance (when financial criteria dominate), Business Ethics (when qualitative criteria involve moral trade-offs), and Leadership (stakeholder analysis and the politics of organizational decisions). The RACI framework appears again in Operations Management and in leadership contexts.
Lesson 7: Action Plan -- How Do We Do It?¶
Knowing what to do is not enough. Many decisions never get beyond good intentions because no one has taken the time to think about how to carry them out. The action plan is the sixth and final step of the framework, and it is where strategy meets reality. Its purpose is to translate a strategic choice into a sequence of actions with clearly defined responsible parties, deadlines, and resources that ensure implementation and enable evaluation of results.
As Eisenhower said (cited in the technical note): "Plans are useless, but planning is essential." Few plans survive first contact with reality intact, but the effort of planning -- thinking through options, anticipating difficulties, ordering priorities -- provides critical insights and helps identify risks that were not detected during analysis. Those who have planned are better prepared, even when the plan changes.
Design Before Action¶
A good action plan is not designed in a vacuum. Before defining specific actions, pause and ask: Is the plan consistent with the other steps in the decision-making process? What is the context? What risks have been identified? What are the shortcomings? Who can we count on? How urgent is it? Who should design the plan? Who will implement it?
An important early decision is who designs the plan and how. Should the guidelines come from above, as a clear directive? Or should those who will execute the plan participate in its design? There is no universal answer. Sometimes speed demands top-down direction; other times, involving implementers is essential to secure their commitment.
The plan must be consistent with the criteria that guided the decision. If speed was a key criterion, you cannot afford a slow implementation. If prudence was paramount, there is no room for a risky deployment. If profitability was the primary concern, implementation must use cost-effective solutions. A plan that contradicts the criteria that inspired it is born weak.
Organizational culture is another critical factor. How does the organization respond to directives? Is it disciplined or does it prioritize individual initiative? Is leadership accepted or questioned? What is the reaction to mistakes? If a plan does not account for the organization's culture, it will not take off, no matter how well-designed its other components may be.
Attributes and Trade-Offs¶
An effective action plan balances five attributes: flexibility, specificity, realism, ambition, and traceability. Designing a plan means confronting the trade-offs among these attributes.
Flexibility vs. specificity. A highly specific plan provides security -- each step is defined and everyone knows what to do. But it becomes a burden if the context shifts. A highly flexible plan adapts better to changes but generates ambiguity and low commitment.
Realism vs. ambition. Realistic plans increase the likelihood of successful implementation but may fall short on results and fail to challenge the organization. Overly ambitious plans may initially mobilize people but generate frustration when targets prove unattainable.
Traceability vs. shared responsibility. If each action has a single owner, monitoring is easy. But in complex cross-functional projects, assigning one person is politically challenging, and shared responsibility blurs accountability. Explicit coordination mechanisms are required.
The choice of where to land on each trade-off defines the plan's character. The guiding principle: know the culture and context in which the decision was made.
Actions Are the Heart of the Plan¶
Actions must be clear, executable, and measurable. The SMART framework is useful here: Specific, Measurable, Achievable, Relevant, and Time-bound. But not all actions have the same impact. Applying the Pareto principle -- identifying the 20% of actions that will generate 80% of the impact -- forces prioritization when resources are scarce (and they are almost always scarce). It is better to do a few things well than many things superficially.
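The Pareto prioritization can be made mechanical: rank actions by estimated impact and keep the smallest set that covers roughly 80% of the total. The actions and impact estimates below are invented for illustration:

```python
# Pareto-style prioritization (hypothetical impact estimates): select the
# smallest set of actions covering ~80% of total estimated impact.

actions = {  # action -> rough relative impact estimate
    "renegotiate top supplier contract": 40,
    "launch customer win-back campaign": 25,
    "retrain sales team": 15,
    "redesign onboarding docs": 8,
    "update intranet": 5,
    "rebrand newsletter": 4,
    "new office plants": 3,
}

def pareto_core(impacts, threshold=0.8):
    """Return the highest-impact actions until the threshold share is covered."""
    total = sum(impacts.values())
    chosen, covered = [], 0
    for name, impact in sorted(impacts.items(), key=lambda kv: -kv[1]):
        if covered >= threshold * total:
            break
        chosen.append(name)
        covered += impact
    return chosen

print(pareto_core(actions))  # with these numbers, 3 of 7 actions cover 80%
```

With these invented figures, three of the seven actions account for 80% of the impact, which is the point of the exercise: do those three well before touching the rest.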
The technical note warns against two extremes. Too many detailed actions overload the system and can lead to paralysis -- more time spent on design than on implementation. Too few or overly aggregated actions provide agility but lack the specificity needed for execution and accountability. The right level of detail is another trade-off whose resolution is more art than technique.
A practical recommendation: include some quick wins -- early, visible actions that build confidence and motivation among implementers. Small victories drive progress and reduce the risk of abandonment. They serve as anchor points, maintaining morale while more complex actions are initiated.
Actions must form a coherent system, not a shopping list. They should reinforce one another and be oriented toward mitigating risks and compensating for capability shortcomings.
Risk Mitigation and Shortcoming Compensation¶
The synthesis step selected a direction that still entails risks. An important element of the action plan is mitigating those risks -- both risks identified during analysis and risks that were not foreseen. For foreseeable risks, there is no excuse for failing to plan corrective actions in advance. For unforeseeable risks, especially in VUCA (Volatile, Uncertain, Complex, and Ambiguous) environments, the plan can build in less aggressive actions, prudent slack, or contingency plans based on worst-case scenarios.
Beyond risks, there are shortcomings -- gaps in resources, capabilities, time, allies, or budget. These should be addressed in the plan. If trained staff is insufficient, training should be part of the plan. If support from allies is weak, negotiate before proceeding. If funding is inadequate, secure additional financing. Sometimes shortcomings can be offset by imaginative solutions, adjusted sequencing, internal or external alliances, or a reduction in initial scope. The key is not to ignore them. A plan that does not consider its risks and limitations is not incomplete -- it is naive.
Plan B¶
It almost always makes sense to have a fallback. Plan B is not meant to achieve the full ambition of Plan A but to contain the damage, preserve the essentials, and keep things moving while Plan A is corrected or redesigned. A good Plan B focuses on the critical risks that could derail Plan A, is simple, fast to activate, and easy to understand. It works with limited resources, has clear triggers for when to activate it, and is politically and culturally feasible. Plan B is not about winning; it is about not losing.
Measurement¶
Without measurement, there is no learning. Two types of measurement are essential: execution measurement (Has what was planned been done?) and impact measurement (Have we achieved the desired results?). Both are necessary, and they do not always coincide: an internal campaign may be executed flawlessly yet still fail to generate the desired change.
There is a trade-off between specificity and durability in measurement. Very specific goals are easy to verify but risk becoming irrelevant if context shifts. More open goals require interpretation but hold up better over time. The resolution: establish periodic review procedures, tailored to the plan's scope, that enable adjustments without losing direction.
Implementation Considerations¶
Although full implementation is beyond the scope of the framework's design phase, certain factors must be anticipated. The plan needs a sponsor with real influence who publicly supports it, defends it when scheduling conflicts arise, and provides resources. Without visible executive support, the plan is weakened from the start.
Communication cannot be left to chance. The plan must specify whom to communicate with, by what means, how often, at what level of transparency, what content to share, and what tone to use. Overly technical communication can sap momentum; communication that recalls the plan's purpose can reinforce it.
A common mistake is assigning too many actions to one person. During design, it is easy to get carried away by enthusiasm and multiply tasks, leading to overload, discouragement, and abandonment. A few achievable actions, especially at the beginning, are more effective than a long list of good intentions.
Common mistakes at this stage: Treating the action plan as an afterthought. Failing to identify a sponsor. Overloading individuals with tasks. Ignoring organizational culture. Not building in measurement. Confusing execution metrics with impact metrics. Having no Plan B.
Practical tip: For every case you analyze, force yourself to answer three implementation questions: Who specifically will do what? By when? How will we know if it worked? If you cannot answer these, your action plan is not yet an action plan -- it is a wish list.
Cross-reference: Action planning connects directly to Operations Management (project planning, Gantt charts, critical path), Leadership (motivating teams, managing resistance to change, communication), Corporate Finance (resource allocation and budgeting), and Business Ethics (accountability and fairness in implementation). The SMART framework and Pareto principle appear across multiple courses.
Lesson 8: Integration -- Applying the Framework to Complex Problems¶
The first seven lessons present each step in relative isolation. Real business problems do not arrive pre-labeled as "diagnosis problems" or "criteria problems." They arrive as undifferentiated messes -- ambiguous, politically charged, time-pressured, and data-poor. The final phase of ABP teaches students to use the entire framework as an integrated system on problems that resist clean decomposition.
The Iterative Nature of the Framework¶
The most important insight of the integration phase is that the six steps are not a waterfall process. They are iterative. Analysis (Step 4) frequently reveals that the problem was misdiagnosed (Step 1), requiring a return to the beginning. Generating alternatives (Step 3) can surface criteria (Step 2) that were not previously considered. Designing an action plan (Step 6) may expose that the chosen alternative (Step 5) is not implementable, forcing a return to synthesis. The framework's value lies not in following a rigid sequence but in ensuring that every step receives disciplined attention at least once -- and revisiting earlier steps whenever new information demands it.
Handling Ambiguity¶
Complex problems often resist clear diagnosis. The decision-maker may face multiple interconnected problems, each with its own set of stakeholders, criteria, and alternatives. In these situations, the framework demands prioritization: Which problem is most urgent? Which has the greatest impact? Which must be solved before the others become tractable?
The 5 Whys method remains useful here, but its application becomes more challenging because multiple causal chains may lead to different root causes. The discipline of writing down the problem as a question -- and being willing to revise that question repeatedly -- is essential.
Dealing with Incomplete Information¶
Real decisions are almost always made with incomplete information. The framework does not promise certainty; it promises structured thinking under uncertainty. Sensitivity analysis (from Step 4) becomes particularly important: understanding which assumptions drive your conclusions allows you to focus information-gathering efforts where they will have the most impact.
The distinction between "precise" and "useful" is critical. A rough estimate that correctly identifies the direction of a decision is more valuable than a precise calculation that misses a key qualitative criterion. The business owner who knows his company is worth between EUR 350,000 and EUR 500,000 has enough information to evaluate a EUR 750,000 offer, even though the valuation range is wide.
The Role of Judgment¶
ABP is not an academic model for determining how to make "better" or "worse" decisions in some objective sense. It is a structured approach for developing decision-making skills through practice and reflection. The framework does not replace judgment; it trains judgment. Analysis does not automatically yield a "right answer" -- it illuminates consequences so that the decision-maker can apply informed judgment to assess those consequences.
After completing analysis, it is important to step back and evaluate whether the result is coherent. During analysis, it is easy to give disproportionate weight to a single criterion without realizing it. Critical thinking -- checking your own reasoning for consistency and bias -- is indispensable.
The Human Dimension¶
Complex problems invariably involve people. Good decisions always involve consideration of their impact on the people affected. This is not just an ethical requirement (though it is that); it is a practical one. A decision that ignores the human dimension -- employee morale, customer trust, stakeholder relationships -- may be analytically elegant but operationally disastrous.
The technical note on synthesis reminds us that the action plan is just as important as the decision itself. Even a mediocre alternative, implemented with skill and sensitivity to the human context, can outperform a brilliant alternative implemented badly.
Bringing It All Together¶
The complete framework, applied to a complex problem, looks something like this:
You encounter a situation. You resist the urge to propose solutions immediately. Instead, you ask: What is the real problem here? Whose problem is it? You write it down as a question. You ask "Why?" until you reach the root cause.
You then ask: What criteria will we use to evaluate solutions? You generate four or five relevant criteria -- economic, feasibility, ethical, strategic -- making sure they actually differentiate among alternatives.
You generate at least three distinct alternatives, using creative techniques if necessary. You consider doing nothing. You consider delay.
You analyze each alternative against each criterion, using quantitative methods where possible and qualitative assessment where necessary. You check your assumptions. You run sensitivity analysis. You compare alternatives systematically, not in isolation.
You synthesize. You check boundary conditions. You choose the alternative that does reasonably well on all criteria and excels on the most important ones. You accept that the choice is imperfect.
You design an action plan. You identify who does what, by when, with what resources. You build in quick wins, risk mitigation, and measurement. You have a Plan B.
And then -- critically -- you remain willing to revisit any step if reality reveals that your analysis was incomplete, your diagnosis was wrong, or your chosen alternative is not working.
This is the ABP framework. It is simple in structure and demanding in execution. Mastering it is the work of the entire MBA and of the career that follows.
Quick Reference¶
| Step | Core Question | Key Output |
|---|---|---|
| 1. Diagnosis | What is the problem? | Problem statement framed as a question, tied to a specific decision-maker |
| 2. Criteria | What matters? | 4-5 relevant criteria (economic, feasibility, ethical, strategic) |
| 3. Alternatives | What can we do? | At least 3 distinct, realistic options (including "do nothing" if appropriate) |
| 4. Analysis | What do the numbers say? | Quantitative and qualitative evaluation of each alternative against each criterion |
| 5. Synthesis | What should we do? | A decision that satisfies boundary conditions and excels on key criteria |
| 6. Action Plan | How do we do it? | SMART actions, owners, deadlines, risk mitigation, measurement, Plan B |
The Three-Question Framework for Criteria:

1. Is it a good business? (economics, strategy, attractiveness)
2. Can we do it? (capacity, resources, talent, operational feasibility)
3. Do we want to do it? (values, ethics, stakeholder relationships, personal alignment)
The 5 Whys: Ask "Why do we have this problem?" repeatedly until you reach the root cause you can act on.
Quality Criteria for Analysis: Relevance, technical rigor, transparency, comprehensiveness, balance.
Good Decision Checklist: Useful, consistent, reasonable, feasible, convincing.
Action Plan Design Questions:

- Who specifically will do what?
- By when?
- How will we know if it worked?
- What is Plan B?
Glossary¶
Adaptive problem: A non-operational problem with no clear solution that requires continuous learning; typically faced by senior management in highly uncertain environments.
Alternative: One of the possible solutions to a problem; must be effective, efficient, well-defined, and realistic.
Boundary conditions: The minimum requirements that a decision must satisfy to be workable; the baseline below which a solution is not viable.
Contextual feature: A factor that affects all alternatives equally and therefore does not differentiate among them; should be noted but excluded from the criteria list.
Criterion (plural: criteria): A standard or principle used to evaluate, compare, or distinguish between alternatives; must behave differently toward different alternatives to be relevant.
Diagnosis: Step 1 of the six-step framework; the process of identifying and defining the actual problem to be solved.
Element of analysis: A factor used during the analysis step to evaluate criteria (e.g., competition, market trends); sometimes overlaps with criteria depending on context.
5 Whys: A root-cause analysis technique that involves asking "Why?" repeatedly to move from symptoms to underlying causes.
Non-operational problem (unstructured problem): A problem that cannot be solved by following a known procedure; requires judgment, balances competing criteria, and may have no single "right" answer.
Operational problem (structured problem): A problem that can be solved by following established procedures or algorithms; has clear inputs and predictable outputs.
Plan B: A fallback plan that protects essential interests when Plan A is not working; simpler, faster to activate, and designed to contain damage rather than achieve full ambition.
Qualculation: A term coined by Michel Callon describing decision-making that blends quantitative (numerical) and qualitative (judgmental, value-based) elements; the hallmark of synthesis in non-operational problems.
Quick wins: Early, visible actions in an action plan that build confidence and momentum before more complex initiatives are launched.
RACI: A responsibility framework -- Responsible (carries out the work), Accountable (owns the outcome and has the final say), Consulted (contributes input), Informed (needs updates).
Root cause: The fundamental underlying issue driving observable symptoms; the deepest level of a problem cascade that can be acted upon.
Sensitivity analysis: Testing how conclusions change when key assumptions are varied within reasonable ranges; essential for understanding the robustness of quantitative analysis.
SMART: A characterization of well-defined actions -- Specific, Measurable, Achievable, Relevant, and Time-bound.
Symptom: A surface-level indicator of a deeper problem; addressing symptoms without treating the root cause provides only temporary relief.
Synthesis: Step 5 of the six-step framework; the act of comparing analyzed alternatives and selecting the best option, exercising judgment rather than mechanical calculation.
VUCA: Volatile, Uncertain, Complex, and Ambiguous -- a characterization of environments where adaptive problem-solving and contingency planning are especially important.
Weighted evaluation matrix: A tool that assigns numerical weights to criteria and scores each alternative; useful for structuring comparison but should not be treated as a definitive answer, as it oversimplifies criterion interactions.