When the Dragonlord Isn’t the Real Problem

Do you remember Dragon Quest?

If you grew up with a Nintendo or a PlayStation, you probably spent entire weekends glued to one of the most iconic RPG franchises in gaming history. The story follows a simple but irresistible arc: an evil Dragonlord steals a precious artifact – the Ball of Light – plunging the entire world into darkness and despair. The hero sets out on a perilous journey, battling monsters across dungeons and wild terrain, solving ancient puzzles, collecting weapons and wisdom along the way. Finally, after what feels like an eternity of grinding, the hero faces the Dragonlord himself – a villain of overwhelming power – and somehow, improbably, triumphs. The Ball of Light is restored. Peace returns.

It’s deeply satisfying. But here’s what hit me the other day, years removed from my gaming childhood and knee-deep in the messy realities of building a business: that storyline isn’t just a game mechanic. It’s actually one of the most elegant frameworks ever devised for understanding how to solve a hard problem.

Think about it. The hero didn’t just “go slay evil.” Before they could even begin, they had to answer four critical questions:

  • Who am I, and what are my capabilities? (The Hero)
  • What am I actually trying to achieve? (The Treasure)
  • What’s standing between me and that goal? (The Dragon)
  • What is the single, overarching question I need to answer to win? (The Quest)

Without clarity on all four, the hero would wander aimlessly. Fight the wrong monsters. Miss the dungeon that contains the key artifact. Confront the Dragonlord at the wrong time, in the wrong way, with the wrong weapons.

And here’s the uncomfortable truth about real-world problem solving: most of us are doing exactly that. We’re wandering. Fighting the wrong monsters. We’re smart, hardworking, well-intentioned – and we’re solving the wrong problem.

Solvable is a book built on one uncomfortable observation: after coaching hundreds of senior executives across every industry imaginable, Arnaud Chevallier and Albrecht Enders found that most strategic failures weren’t failures of intelligence or effort. They were failures of process – specifically, how problems get framed, explored, and decided. Their framework, FrED (Frame, Explore, Decide), is one of the most practical tools I’ve encountered for cutting through that fog.

This article walks through FrED using the Dragon Quest metaphor the book itself embraces. We’ll cover how to write a real problem statement, stress-test your assumptions, generate genuine alternatives, define what actually matters, and make a decision you can defend under pressure.

Let’s suit up and start the journey.

Your Intuition Is Lying to You (Especially on Hard Problems)

Before we get into the framework, let’s make sure we’re talking about the right kind of problem.

There’s an important distinction between simple problems and what Solvable calls CIDNI problems – Complex, Ill-Defined, Non-Immediate but Important. These are problems without a clear “right answer,” where the variables are interconnected, the goals are partly subjective, and the consequences of getting it wrong are significant.

Should you launch that product? Pivot your business model? Hire or fire a key team member? Expand internationally? These aren’t math problems. They’re CIDNI problems. And the cognitive toolkit we rely on for everyday decisions actively misfires when applied to these challenges.

The culprit? System 1 thinking.

You’ve probably encountered this from Daniel Kahneman’s work. System 1 is fast, automatic, intuitive – it’s what lets you drive to work without consciously thinking about every turn. In most domains, it’s a remarkable asset. But when you’re facing a complex strategic decision, System 1 sees what Kahneman calls WYSIATI – “What You See Is All There Is.” It leaps to conclusions based on the first framing it encounters, unconsciously prunes the solution space, and weaponizes your own experience and biases against you.

I saw this constantly during my years in academic research. Brilliant scientists – people with extraordinary analytical minds – would frame a research problem around the methodology they already knew, rather than the question they actually needed to answer. The lab’s RNA sequencing pipeline was cutting-edge, so suddenly every biological question became an RNA sequencing question. The dragon they were fighting looked suspiciously like whatever dragon they’d already learned to fight.

The antidote isn’t to become superhuman. It’s to use a structured process – FrED – that forces you to slow down, examine your assumptions, and think more carefully at each stage. Let’s walk through it.

Phase 1 – Frame: Know Exactly What Problem You’re Trying to Solve

Write Your Problem Down Before You Touch a Single Solution

Here’s where Dragon Quest becomes more than a metaphor. Any complex problem can be summarized in a four-part structure called the HTDQ sequence:

  • Hero: Who are you, and what’s the relevant context? Include just enough background that someone unfamiliar with your situation can understand it. Think of this as the establishing shot in a film.
  • Treasure: What does the hero aspire to achieve? The one overriding goal. Not three goals. One.
  • Dragon: What single obstacle stands between the hero and the treasure? Introduce it with “however.” If there’s no dragon, there’s no problem.
  • Quest: The overarching question your entire effort aims to answer, expressed as: How should [Hero] get [Treasure], given [Dragon]?

Here’s a story that makes this concrete. In 1661, King Louis XIV ordered the construction of the Palace of Versailles with over 2,400 fountains. There was just one small problem: Versailles sat above all nearby water sources, and hydraulic technology hadn’t improved since Roman times. Over three decades, 30,000 workers dug canals, built aqueducts, and constructed what was called the most complex machine of the 17th century – all to bring water to those fountains.

They never succeeded.

But here’s the twist: a team of fountaineers solved the “problem” with whistles. When the king approached a fountain, a fountaineer would hear a colleague’s whistle and open the water flow. As soon as the king’s party passed, the water was cut off. Why did a few whistles succeed where 30,000 workers failed?

The answer lies in the framing. The engineers were solving: How should we bring sufficient water to the king’s fountains? The fountaineers were solving: How should we bring sufficient water for the king’s fountains to achieve their desired effect?

A few words changed everything. Every word in your quest carries weight. Before you start solving, write out your Hero-Treasure-Dragon-Quest sequence. Read it aloud. Does it actually capture what you’re trying to achieve?

Four Rules That Will Save Your Problem Statement From Itself

Once you’ve drafted your initial HTDQ sequence, it’s tempting to declare victory and move on. Don’t. There are four rules for stress-testing your frame, each with a name vivid enough that you’ll actually remember it.

The Backpack Rule – Pack only what you need.

The London Underground map, designed by electrical draftsman Harry Beck in 1931, is deliberately inaccurate. It distorts geography, exaggerates the city center, and straightens every line. And yet it’s become the template for transit maps around the world – because a less accurate map can be more useful than a more accurate one, when it captures exactly what you need and nothing more.

The same logic applies to your frame. Every meaningful piece of information in one section of your HTDQ sequence must reappear somewhere else. If you mention something in your Hero section that never shows up in the Dragon or Quest, ask yourself: does it actually belong? Pack light. A bloated frame is worse than a lean one.

The Rabbit Rule – No surprise appearances.

This is the mirror image of the Backpack Rule. Named after the magician’s trick (a rabbit can’t come out of a hat you haven’t put it in), the rule is simple: nothing should appear in your Quest that wasn’t already introduced in your Hero, Treasure, or Dragon.

Think of a student – call him Jerry – who gave a sprawling, disorganized presentation about his company’s challenge. He spent ten minutes discussing org charts and international expansion history, and then his Quest suddenly focused on social media presence – something that had never been mentioned before. His audience was lost. The rabbit appeared out of nowhere.

The Dolly Rule – Clone your language.

Named after the first successfully cloned mammal, this rule asks you to resist the English teacher’s instinct to vary your vocabulary. In problem solving, synonyms create ambiguity. If your Hero section mentions “clients,” your Dragon shouldn’t say “customers” and your Quest shouldn’t say “buyers.” Same thing, same word. Every time. Consistency over elegance.

The Watson Rule – Examine your assumptions.

In 2014, French rail operator SNCF ordered 341 new trains at a cost of €15 billion. When they arrived, the company discovered the trains were too wide for 1,300 platforms across the country. Altering those platforms cost another €50 million. Why? Engineers had measured platforms built in the last 30 years and assumed they were representative. The older platforms, built when trains were narrower, had been completely overlooked.

This is the Watson problem: like Sherlock’s sidekick, we miss what’s right in front of us because we don’t question what seems obvious. Every claim in your HTDQ sequence should be defensible. If you can’t justify it, you might be framing the wrong problem entirely.

Don’t Treat Your Symptoms – Go Looking for the Root Cause

Here’s where things get really interesting – and where most organizations completely skip a critical step.

In January 1989, British Midland Flight BD 092 was cruising at 28,000 feet when the crew felt a strong vibration and smelled burning fumes. They throttled back the right engine – and the vibration stopped. Logical conclusion: the right engine was the problem. They shut it down.

They were wrong. The left engine was malfunctioning. The cessation of vibration when the right engine was throttled back was a coincidence. The plane crashed short of the runway, killing 47 of 126 people on board.

The crew diagnosed the problem too quickly, based on incomplete information, and never went back to question their conclusion. Sound familiar?

This is precisely why Solvable argues for a dedicated diagnostic phase before considering solutions. Think of it as the difference between treating a headache with aspirin and actually getting the MRI.

Draw a Why Map. A Why Map is a visual breakdown of potential root causes. Starting with your central problem – say, “Why isn’t our company profitable?” – you branch outward, asking “why?” horizontally to go deeper, and “what else?” vertically to find new categories. The result is a structured map of every plausible explanation.

I’ve used a similar approach in bioinformatics: mapping the potential failure modes in an NGS data analysis pipeline the same way you’d map the root causes of low profitability. The logic is identical – don’t treat the first explanation as the only one, and don’t stop asking “why?” until you’ve reached something concrete enough to act on.
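Structurally, a Why Map is just a tree: child nodes answer “why?”, and siblings at the same level answer “what else?”. Here is a minimal sketch in Python – the profitability causes are illustrative placeholders, not taken from the book:

```python
# A Why Map as a nested dict: each key is a candidate cause, each value holds
# its deeper "why?" answers. Siblings at the same level answer "what else?".
why_map = {
    "Why isn't our company profitable?": {
        "Revenues are too low": {
            "Too few customers": {},
            "Average order value is too small": {},
        },
        "Costs are too high": {
            "Fixed costs have grown": {},
            "Unit costs exceed industry benchmarks": {},
        },
    }
}

def render(node, depth=0):
    """Flatten the tree into indented lines, one branch per line."""
    lines = []
    for cause, sub_causes in node.items():
        lines.append("  " * depth + "- " + cause)
        lines.extend(render(sub_causes, depth + 1))
    return lines

print("\n".join(render(why_map)))
```

Walking such a tree until every leaf is concrete enough to test is exactly the “keep asking why” discipline the diagnostic phase calls for.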

Build it MECE. Mutually Exclusive, Collectively Exhaustive is the organizing principle behind both Why Maps and the How Maps we’ll cover shortly. As I explored in my earlier piece on McKinsey’s problem-solving approach, MECE means no overlaps and no gaps. If you’re analyzing profitability, “revenues” and “costs” is a MECE split – every driver of profit falls under one or the other, and the two don’t overlap.

The power of MECE shows up sharply in the story of Stora Enso, a Finnish-Swedish paper company facing declining demand in the mid-2000s. Their leadership had been analyzing the company’s trees by physical components: planks for construction, pulp for paper, bark for energy. That decomposition was MECE – but it wasn’t insightful.

Then they hired an industrial engineer named Juan Carlos Bueno who reframed trees by their biochemical components – lignin, cellulose, hemicellulose. Suddenly an entirely new solution space opened up, eventually leading to a joint venture with H&M and IKEA to develop textile fibers from tree cellulose. Same tree. Entirely different frame – and entirely different future. The lesson: a structure can be technically correct and still fail to surface anything useful. Insightfulness matters as much as completeness.

Test each hypothesis with the LEAD approach. Once your Why Map identifies potential root causes, each one is a hypothesis that needs testing: Locate the evidence, Evaluate its quality, Assess the body of evidence, Decide whether to accept or reject the hypothesis.

Critically – seek opposing evidence first. Look for the data that could change your mind. Confirmation bias is one of the most dangerous shortcuts in problem solving, and the only real antidote is deliberately hunting for the reasons you might be wrong.

Once your diagnosis is complete, go back and update your HTDQ sequence. Your Dragon should be more precise. Your Quest should be better scoped. Now you’re ready to start looking for solutions.

Phase 2 – Explore: Open the Solution Space Before You Close It

Stop Defaulting to Your First Idea – Here’s How to Find Better Options

Here’s a question worth sitting with: when you face a difficult decision, how many alternatives do you actually consider?

Most people – even experienced executives – stop at two or three options. Often, they stop at one (“the obvious move”) before System 2 has even engaged. A McKinsey survey found that 72% of senior executives felt bad strategic decisions in their organization were at least as frequent as good ones. One major culprit: insufficient exploration of alternatives.

The How Map is the solution-generating counterpart to the Why Map. Where a Why Map asks “Why is this happening?”, a How Map asks “How might we achieve this?” The same four structural rules apply – a single question, movement from question to alternatives, MECE structure, insightful framing. The deliberate choice of “might” over “should” matters here. “How should we” implies you’re already narrowing. “How might we” creates psychological space for unconventional ideas that your filters would normally kill before they even reach conscious thought.

Consider this scenario: a building manager whose tenants complained that the elevators were too slow. The obvious How Map branched into: speed up the current elevators, or add more elevators. Both expensive. Both technically complex.

But what if you widened the quest – from “How might we speed up the elevators?” to “How might we make our users happy with the speed of our elevators?” A completely new branch appears: make the elevators feel faster without actually being faster. Mirrors in elevator lobbies. Screens showing news or entertainment. Music. Installing mirrors costs a fraction of a motor upgrade – and works remarkably well.

This reframing technique is one of the most powerful creativity moves you can make. Equally powerful is relaxing constraints: What if cost weren’t a factor? What if you had six months instead of four weeks? What if you could partner with a competitor? These “What if?” questions break the invisible straitjackets that conventional thinking puts around your solution space.

From raw options to real alternatives. Your How Map might generate 20, 30, or more ideas. Not all of them are viable. The next step is convergent thinking – collapsing your options into a manageable set of concrete, mutually exclusive alternatives, each of which is a complete answer to your quest on its own.

Keep generating alternatives until you fall in love with at least two. If everyone agrees on the first option immediately, that’s a warning sign – not a green light.

Decide What You Actually Value Before You Pick a Winner

Here’s a confession from Solvable co-author Albrecht Enders: he once bought a house on Lake Geneva without adequately considering whether it would be quiet. The highway noise kept him up every night. It took massive renovations – moving windows to the back of the house – to fix the problem.

A smart, accomplished person with strong analytical skills missed something obvious. Why? Because he didn’t consciously surface all the criteria that actually mattered to him before making the decision.

Research from Duke and Georgia Tech shows that decision makers can omit nearly half of the criteria they would later consider relevant – and those omitted criteria turn out to be almost as important as the ones they initially considered. This isn’t a stupidity problem. It’s a process problem.

The antidote is what Julia Galef calls the Scout Mindset – approaching your decision criteria not as a soldier defending a position (what supports my preferred option?) but as a scout mapping territory (what actually matters, without prejudging the outcome?). Scouts look for what’s true; soldiers look for what’s useful. That distinction changes everything about how you set up a decision.

What makes a good set of criteria?

First, it should be roughly MECE: no overlapping criteria (which cause double-counting) and no missing ones (which cause blind spots – like Albrecht’s highway). A useful baseline is feasibility + desirability – can you do it, and do you want it more than the alternatives?

Second, criteria should be unambiguous and measurable. “Good cultural fit” is a criterion. “Percentage of team members who rate the candidate 7 or above on communication clarity, on a 10-point scale” is a better one. Qualitative criteria aren’t forbidden – but without explicit definitions, they become Rorschach tests that different stakeholders interpret differently.

Third, convert all criteria to “benefit criteria” – where a higher score is always better. Instead of “cost” (lower is better), use “affordability” (higher is better). It sounds trivial, but it eliminates a constant source of confusion when comparing alternatives numerically.

Finally, assign weights. Not everything matters equally. Even rough weights – from 1 (weakest) to 5 (strongest) – force you to think about what you actually prioritize. Beware of “equalizing bias” – the default tendency to give everything roughly the same weight because it feels fair. It rarely reflects what you actually care about.
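The benefit-criterion conversion is mechanical enough to automate: rescale every criterion so a higher number is always better, flipping the direction for “lower is better” measures like cost. A small sketch with made-up cost figures:

```python
def to_benefit_scores(values, higher_is_better=True):
    """Min-max rescale raw values to the 0..1 range so that a higher score
    is always better. For a 'lower is better' criterion such as cost, the
    direction is flipped, turning 'cost' into 'affordability'."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)  # every alternative ties on this criterion
    scores = [(v - lo) / (hi - lo) for v in values]
    return scores if higher_is_better else [1.0 - s for s in scores]

# Illustrative costs (lower is better) become affordability (higher is better):
costs = [120_000, 80_000, 200_000]
affordability = to_benefit_scores(costs, higher_is_better=False)
# The cheapest option now scores 1.0 and the most expensive 0.0.
```

Once every column points the same way, weighted totals can be compared directly without mentally flipping signs.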

Phase 3 – Decide: Pick the Best Answer and Make It Stick

The Tool That Forces You to Be Honest With Yourself

You’ve defined your quest, diagnosed your problem, mapped your alternatives, and articulated your criteria. Now it’s time for the part that most books treat as the entire point: actually deciding.

The go-to tool here is the decision matrix – a structured grid where each row is an alternative, each column is a criterion, and each cell contains an evaluation of how well that alternative performs on that criterion. Weighted scores produce a ranking.

Simple. But remarkably rare in practice.

Here’s why it matters: the matrix forces consistency. You’re applying the same yardstick to every alternative. It doesn’t guarantee the right answer – but it dramatically reduces the likelihood of choosing based on a single criterion you’ve unconsciously overweighted, or of being swayed by whoever argues most loudly in the room.

One critical rule: almost no real alternative should score highest on every criterion. If your evaluation shows an option that’s best across the board with zero trade-offs, treat that as a red flag – you’ve either missed important criteria or made unrealistic assumptions. There is no free lunch.

A worked example makes this concrete.

Quest: How should we grow our online business revenue by 30% next year?

Four strategies for growing online revenue are evaluated across five weighted benefit criteria, each weight on a 1–5 scale: revenue potential (weight 5), speed to results (3), low capital needed (4), skill alignment (4), and scalability (3).

Change the weights and the rankings shift with them. Weight "speed to results" at 5 and affiliate partnerships jump to the top. Max out "scalability" and the paid course dominates. Set all weights equal and the gap between options narrows considerably.

That's the whole point: the matrix doesn't just tell you what scored highest – it shows you why, and reveals exactly how sensitive your conclusion is to what you actually care about.
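The arithmetic behind such a matrix is just a weighted sum per row. Here is a minimal sketch using the example's criteria and weights; the alternative names and all of the 1–5 scores below are invented for illustration, not reproduced from the article's example:

```python
# Criteria weights from the worked example (1-5 scale); the per-alternative
# scores below are hypothetical, chosen purely for illustration.
weights = {
    "revenue potential": 5,
    "speed to results": 3,
    "low capital needed": 4,
    "skill alignment": 4,
    "scalability": 3,
}

alternatives = {
    "Paid course": {
        "revenue potential": 5, "speed to results": 2,
        "low capital needed": 3, "skill alignment": 4, "scalability": 5,
    },
    "Affiliate partnerships": {
        "revenue potential": 3, "speed to results": 5,
        "low capital needed": 5, "skill alignment": 3, "scalability": 3,
    },
    "Consulting services": {
        "revenue potential": 4, "speed to results": 4,
        "low capital needed": 3, "skill alignment": 5, "scalability": 2,
    },
    "Ad-supported content": {
        "revenue potential": 2, "speed to results": 3,
        "low capital needed": 4, "skill alignment": 3, "scalability": 4,
    },
}

def total_score(scores, weights):
    """Weighted sum: each criterion's 1-5 score times that criterion's weight."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

ranking = sorted(alternatives,
                 key=lambda name: total_score(alternatives[name], weights),
                 reverse=True)
```

Reweighting is then a one-line change: bump "speed to results", recompute, and the ranking shifts accordingly.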

The matrix also creates the conditions for what Roger Martin calls Integrative Thinking – using the tension between imperfect alternatives as a catalyst to design a third option that captures the best of both.

The most memorable example: BMW engineer Max Reisböck in the 1980s wanted to take his family on vacation but faced a painful trade-off. Their sporty 3-series sedan was too small for the kids' bikes and tricycles; their VW estate wagon had plenty of space but handled terribly. Neither was acceptable.

So Max bought a 3-series, cut off the back, and built a custom estate body on it. The result ended up on the desk of BMW management, who launched an official project to produce it. The BMW Touring is now one of the brand's most popular models.

The point isn't to literally build your own car. It's to resist accepting the trade-offs you're presented with as inevitable. When the tension between two alternatives is painful enough, that pain is often a signal that a better alternative hasn't been discovered yet.

Quality check your own reasoning. Perform a sensitivity analysis: if you adjust the weights slightly, does the ranking change dramatically? If so, your conclusion isn't robust. Recruit someone who disagrees with you to poke holes in your reasoning – a devil's advocate is a quality control mechanism, not a nuisance. And if resources allow, run parallel pilots on your top two or three alternatives before fully committing.
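The sensitivity check can itself be mechanized: nudge each weight up and down by one point and see whether the winner changes. A self-contained sketch, with two hypothetical alternatives and criteria chosen only for illustration:

```python
def winner(alternatives, weights):
    """The top-scoring alternative under a weighted-sum decision matrix."""
    def score(name):
        return sum(weights[c] * s for c, s in alternatives[name].items())
    return max(alternatives, key=score)

def fragile_weights(alternatives, weights, delta=1):
    """Criteria whose weight, nudged by +/-delta, flips the top choice.
    An empty result means the ranking is robust to small weight changes."""
    base = winner(alternatives, weights)
    fragile = []
    for criterion in weights:
        for d in (-delta, +delta):
            tweaked = dict(weights)
            tweaked[criterion] = max(1, weights[criterion] + d)  # keep weight >= 1
            if winner(alternatives, tweaked) != base:
                fragile.append(criterion)
                break
    return fragile

# Hypothetical two-option example: B wins under the base weights, but the
# outcome flips if either weight moves by a single point. Not robust.
alts = {
    "Option A": {"growth": 5, "cost": 2},
    "Option B": {"growth": 3, "cost": 5},
}
base_weights = {"growth": 4, "cost": 3}
```

A non-empty `fragile_weights` result is exactly the "not robust" signal described above: the conclusion hinges on weights you could plausibly have set differently.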

When One Decision Isn't Enough – Keeping All Your Dragons in Alignment

Here's a complexity that most problem-solving frameworks quietly ignore: most real strategic decisions aren't a single fork in the road. They're a web of interconnected forks, and choosing one path constrains your options on others.

This is illustrated perfectly by the story of BoKlok, a joint venture between IKEA and construction company Skanska to produce affordable prefabricated housing. The management team made thoughtful decisions on product type, pricing, production location, and go-to-market strategy. But they missed one critical domain: how to organize the operational relationship between IKEA and Skanska. That single omission nearly derailed the entire initiative.

The key distinction is between Big Dragons and Baby Dragons. Some problems are best treated as one Big Dragon – a single overarching challenge that, once solved, gives you a complete strategy. Others are better broken into a family of Baby Dragons – separate but interconnected sub-problems, each with its own HTDQ sequence, that need to be aligned with one another.

Wedding planning captures this well. You could theoretically create one giant How Map covering every decision from the guest list to seating arrangements. But the map would be so unwieldy as to be useless. Better to break it into baby dragons – size, location, food, music – solve each independently, then check for compatibility between them.

Existing strategic frameworks (Porter's Five Forces, Osterwalder's Business Model Canvas, Hambrick and Fredrickson's Strategy Diamond) can be useful as checklists of baby dragons you might have forgotten to address. But frameworks are great servants and terrible masters. No framework was designed with your specific problem in mind. Use one as a starting point for your own MECE thinking – never as a substitute for it.

Being Right Isn't Enough – You Also Have to Bring People With You

You've done the work. You've diagnosed the root cause, mapped the solution space, built a decision matrix, and identified the best on-balance alternative. You're right.

And you might still fail to execute.

Being right and being effective are two different skills, and the gap between them is where most good analyses die. Solvable devotes a full chapter to this – winning stakeholder support isn't manipulation; it's the recognition that even a perfect solution requires buy-in to work.

The model draws from Aristotle's three pillars of persuasion:

Logos (Logic) – The analytical case for your recommendation. Everything you've done in FrED so far. Necessary, but not sufficient on its own.

Ethos (Character) – Trustworthiness and credibility, built over time through track record, expertise, and how you carry yourself. An important tactical point: paint a balanced picture. If you recommend an alternative that scores best on every single criterion with zero trade-offs, sophisticated stakeholders will be suspicious. Acknowledge the weaknesses. It demonstrates honest engagement with the decision rather than advocacy dressed up as analysis.

Pathos (Emotion) – Feeling is not a flaw in decision-making; it's a fundamental driver of action. Psychologist Jonathan Haidt uses the metaphor of an elephant and a rider: the rational, analytical rider sits on top of the elephant – the emotional, intuitive system that actually determines whether we move. You can give the elephant direction, but you can't drag it anywhere it doesn't want to go. Effective persuasion appeals to both.

Structure your message like a pyramid. Lead with your key message, then the storyline that supports it, then the arguments, then the evidence. Calibrate your depth to your audience's level of anticipated pushback: if you expect little resistance, lead with your recommendation. If you expect significant resistance, start with the criteria – what matters and why – before discussing alternatives at all.

Know when to shift gears. Here's a nuance even experienced executives get wrong: the analytical openness that serves you during problem solving can undermine you when presenting conclusions. At some point, you need to transition from "scout mode" (genuinely uncertain, gathering input, testing hypotheses) to "conviction mode" (clear, confident, able to defend your recommendation under fire). Not because you're being dishonest – but because the situation calls for decisiveness. The decision matrix and alternative analyses are the engine room of your work: critical infrastructure, but not what passengers need to see on the upper deck.

Moving Forward When You Can't Know Everything

On March 18, 1967, the supertanker Torrey Canyon was navigating through the Scilly Isles off the coast of England, carrying 120,000 tonnes of crude oil. The first officer had already corrected the ship's course when a sleep-deprived captain woke up and countermanded the change – he was behind schedule and couldn't afford the detour.

The ship ran aground. It caused what was, at the time, the worst oil spill in history.

The captain wasn't unintelligent. He was caught in plan continuation bias – the tendency to stick with a chosen course even when new evidence says you should change. His deadline felt urgent. The risk of running aground felt abstract.

This is the final challenge in FrED: moving from deciding to doing, under real-world conditions of uncertainty and change.

Think in probabilities, not certainties. After completing your FrED analysis, rate your confidence in your chosen alternative on a scale of 0 to 100 – not as a psychological exercise, but as a discipline. It's a reminder that your conclusion is a hypothesis, not a fact. Then ask: what evidence would change my mind? What early signals of success or failure should I be monitoring? Write that down before execution begins.

Update as you learn. A line often attributed to Keynes puts it well: "When the facts change, I change my mind. What do you do, sir?" Bayesian reasoning means continuously updating your confidence as new information arrives – not because you were wrong to decide, but because the world has moved. The goal isn't to predict the future perfectly. It's to be less wrong over time.
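That updating discipline has a simple quantitative core: treat your 0–100 confidence as a probability and revise it with Bayes' rule as evidence arrives. A sketch under assumed likelihoods – the 70%, 80%, and 30% figures are invented for illustration:

```python
def update_confidence(prior, p_evidence_if_right, p_evidence_if_wrong):
    """Bayes' rule: revised probability that your chosen alternative is right,
    given one new observation and how likely that observation would be under
    each hypothesis. All arguments are probabilities in [0, 1]."""
    numerator = prior * p_evidence_if_right
    return numerator / (numerator + (1 - prior) * p_evidence_if_wrong)

# Start 70% confident. An early pilot metric comes in strong: a result you'd
# expect 80% of the time if the strategy works, but only 30% if it doesn't.
confidence = update_confidence(0.70, 0.80, 0.30)
# Confidence rises to roughly 0.86; weak evidence would push it down instead.
```

Running this after each checkpoint keeps the confidence number honest instead of letting plan continuation bias freeze it at its initial value.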

Build in two-way doors. Jeff Bezos made this concept famous: some decisions are hard to reverse (one-way doors) and some can be undone if they don't work out (two-way doors). When facing genuine uncertainty, structure your early commitments as two-way doors wherever possible – maintaining the ability to course-correct before the stakes get higher. Solvable calls the lightweight version of this Rapid FrED: a condensed version of the framework you can run in minutes on smaller, time-sensitive decisions.

Treat every execution as an experiment. Build in checkpoints. Define in advance what "success" looks like at 30, 60, and 90 days. When outcomes deviate from predictions – in either direction – treat the deviation as information, not just a result. The goal isn't to be right the first time. It's to get better at being right over time.

The Full Dragon Quest Map: FrED at a Glance

Let's step back and see the whole journey:

FRAME
  • Write your problem statement (key tool: HTDQ Sequence). Central question: What is my problem, precisely?
  • Stress-test your frame (Backpack, Rabbit, Dolly, Watson Rules). Is my frame accurate and lean?
  • Find the real root cause (Why Maps + MECE + LEAD). What's actually driving this?

EXPLORE
  • Open the solution space (How Maps + ideation techniques). What are all the ways I might solve this?
  • Surface what really matters (Scout Mindset + weighted criteria). What do I actually value here?

DECIDE
  • Compare alternatives honestly (Decision Matrix + Integrative Thinking). Which option is best on balance?
  • Check for hidden interdependencies (Big/Baby Dragon alignment). Are all my decisions pulling in the same direction?
  • Bring stakeholders with you (Logos, Ethos, Pathos + Pyramid Principle). How do I turn analysis into action?
  • Execute under uncertainty (probabilistic mindset + Bayesian updating). How do I keep learning as I go?

The Hidden Cost of Jumping Straight to Solutions

Here's the pattern I kept seeing in academic research labs – and it shows up just as reliably in boardrooms:

People skip the framing. They skip the diagnosis. They jump straight to generating solutions and evaluating them intuitively. And then they wonder why the solutions don't hold up.

The lab version of this was a PI (Principal Investigator) who would hand down a research direction – "let's sequence these samples using approach X" – without first diagnosing whether approach X actually addressed the underlying biological question. Months of work, hundreds of thousands of dollars in reagents and sequencing costs, and then the results come back ambiguous – because the quest was never clearly defined.

The Boeing 737 Max disaster – which opens Solvable – is the same failure at catastrophic scale. Facing competitive pressure from the Airbus A320neo, Boeing executives chose to rapidly update an existing aircraft rather than properly frame their options. The decision was made in weeks, under enormous pressure, without adequate exploration of alternatives or honest assessment of the technical constraints. The result was an automated flight control system (MCAS) that pilots weren't even told about – a solution engineered in secret to a problem that was never properly diagnosed. The consequences were devastating.

FrED won't eliminate bad decisions on its own. But it creates the conditions – structured thinking, explicit assumptions, multiple perspectives – that make them far less likely.

Conclusion: Your Quest Is Waiting

Every worthwhile goal you've ever set has the structure of a Dragon Quest: a Hero with aspirations, a Treasure worth pursuing, a Dragon that stands in the way, and a Quest that defines the journey.

The difference between the hero who succeeds and the one who wanders indefinitely isn't raw intelligence or sheer willpower. It's the quality of their process. Did they frame the right problem? Did they diagnose the actual root causes rather than treating symptoms? Did they genuinely explore the solution space, or just settle for the first idea that felt familiar? Did they decide based on evidence and structured criteria, or did they follow the HiPPO – the Highest Paid Person's Opinion?

The quality of your thinking process determines the quality of your outcomes. Not your IQ. Not your work ethic. Your process.

FrED won't solve your problems for you. But it will give you a map – a real one, not a satellite photo of the London Underground – that shows you where you are, where you're going, and how to get there without fighting the wrong dragon.

Now go write your HTDQ sequence.

Key Takeaways

  • CIDNI problems (Complex, Ill-Defined, Non-Immediate but Important) require a structured process – not intuition. System 1 thinking is dangerous here.
  • The HTDQ Sequence (Hero-Treasure-Dragon-Quest) is the most valuable five-minute investment you can make before starting any major initiative.
  • The four frame rules – Backpack, Rabbit, Dolly, Watson – help you build a frame that's lean, complete, consistent, and free of unexamined assumptions.
  • Diagnosis before solutions. Use Why Maps and MECE thinking to find root causes before generating alternatives.
  • How Maps systematically expand your solution space. Most of us stop too early.
  • Criteria first, alternatives second. The Scout Mindset means knowing what you value before you start comparing options.
  • Decision matrices enforce consistency and expose trade-offs that intuition hides.
  • Integrative thinking turns painful trade-offs into opportunities to design better alternatives.
  • Aristotelian persuasion – logos, ethos, pathos – is the toolkit for turning good analysis into organizational action.
  • A probabilistic mindset and Bayesian updating keep you from the trap of plan continuation bias once execution begins.
