The Smartest Man in the Room Made Dumb Decisions

In 1720, Isaac Newton — arguably the greatest scientific mind in human history — made a catastrophic financial decision.
Newton had invested early in the South Sea Company, a British trading firm whose stock was climbing with the kind of feverish momentum that makes otherwise sane people abandon all caution. He was smart. He saw the warning signs. He sold his shares at a healthy profit and walked away with the equivalent of roughly £7,000 — a small fortune at the time.
Then he watched everyone around him get richer.
Former colleagues, neighbors, people he privately considered his intellectual inferiors were doubling and tripling their money as the stock kept climbing. The FOMO became unbearable. Newton bought back in — this time near the peak, committing a much larger sum. Months later, the South Sea Bubble collapsed. Newton lost the equivalent of £3 million in today’s money.
He reportedly said afterward: “I can calculate the motion of heavenly bodies, but not the madness of people.”
Let that sink in. The man who invented calculus, formulated the laws of motion, and revolutionized our understanding of the universe couldn’t protect himself from a basic investment bubble. Not because he lacked intelligence — he had more of it than almost anyone alive. But because raw intelligence is no defense against the way the human mind actually works under conditions of social pressure, greed, and narrative momentum.
That’s the unsettling premise at the heart of Daniel Kahneman’s Thinking, Fast and Slow. And it’s one I encountered in a far more modest — but no less humbling — way.
The Day I Realized I Didn’t Know How to Think
I used to believe I was a rational person.
I had a PhD. I’d spent years designing experiments, analyzing data, writing papers. Rigorous, systematic, evidence-based thinking was literally my job. If anyone was immune to sloppy reasoning, I figured it was me.
Then one afternoon, a colleague walked into the lab with a “can’t-miss” investment opportunity — a biotech startup, a hot new gene-editing platform, some friends-of-friends who’d gotten in early. The pitch took about ten minutes. By the time he finished, I was already mentally calculating how much I could pull from savings.
I didn’t do due diligence. I didn’t ask hard questions. I just felt like it was a good opportunity. The excitement in his voice, the familiar technical jargon, the social proof of people I respected already being involved — it all felt right. I nearly wired the money that same week.
I didn’t lose my savings. But only because a friend who happened to be a securities attorney asked one uncomfortable question: “Have you actually read the offering documents?”
I hadn’t. There weren’t any.
That near-miss started me on a quest to understand why — why someone trained in careful, evidence-based reasoning nearly made a catastrophically impulsive financial decision because of a ten-minute hallway conversation. The answer led me to Daniel Kahneman’s Thinking, Fast and Slow.
It’s one of those books that changes the way you see everything — not just investing or business, but conversations, relationships, daily choices, the news you consume, and the stories you tell yourself about who you are and why you succeed or fail.
Kahneman, a Nobel Prize-winning psychologist, spent decades studying how humans actually make decisions — and the gap between that and how we think we make decisions is, frankly, terrifying.
Meet the Two Characters Living in Your Head

System 1 and System 2: The Real Story Behind Every Decision You Make
Kahneman’s central idea is deceptively simple: you have two modes of thinking, and understanding which one is running the show at any given moment is the key to making better decisions.
He calls them System 1 and System 2.
System 1 is fast, automatic, and effortless. It operates in the background, constantly, without you asking it to. It’s the system that recognizes faces, reads emotions, completes the sentence “bread and …” without thinking, and slams your brakes before you consciously register the car stopping in front of you. System 1 is a pattern-matching machine that’s been trained by every experience you’ve ever had. It’s powerful, fast, and almost entirely outside your conscious control.
Sound familiar? It should — because System 1 is essentially your habit brain. If you’ve read James Clear’s Atomic Habits (I covered it in a previous post), you’ll recognize this immediately. The habits you build over time are System 1 in action. When you brush your teeth without thinking, take the same route to work on autopilot, or reach for coffee the moment you sit down at your desk — that’s System 1 running a pre-programmed routine so your conscious mind doesn’t have to. This is actually a feature, not a bug. If you had to consciously deliberate every micro-decision from the moment you woke up, you’d be mentally exhausted before breakfast. System 1 handles the routine so System 2 can focus on what actually matters.
System 2 is slow, deliberate, and exhausting. It’s the system you engage when you do long division in your head, read a legal contract, or try to navigate a new city without GPS. System 2 requires concentration. It burns mental energy. And — this is the key part — it’s fundamentally lazy. Given the choice, System 2 would rather let System 1 handle things.
This division of labor is elegant and mostly works beautifully. The problem arises when System 1 quietly takes over decisions that genuinely need System 2’s attention — complex, high-stakes, novel situations where fast pattern-matching is not just insufficient, but actively dangerous. System 1 was shaped by millions of years of evolution for a world that doesn’t exist anymore — a world where quick, confident decisions about predators and food and social hierarchies were more important than careful, probabilistic reasoning about stock markets, hiring decisions, or startup strategy.
The mismatch between the world System 1 was built for and the world we actually live in is the source of virtually every cognitive bias Kahneman describes. And there are many.
The Invisible Author of Your Opinions

Why You’re Probably Confident About Things You Know Nothing About
Imagine I show you two questions:
- Is the height of the tallest redwood tree more or less than 1,200 feet?
- What is your best guess about the height of the tallest redwood tree?
Now imagine a different group gets these questions:
- Is the height of the tallest redwood tree more or less than 180 feet?
- What is your best guess about the height of the tallest redwood tree?
Both groups are trying to estimate the same thing. The only difference is the number they saw first — 1,200 or 180. But studies show the two groups give dramatically different answers. The group that saw 1,200 feet guesses much higher than the group that saw 180 feet. Neither group consciously adjusts their thinking based on those numbers. But their answers are anchored to them anyway.
This is anchoring, and it’s one of System 1’s most reliable tricks. Whatever number, idea, or frame you encounter first becomes a kind of gravitational center that pulls your subsequent thinking toward it — even when you know the anchor is arbitrary.
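To make that pull concrete, here is a minimal toy simulation in Python (my own sketch, not from the book). It assumes a simple anchoring-and-adjustment model: each person's stated guess is their private estimate dragged a fixed fraction of the way toward whatever anchor they saw. The private beliefs and the 0.35 pull factor are made up purely for illustration.

```python
import random

def anchored_guess(private_estimate, anchor, pull=0.35):
    """Toy anchoring-and-adjustment model: the stated guess is the private
    estimate dragged a fixed fraction of the way toward the anchor."""
    return private_estimate + pull * (anchor - private_estimate)

random.seed(1)
# Assume (purely for illustration) that everyone privately believes
# the tallest redwood is somewhere around 300-400 feet.
private_estimates = [random.uniform(300, 400) for _ in range(1000)]

high_anchor_group = [anchored_guess(e, anchor=1200) for e in private_estimates]
low_anchor_group = [anchored_guess(e, anchor=180) for e in private_estimates]

print(f"High-anchor (1,200 ft) group mean: {sum(high_anchor_group) / len(high_anchor_group):.0f} ft")
print(f"Low-anchor (180 ft) group mean:    {sum(low_anchor_group) / len(low_anchor_group):.0f} ft")
# Same private beliefs in both groups; the only difference is the anchor they saw first.
```

Both groups start from identical beliefs, and the anchor alone pushes their averages hundreds of feet apart.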
I saw anchoring destroy a negotiation in real time during my academic years. A PI I knew was trying to hire a postdoc and opened with a salary offer of $42,000 — absurdly low, even for academia. The candidate, a sharp researcher who probably could have negotiated to $58,000, ended up accepting $49,000. They felt like they’d won because they’d moved the number up. The PI had set the anchor so low that even a “successful” negotiation landed the candidate well below market.
In business, anchoring shows up everywhere. The “original price” crossed out on a sale tag. The first number in a salary negotiation. The initial valuation floated in a funding discussion. The first number in the room has enormous power — far more than logic says it should.
What to do about it: Before entering any negotiation or major decision, do your own independent research and set your own anchor first. When you receive an anchor from someone else, consciously force yourself to consider extreme alternatives on both sides before settling on your estimate.
The Story Machine: Why Facts Don’t Change Minds

How Your Brain Builds Narratives That Feel Like Truth
Here’s a scenario: You’re deciding whether to invest in a small restaurant that just opened in your neighborhood. You stop in for lunch. The food is fantastic — creative, well-executed, exactly the kind of place your area has been missing. The chef comes out to chat. She’s passionate, articulate, clearly talented. You leave feeling excited.
Are you more likely to invest now?
Most people say yes. But here’s what you actually have: one data point (a single meal), a positive emotional experience, and a charming conversation. You know nothing about the restaurant’s finances, its customer retention, whether the chef has any business acumen, or what the rent situation looks like. You have essentially no information relevant to a sound investment decision.
But System 1 doesn’t care. It has already written the story: talented chef, great food, underserved market, passionate founder — this is the template for success. It feels like a good investment. And that feeling is so convincing that most people never notice they’re making a decision based on a story rather than evidence.
Kahneman calls this WYSIATI — What You See Is All There Is. System 1 builds the most coherent story it can from whatever information is in front of you, and then treats that story as the truth — not as an incomplete picture assembled from limited data.
This is why smart, informed people can look at identical information and reach completely opposite conclusions, each feeling certain they’re right. They’re not working from the same data; they’re working from different stories built on top of that data.
It’s also why expertise can be a trap. When I was deep in my research specialization, I had an incredibly coherent story about how the field worked, where the opportunities were, and which approaches were worth pursuing. The coherence of that story felt like insight. But coherence isn’t accuracy — sometimes a story hangs together beautifully precisely because you’ve unconsciously edited out the inconvenient facts.
What to do about it: Before making any significant decision, ask yourself: what information am I not seeing? What would change this story? Find someone who disagrees with you and listen — not to argue, but to genuinely understand what their story is built on.
The Expert Illusion: Confidence Is Not Competence

Why the People Who Sound Most Certain Are Often Most Wrong
A financial advisor who has been predicting markets for twenty years walks into a room. He’s poised, confident, and has a long list of impressive-sounding calls he’s gotten right. You’re inclined to trust him. Why wouldn’t you?
Kahneman would tell you to be very careful.
One of the most important and uncomfortable findings in Thinking, Fast and Slow is that human expertise is far more limited than we assume — particularly in complex, unpredictable environments. When researchers analyzed the long-term performance of stock-picking experts, financial advisors, and investment fund managers, they found something troubling: over the long run, the vast majority perform no better than random chance. The few who did outperform the market in one period showed no reliable tendency to outperform in the next.
But here’s the psychological puzzle: the experts themselves didn’t know this. They remembered their wins. They built narratives around their wins. They genuinely felt like they had skill, and their confidence was real. The feeling of expertise is almost entirely disconnected from actual predictive accuracy in complex domains.
This isn’t true of all expertise. A chess grandmaster genuinely has skills an amateur lacks. An experienced surgeon has pattern-recognition abilities built through thousands of cases that can’t be replicated through reading. Expertise in predictable environments, where feedback is immediate and clear, tends to be real.
But expertise in unpredictable environments — markets, politics, social trends, startup success — is far more illusory than the experts themselves realize. The feedback loops are too long, too noisy, and too prone to confounding variables for accurate skill-building to occur. Yet the experts still develop the feeling of insight, because System 1 is always finding patterns, always telling stories, always manufacturing coherence.
I think about this when I hear scientists-turned-entrepreneurs talk about their gut feeling for which startups will succeed. Maybe they’re right more often than chance. But in the absence of careful tracking over hundreds of calls, nobody really knows — including them.
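You can see how easy it is to mistake luck for skill with a short simulation (a sketch of my own, with made-up return numbers): give a thousand "managers" yearly returns that are pure noise, then check whether the year-one winners repeat.

```python
import random

random.seed(42)
N_MANAGERS, YEARS = 1000, 2

# Pure-luck world: every manager's yearly return is random noise around the market.
returns = {m: [random.gauss(0.07, 0.15) for _ in range(YEARS)] for m in range(N_MANAGERS)}

def top_quartile(year):
    ranked = sorted(returns, key=lambda m: returns[m][year], reverse=True)
    return set(ranked[: N_MANAGERS // 4])

winners_y1 = top_quartile(0)
winners_y2 = top_quartile(1)
repeat_rate = len(winners_y1 & winners_y2) / len(winners_y1)

print(f"Top-quartile managers in year 1 who repeat in year 2: {repeat_rate:.0%}")
# With zero skill anywhere in the system, roughly 25% repeat by chance alone,
# and every "repeat winner" will have a convincing story about why their process works.
```

Without tracking hundreds of calls against a chance baseline like this, the feeling of skill and actual skill are indistinguishable.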
A few years ago, I spent about six months seriously exploring a move into consulting. I prepared obsessively — case frameworks, practice interviews, the whole drill. During that process, I kept hearing an industry saying that stuck with me: “Frequently wrong, but never in doubt.” The winking version was even sharper: “I wasn’t wrong — you just didn’t follow the instructions correctly.” Everyone laughed when they said it. But underneath the humor was something real: in consulting, as in many expert-driven fields, the performance of confidence is often more valued than the accuracy of the prediction. Clients want certainty. Experts learn, consciously or not, to project it — even when the underlying reality is far murkier than the polished deck suggests.
What to do about it: In domains where outcomes are complex and feedback is slow, be deeply skeptical of confident predictions — including your own. Use base rates (what typically happens in situations like this?) as your anchor instead of narrative logic.
Losses Are Louder Than Gains

The Psychological Asymmetry That Controls More of Your Life Than You Think
Let’s play a game. I’ll flip a fair coin.
If it lands tails, you lose $100. If it lands heads, you win… how much would you need to win to make this feel like a fair bet?
For most people, the answer is somewhere between $150 and $250. A 50/50 bet that risks losing $100 feels like it needs a potential win of at least $150 to $200 before it seems worth taking.
This is loss aversion — the discovery that losses feel roughly twice as powerful, psychologically, as equivalent gains. And it’s one of the most consequential findings in all of behavioral economics.
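If you want to see the arithmetic behind that intuition, here is a small sketch using the simplified, linear version of Kahneman and Tversky's value function with a loss-aversion factor of roughly 2. The full prospect-theory curve is more elaborate; this is just an illustration.

```python
def subjective_value(x, loss_aversion=2.0):
    """Simplified linear prospect-theory value: losses weigh ~2x as heavily as gains."""
    return x if x >= 0 else loss_aversion * x

def felt_value_of_coin_flip(win_amount, lose_amount=100):
    # 50/50 gamble: average the subjective value of winning and of losing.
    return 0.5 * subjective_value(win_amount) + 0.5 * subjective_value(-lose_amount)

for win in (100, 150, 200, 250):
    v = felt_value_of_coin_flip(win)
    verdict = "feels acceptable" if v >= 0 else "feels like a bad bet"
    print(f"Win ${win} / lose $100: felt value {v:+.0f} -> {verdict}")
# Win $100 vs. lose $100 nets out to a felt value of -50: the objectively fair bet
# feels bad, which is why people demand roughly $200 of upside before saying yes.
```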
The practical implications are enormous. Loss aversion explains:
- Why investors hold losing stocks far too long (selling would make the loss “real”)
- Why people stay in bad jobs, bad relationships, and bad situations (leaving means acknowledging a loss)
- Why sunk costs haunt our decisions (the money is already gone and shouldn’t influence the choice, yet we throw good money after bad, again and again)
- Why startup founders pivot too late (admitting the original idea failed is a loss)
- Why people negotiate so hard over small amounts when the downside is framed as a loss
Back in China, a real estate developer invested billions of dollars with the vision of building a grand luxury resort. The warning signs appeared early — flagging demand, tightening credit, a location that was never as accessible as the projections assumed. But the developer kept pouring money in, round after round, convinced that the next phase of construction, the next amenity, the next marketing push would be the one that turned it around. They couldn’t stop because stopping would mean admitting that all that capital was gone. Eventually, the cash ran dry. The resort was never finished. Today it stands as a notorious ghost town — grand structures slowly being reclaimed by weeds, a monument not to ambition, but to the paralyzing psychology of sunk costs.
The pain of acknowledging a loss — admitting the original thesis was wrong — outweighed every rational calculation that cutting losses and redirecting resources was the obviously correct move. Loss aversion didn’t just influence the decision; it completely hijacked it.
Here’s the twist: loss aversion can sometimes be your friend. It makes entrepreneurs cautious about burning through cash too fast. It makes scientists rigorous about publishing prematurely. Appropriate caution has real value. The problem is when loss aversion locks you into bad situations because leaving would require admitting a loss.
What to do about it: When evaluating whether to continue something — a project, a strategy, an investment — try to strip away the history. Ask: if I were starting fresh today with no prior investment, would I choose this? If the answer is no, you’re probably being held hostage by sunk costs and loss aversion.
The Halo Effect and the Stories We Tell About People

Why Your First Impression Is Rewriting Everything That Comes After
Imagine two people being described to you:
Person A: Intelligent, industrious, impulsive, critical, stubborn, envious.
Person B: Envious, stubborn, critical, impulsive, industrious, intelligent.
It’s the exact same list of traits, in reverse order. But research consistently shows that Person A is rated more favorably. The first traits color how we interpret everything that follows. Intelligent and industrious create a warm initial impression; the negative traits that follow get softened, explained away, or reinterpreted. Envious and stubborn create a cold initial impression; the positive traits that follow get minimized.
This is the Halo Effect — the way a positive (or negative) impression in one area bleeds into how we evaluate everything else about a person, idea, or company.
In hiring, this is catastrophic. Interviewers who like a candidate’s handshake and eye contact in the first thirty seconds of an interview have already, unconsciously, begun constructing a narrative in which the candidate succeeds. Everything the candidate says afterward gets interpreted through that lens. “Tell me about a challenge you faced” becomes an opportunity for an already-liked candidate to shine and an already-disliked candidate to dig a hole.
In investing, the Halo Effect means we evaluate companies based on their founders. A charismatic, well-credentialed founder with a good story creates a halo that makes due diligence feel almost rude. We don’t want to poke holes in the narrative because we’ve already decided, emotionally, that we like this person.
My previous lab once hired a lab technician almost entirely on the strength of a very impressive CV and a confident interview. Within three months, it was clear the person had a deep resistance to feedback and struggled to follow protocols. The qualities they should have been testing for in the interview — coachability, attention to detail, the ability to follow procedures precisely — were never really probed, because the halo from the impressive background had already sold them.
What to do about it: In hiring and in evaluating opportunities, design processes that force you to evaluate specific, relevant criteria before you’ve formed an overall impression. Structured interviews, blind resume reviews, and pre-commitment to evaluation criteria all help break the halo’s grip.
The Two Selves: Why You Remember Experiences Differently Than You Live Them

The Strange Gap Between What Feels Good and What You’ll Remember
Here’s one of the most thought-provoking ideas in the entire book.
Kahneman draws a distinction between the experiencing self — the version of you living through moments in real time — and the remembering self — the version of you that looks back and evaluates how things went.
These two selves don’t agree with each other. At all.
In a famous study, participants underwent two versions of a mildly painful experience (holding their hand in cold water). In one version, they held their hand in water at a painfully cold temperature for 60 seconds. In the second version, they did the same 60 seconds, but the water was then warmed slightly for an additional 30 seconds before they removed their hand.
Objectively, version two involved more total pain — it lasted longer. But when asked which experience they’d prefer to repeat, most people chose version two. Why? Because the ending was better. The Peak-End Rule says we evaluate experiences based on how they peaked and how they ended, not their total duration or average quality.
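A few made-up pain ratings show how the two evaluations come apart. This sketch simply contrasts total pain with a basic peak-end score (the average of the worst moment and the final moment); the numbers are illustrative, not from the study.

```python
# Hypothetical second-by-second pain ratings (0-10), purely illustrative.
version_a = [8] * 60             # 60 s of painfully cold water
version_b = [8] * 60 + [5] * 30  # same 60 s, plus 30 s of slightly warmer water

def total_pain(ratings):
    return sum(ratings)

def peak_end_score(ratings):
    # The remembering self scores an episode by the average of its worst moment and its last moment.
    return (max(ratings) + ratings[-1]) / 2

for name, trial in (("A (shorter)", version_a), ("B (longer)", version_b)):
    print(f"Version {name}: total pain {total_pain(trial)}, remembered pain {peak_end_score(trial)}")
# Version B contains strictly more total pain but earns a lower peak-end score,
# so it is remembered as the less unpleasant trial.
```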
This has wild implications. A vacation that ends with a lost wallet and a missed flight will be remembered more negatively than a shorter, simpler vacation that ended on a high note — even if the first vacation had more genuinely wonderful moments.
For entrepreneurs and professionals, this means the way you close things matters enormously. How a client relationship ends shapes how that client remembers your entire engagement. How a team meeting ends shapes how productive people feel the whole meeting was. How a job ends shapes how an employee describes your company culture to every future employer they meet.
The experiencing self lives through your days. The remembering self writes your autobiography. And they’re writing very different books.
What to do about it: Pay deliberate attention to endings. Don’t let important projects, relationships, or presentations trail off — design the close. Ask yourself: how do I want this to end, and what will the remembering self take away?
Regression to the Mean: The Most Misunderstood Concept in Business

Why Your Best Advice Might Be Making Things Worse
An Israeli flight instructor told Kahneman something that stuck with him for years. The instructor had noticed a pattern: whenever he praised a trainee pilot for an exceptional maneuver, the pilot’s next attempt was usually worse. Whenever he criticized a pilot for a poor performance, the next attempt was usually better. His conclusion? Praise is ineffective. Criticism works.
He was drawing exactly the wrong lesson from the data — but doing so completely rationally, given what he could observe.
The real explanation is regression to the mean: extreme performances, whether exceptionally good or exceptionally bad, tend to be followed by more average performances. This is a mathematical certainty in any system with random variation. A pilot who lands a perfect maneuver has likely performed at the upper end of their ability distribution; the next attempt will probably be closer to their average. A pilot who crashes a landing has likely performed at the lower end; the next attempt will probably be better.
But because the praise or criticism happened in between the performance and the observation, our pattern-matching System 1 assigns it causal power it doesn’t have.
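A quick simulation makes the illusion vivid (my own sketch, with arbitrary skill and noise numbers): give every pilot a fixed skill plus random luck, "praise" the best attempts, "criticize" the worst, and look at what happens on the very next attempt even though the feedback does nothing.

```python
import random

random.seed(7)

def attempt(true_skill=50, noise=10):
    """One landing score: fixed skill plus random luck."""
    return random.gauss(true_skill, noise)

after_praise, after_criticism = [], []
for _ in range(100_000):
    first, second = attempt(), attempt()  # feedback has NO effect on the second attempt
    if first > 65:        # exceptionally good -> instructor praises
        after_praise.append(second - first)
    elif first < 35:      # exceptionally bad -> instructor criticizes
        after_criticism.append(second - first)

print(f"Average change after praise:    {sum(after_praise) / len(after_praise):+.1f}")
print(f"Average change after criticism: {sum(after_criticism) / len(after_criticism):+.1f}")
# Performance drops after praise and improves after criticism even though the
# feedback does nothing here -- pure regression to the mean.
```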
This is everywhere in business:
- The sales rep who had a record quarter gets promoted, then has a mediocre year. The promotion “ruined” her? No — she just regressed to the mean.
- A startup that gets a glowing magazine profile almost always underperforms afterward. The profile “jinxed” it? No — extreme success generates press, and the next period is usually more average.
- A new manager who inherits a struggling team sees improvement within months. Is it his leadership? Maybe partly. But also, extremely poor performance tends to improve on its own.
The practical danger is that we build policies, reward structures, and management philosophies based on what we think causes improvement or decline — when we’re actually observing statistical noise.
What to do about it: Before attributing a change in performance to any intervention, ask: would I have expected regression to the mean here anyway? Is the change statistically significant, or is it just noise? This is genuinely hard to do intuitively, which is why having systems for tracking and analyzing data is more valuable than gut-feel interpretations of performance trends.
The Substitution Trap: Why Hard Questions Get Easy Answers

How Your Brain Quietly Swaps Complexity for Comfort
Here’s a question: How much should society invest in protecting lakes and rivers from pollution?
That’s hard. It requires thinking about economic tradeoffs, ecological science, political feasibility, comparative risk, generational equity — dozens of genuinely complex considerations.
But watch what System 1 does: it quietly swaps that question for an easier one. Something like: How much do I personally care about lakes and rivers? And it answers that question with emotion — a warm, positive feeling — and presents it as an answer to the hard question.
Kahneman calls this substitution, and it happens constantly. When we face a difficult question, System 1 replaces it with a related but much simpler question we can answer intuitively, and we rarely notice the swap has happened.
- “Is this startup a good investment?” becomes “Do I like and admire this founder?”
- “Should I take this new job offer?” becomes “How excited do I feel right now?”
- “Is this scientific paper credible?” becomes “Do I recognize the author’s institution?”
- “Is this strategy right for our company?” becomes “Does it feel innovative and bold?”
The answers to the easy questions aren’t wrong, exactly — they contain information. But they’re not the same questions. And mistaking them for the hard questions is how intelligent, experienced people make decisions that look, in retrospect, bafflingly poor.
I see this in research settings all the time. “Is this hypothesis worth testing?” often gets quietly replaced by “Would confirming this hypothesis be exciting and publishable?” Those questions have different answers, and the substitution is almost never conscious.
What to do about it: For any important decision, write down the actual question you’re trying to answer before you start deliberating. Check back after you’ve deliberated: have you actually answered that question, or have you answered a related but easier one?
Slow Down to Win: Building a System 2 Practice

Practical Strategies for Better Thinking in a System 1 World
Everything so far might leave you feeling a bit hopeless. If our biases are so automatic, so deeply baked into our cognitive architecture, what can we actually do?
Kahneman is honest: you can’t eliminate System 1. You wouldn’t want to. It handles the vast majority of your moment-to-moment life competently and efficiently, freeing System 2 for when it’s truly needed. The goal isn’t to think slowly about everything — that’s exhausting and impractical.
The goal is to know when to engage System 2, and to have practices that make that engagement more reliable.
Here are the strategies I’ve found most useful:
Create friction for high-stakes decisions. Before any decision that’s hard to reverse — a major investment, a key hire, a strategic pivot — build in mandatory waiting time. Commit to sleeping on it. The feeling of certainty you have today may look very different tomorrow morning. This is especially important when you’re emotionally activated: excited, anxious, or angry.
Pre-mortem your plans. Before launching a project or strategy, gather your team and imagine it’s twelve months in the future and the project failed catastrophically. Ask: what happened? This isn’t pessimism — it’s forcing System 2 to generate failure scenarios that System 1, with its optimistic storytelling, would never surface. Gary Klein, a psychologist who developed this technique, found it dramatically improves the quality of planning.
Use base rates aggressively. When evaluating anything — a new market, a startup idea, a candidate — resist the pull of the specific narrative and anchor yourself to what typically happens in situations like this. Most new restaurants fail. Most fundraising rounds take longer than expected. Most IT projects go over budget. These base rates feel pessimistic until you realize they’re just accurate.
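One way to take base rates seriously is to run the numbers explicitly. Here is a sketch using Bayes' rule with entirely made-up figures for the restaurant example from earlier: start from the base rate, then ask how much a positive signal should really move you.

```python
def posterior(base_rate, signal_rate_if_success, signal_rate_if_failure):
    """Bayes' rule: probability of success given that the positive signal was observed."""
    p_signal = (base_rate * signal_rate_if_success
                + (1 - base_rate) * signal_rate_if_failure)
    return base_rate * signal_rate_if_success / p_signal

# Made-up numbers for a new restaurant:
#   base rate of surviving five years: 20%
#   chance a survivor gets rave early reviews: 70%
#   chance a non-survivor gets rave early reviews anyway: 30%
p = posterior(0.20, 0.70, 0.30)
print(f"P(success | rave early reviews) = {p:.0%}")
# The glowing lunch nudges 20% up to roughly 37% -- better odds, but nowhere near
# the near-certainty the coherent story makes it feel like.
```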
Separate the gut check from the decision. It’s fine to check in with your intuition — it contains real information about patterns you’ve absorbed. But treat intuition as one input, not the conclusion. Explicitly ask System 2 to audit the intuition: what assumptions does this feeling depend on? What would have to be true for my gut to be right?
Standardize evaluations. Whether you’re hiring, investing, or evaluating vendors, design a consistent scorecard before you see any candidates or options. This limits the Halo Effect and ensures you’re comparing things on the same criteria rather than whatever quality happened to impress you most about each option.
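As a concrete sketch (hypothetical criteria and weights, not a prescription), a pre-committed scorecard can be as simple as a few lines of code:

```python
# Hypothetical criteria and weights, committed to BEFORE meeting any candidate.
CRITERIA = {
    "follows_protocols_precisely": 0.30,
    "responds_well_to_feedback": 0.30,
    "attention_to_detail": 0.25,
    "relevant_technique_experience": 0.15,
}

def score_candidate(ratings):
    """Weighted score from 1-5 ratings on each pre-committed criterion."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Impressive-CV candidate vs. quietly solid candidate (illustrative ratings only).
candidate_a = {"follows_protocols_precisely": 2, "responds_well_to_feedback": 2,
               "attention_to_detail": 3, "relevant_technique_experience": 5}
candidate_b = {"follows_protocols_precisely": 4, "responds_well_to_feedback": 5,
               "attention_to_detail": 4, "relevant_technique_experience": 3}

print(f"Candidate A (halo): {score_candidate(candidate_a):.2f}")
print(f"Candidate B:        {score_candidate(candidate_b):.2f}")
# Scoring against criteria fixed in advance keeps the first impression
# from quietly rewriting every later answer.
```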
What This All Means for Your Business and Your Life
The Competitive Advantage of Knowing How You Think
Here’s the thing that excites me most about Kahneman’s work: it’s not just descriptive. It’s a competitive advantage.
Most people — even smart, educated, analytically trained people — have no mental model for why they believe what they believe or decide what they decide. They trust their gut without knowing why the gut says what it says. They make confident predictions without understanding the limits of their expertise. They interpret data through stories without noticing that the story came first.
The person who understands cognitive biases doesn’t transcend human psychology — nobody does. But they can build systems that catch their own biases. They can recognize the feeling of anchoring, substitution, or loss aversion in real time and pause before acting. They can design their organizations to make better collective decisions, not just better individual ones.
In a world where everyone has access to the same information, the edge increasingly comes from thinking more clearly — from asking better questions, interpreting evidence more honestly, and making decisions that survive contact with reality.
Thinking, Fast and Slow is, at its heart, a manual for that kind of thinking. Not a guarantee, not a formula — a manual. You still have to do the work, ask the uncomfortable questions, and tolerate the discomfort of uncertainty. But doing that work with an understanding of how your mind actually operates puts you miles ahead of doing it blind.
My near-miss with that biotech “investment” still stings a little. But it also gave me one of the most useful lessons of my career: the confidence you feel when making a decision tells you almost nothing about whether that decision is right. Confidence is a feeling generated by System 1. Correctness is a property of the world.
Knowing the difference might be the most important thinking skill you’ll ever develop.
The Bottom Line: Practical Takeaways
Here’s your cheat sheet from Thinking, Fast and Slow:
- System 1 is running most of your life — fast, intuitive, pattern-based, and prone to systematic errors. System 2 is slow, deliberate, and lazy. Know which is driving.
- Anchoring is everywhere — the first number or idea you encounter warps every estimate that follows. Set your own anchors first.
- Coherent stories feel true even when they’re built on thin evidence. Ask what you’re not seeing before trusting a narrative.
- Expert confidence and expert accuracy are not the same thing — especially in complex, unpredictable domains. Use base rates.
- Losses loom twice as large as gains. Sunk costs and loss aversion lock people into bad situations. Judge forward, not backward.
- The Peak-End Rule shapes memory — how things end matters disproportionately.
- Regression to the mean explains many “effects” that aren’t effects at all. Don’t assign causal power to noise.
- Substitution is invisible — check that you’re answering the hard question, not a comfortable substitute.
- Pre-mortem, base rates, waiting periods, and structured evaluation are your practical defenses.
Have you caught yourself in one of these cognitive traps? I’d love to hear your story — leave a comment below or reach out directly. And if you found this useful, subscribe to the BullishBooks newsletter for weekly insights from the world’s best business and personal development books.
