Thursday, May 7, 2026
Leadership & Teams

Delaware Court's 2022 McDonald's Ruling Puts Every Corporate Officer on Legal Hook for Risk Oversight

Original source: NinjaCat


This video from NinjaCat covered a lot of ground; eight segments stood out as worth your time. Everything below links directly to its timestamp in the original video.

If you hold any officer-level title at a public company, a 2022 court ruling may mean shareholders can now sue you personally — not your employer — for failing to flag risks you knew about.


A 2022 Delaware Chancery Court ruling, triggered by a sexual harassment case at McDonald's in which the company's chief human resources officer was implicated, fundamentally rewired corporate liability. The vice chancellor used the case to extend the fiduciary duty of oversight — previously the exclusive domain of boards and CEOs — to all corporate officers, with personal liability attached. Critically, the court lowered the standard of breach from criminal malfeasance to simple negligence: any officer who fails to conduct ongoing risk assessments, document them, and report them regularly to the CFO and general counsel is now directly suable by shareholders.

The structural reality is that this ruling transforms risk management from a compliance formality into a personal financial exposure for every C-suite officer. The perverse consequence is already visible: officers aware of bad decisions are, in at least one documented case, actively concealing those risks from their boards rather than reporting them — because disclosure would reflect badly on them personally. The ruling intended to surface risk; in practice, it may be driving risk further underground.

"They extended fiduciary duty of oversight to all corporate officers with personal liability, and they reduced the standard of breach from malfeasance to simple negligence."

▶ Watch this segment — 46:20


McKinsey Research Finds Only 5% of Fortune 2000 Data Is Fit for Decision-Making — and Humans Are the Main Culprit

The problem with AI-powered decision-making is not, primarily, the algorithms — it is the data those algorithms consume. McKinsey research places the share of Fortune 2000 data genuinely capable of supporting better decisions at between three and seven percent, a figure that points not to technical failure but to human behavior. In a case where Stouse, the video's featured guest, served as an expert witness, fraud-detection software run against a California company's CRM system — covering fourteen years of records — flagged more than two-thirds of the data as highly engineered. The manipulation was not organized fraud but the accumulated result of individual salespeople quietly adjusting entries over years, then forgetting they had ever done so.

What most people miss is that correlation-based data systems are structurally vulnerable to this kind of manipulation precisely because they reward narrative-fitting. If the data does not support a particular story, it is not technically difficult to adjust it until it does. The compounding effect of thousands of small individual distortions over a decade and a half produces a dataset that looks coherent but is, in practice, useless — and potentially dangerous when fed into AI systems designed to find patterns in whatever they are given.
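
The segment does not say how the fraud-detection software scored those records, but one standard technique for flagging engineered numeric data is a first-digit (Benford's law) test: organically generated transaction amounts tend to follow a logarithmic first-digit distribution, and hand-tweaked figures drift away from it. A minimal sketch of that idea (the deal values, threshold, and scoring rule are illustrative assumptions, not the software from the case):

```python
import math
from collections import Counter

def benford_deviation(amounts):
    """Mean absolute deviation between observed first-digit
    frequencies and Benford's expected distribution."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits)
    counts = Counter(digits)
    dev = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)   # Benford: P(first digit = d)
        observed = counts.get(d, 0) / n
        dev += abs(observed - expected)
    return dev / 9

# Hypothetical usage: a salesperson's entries cluster suspiciously
# around round-number "targets" (threshold is illustrative only).
deals = [1243.50, 982.00, 1175.25, 1100.00, 1150.00, 1190.00]
if benford_deviation(deals) > 0.05:
    print("records look engineered; review manually")
```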

"More than two-thirds came back as highly engineered — not by an organization, but by lots and lots of individual sales guys over the years who had tweaked this data and promptly forgotten that they ever did it."

▶ Watch this segment — 16:26


AI Will Eliminate Platforms and Cut Sales Headcount by 80%, Stouse Predicts — and SaaS Revenue Cannot Be Replaced

The dominant business model of enterprise software faces a structural threat that goes well beyond automation of individual tasks. As agentic AI matures, the core value proposition of platforms — creating connection, continuity, and integration across an organization — becomes redundant. Businesses will instead operate in best-of-breed environments, assembling whatever tools perform best and syncing them through agents. The downstream consequence for sales organizations is severe: Stouse projects that sales headcount will shrink by eighty percent within two to three years, leaving only subject-matter specialists engaged at the final stage of complex deals. SaaS revenue, already down roughly thirty percent year-to-date by his account, faces a further collapse that agent-based pricing models cannot offset.

The real question is whether major platform vendors understand that they are not facing a competitive threat from a rival product but an architectural shift that makes the platform category itself obsolete. Trying to backfill lost seat revenue by monetizing AI agents is, in Stouse's analysis, a mathematical impossibility — the economics of agent pricing simply cannot replace the volume of per-seat SaaS contracts that automation will render unnecessary.
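
Stouse gives no figures for the gap itself, but the shape of the argument is back-of-envelope arithmetic: seat revenue scales with headcount, agent revenue does not. Every number in the sketch below is a made-up assumption for illustration, not a figure from the video:

```python
# Illustrative back-of-envelope only -- all figures are assumptions.
seats, seat_price = 10_000, 100      # hypothetical per-seat SaaS, $/month
lost_seats = seats * 80 // 100       # the projected 80% seat reduction
lost_revenue = lost_seats * seat_price

agents, agent_price = 200, 500       # hypothetical agent deployments, $/month
agent_revenue = agents * agent_price

print(f"lost seat revenue: ${lost_revenue:,}/mo")   # $800,000/mo
print(f"new agent revenue: ${agent_revenue:,}/mo")  # $100,000/mo
print(f"shortfall:         ${lost_revenue - agent_revenue:,}/mo")
```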

"You can't extract enough money from agents nor agentic systems to backfill that kind of loss. You just can't."

▶ Watch this segment — 39:13


Two Configuration Changes Can Cut AI Hallucination by More Than 90%, Stouse Argues

Most professionals using AI tools accept their outputs at face value, but the tools' default behavior is engineered to please rather than to inform. Stouse compares the affirmation bias built into current AI to a Labrador retriever — constitutionally inclined to make its owner happy, which means it will hedge, soften, and avoid contradiction unless explicitly instructed otherwise. Two changes to the master configuration address this directly: instructing the tool to return only known facts and never opinions reduces hallucination by well over ninety percent, while a separate instruction to flag errors directly, without diplomatic hedging, counteracts the system's trained politeness. Both instructions can be phrased in plain language, because AI tools parse vernacular phrasing: terms like 'back into it' are processed syntactically even when their metaphorical register is opaque.

A separate and less obvious limitation is that current AI tools have no internal clock or calendar. They are genuinely unaware of what time or date it is, which means any task requiring time-sensitive sequencing — 'have this done by 2:30 Pacific' — produces no reliable output. For fast tasks measured in seconds, this gap is invisible. For anything requiring sustained, time-structured work, it is a significant architectural absence that most users have never been told about.
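
Stouse is describing the master configuration (custom instructions) of consumer AI tools, but the same two rules, plus an injected timestamp to compensate for the missing clock, can be expressed as a system prompt over an API. A sketch using the OpenAI Python SDK; the model name is an assumption and the prompt wording paraphrases the segment:

```python
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two instructions from the segment, paraphrased, plus an injected
# timestamp because the model has no internal clock or calendar.
system_prompt = (
    "Return only known, verifiable facts. Never return opinions. "
    "If you are not certain of a claim, say so explicitly. "
    "Flag errors in my statements directly, without diplomatic hedging. "
    f"The current UTC date and time is {datetime.now(timezone.utc).isoformat()}."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize the 2022 Delaware McDonald's ruling."},
    ],
)
print(response.choices[0].message.content)
```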

"Tell it: I'm not interested in anybody's opinions. Only return known fact. That right there is going to reduce hallucination by way more than 90%."

▶ Watch this segment — 28:57


The Structural Flaw at the Heart of Generative AI: Pattern-Matching Is Not the Same as Understanding

Every AI tool that most professionals use — ChatGPT included — is built on machine learning, which is a form of correlation-based pattern recognition. The limitation embedded in that architecture is not a software bug or a training deficiency; it is a mathematical constant. Correlation does not imply causality, a rule established long before the AI era, and nothing about large language models has changed that. Worse, the process of averaging vast quantities of data to surface patterns produces a phenomenon called regression to the mean, where the system's outputs actually become less precise — not more — as the dataset grows. Apple's own research flagging AI performance problems, Stouse argues, should have surprised no one who understood this structural reality from the outset.

The market has responded to this limitation not with honesty but with rebranding. Terms like 'causal ML' and 'double machine learning' circulate widely, but Stouse dismisses both as oxymorons — correlation-based systems cannot produce causal inference regardless of what they are named. Causal inference mathematics is genuinely different: it does not rely on pattern matching, does not involve the kind of weighted averaging that causes regression to the mean, and carries none of the structural weaknesses that afflict generative AI. Its only vulnerability is the one shared by every computational system since the beginning of computing: garbage in, garbage out.
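
The correlation-versus-causation point is easy to verify numerically: when a hidden confounder drives two variables, a pattern-matcher sees a strong correlation between them even though intervening on one does nothing to the other. A minimal simulation (illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder z drives both x and y; x has no causal effect on y.
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)
y = 3 * z + rng.normal(size=n)
print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # strong, ~0.85

# Intervene: set x ourselves, breaking its dependence on z.
x_do = rng.normal(size=n)
y_after = 3 * z + rng.normal(size=n)  # y is unmoved by the intervention
print(f"corr(do(x), y) = {np.corrcoef(x_do, y_after)[0, 1]:.2f}")  # ~0.00
```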

"By averaging and averaging and averaging all kinds of data to get to a pattern, it actually starts to get stupid, not smarter."

▶ Watch this segment — 11:36


Three AI Tools Configured as a Peer-Review System: One Executive's Approach to Verifying His Own Thinking

Rather than using a single AI tool to generate content, Stouse runs three different products simultaneously — each configured with identical instructions to challenge assumptions, withhold opinions, and return only verified facts. The system functions as an approximation of academic peer review: outputs from one tool are stress-tested against the others, and any claim that does not survive that process does not get published. The motivation is not technical sophistication but reputational risk management. In an environment where subject-matter experts are watching closely, a single factual error compounds faster than any correction can travel.

The arrangement surfaces a genuine tension in how most people use AI: the tools are trained to be affirming, which makes them feel productive while quietly degrading the quality of what they produce. Stouse's countermeasure — explicitly configuring the tools to be adversarial rather than supportive — inverts the default relationship. The unintended social consequence, which he acknowledges with some discomfort, is that his posts have become difficult for even motivated critics to challenge publicly, because the vetting process that precedes publication has already done the work of refutation.
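
The video does not show Stouse's actual tooling, but structurally the arrangement is three independently configured models answering the same question, with publication gated on agreement. A sketch with the model calls stubbed out; the stubs, the majority-vote rule, and every name below are assumptions:

```python
from collections import Counter

# Placeholder callables standing in for three separately configured
# AI tools -- in a real setup each would call a different provider
# with the same adversarial system instructions.
def model_a(q): return "The ruling was issued in 2022."
def model_b(q): return "The ruling was issued in 2022."
def model_c(q): return "Unclear; sources disagree on the year."

def peer_review(question, reviewers):
    """Publish a claim only if a majority of reviewers return it."""
    answers = [r(question) for r in reviewers]
    top, votes = Counter(answers).most_common(1)[0]
    if votes > len(reviewers) // 2:
        return top
    return None  # no consensus: do not publish

claim = peer_review("When was the ruling issued?", [model_a, model_b, model_c])
print(claim or "claim did not survive peer review")
```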

"By the time you finish all your vetting, you're right. Anything that I say, you're going to shred me on — I'd just rather not."

▶ Watch this segment — 23:17


The Venture Capital Contradiction: VCs Who Preached Deterministic Sales Machines Ran Probabilistic Investment Portfolios

For roughly twenty-five years, founders and venture capitalists promoted the idea that B2B go-to-market could be engineered into a deterministic machine — a repeatable system that would reliably produce a qualified lead each quarter, on demand. The concept never gained traction in B2B with anything close to the adoption it found in consumer-facing sectors like retail and hospitality, partly because practitioners sensed its fundamental implausibility. What Stouse finds more damning than the idea itself is who promoted it: the same VCs who sold portfolio companies on deterministic sales models were simultaneously running their own investment strategies as explicit probability bets — spreading capital across many opportunities with the expectation that most would fail.

The structural reality is that the contradiction was not accidental. Venture economics made it rational: VCs earn fees and carry regardless of individual company outcomes, so the risk of a flawed go-to-market framework fell entirely on the companies and their founders, not on the investors promoting it. A GPS offers a more honest model for how strategy actually works — it identifies the best available route given current conditions, but it does not guarantee that route will remain optimal, and it recalculates continuously as circumstances change. Business, operating in an equally probabilistic environment, demands the same capacity for dynamic adaptation rather than the false comfort of a predetermined plan.
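
The shotgun side of that contradiction is straightforward to simulate: most bets fail, a few outliers carry the fund, and no individual outcome is predictable even when the portfolio's expectation is healthy. All parameters below are illustrative assumptions, not figures from the video:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fund: 25 companies at $1M each; 90% return nothing,
# 10% return 30x. Simulate many funds to see the spread of outcomes.
n_companies, check, trials = 25, 1.0, 10_000
hits = rng.random((trials, n_companies)) < 0.10
returns = np.where(hits, 30 * check, 0.0).sum(axis=1)

invested = n_companies * check
print(f"mean fund multiple:      {returns.mean() / invested:.1f}x")  # ~3.0x
print(f"funds with zero winners: {(returns == 0).mean():.1%}")       # ~7%
```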

"Their entire investment strategy is a shotgun — we are going to invest across many, many different opportunities knowing they're not all going to work. And yet they sold you on a deterministic machine."

▶ Watch this segment — 7:01


'Time Lag' Is the Most Underrated Metric in Business — and Most 'Leading Indicators' Are Just Guesses

When asked to name the most underrated metric in business, Stouse's answer is time lag: the measured interval between a decision being made and its impact becoming detectable. Understanding that interval does two things simultaneously — it tells you how much risk a given bet carries, and it tells you where in the calendar to look for evidence of whether it worked. Without knowing the lag, organizations search for proof of impact in the wrong time window, conclude nothing happened, and either abandon effective strategies prematurely or double down on ineffective ones.

The deeper problem is definitional. Data and KPIs are, by construction, records of the past — every number in every dashboard describes something that has already occurred. The claim that certain metrics function as 'leading indicators' of future performance is widespread but almost never validated. Stouse's challenge is direct: unless causal analysis has specifically proven that a given metric reliably precedes a given outcome, calling it a leading indicator is not a forecast — it is a hope. The distinction matters because organizations routinely make significant resource allocation decisions on the assumption that their leading indicators are predictive, when the causal relationship has simply never been tested.
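
Measuring the lag itself is a standard time-series exercise: slide the outcome series against the decision series and find the offset where the relationship peaks. A minimal sketch on synthetic data (the 13-week lag is planted so the recovery is visible):

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = 200
true_lag = 13  # planted: outcome responds 13 weeks after the decision

spend = rng.normal(100, 20, size=weeks)           # weekly decision input
outcome = np.roll(spend, true_lag) * 0.5 + rng.normal(0, 5, size=weeks)
outcome[:true_lag] = rng.normal(50, 10, size=true_lag)  # pre-effect baseline

# Lagged-correlation scan: the peak estimates the decision-to-impact window.
lags = range(0, 26)
corrs = [np.corrcoef(spend[: weeks - k], outcome[k:])[0, 1] for k in lags]
print(f"estimated lag: {int(np.argmax(corrs))} weeks")  # expect ~13
```

A peak in lagged correlation only locates the window worth testing; it is not, by itself, the causal proof Stouse says a true leading indicator requires.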

"Unless you've used causal analysis to prove that out, calling it a leading indicator is a wild guess."

▶ Watch this segment — 55:05


Summarised from NinjaCat · 1:00:18. All credit belongs to the original creators. Streamed.News summarises publicly available video content.
