— From video to newspaper —

Sunday, April 19, 2026
Business

Delaware Court Ruling Puts Every Corporate Officer Personally on the Hook for Risk Oversight

If you hold any senior title at a corporation, a 2022 court ruling you may never have heard of has almost certainly changed your personal legal exposure. Ignorance of the obligation is not a defence.


A 2022 Delaware court decision has quietly reshaped the legal exposure of every corporate officer in America, extending the fiduciary duty of oversight — long the exclusive concern of boards and CEOs — to all executives, including CMOs, CHROs, and their peers, with personal liability attached. The standard for a breach has also dropped from criminal intent or malfeasance to simple negligence, meaning an officer who fails to document ongoing risk assessments and report them up the chain to the CFO and general counsel is now legally vulnerable to direct shareholder lawsuits.

The ruling emerged from a 2022 case brought against McDonald's, in which shareholders targeted the company's then-CHRO over a systemic pattern of workplace harassment. A plaintiff's attorney added the CHRO as a named defendant — at the time on shaky legal ground — and the Delaware Court of Chancery vice chancellor hearing the case seized the moment to revisit the entire framework. The court's reasoning was that modern business is too complex for boards and CEOs alone to fulfil their duty of oversight without active support from all corporate officers.

Mark Stouse, chairman and CEO of Proof Analytics, argues that the ruling is already reverberating through how companies handle financial reporting. Under the old model, executives could hide behind aggregate roll-up numbers; now, counterfactual modelling can expose, for example, that nearly half of a marketing budget produced no measurable return, creating a calculable drag on earnings per share that shareholders can pursue. Stouse also warns of a perverse unintended consequence: officers aware of their own bad decisions may be more motivated to bury risks from their boards rather than disclose them, because disclosure would expose their own failures — paradoxically increasing the very risk the ruling was designed to reduce.
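
To make that arithmetic concrete, the sketch below works through a counterfactual EPS-drag calculation with entirely hypothetical figures; the segment itself cites no numbers.

```python
# Hypothetical back-of-envelope version of the EPS-drag argument.
# None of these figures come from the ruling or the segment.

marketing_budget = 200_000_000   # annual marketing spend, $
wasted_share = 0.45              # "nearly half" produced no measurable return
tax_rate = 0.21                  # US statutory corporate rate
shares_outstanding = 500_000_000

# Counterfactual: without the wasted spend, pre-tax income would have been
# higher by that amount, flowing to net income after tax.
wasted_spend = marketing_budget * wasted_share
net_income_drag = wasted_spend * (1 - tax_rate)
eps_drag = net_income_drag / shares_outstanding

print(f"wasted spend:       ${wasted_spend:,.0f}")      # $90,000,000
print(f"net income drag:    ${net_income_drag:,.0f}")   # $71,100,000
print(f"EPS drag per share: ${eps_drag:.3f}")           # $0.142
```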

"If you are a corporate officer in charge of marketing — or fill in the blank — and you're not doing an ongoing risk assessment of your function, documenting those risks, documenting what you need to do to mitigate those risks, and sending that information up to your CFO and your general counsel on a regular basis, you are wide open for a lawsuit."

▶ Watch this segment — 42:43


McKinsey Research: Only 3–7% of Fortune 2000 Data Is Fit for AI Decision-Making

The most dangerous flaw in enterprise AI deployments is not the technology itself — it is the data being fed into it. According to McKinsey research cited by Mark Stouse, chairman and CEO of Proof Analytics, only three to seven percent of data held by Fortune 2000 companies is of sufficient quality to support sound decisions or reliable AI outputs. That figure means the vast majority of corporate data is, in effect, poisoning the systems designed to analyse it.

The human dimension of the problem is illustrated by a legal case in which Stouse served as an expert witness. During legal discovery, a sizeable California company with 14 years of customer relationship management data was ordered by the court to run fraud-detection software — the kind typically used by credit card companies — against its CRM system. The results were stark: more than two-thirds of the records came back flagged as highly manipulated. The culprits were not hackers or a single bad actor, but hundreds of individual salespeople who had quietly tweaked data entries over the years, often to hit targets or shape narratives, and had long since forgotten they had done it. The case settled for approximately $40 million — and that sum covered only the CRM data, not the company's data quality problems more broadly.
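
The segment does not name the software or its method. As a rough illustration of the genre, the sketch below runs one classic forensic screen, a Benford's-law first-digit test, over a hypothetical deal-amount field; real fraud-detection tools layer many such tests.

```python
# Minimal sketch of one classic fraud screen: a first-digit (Benford's law)
# test on a numeric CRM field. `deal_amounts` is hypothetical; the actual
# tool and method in the case are not identified in the segment.
import math
from collections import Counter

def benford_deviation(values):
    """Return per-digit (observed, expected) first-digit frequencies.

    Large gaps on a field that should follow Benford's law (e.g. deal
    sizes spanning several orders of magnitude) are a signal worth
    auditing, not proof of fraud.
    """
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    counts = Counter(digits)
    n = len(digits)
    return {d: (counts.get(d, 0) / n, math.log10(1 + 1 / d))
            for d in range(1, 10)}   # Benford: P(d) = log10(1 + 1/d)

# Example: amounts clustered just under an approval threshold (a common
# manipulation pattern) over-represent 9 as a leading digit.
deal_amounts = [9_800, 9_900, 9_750, 1_200, 48_000, 9_950, 9_700, 3_400]
for d, (obs, exp) in benford_deviation(deal_amounts).items():
    print(f"digit {d}: observed {obs:.2f} vs expected {exp:.2f}")
```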

Stouse argues this is not primarily a technical failure. Data systems built on correlation-based approaches are easy to manipulate because they are designed to find patterns rather than test causes, making them especially susceptible to human interference. Causal AI, he contends, offers a partial remedy because it tests whether patterns actually reflect cause-and-effect relationships rather than coincidence — flagging anomalies that purely pattern-based systems would simply absorb and amplify.
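
A minimal sketch of the underlying idea, not of any particular product: when you hold a model of how the numbers should relate (here, revenue genuinely driven by spend), quietly rewritten records stop fitting it and surface as outliers instead of being absorbed into the average.

```python
# Toy illustration, not any vendor's method: simulate a genuine causal
# relationship (revenue driven by spend), quietly rewrite a slice of
# records, then flag entries whose residuals violate the fitted relation.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

spend = rng.gamma(2.0, 50.0, n)                # e.g. campaign spend
revenue = 3.0 * spend + rng.normal(0, 40, n)   # genuine causal link

# Years of quiet manual tweaks: 30% of records rewritten so revenue
# hits a target regardless of what was actually spent.
tampered = rng.random(n) < 0.30
revenue[tampered] = rng.normal(600, 25, tampered.sum())

# Fit the spend -> revenue relation, then flag records sitting far
# outside it (beyond 3 robust standard deviations, via the MAD).
coef = np.polyfit(spend, revenue, 1)
residuals = revenue - np.polyval(coef, spend)
mad = np.median(np.abs(residuals - np.median(residuals)))
flagged = np.abs(residuals) > 3 * 1.4826 * mad

# Most rewritten records land far from the line and get caught; the few
# that happen to sit near it slip through.
print(f"tampered: {tampered.mean():.0%}  flagged: {flagged.mean():.0%}")
print(f"share of flags that are real tampering: {tampered[flagged].mean():.0%}")
```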

"More than two-thirds came back as highly engineered — not by an organisation, but by lots and lots of individual sales guys over the years who had tweaked this data and tweaked that data and promptly forgotten that they ever did it."

▶ Watch this segment — 16:28


AI Agents Have No Internal Clock and Fail on Unexpected Conditions, Expert Warns

Current AI tools carry a surprisingly basic design gap: they have no internal clock or calendar. That means a user who asks an AI to complete a task by a specific time is effectively issuing a deadline into a void — the system has no mechanism to track the passage of time or honour a commitment made in a prior session. Mark Stouse, chairman and CEO of Proof Analytics, discovered this through deliberate testing, setting up extended projects as trap cases and documenting what happened when the AI ran out of memory or lost the thread of a multi-part assignment. The systems, he found, would apologise, promise to deliver overnight, and then greet the next morning as if nothing had occurred.
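
The segment describes the failure rather than a fix, but the shape of one workaround is easy to sketch: let the harness, not the model, own time. The snippet below (with a hypothetical call_model standing in for any chat-completion API) persists commitments to disk and injects the current clock and any overdue items into every prompt.

```python
# Sketch of one workaround for the "no internal clock" gap: the harness
# owns time, persists commitments across sessions, and re-presents them
# to the model on every call. `call_model` is a hypothetical stand-in.
import json
from datetime import datetime, timezone
from pathlib import Path

STATE = Path("commitments.json")   # survives across sessions

def load_commitments():
    return json.loads(STATE.read_text()) if STATE.exists() else []

def add_commitment(task, due_iso):
    items = load_commitments()
    items.append({"task": task, "due": due_iso})
    STATE.write_text(json.dumps(items, indent=2))

def build_prompt(user_message):
    now = datetime.now(timezone.utc)
    overdue = [c for c in load_commitments()
               if datetime.fromisoformat(c["due"]) < now]
    header = (f"Current UTC time: {now.isoformat(timespec='seconds')}\n"
              f"Overdue commitments: {json.dumps(overdue)}\n")
    return header + user_message

# The model never "remembers" the deadline; the harness re-presents it.
add_commitment("draft Q3 risk memo", "2026-04-18T17:00:00+00:00")
prompt = build_prompt("Where do we stand?")
# reply = call_model(prompt)   # hypothetical API call
print(prompt)
```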

The deeper problem, Stouse argues, is architectural. All current AI agents — the systems increasingly marketed as autonomous workers capable of executing complex tasks — are built entirely on correlation and pattern matching. That works adequately when conditions stay predictable, but the moment something unexpected arises, the agent either stops and defers back to a human or charges ahead in a direction that is confidently wrong. He likens pattern-match AI to a road map printed in 1979: potentially useful in calm conditions, but blind to today's actual conditions on the ground.

True agentic AI, capable of navigating genuinely novel situations, requires causal reasoning — the ability to understand not just what has historically followed what, but why, and therefore what to do when the script runs out. Stouse predicts this limitation will become commercially decisive, with the SaaS sector already absorbing the shock: he estimates SaaS revenue is down 30 percent year-to-date and on track for a 50 percent year-over-year decline, while sales headcount across industries could shrink by as much as 80 percent as automation handles routine work and only the highest-skilled specialists survive.

"Pattern match is a map that was printed in 1979 that may not even reflect the latest streets, and sure as hell doesn't reflect the current conditions on that street at that moment."

▶ Watch this segment — 29:00


Why 'Causal ML' Is an Oxymoron — and Why It Matters for Every AI User

Every AI tool most people interact with daily — ChatGPT, and virtually every enterprise platform built on machine learning — operates on correlation: it finds patterns in data rather than understanding causes. That distinction, familiar from high school statistics as the axiom 'correlation does not imply causation', has never changed, but it is being obscured by a wave of marketing language designed to make pattern-matching AI sound more rigorous than it is.

Mark Stouse, chairman and CEO of Proof Analytics, singles out terms like 'causal ML' and 'double machine learning' as outright oxymorons. Machine learning, he explains, is by definition correlational; causal inference is a fundamentally different branch of mathematics that tests whether one thing is actually making another thing happen, rather than simply co-occurring with it. The two approaches cannot be meaningfully combined, and claiming otherwise is, in his view, a financial pitch dressed up as a technical claim.
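
The split can be demonstrated in a few lines. In the toy simulation below, a hidden factor drives both X and Y, so the two correlate strongly even though X has no causal effect on Y at all; only a model that adjusts for the common cause recovers that fact.

```python
# Toy confounding demo: Z drives both X and Y, so X and Y correlate
# strongly even though X has zero causal effect on Y. Adjusting for Z
# (here, plain multiple regression) recovers the truth.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)              # hidden common cause
x = 2 * z + rng.normal(size=n)      # X caused by Z
y = 3 * z + rng.normal(size=n)      # Y caused by Z, NOT by X

print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")   # strong pattern

# Regress Y on X alone (roughly what a pure pattern-matcher does) ...
b_naive = np.polyfit(x, y, 1)[0]
# ... versus Y on X and Z together (adjusting for the common cause).
X = np.column_stack([x, z, np.ones(n)])
b_adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope of Y on X:   {b_naive:.2f}   # looks like an effect")
print(f"slope after adjusting Z: {b_adjusted:.2f}   # ~0: no causal effect")
```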

The practical consequence of this confusion is a phenomenon called regression to the mean — the statistical tendency of systems that average vast quantities of data to become progressively blunter rather than sharper. An Apple research report that drew widespread attention in the AI community pointed to exactly this dynamic: as these models scale, their reasoning on novel problems can degrade. Stouse argues the result was predictable to anyone who understood the underlying mathematics, and that the surprise reaction from the industry reflects how successfully the limits of correlation-based AI have been papered over by those with financial incentives to do so.
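
The statistical core of that claim fits in a few lines of code (a sketch of the statistics involved, not a model of any real AI system): the more signals you average, the more tightly the output collapses onto the grand mean, and the rarer a non-average answer becomes.

```python
# Toy statistical illustration of the "averaging makes it blunter" claim.
# Each of k signals is "average, plus noise"; the output is their mean.
# Its spread shrinks as 1/sqrt(k), so non-average answers vanish.
import numpy as np

rng = np.random.default_rng(7)
trials = 10_000

for k in (1, 10, 100, 1_000):
    outputs = rng.normal(0.0, 3.0, size=(trials, k)).mean(axis=1)
    print(f"k={k:>5}: output spread (std) = {outputs.std():.3f}, "
          f"outputs beyond +/-2: {(np.abs(outputs) > 2).mean():.1%}")
```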

"By averaging and averaging and averaging all kinds of data to get to a pattern, it actually starts to get stupid, not smarter."

▶ Watch this segment — 11:37


Summarised from NinjaCat · 1:00:18. All credit belongs to the original creators. Streamed.News summarises publicly available video content.