This publication runs on Streamed.News. Yours could too.

Get this for your newsroom →

— From video to newspaper —

Thursday, May 7, 2026 · streamed.news
Marketing Effectiveness

Johnson Controls Saved $6 Million in Spring 2020 by Trusting Analytics Over Instinct

Original source: The Analytic Mind


This video from The Analytic Mind covered a lot of ground. Twelve segments stood out as worth your time. Everything below links directly to the timestamp in the original video.

Most companies found out their event marketing had stopped working when COVID arrived. Johnson Controls found out before the rest of the world had a name for the problem.


Johnson Controls Saved $6 Million in Spring 2020 by Trusting Analytics Over Instinct

When analytics began flagging that Johnson Controls' field marketing investments — historically among its strongest performing channels — were deteriorating rapidly in early spring 2020, the company acted before any external explanation was obvious. Instruments were showing trouble before the pilots could see it. The company pulled the plug on event-based spending early, redirecting roughly $6 million into channels that were showing strength in the emerging COVID-era environment. The result was both cost savings and revenue preservation from better-allocated spend.

The structural reality is that this story is not really about COVID. It is about what happens when a business has the analytical infrastructure to detect signal degradation in real time rather than waiting for a quarterly review to confirm what the lagging indicators already knew months earlier. The decision to listen to instruments over intuition is one that most organisations are structurally incapable of making — not for lack of data, but for lack of models that translate data into directional clarity fast enough to matter.

"They decided to listen to their instruments — and they started pulling the plug on a lot of these event investments early, before anybody else started doing it."

▶ Watch this segment — 33:41


Brand Is Not Soft: How Awareness, Confidence, and Trust Map Directly to Deal Speed

The persistent belief that brand investment resists quantification turns out to be a time-lag problem, not a measurement problem. Stouse breaks brand down into three components — awareness, which drives deal volume at the top of the funnel; confidence, which expands deal size in the middle; and trust, which governs the speed of a final decision at the bottom. Each operates on a different timeline and with different measurable effects on deal velocity. The confidence-versus-trust distinction is clarified through a pointed analogy: a brilliant colleague you would never trust and a deeply trustworthy one who is functionally incompetent are not the same problem, and conflating them produces bad strategy.

What most people miss is that framing brand as a soft concept is itself a category error rooted in the wrong analytical lens. Once awareness, confidence, and trust are treated as independent variables with discrete funnel-stage effects, they become modelable — and the question shifts from whether brand spending works to how much, how fast, and against which competitive and macroeconomic headwinds it needs to work harder. Every dollar spent on brand that fails to generate its expected return is simultaneously a lost investment and a lost alternative. That dual exposure is the actual cost of unmeasured brand strategy.

"Every dollar that you spend here is a dollar you can't spend over there — and if the dollar you spend over there busts, you're actually a two-time loser."

▶ Watch this segment — 27:19


Why a 95% Confidence Score in Business Analytics Is Often a Sign Something Has Gone Wrong

The dominant cultural clash in enterprise analytics is not about budget or access to data — it is a collision between two value systems. Data scientists optimise for precision; business leaders optimise for speed and directional accuracy. The gap between those two orientations has made the analytics investment largely unusable for operational decision-making at most companies. Marketing mix models pioneered by Procter & Gamble some three decades ago were recomputed at best every six months, meaning forecasts were frequently outdated before they reached the people who needed them. The 95% confidence threshold, imported wholesale from academic statistics, compounds the problem: in models of human behaviour, achieving that score is more likely to indicate the model has been overfit to historical data than that it has captured a genuine signal.
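The overfitting failure mode is easy to reproduce. The sketch below is purely illustrative (synthetic data and NumPy's least-squares solver; nothing here comes from the video): a model handed 40 irrelevant predictors fits its training window almost perfectly, then falls apart on held-out data, while a lean model with the single genuine driver holds up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one genuine driver of the outcome, plus noise.
n = 60
driver = rng.normal(size=n)
outcome = 2.0 * driver + rng.normal(scale=2.0, size=n)

def r_squared(X, y, coef):
    return 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()

X_lean = np.column_stack([np.ones(n), driver])
junk = rng.normal(size=(n, 40))              # 40 irrelevant predictors
X_rich = np.column_stack([X_lean, junk])

train, hold = slice(0, 40), slice(40, 60)
results = {}
for name, X in [("lean", X_lean), ("overfit", X_rich)]:
    coef, *_ = np.linalg.lstsq(X[train], outcome[train], rcond=None)
    results[name] = (r_squared(X[train], outcome[train], coef),
                     r_squared(X[hold], outcome[hold], coef))
    print(f"{name:8s} in-sample R2 {results[name][0]:6.2f}   "
          f"holdout R2 {results[name][1]:6.2f}")
```

The overfit model's near-perfect in-sample score is exactly the "mathematical bias" the quote below describes: it has memorised historical noise, not captured a signal.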

The structural reality is that multivariable regression — mathematically unglamorous but robustly validated across decades — is the correct foundation for business analytics precisely because its reliability makes it automatable. When models recalculate autonomously every time new data arrives, they begin to function like a GPS: continuously re-optimising a route in response to changing conditions rather than handing the driver a printed map from last year. The real question for most organisations is not whether they can achieve precision, but whether they can achieve relevance — and those two objectives are more often in tension than aligned.
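The GPS analogy can be made concrete. This is a minimal sketch, not how any particular product works: a multivariable regression refit on a trailing window each time a new period of data arrives, run against simulated spend data whose true channel effectiveness flips mid-series. The window length, the two channels, and the regime change are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated weekly data: revenue driven by two spend channels whose true
# effectiveness swaps halfway through, a regime change like spring 2020.
weeks = 30
spend = rng.uniform(1, 5, size=(weeks, 2))
effect = np.where(np.arange(weeks)[:, None] < 15, [3.0, 0.5], [0.5, 3.0])
revenue = (spend * effect).sum(axis=1) + rng.normal(scale=0.5, size=weeks)

# The "GPS" loop: refit on a trailing window every time a new week of data
# arrives, so the coefficients track the regime change instead of averaging
# over it the way an annual one-shot model would.
window = 8
coefs = []
for t in range(window, weeks + 1):
    X, y = spend[t - window:t], revenue[t - window:t]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(coef)

print("coefficients after week  8:", np.round(coefs[0], 1))
print("coefficients after week 30:", np.round(coefs[-1], 1))
```

The early refit recovers the old channel mix and the late refit recovers the new one; a model computed once and left alone would have reported a blend of the two and missed the turn entirely.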

"If you get to 95 percent in business analytics, that usually means you've overfit the model rather dramatically. The 95 percent actually represents the introduction of mathematical bias."

▶ Watch this segment — 10:01


Former GE CFO Dave Cote Reprimanded His Own Analytics Team for Chasing High Confidence Scores

The argument for minimum viable models — analytical outputs scoped tightly to answer one specific business question rather than constructed to maximise statistical confidence — gains unusual authority from the source. Dave Cote, who served as both global CFO of GE and chairman and CEO of Honeywell, confronted his own analytics team directly upon learning they were spending heavily to push confidence scores higher. His position was precise: the major decisions he made as a senior executive were, if modelled at all, operating on confidence levels in the range of 20 to 40 percent. Getting to 50 or 60 would be genuinely valuable. Climbing to 80 or 90 was a waste of money that would produce overfitted outputs.

The implication runs deeper than budget management. Organisations that have internalised academic data science standards without adjusting them for the operational context of business decision-making are not just spending inefficiently — they are building models that will mislead rather than guide. Continuous recalculation changes the risk calculus entirely: when a model updates with every new data input, the need for a high single-point confidence score is replaced by the error-correcting property of continuous revision. The two approaches represent different architectural philosophies, and most enterprise analytics infrastructure was built around the wrong one.

"Do you have any idea — I make big decisions based on what would probably be 20 and 30 and 40 confidence score type models. Do not spend all this money trying to scale those heights. The heights don't matter."

▶ Watch this segment — 56:02


Why Comparing Q2 Marketing Spend to Q2 Sales Results Is a Fundamental Analytical Error

The instinct to pair a quarter's marketing activity with the same quarter's sales performance is nearly universal in business reviews — and almost entirely wrong. Marketing operates as a non-linear multiplier applied to the linear output function of sales, and the two run on fundamentally different timelines. Q2 sales figures are far more likely to reflect marketing activity from the previous Q4 or even Q3, meaning that the comparison most organisations make in their quarterly reviews has almost no causal relationship. The time lag is not merely qualitative intuition; it is a calculable figure, and knowing it precisely is what makes it possible to locate the actual value of a marketing investment in the calendar at all.
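One common way to estimate such a lag is a correlation scan across candidate offsets, sketched below with synthetic data. The six-month lag, the series, and the scan range are all invented for illustration; this is not the specific method described in the video.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated monthly data: sales respond to marketing spend with a
# six-month delay (the "time lag" in the argument above).
months = 48
true_lag = 6
spend = rng.uniform(10, 100, size=months)
sales = 1.5 * np.roll(spend, true_lag) + rng.normal(scale=5.0, size=months)
sales[:true_lag] = rng.uniform(15, 150, size=true_lag)  # pre-history, no signal

def best_lag(spend, sales, max_lag=12):
    """Correlate spend in month t against sales in month t + lag."""
    scores = {}
    for lag in range(max_lag + 1):
        x = spend[: len(spend) - lag]
        y = sales[lag:]
        scores[lag] = np.corrcoef(x, y)[0, 1]
    return max(scores, key=scores.get), scores

lag, scores = best_lag(spend, sales)
print(f"estimated lag: {lag} months (correlation {scores[lag]:.2f})")
```

Pairing spend with same-month sales (lag 0) shows essentially no relationship here, which is precisely the fictitious comparison most quarterly reviews make; shifting by the true lag recovers it.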

This misalignment creates a structural problem for how marketing justifies its existence inside organisations. When budget decisions are made by comparing activity to outcomes in the same reporting period, the relationship being measured is largely fictitious. The consequence is a persistent undervaluation of brand and long-cycle marketing investments in favour of short-cycle demand generation that produces visible numbers in the same quarter — a bias that favours the legible over the effective. Analytics that correctly account for lag do not just improve measurement; they restructure the internal political economy of how marketing investment decisions get made.

"If there's one thing I say to business people and marketers that makes the light bulb go off above their head — it's that you can actually calculate the time lag, and if you don't know it going forward, you will never find the value in the calendar. You won't know where to look."

▶ Watch this segment — 1:03:22


Improving a Single Business Decision by 1% Daily Compounds to Nearly 4,000% Gain Over a Year

At its core, cause-and-effect analysis in business analytics is an answer to one recurring question: given a specific outcome I need to produce, which of the many things I am doing is contributing to it, by how much, and on what timeline? Each investment in a programme carries its own lag — the time it takes for an action to produce a detectable effect — and its own efficacy within that window. Understanding how those inputs combine, including cases where they reinforce each other in unintended ways, is the operational problem that most marketing and business strategy teams are navigating without adequate instrumentation. The compounding arithmetic is stark: a 1% daily improvement in the quality of a recurring decision yields a compound value approaching 4,000% over a twelve-month period.
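The arithmetic behind that figure is a one-liner. Whether you read 1.01^365 as a multiple of the baseline (about 3,800%) or as a net gain over it (about 3,700%), it lands near the quoted "almost four thousand percent":

```python
# Compound value of improving one recurring decision by 1% every day
# for a year, expressed against the starting baseline.
daily_gain = 0.01
days = 365

value = (1 + daily_gain) ** days
print(f"after {days} days: {value:.1f}x the baseline, "
      f"a net gain of roughly {(value - 1) * 100:,.0f}%")
```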

The design challenge is not primarily mathematical. Research on business user behaviour reveals that a significant proportion of executives experience psychological shutdown when presented with graph-based visualisations, even when the underlying data is sound — while the same individuals respond normally to chart formats. If the output of an analytical system cannot be processed quickly and intuitively by the person who needs to act on it, the model's accuracy is irrelevant. The real friction in analytical adoption is not data access or modelling sophistication. It is the gap between what the model knows and what the decision-maker can actually absorb at speed.

"If you improve one decision one percent every day for a year, the compound value of that at the end of the year is almost four thousand percent. The individual hurdle is not huge — you just need to make it really clear to the business user what they need to do next."

▶ Watch this segment — 19:50


Most Business Decisions Are Lean Data Problems, Not Big Data Problems

A persistent mismatch shapes how organisations think about analytics infrastructure: the assumption that the complexity of the underlying decision scales with the volume of data required to model it. Data scientists broadly agree that 80 to 85 percent of business questions are answerable through multivariable regression — a form of analysis that operates on relatively structured, modest-scale datasets. The organisations that have invested most heavily in big data infrastructure are frequently not better positioned to answer their core operating questions; they have solved a storage and ingestion problem while leaving the modelling problem largely unaddressed. The OODA loop — observe, orient, decide, act — originated in U.S. Air Force doctrine for supersonic combat, where the speed of incoming information exceeded the capacity of unaided human judgement. The same structural challenge applies to business, where the relevant question is whether analytical outputs can be delivered at the clock speed of the decisions that need to be made.

Clock speed varies meaningfully by industry. A retail business tracking hourly consumer behaviour has a fundamentally different analytical cadence than an aerospace company where data is aggregated monthly. If the latency of the analytical system exceeds the latency of the decisions it is meant to support, the system is not operationally useful regardless of its statistical sophistication. That alignment between analytical refresh rate and decision frequency is the operational problem that most enterprise data investments have failed to address.
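The alignment test reduces to a single invariant: the model must refresh at least as often as decisions are made. The sketch below is purely illustrative; the loop names and cadences are assumptions, with the semiannual marketing-mix refresh echoing the cadence mentioned earlier in the episode.

```python
from dataclasses import dataclass

@dataclass
class AnalyticsLoop:
    name: str
    decision_interval_hours: float   # how often a decision must be made
    model_refresh_hours: float       # how often the model recomputes

    def keeps_pace(self) -> bool:
        # Operationally useful only if the model refreshes at least as
        # often as decisions are taken; otherwise every decision runs
        # on stale output, however sophisticated the model.
        return self.model_refresh_hours <= self.decision_interval_hours

loops = [
    AnalyticsLoop("retail pricing (hourly decisions, daily model)", 1, 24),
    AnalyticsLoop("aerospace planning (monthly decisions, monthly model)",
                  24 * 30, 24 * 30),
    AnalyticsLoop("quarterly budget vs. semiannual marketing mix model",
                  24 * 90, 24 * 182),
]
for loop in loops:
    print(f"{loop.name}: keeps pace = {loop.keeps_pace()}")
```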

"Most of the questions business leaders have about how to make better decisions are not big data questions. They're actually lean data questions."

▶ Watch this segment — 6:10


Enterprises Spent Years Building Data Plumbing — and Forgot to Build the Tap

A pattern is emerging across enterprise data science functions under current macroeconomic pressure: organisations are turning to their analytics teams for answers and discovering that years of investment in data management infrastructure have not produced a corresponding capability for modelling. The plumbing is elaborate; the water is not potable. Companies that spent five to seven years and millions of dollars on data warehousing, integration, and governance are finding that they cannot generate actionable business intelligence from the systems they built, because the modelling layer — the component that translates managed data into causal insight — was consistently underfunded relative to the storage and management layers.

This is not an edge case. It is a structural pattern in how chief data officer mandates evolved over the past decade, with a strong bias toward infrastructure investment and a weaker emphasis on the analytical outputs that infrastructure was ostensibly meant to enable. If the underlying data management systems are sound, the gap can be closed relatively quickly by attaching modelling capability to the front end of existing infrastructure. The more difficult problem is that some organisations have spent the past several years collecting data they still cannot interpret — and the pressure to produce answers has arrived before the modelling capability to answer them has been built.

"They have this incredibly complex plumbing system for data — but no good water is coming out the other end of the pipe that the business can drink."

▶ Watch this segment — 53:11


Intuition Is Just Machine Learning Trained on Old Data — and That Is the Problem

Human intuition is not an alternative to data-driven decision-making — it is a cruder version of it. The pattern recognition that experienced marketers and executives rely on is built from accumulated experience, which functions as training data for an internal model. That model will produce reliable outputs as long as the environment it was trained on resembles the environment in which it is being applied. The structural failure arrives when conditions change radically enough that the historical patterns no longer hold. In that scenario, acting on intuition is not the same as acting on experience — it is acting on a model trained on data that no longer describes the world.
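The stale-training-data failure is easy to simulate. Everything below is invented for illustration: a regression stands in for "intuition", fit on an old regime where event spend paid off, then scored against a new regime where the relationship has inverted.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Intuition" as a model fit on accumulated experience: suppose event
# spend drove pipeline strongly in the old environment, then the
# relationship inverted. All numbers here are invented.
n = 100
event_spend = rng.uniform(0, 10, size=n)
pipeline_old = 4.0 * event_spend + rng.normal(scale=3.0, size=n)
pipeline_new = 40.0 - 1.0 * event_spend + rng.normal(scale=3.0, size=n)

# Train only on the old world.
X = np.column_stack([np.ones(n), event_spend])
coef, *_ = np.linalg.lstsq(X, pipeline_old, rcond=None)
pred = X @ coef

err_old = float(np.abs(pipeline_old - pred).mean())
err_new = float(np.abs(pipeline_new - pred).mean())
print(f"mean error, old regime: {err_old:.1f}")
print(f"mean error, new regime: {err_new:.1f}")
```

The model is unchanged between the two printouts; only the world moved. That is the sense in which acting on intuition after a regime change is acting on a model trained on data that no longer describes reality.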

The paradox is that intuition feels most authoritative precisely when it is most likely to mislead: in conditions of high stress and rapid change, where the absence of analytical grounding makes pattern recognition feel like the only available tool. For marketers operating today in environments that differ sharply from the conditions under which their professional instincts were formed, the reliance on intuition is not a pragmatic fallback — it is an active source of strategic error. What most people miss is that the question is not whether to trust experience, but whether the experience being trusted was acquired in conditions sufficiently similar to the present ones to be valid.

"If you are operating intuitively right now — where past is not prologue — you're screwed."

▶ Watch this segment — 1:06:36


At Honeywell, a 4.5% Improvement in Deal Velocity on $12 Billion in Revenue Made Marketing Finance's Best Friend

The scale argument for brand investment becomes concrete at Honeywell, where a 4.5% improvement in average deal velocity on a $12 billion revenue base produced the kind of result that turns a marketing function into a strategic asset rather than a cost centre. The figure was significant enough to reframe the relationship between marketing and finance in a regulated, complex industrial environment where deal velocity was considered practically impossible to move at scale. The same logic applies at the opposite end of the size spectrum: early-stage companies without established brand have to substitute the founder's personal reputation for the company's non-existent one, because the initial purchase decisions made by early customers are based on the trust and confidence they have in a person, not a business.

The real question brand investment raises is not whether it works — the Honeywell data makes that answerable — but whether organisations have the analytical infrastructure to observe its effects across the timeframe in which those effects actually manifest. Executives at large companies who have never operated without inherited brand equity tend to treat it as ambient rather than constructed, which means they systematically underinvest in it and overestimate how much of their deal performance is attributable to other causes. The dark-to-light framing is precise: the companies that best understand the value of brand are the ones that have experienced building it from nothing.

"The big company CEOs are spoiled in many cases because they've never been dark before — they've never had to drive business with no brand power."

▶ Watch this segment — 35:43


Budget Cuts at HP Sent a Non-Mathematical Executive on a Path to Founding an Analytics Company

The origin of Proof Analytics runs through a specific moment of frustration: roughly sixteen or seventeen years ago, during a senior role at HP, repeated budget cuts prompted a search for defensible evidence rather than a continued argument from conviction. Stouse describes himself at that point as someone with no particular mathematical aptitude — a self-assessment his high school teachers would apparently have confirmed. The pivot from intuition to analysis began with multivariable regression, which he engaged with not merely as a professional tool but as something he found genuinely compelling, eventually solving mathematical problems recreationally. The final stage of the journey came after a successful but costly analytics programme at Honeywell Aerospace as CMO: the conclusion that automation could remove the latency and expense that had made the approach inaccessible to most organisations.

What this arc reveals is less about an individual biography and more about a structural gap that has persisted in enterprise analytics for decades. The practitioner with deep operational context — someone who had to justify marketing investment to finance under real business pressure — arrived at different design priorities than academic data scientists or engineering-led software teams. The system he eventually built was shaped by the experience of needing answers fast and cheaply, not the experience of producing technically perfect models. That difference in starting point produces a fundamentally different product.

"Instead of cursing the darkness, I wanted to strike a match. I was not particularly mathematically inclined in school — and so this was really starting from scratch."

▶ Watch this segment — 0:53


For B2B Companies, Commenting on Others' LinkedIn Posts Builds Trust Faster Than Broadcasting From Their Own Pages

The conventional content marketing instinct — publish on your own channel, build an audience, broadcast to them — produces a set-piece communications dynamic that is slow to generate the trust and confidence that drive actual buying decisions. A more effective alternative starts with a precise definition of the ideal customer: not the total addressable market, but the narrow subset of potential buyers who are most likely to convert. That group is small enough to locate, and in many B2B contexts, the location is a specific cluster of LinkedIn communities. Engaging dynamically in those spaces — contributing to other people's conversations rather than only posting on a company page — creates a rapid signal about whether the person engaging actually knows the subject. Trust and confidence build faster in that format than in polished one-way publishing.

The underlying mechanism is that buyers use dynamic back-and-forth exchanges to evaluate competence and credibility in real time. A static post can be drafted by anyone; a thread contribution under pressure cannot be faked at the same quality level. For companies without large brand budgets, this means the competitive moat can be constructed earlier than conventional wisdom suggests — provided the founder or key executive has genuine knowledge worth sharing and is willing to engage in spaces they do not control. The personal authority of the communicator is itself a brand asset, particularly in early-stage companies where the institutional brand has not yet been built.

"We don't just post on our own page. We go to other people's threads and comment — and now instead of a set-piece communications strategy, this is very dynamic. It builds trust and confidence really fast."

▶ Watch this segment — 46:42


Summarised from The Analytic Mind · 1:12:14. All credit belongs to the original creators. Streamed.News summarises publicly available video content.
