This publication runs on Streamed.News. Yours could too.

Get this for your newsroom →

— From video to newspaper —

Thursday, May 7, 2026 · streamed.news
AI Strategy & Leadership

AI Strategy Requires Data Readiness, Skilled Workforce, and a Dual-Speed Approach, MIT Researcher Argues

Original source: MIT Sloan Executive Education


This video from MIT Sloan Executive Education covered a lot of ground. Eight segments stood out as worth your time. Each summary below links directly to its timestamp in the original video.

Most organisations are using AI at the level of a proof of concept while boardrooms debate billion-dollar strategies. The gap between those two speeds may be the defining management challenge of the next four years.


AI Strategy Requires Data Readiness, Skilled Workforce, and a Dual-Speed Approach, MIT Researcher Argues

Effective machine learning deployment rests on three conditions, according to Paul McDonagh-Smith: a precisely defined problem, an appropriate data set, and the right algorithms working in concert. Drawing on McKinsey research and an MIT study co-authored by Erik Brynjolfsson and Danielle Li, he notes that while roughly 60 to 70 percent of organisations are already using generative AI experimentally, and approximately 40 percent of boards are actively considering investment, the gap between expectation and operational reality remains dangerously wide. His prescription borrows the framework of Daniel Kahneman's Thinking, Fast and Slow — organisations must experiment rapidly in proof-of-concept mode while simultaneously constructing a sober, medium-to-long-term enterprise strategy spanning the next two to four years.

The underlying dynamic is one of compounding risk: organisations that experiment without strategic anchoring waste resources, while those that strategise without experimenting lack the empirical data needed to calibrate their plans. That distinction is important for any leadership team navigating what McDonagh-Smith characterises as a familiar AI hype cycle.

"We've set pretty high expectations in some areas around what we're going to achieve — but how do we manage the return on the expectation?"

▶ Watch this segment — 26:29


Hands-On Experimentation Is the Antidote to AI Anxiety, Says MIT Educator

The disorientation many leaders feel when confronting AI's pace of change is, in Paul McDonagh-Smith's assessment, largely a function of unfamiliarity rather than genuine unpredictability. Drawing on his own background in high-capacity optical transmission networks during the late 1990s and early 2000s, he argues that sustained hands-on engagement with AI tools reveals a coherent developmental continuum — from large language models through image generation into video synthesis — that makes subsequent transitions legible rather than alarming. The trajectory from AlphaGo to AlphaFold, he contends, is not a discontinuous leap but an intelligible extension of the same underlying algorithmic principles.

The risk, in this context, is that organisations which observe AI only from the outside experience every new model release as an existential disruption, while those actively experimenting begin to distinguish signal from noise — building the institutional confidence necessary to act decisively rather than defensively.

"If we don't pull back the curtain and start playing with things and experimenting and exploring, every change that we see appears to be a transition into something new — when in lots of ways it's almost a continuum."

▶ Watch this segment — 59:19


AI in Healthcare Demonstrates 'Human Multiplied by Machine' Model, Not Simple Automation

Two medical AI applications illustrate a structural distinction that Paul McDonagh-Smith considers fundamental: the difference between replacing human judgment and amplifying it. Google DeepMind's AlphaFold protein-folding model — publicly available and built on game-theoretic learning algorithms — is accelerating drug discovery pathways for conditions including Alzheimer's disease at a rate no conventional research team could match. Separately, MIT researchers including Regina Barzilay and Dina Katabi have documented how machine-learning pattern recognition can surface anomalies in breast cancer X-rays with a speed and consistency that even a large team of human consultants cannot replicate, with the physician then applying clinical expertise to determine treatment.

The underlying dynamic is not substitution but multiplicative augmentation — a framing that carries significant implications for workforce planning and regulatory design in healthcare, where the instinct to treat AI as either a threat or a panacea consistently obscures the more productive question of how human and machine capabilities can be structurally recombined.

"It's not humans plus machines but humans multiplied by machines — augmenting our capabilities to provide treatment in a way we never really could on our own."

▶ Watch this segment — 23:06


Leaders Need AI Literacy, Not Computer Science Degrees, to Navigate Workforce Transformation

Responding to a physicist who asked whether she needed to retrain as a computer scientist to remain relevant, Paul McDonagh-Smith's answer was an unambiguous no — though with an important qualification. Foundational exposure to programming languages such as Python or Java, he argues, builds intuition for how AI systems behave in ways that purely conceptual engagement cannot, and expertise spanning two or more disciplines creates a wider innovation aperture than deep specialisation in any single domain. The more consequential shift, however, is cultural: organisations must transition from command-and-control hierarchies toward what he describes as a scientific-method orientation — forming hypotheses, running structured experiments, collecting data, and being prepared to abandon approaches that the evidence no longer supports.

The risk, in this context, is that organisations mistake continuous busyness for continuous development. McDonagh-Smith's insistence on building in deliberate moments of reflection — pausing to establish current position before selecting a route forward — represents a structural corrective to the 'move fast' instinct that dominates many AI adoption discussions.

"Some of the things that led you to success in the past are actually going to be inhibitors or impedance in your system moving forward."

▶ Watch this segment — 50:01


Five Forces Will Define AI in 2024: LLMs, Multimodal Tools, Regulation, Low-Code Access, and Disinformation

Surveying the AI landscape from January 2024, Paul McDonagh-Smith identifies five structural dynamics that will shape the year. Large language models will function less as conversational novelties and more as language user interfaces — a conceptual reframe analogous to the shift from command-line to graphical interfaces that reshaped computing. Developer toolkits from Google, OpenAI, and others are enabling non-programmers to build generative AI applications, lowering the barrier to entry dramatically. Meanwhile, the multimodal trajectory — from text through images toward high-quality video synthesis — is advancing faster than most organisations have registered, with AI-generated disinformation in major elections, including those in the United States and United Kingdom, representing the near-term risk most likely to accelerate regulatory intervention.

The translation challenge between technical and non-technical teams remains a binding constraint throughout. McDonagh-Smith argues that building digital fluency requires the same approach as learning any language — sustained immersive exposure, not occasional formal instruction — a structural observation that sits beneath all five of his 2024 forecasts.
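To make the 'language user interface' reframe concrete, here is a deliberately tiny Python sketch — the intent table and command names are invented for illustration, not drawn from the talk. The point it demonstrates is the quote that follows: language goes in, but what comes out is a structured command, not more language.

```python
# Toy "language user interface": free-form natural language in,
# structured command out. The intent keywords and command schema
# below are illustrative assumptions, not a real product's API.

def language_ui(utterance: str) -> dict:
    """Map a free-form request to a structured command, the way an
    LLM front-end might sit between a user and an application backend."""
    intents = {
        "report": {"action": "generate_report", "format": "pdf"},
        "chart": {"action": "render_chart", "format": "png"},
        "summary": {"action": "summarise_text", "format": "text"},
    }
    for keyword, command in intents.items():
        if keyword in utterance.lower():
            return command
    return {"action": "unknown", "format": None}

print(language_ui("Please build me a chart of Q3 sales"))
# {'action': 'render_chart', 'format': 'png'}
```

A real LLM replaces the keyword table with learned intent recognition, but the interface contract is the same: the shift from command-line to graphical interfaces changed how instructions were expressed, not what the machine ultimately executed.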

"Perhaps we should think of large language models as a language user interface — because language as an input isn't necessarily language as an output."

▶ Watch this segment — 7:23


AI Will Accomplish What Billions in Change Management Could Not: Breaking Down Organisational Silos

Paul McDonagh-Smith makes a pointed observation: corporations have spent hundreds of millions, in some cases billions, of dollars on de-siloing initiatives over the past two decades, largely without success. His argument is that AI may achieve structurally what managerial effort could not, because extracting genuine value from AI technologies requires genuinely cross-functional teams — sales, marketing, finance, legal, and in some cases philosophers and social scientists — working in concert. Drawing on sociologist Manuel Castells's 35-year-old network theory, he shifts the analytical frame from the number of nodes in an organisational network to the quality of flows between them: flows of intelligence, innovation, and insight that AI can amplify.

The implication is that organisations clinging to vertical measures of value — revenue per function, departmental ROI — will systematically underestimate AI's contribution, because the technology's compounding returns are horizontal and relational rather than siloed and transactional.

"The degree to which our organisations can take their people, take their machines, create networks with them, and improve the flow of information between the nodes is going to be far more powerful than the number of nodes in itself."

▶ Watch this segment — 46:26


Linear Improvement Is No Longer Sufficient: AI as a 'Punctuated Equilibria' Event for Industry

Paul McDonagh-Smith borrows the evolutionary concept of punctuated equilibria — the idea that long periods of stability are interrupted by rapid, discontinuous change — and applies it directly to AI's impact on industries. His claim is pointed: organisations that continue doing what worked over the past decade, even with incremental improvements, will not achieve the same results in the next five to fifteen years. AI, framed as a general-purpose technology analogous to electricity or the internet, is not accelerating an existing trajectory but punctuating it, forcing a structural break that demands exploration and experimentation rather than optimisation of inherited processes.

The computational thinking framework he advocates — decomposing large problems into sub-problems, identifying cross-industry patterns, and then abstracting signal from noise — is presented not as a technical methodology but as an organisational habit. That distinction is important: the bottleneck is rarely analytical capacity and almost always the institutional willingness to create deliberate space for reflection.
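The decompose-pattern-abstract loop can be sketched in a few lines of Python. The example below is a minimal illustration with invented sales figures, not anything from the talk: a yearly question is decomposed into per-quarter sub-problems, and the aggregation step that every sub-problem shares is abstracted into one reusable function.

```python
# A minimal sketch of the computational-thinking loop: decompose a
# problem into sub-problems, spot the pattern they share, abstract it.
# The sales figures are invented for illustration.

def decompose(monthly_sales: dict) -> list:
    """Break the annual question into per-quarter sub-problems."""
    months = list(monthly_sales.items())
    return [dict(months[i:i + 3]) for i in range(0, len(months), 3)]

def abstract_pattern(quarter: dict) -> float:
    """Every sub-problem needs the same aggregation: that shared
    shape is the 'pattern' abstracted into one reusable function."""
    return sum(quarter.values()) / len(quarter)

sales = dict(zip(
    ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    [10, 12, 11, 14, 13, 15]))
quarterly_avgs = [abstract_pattern(q) for q in decompose(sales)]
print(quarterly_avgs)  # [11.0, 14.0]
```

The organisational point survives the toy scale: once the shared pattern is abstracted, the same small tool applies across every sub-problem, which is exactly the habit McDonagh-Smith argues institutions rarely make space to practise.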

"AI as a general-purpose technology is punctuating the equilibrium of how we've operated — linear extension from where we've been isn't going to make us sustainable and profitable for the next five to fifteen years."

▶ Watch this segment — 33:13


Engineering Teams Adopt AI Faster Because They Have an Organisational DNA That Others Lack

The observation that engineering and technology teams absorb AI capabilities roughly twice as effectively as retail and other business functions — a differential that one study cited in the talk puts at approximately 9 percent value uplift versus 4 percent — has a structural explanation, in Paul McDonagh-Smith's analysis. Technology organisations have accumulated, through successive waves of implementation including social media, CRM, and ERP systems, an institutional DNA of adaptability: established processes for evaluating, absorbing, and operationalising new technologies that other sectors simply have not built. The pattern-recognition advantage they hold is organisational, not purely technical.

The implication is significant: hiring more machine learning engineers will move an organisation from the 20th floor to perhaps the 40th or 50th floor of AI's potential, in McDonagh-Smith's skyscraper analogy, but reaching the 85th requires cultivating creativity and curiosity as organisational capabilities — a form of institutional investment that most workforce development frameworks have yet to formalise.

"You're not going to get to the 85th floor by hiring a thousand machine learning engineers — you're going to get to the 40th or 50th floor, which is progress, but not the full potential."

▶ Watch this segment — 39:58


Summarised from MIT Sloan Executive Education (featuring Paul McDonagh-Smith) · 1:08:26. All credit belongs to the original creators. This newsletter summarises publicly available video content.

Streamed.News

Convert your full video library into a digital newspaper.
