
Thursday, May 7, 2026 · streamed.news · Business

A Single Glance Changed Everything: What Theater Taught Ashley Faus About Authenticity


🌐 This article is also available in Spanish.

Original source: Leveraging Thought Leadership Podcast
This article is an editorial summary and interpretation of that content. The ideas belong to the original authors; the selection and writing are by Streamed.News.


This video from the Leveraging Thought Leadership Podcast covered a lot of ground. Six segments stood out as worth your time; each summary below links directly to its timestamp in the original video.

The most revealing moments of who we really are rarely come from what we say — they come from the micro-gestures we don't even know we're making.


A Single Glance Changed Everything: What Theater Taught Ashley Faus About Authenticity

While performing in a musical theater production, Ashley Faus discovered that looking up to make eye contact with a fellow actor — rather than sitting with her head in her hands — produced a visceral emotional reaction that the scripted words alone never could. Her castmate confessed she felt genuine concern for Faus as a person, not merely as a character. The words, pauses, and blocking were identical each night; only that small moment of direct eye contact shifted the emotional truth of the scene.

The anecdote crystallizes a broader argument Faus makes about online identity: when a person's digital presence and their in-person manner diverge, audiences feel the dissonance instantly. AI-generated content, she argues, fails precisely here — it cannot replicate the idiosyncratic linguistic fingerprint that makes a voice feel consistent and real across contexts.

"Sometimes I just feel so bad for you in that moment — more than just what my character should feel. Like, I genuinely, are you okay? And I realized that I feel that when you look at me, when you make eye contact with me versus when you sit with your head in your hands."

▶ Watch this segment — 31:19


AI Flagged a Fortune 500 CEO's Own Words as 'Demeaning' — and That Failure Reveals a Deeper Problem

When Bill Sherman fed an AI tool a direct quote from a retired Fortune 500 CEO as part of a manuscript review, the system flagged the language as demeaning — apparently unable to reconcile the executive's actual voice with its trained model of how a senior leader should communicate. Ashley Faus argues the incident exposes a structural gap: AI handles logic competently and can mimic empathy by learning codified social norms, but authenticity — the quality that makes a person recognizable as themselves — resists codification entirely.

Faus frames this using trust researcher Frances Frei's three-part model of logic, empathy, and authenticity, concluding that AI's heavy weighting toward the first element means it consistently flattens the human complexity that real trust requires.

"You can't codify authenticity. You can't say these are the words that we use or this is the posture that you use. Logic and empathy we've codified more, so AI can mimic them — but it can't mimic authenticity."

▶ Watch this segment — 39:59


Author Wrote Entire Book on Human Trust Without AI — and Says the Choice Was the Point

Ashley Faus spent years delaying her book on human-centered marketing because she refused to let any ghostwriter — human or artificial — put words to her ideas. She drew a sharp distinction between a "ghost writer," which she considers acceptable, and a "ghost thinker," which she does not: the original thinking must come from the human. For a book explicitly about building trust with people, she felt that using AI to generate the prose would have been a foundational contradiction.

Her framework for others is built on transparency rather than abstraction: if AI is used, disclose the model, the prompt, and the proportion of AI involvement. Concealment, not AI itself, is what erodes trust.

"You can have a ghost writer. You cannot have a ghost thinker."

▶ Watch this segment — 25:19


A Palm Springs Trip Planned Three Ways Shows Why Human Recommendations Still Beat AI

Planning an anniversary trip to Palm Springs, Ashley Faus ran the same question through Google, ChatGPT, and a trusted colleague who had just married there. Google returned an overwhelming flood of listicles; ChatGPT produced what she described as a repackaged summary of the same results. Her colleague's response — a curated list annotated with notes like 'very chic, dress up' and 'casual spot' — was immediately actionable because it carried implicit knowledge of Faus's income, taste, and relationship dynamic that no prompt could fully transfer.

The story illustrates why human recommendations retain a structural advantage over AI: shared context between two people who know each other cannot be replicated simply by describing yourself in a prompt box.

"I need someone who's in my same stage of life, a similar income bracket, a similar mindset. She knows me, so she knows kind of what I'm looking for — and I trust her to cut through the noise much more than the broad Google results or the tailored ChatGPT results."

▶ Watch this segment — 2:33


Stanford Research Explains Why Video Calls Are Exhausting — and Why That Matters for Trust

Research from Stanford's Virtual Human Interaction Lab identified three mechanisms behind Zoom fatigue: video proximity forces the eyes to focus at an unnaturally close range; a slight processing delay makes the brain work harder than it consciously registers; and partial body language — hands cut off by the camera frame — triggers a low-level threat-detection response as the viewer's brain tries to fill in missing social cues. Together these forces make sustained video interaction significantly more cognitively taxing than face-to-face contact.

Faus connects this directly to marketing strategy, arguing that as AI-generated video becomes indistinguishable from real footage, the 'uncanny valley' problem compounds the existing cognitive load of screens — making in-person moments increasingly valuable as trust-building tools.

"If the only experience someone has of you is with a screen between you, it gets harder and harder to keep building that trust."

▶ Watch this segment — 5:58


650 Podcast Episodes, 2.6 Million Words — and an AI That Invented a Tina Fey Interview

Facing a corpus of roughly 2.6 million words drawn from 650 recorded interviews, Bill Sherman turned to AI to surface the most significant moments — a task no human team could complete on a realistic timeline. The tool performed well at identifying passages worth revisiting, but its limits became unmistakable when it listed Tina Fey as one of the podcast's guests. Sherman's ability to immediately spot the hallucination depended entirely on his own deep familiarity with the material — the very knowledge the AI was meant to help process.

The episode reinforces a point both Sherman and Faus press throughout the conversation: AI can compress time and sharpen thinking, but accuracy and judgment remain non-delegable human responsibilities.

"You have to be close enough to the material to know. You cannot outsource the thinking. You cannot outsource the accuracy — but you can use it to really sharpen that thinking."

▶ Watch this segment — 28:00


Summarised from Leveraging Thought Leadership Podcast · 55:06. All credit belongs to the original creators. Streamed.News summarises publicly available video content.

Streamed.News

This publication is generated automatically from YouTube.
