Narrative Theory & Systems Engineering

The Unreliable Narrator

In literature, an unreliable narrator tells a story they believe to be true, but their perspective is limited by what they don't know.

At work, our artifacts — dashboards, roadmaps, and KPIs — are narrators too. They tell coherent stories. But coherence is not the same thing as truth.


A Note from the Author

As a film fan, I sometimes catch myself thinking about cinema when a meeting goes off the rails.


I look at "Green" dashboards that feel like fiction, and I realize: we are living in a movie with an unreliable narrator.


This exploration is a study of that friction—the gap between the story we tell ourselves and the reality of the systems we build.

Rashomon (1950)

Multiple Truths Appear

Rashomon

Four witnesses describe the same crime. Each account is convincing. None agree. Everyone is telling the truth as they experienced it.

Interactive: The Meeting Context Shift
The Usual Suspects (1995)

Artifacts as Evidence

The Usual Suspects Lineup

A story pulled from objects in the room. You trusted the structure, so you trusted the story.

This Is What Control Looks Like
Burn-Down Chart
Sprint complete
CI/CD Pipeline
Build Test Deploy
Gantt Chart
CVE Scan
Low Risk
3 known / 3 patched
Roadmap
NOW
4
NEXT
6
LATER
5
Dashboard
99.9%
uptime
94%
coverage
Every artifact tells a story. Every story is internally consistent. This is the evidence board — assembled from real data, arranged to convince.
Hover over or tap the artifacts to see the "Hidden Truth." Coherence is often manufactured.
Fight Club (1999)

The System Is the Narrator

Fight Club
CI/CD Pipeline — Green Isn't the Same as Healthy
Auth Service
API Gateway
Database
Deploy
Architectural strain — services coupled through shared state
Cross-feature coupling — auth changes break checkout
Semantic drift — "deployed" means different things to different teams
The pipeline passed every check. The checks don't cover what actually breaks.
Validation ≠ resilience.
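A minimal sketch of that gap, with invented service and field names: the CI test validates checkout against a stub of the auth service, while the real service has quietly renamed a field. Every check is green; production still breaks.

```python
# Hypothetical sketch: a "green" check that validates against a mock,
# not the real contract. All names (auth_stub, checkout) are invented.

def auth_stub(user_id):
    # CI doubles the auth service with a stub that always returns
    # the shape the checkout code expects.
    return {"user_id": user_id, "session": "ok"}

def real_auth(user_id):
    # Meanwhile the real service renamed the field last sprint:
    # same endpoint, different contract (semantic drift).
    return {"uid": user_id, "session": "ok"}

def checkout(auth_response):
    return f"order placed for {auth_response['user_id']}"

# CI runs against the stub: green.
assert checkout(auth_stub(42)) == "order placed for 42"

# Production runs against the real service: KeyError.
try:
    checkout(real_auth(42))
    survived = True
except KeyError:
    survived = False

print("CI passed, production failed:", not survived)
```

The check is not wrong; it is narrating a world in which the stub is the truth.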
Burn-Down Chart — Progress Without Meaning
Burn-down chart: 40 → 20 → 0. Sprint complete.
Deferred dependency — moved to "tech debt" backlog, quarter 3
Unresolved context — stakeholder intent never reconciled
Temporary workaround — hardcoded, shipped, forgotten
Scope reclassified — "done" because definition changed mid-sprint
The work burned down. The context didn't.
Linear progress ≠ systemic progress.
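One way to see the same sprint twice, sketched with invented items and point values: the chart sums what burned, while each "done" item quietly appends to a debt ledger the chart never reads.

```python
# Hypothetical sketch: a burn-down that only counts story points,
# while each "done" item carries unburned context. Items are invented.

sprint = [
    {"item": "checkout flow", "points": 8, "debt": []},
    {"item": "auth hotfix", "points": 3,
     "debt": ["hardcoded workaround shipped"]},
    {"item": "API integration", "points": 5,
     "debt": ["dependency deferred to Q3 backlog"]},
]

burned = sum(i["points"] for i in sprint)           # what the chart shows
deferred = [d for i in sprint for d in i["debt"]]   # what it doesn't

print(f"Burn-down: {burned} points complete")
print(f"Context carried forward: {len(deferred)} items")
```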
Memento (2000)

Decisions Without Memory

Memento
Now / Next / Later — Temporal Sleight of Hand
Now
Auth v2 migration
Checkout redesign
Perf monitoring
Next
API versioning
Mobile parity
SSO rollout
Onboarding flow v3
Later
Platform consolidation
Data pipeline overhaul
Multi-region failover
Legacy decommission
Observability revamp (Q2?)
Service mesh migration
Contract testing
Tech debt sprint (recurring)
Dependency audit
"Later" isn't a plan. It's deferred accountability.
Roadmaps compress uncertainty into optimism.
Gantt Chart — Certainty Projected Onto Uncertainty
Discovery
Design
Build
Integration
QA
Launch
Slack absorbed — QA compressed from 4 weeks to 2
Dependency overlap — integration started before build stabilized
Nothing turned red. The schedule just quietly got tighter.
Gantt charts don't lie. They just assume the future agrees with the past.
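The quiet tightening is easy to surface if you diff two plan snapshots instead of reading one. A sketch with invented week numbers, mirroring the compression above:

```python
# Hypothetical sketch: compare a baseline plan against the current one
# to surface silent compression. Phases and weeks are invented;
# each tuple is (start_week, end_week), inclusive.

baseline = {"build": (1, 8), "integration": (9, 12), "qa": (13, 16)}
current  = {"build": (1, 10), "integration": (9, 14), "qa": (15, 16)}

for phase in baseline:
    b_len = baseline[phase][1] - baseline[phase][0] + 1
    c_len = current[phase][1] - current[phase][0] + 1
    if c_len < b_len:
        print(f"{phase}: compressed from {b_len} to {c_len} weeks")

# Overlap check: integration now starts before build ends.
if current["integration"][0] <= current["build"][1]:
    print("integration overlaps build")
```

The end date never moved, so nothing turned red; only the slack is gone.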
Westworld (2016)

Timeline Collapse

Everything looks aligned — until integration time.

Westworld
CVE Scan — Safety Without Understanding
Risk Score: Low
Dependency complexity chart: growing quietly, Q1 → Q4
3 CVEs patched. 14 transitive dependencies added.
Attack surface grew 3x while the risk score stayed flat.
We fixed every known vulnerability. We increased our exposure anyway.
Compliance ≠ robustness.
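A toy version of that divergence, with invented counts that mirror the numbers above: the scanner's score sees only known CVEs, so it stays flat while the transitive dependency count triples.

```python
# Hypothetical sketch: the risk score reads one column, the attack
# surface lives in another. All counts are invented for illustration.

quarters = {
    "Q1": {"known_cves": 3, "transitive_deps": 7},
    "Q4": {"known_cves": 0, "transitive_deps": 21},  # 3 patched, 14 added
}

for q, s in quarters.items():
    risk = "Low" if s["known_cves"] == 0 else "Review"
    # The score only sees known CVEs; the surface keeps growing.
    print(q, "risk score:", risk, "| attack surface:", s["transitive_deps"])
```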
Status Reports — Parallel Timelines
Backend
Ready

Against last week's spec

Frontend
Ready

Against the Figma, not the API

PM
Ready

Against the stakeholder commitment

System
Wait

None of these are the same "ready"

Legacy systems and new platforms coexist.
Teams operate on different clocks.
Leadership reports as if it's one timeline.
Quantifying Linguistic Debt
Backend
Ready

"The API schema is frozen."

Frontend
Ready

"Mock components are built."

Linguistic Debt: When "Ready" means three different things, the gap isn't just a misunderstanding—it’s a quantifiable tax on your speed.
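One way to make the tax visible is to write each team's "ready" down as an explicit predicate over shared state. A sketch with invented field names: three predicates return True while the system-level one returns False.

```python
# Hypothetical sketch: each team's "ready" as an explicit predicate.
# State fields and team definitions are invented for illustration.

state = {
    "api_schema_frozen": True,       # backend's claim
    "mocks_built": True,             # frontend's claim
    "stakeholder_signoff": True,     # PM's claim
    "integrated_against_api": False, # nobody's claim
}

ready = {
    "backend":  lambda s: s["api_schema_frozen"],
    "frontend": lambda s: s["mocks_built"],
    "pm":       lambda s: s["stakeholder_signoff"],
    "system":   lambda s: all(s.values()),
}

for team, predicate in ready.items():
    print(team, "ready:", predicate(state))
# Three teams say True; the system-level predicate says False.
```

The disagreement was always there; the predicates just refuse to share a word.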
Ghost in the Shell (1995)

The Ghost in the Machine

Ghost in the Shell
Ideas planted years ago by people who have since left still dictate our reality.
Artifact Decay: ADR-004 (2021)
STATUS: APPROVED

"To ensure speed, we will use a Shared Database for all services..."

[RISK: 14 legacy dependencies rely on this outdated decision]
The narrator isn't just in the room—they are built into the walls. We follow rules for reasons we no longer remember.
Looper (2012)

Closing the Loop

Looper

We ship a "temporary" fix today, knowing our future selves will pay for it in 18 months.

The Fossilized Backlog
  • Fix Auth Bypass workaround (added 640 days ago)
  • Reconcile divergent schemas (added 412 days ago)
  • Technical Debt Sprint (DEFERRED)
The "Later" column is the Unreliable Narrator's favorite hiding spot. It transforms "Broken" into "Planned."
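Fossils are easy to surface once you treat age as data. A sketch using the ages above, with an invented one-year threshold:

```python
# Hypothetical sketch: flag "Later" items old enough to count as
# fossils. Ages mirror the backlog above; the threshold is invented.

backlog = [
    ("Fix Auth Bypass workaround", 640),
    ("Reconcile divergent schemas", 412),
    ("Technical Debt Sprint", 90),
]

FOSSIL_AGE_DAYS = 365
fossils = [(title, days) for title, days in backlog
           if days >= FOSSIL_AGE_DAYS]

for title, days in fossils:
    print(f"{title}: deferred {days} days, still labeled 'Planned'")
```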
Scenes from the Daily Stand-Up

The Green Light Theater

Every one of these is green. Every one of these is telling you a story. The interesting part is what the story leaves out.

Grafana Dashboard
All Systems Nominal
12 of 12 monitored services healthy
Security Scan
0 Vulnerabilities Found
Last scan: 3 hours ago
Test Coverage
95.2%
Above team target of 90%
SLA Compliance
99.95%
Q4 target exceeded
Sprint Velocity
↑ 23% QoQ
Highest velocity in 4 sprints
Open Incidents
2 (Down from 6)
67% reduction this month
Hover over or tap to see what's behind the green. None of these are lies — they're all technically accurate. That's what makes an unreliable narrator so effective.
The Truman Show (1998)

Managing the Map, Ignoring the Ground

The Truman Show

A perfectly constructed world. Every metric says it's real — until you walk to the edge and touch the painted sky.

The Map (Coherent)
95% Test Coverage
Green Build Pipeline
On-Time Sprint
The Territory (Real)
Critical Path Fragility
Linguistic Debt
Knowledge Silos

Alignment is often a measurement of how well we’ve agreed to ignore the territory.

The Point of All This

The Reveal

This isn't an indictment. It's a recognition.

Based on Actual Events

Every organization has unreliable narrators.
Not because people are dishonest,
but because systems are partial by design.

The Unreliable Narrator

A dashboard can only show what it was built to measure. A roadmap can only reflect what was agreed to discuss. A retrospective can only surface what feels safe to say.


That's not failure — that's the nature of any narrative. The challenge isn't eliminating the narrator. It's remembering there is one.

Field Guide to Narrative Drift

Patterns worth noticing — not to assign blame, but to start better conversations.

The Performance Mirage

Metrics stay green while delivery slows. The system is optimizing for the narrator rather than the work.

Try asking: "What would this dashboard look like if we added the things we chose not to measure?"

Semantic Drift

Integrations fail despite "successful" milestones. The same words mean different things to different teams.

Try asking: "When we say 'ready,' what would need to be true for you to ship tomorrow?"

The Archive of Optimism

The "Later" column grows quietly. Deferred decisions compound interest that eventually comes due.

Try asking: "If we could only keep 3 items in 'Later,' which would survive? What does that tell us?"

The Blind Spot Report

Tools report clean results — not because everything is clean, but because they can only see what they were built to see.

Try asking: "What can't this tool see? What language, service, or path isn't covered?"
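"12 of 12 monitored services healthy" is a claim about the monitored set, not the deployed one. A sketch with invented service names: one set difference shows what the dashboard was never told about.

```python
# Hypothetical sketch: the dashboard's blind spot as a set difference.
# All service names are invented for illustration.

monitored = {"auth", "api-gateway", "checkout", "billing"}
deployed  = {"auth", "api-gateway", "checkout", "billing",
             "legacy-reports", "cron-exporter"}

blind_spots = deployed - monitored

print(f"{len(monitored)}/{len(monitored)} monitored services healthy")
print("unmonitored:", sorted(blind_spots))
```

The dashboard's sentence is true. The set it quantifies over is the story it leaves out.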

The unreliable narrator isn't a villain. It's a lens. Once you see it, you start asking different questions — not "is this metric right?" but "what story is this metric trying to tell, and what did it leave out?"

That's where the interesting work begins.

A Strategic Operations Research Project

Applying US Patent 12,106,240 B2 to the challenge of organizational sensemaking.