Principles over tools.
This is a field guide for building modern applications and systems. No vendor logos — just the concepts that keep meaning stable, risk bounded, and systems trustworthy.
This work was inspired by Stewart Brand's Pace Layering and Martin Kleppmann's Designing Data-Intensive Applications. We're grateful for their frameworks and for sharing them generously with the community.
Alignment
A quick reference for what to consider when building applications and systems. Start with the Map, then flip on lenses to see risk, drift, and pace.
Bring a notebook. This guide is meant for annotating tradeoffs, not choosing tools.
Use the Lenses tab to reveal overlays and the legend.
Toggle a lens to see where meaning shifts, risk concentrates, or pace changes. Use this to annotate your architecture decisions.
Where meaning is most likely to fracture.
Drift likelihood × blast radius.
How fast each part should change.
Reveal a second layer of detail.
How entangled — high coupling means changes here cascade.
Silent failures accumulate undetected; loud failures alert immediately.
Draw a boundary where language changes, not where teams sit. Most modernization failures happen after the tools are chosen — because meanings were never aligned.
Quick Reference — Beyond Semantic Risk
Guardrails, fallback behavior, and safe failure modes when the model is wrong.
Where LLM output must obey rules (transactions, workflows, calculations).
Strict schemas for tool calls, events, and outputs; validation is non‑negotiable.
Semantic regression tests, drift alerts, and traceability for model decisions.
Prompt injection, data exfiltration, and policy compliance across tools.
Clear handoff paths, overrides, and accountability when the model is uncertain.
Red‑team tests, regression suites, and prompt/model versioning discipline.
Token budget, caching, and model routing that preserves UX and margins.
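The validation point above is the one place this guide gets prescriptive: model output must be checked against a strict schema before it touches a transaction. A minimal sketch, using only the standard library; the tool names, fields, and schema here are hypothetical, and a real system would use a full schema validator.

```python
import json

# Hypothetical schema for a single tool call: required fields and types.
TOOL_SCHEMA = {
    "tool": str,       # which tool to invoke
    "amount": float,   # e.g. a refund amount
    "currency": str,
}

ALLOWED_TOOLS = {"issue_refund", "lookup_order"}  # illustrative allowlist

def validate_tool_call(raw: str) -> dict:
    """Parse and validate model output before it reaches a transaction.

    Raises ValueError on any violation: fail loud, never coerce silently.
    """
    call = json.loads(raw)
    for field, expected in TOOL_SCHEMA.items():
        if field not in call:
            raise ValueError(f"missing field: {field}")
        if not isinstance(call[field], expected):
            raise ValueError(f"{field} must be {expected.__name__}")
    if call["tool"] not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {call['tool']}")
    return call
```

The point of the allowlist is that validation covers values, not just shapes: a well-formed call to a tool the model should never invoke is still a violation.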
Field Notes — Analogies & Rubrics
Use these analogies to explain the stack without mentioning tools. Pair them with the rubric to assess LLM‑specific risk.
Like building codes: you rarely change them, but everything depends on them.
Like architectural blueprints: they define the shape of meaning across systems.
Like logistics networks: timing and routing determine what the system can promise.
Like storefronts: fast‑changing experiences that must still honor the core language.
Like air‑traffic control: continuous coordination that keeps the whole system safe.
Concern = drift likelihood × blast radius. High means meaning can change and many systems/users are affected.
Risk Intersection — The Danger Zone
When high semantic drift, high LLM concern, and fast pace converge, you get the highest-risk nodes. These are where meaning shifts quickly, models amplify the drift, and users feel it immediately.
Surface contracts change often, many consumers depend on them, and LLM output flows through here.
Relevance models shift, labels drift, and users notice quality changes immediately.
Event schemas bind producers and consumers. Schema evolution is silent until something breaks.
Feature stores, training/serving skew, and label drift compound over time without detection.
Terminology changes cascade to user understanding. Misaligned labels erode trust.
The line of visibility. When contracts slip, both sides of the boundary lose alignment.
Semantic Drift
Meaning diverges quietly at boundaries — teams, services, schema versions — until the same word refers to different things in different places.
"Customer" in billing is not "customer" in support. "Active" in analytics is not "active" in auth. Each drift is defensible in isolation; together they make integration expensive and trust fragile. The nodes highlighted in this lens are where divergence is most likely to compound undetected.
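One way to surface this kind of drift before integration is to compare how teams define shared terms. A minimal sketch, assuming each team maintains a glossary (the team names and definitions below are hypothetical):

```python
# Hypothetical glossaries: each team's definition of shared terms.
glossaries = {
    "billing":   {"customer": "entity with a paid subscription",
                  "active":   "has a current invoice"},
    "support":   {"customer": "anyone who has opened a ticket",
                  "active":   "has a current invoice"},
    "analytics": {"active":   "logged in within 30 days"},
}

def find_drift(glossaries: dict) -> dict:
    """Return terms whose definitions diverge across teams."""
    definitions = {}
    for team, terms in glossaries.items():
        for term, meaning in terms.items():
            definitions.setdefault(term, {})[team] = meaning
    # A term drifts when the same word carries more than one definition.
    return {term: teams for term, teams in definitions.items()
            if len(set(teams.values())) > 1}
```

Even this toy version makes the core point: drift is only visible when definitions are written down side by side, which is exactly what boundary-drawing forces.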
LLM Concern
Concern = drift likelihood × blast radius.
A concept has high LLM concern when model output directly shapes decisions and many users or systems are affected. Hallucination in search is worse than in a batch log. Schema violation in an API contract breaks downstream consumers.
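The rubric can be made operational as a simple score. A minimal sketch, with both factors normalized to 0..1; the example scores are made up for illustration:

```python
def llm_concern(drift_likelihood: float, blast_radius: float) -> float:
    """Concern = drift likelihood x blast radius, both scored 0..1."""
    assert 0.0 <= drift_likelihood <= 1.0
    assert 0.0 <= blast_radius <= 1.0
    return drift_likelihood * blast_radius

# Illustrative, made-up scores for the examples in the text:
search_ranking = llm_concern(0.8, 0.9)  # labels drift, every user sees it
batch_log      = llm_concern(0.8, 0.1)  # same drift, almost no one affected
```

The multiplication matters: identical drift likelihood yields very different concern once blast radius is factored in, which is why hallucination in search outranks hallucination in a batch log.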
Pace Layers
Not all parts of a system should change at the same speed — and forcing them to is a primary source of architectural debt.
Fast layers (serving, product interfaces) change weekly or monthly. Slow layers (data models, storage infrastructure, foundational schemas) change yearly. The biggest risk is pace mismatch: a fast-layer team that owns a slow-layer dependency, or a slow-layer team pressured to ship at fast-layer velocity. Cross-cutting ops concerns don't change on a schedule — they're always active.
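Pace mismatch is detectable from an ownership map. A minimal sketch, assuming you maintain a registry of components with their pace layer and owning team (the component and team names are hypothetical):

```python
# Hypothetical registry: pace layer and owning team per component.
PACE = {"checkout_ui": "fast", "search_api": "fast",
        "order_schema": "slow", "event_bus": "slow"}

OWNERS = {"checkout_ui": "web", "order_schema": "web",  # one team, two paces
          "search_api": "search", "event_bus": "platform"}

def pace_mismatches(pace: dict, owners: dict) -> list:
    """Flag teams that own components in both fast and slow layers,
    the mismatch the text identifies as a source of architectural debt."""
    by_team = {}
    for component, team in owners.items():
        by_team.setdefault(team, set()).add(pace[component])
    return sorted(team for team, layers in by_team.items() if len(layers) > 1)
```

A flagged team is not automatically a problem, but it is where fast-layer shipping pressure is most likely to leak into a slow-layer dependency.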
Coupling
How entangled a concept is with the rest of the system.
High coupling means changes cascade — data models, encoding formats, transaction semantics. Low coupling means you can swap implementations — storage engines, consensus algorithms. When modernizing, start with low-coupling concepts; when protecting, prioritize high-coupling ones.
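The "start with low-coupling concepts" heuristic falls out of fan-in: count how many things depend on each concept. A minimal sketch over a hypothetical dependency map (the concept names mirror the examples in the text):

```python
# Hypothetical dependency graph: concept -> concepts that depend on it.
DEPENDENTS = {
    "data_model":      ["api", "events", "reports", "billing"],
    "encoding_format": ["api", "events", "storage"],
    "storage_engine":  ["data_model"],
    "consensus_algo":  [],
}

def modernization_order(dependents: dict) -> list:
    """Rank concepts lowest fan-in first: modernize where changes
    can't cascade, protect where they can."""
    return sorted(dependents, key=lambda c: len(dependents[c]))
```

The same ranking read back to front is the protection priority: the concepts with the largest fan-in deserve the strictest change control.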
Failure Modes
The failures you don't see are worse than the ones that page you at 3am.
Loud failures — storage outages, consensus failures, crashed services — are hard to miss and tend to be well-instrumented. Silent failures are the opposite: semantic drift in event schemas, stale derived data presented as fresh, governance gaps that compound over quarters. Toggle the Failure Mode lens on the map to see which nodes carry which risk. The most dangerous intersection is silent + high LLM concern: the model amplifies a drift no one has noticed.
Storage, consensus, replication, transactions — hard to miss, usually alertable. Design for fast recovery.
Encoding drift, consistency violations, governance gaps, stale caches — compound over weeks. Design for detection, not just prevention.
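"Design for detection" can be as simple as a freshness contract: derived data that lags its source beyond a threshold is quietly wrong even though it still serves. A minimal sketch, with hypothetical dataset names and timestamps:

```python
def stale_datasets(source_updated: dict, derived_refreshed: dict,
                   max_lag: float = 3600.0) -> list:
    """Flag derived datasets last refreshed more than max_lag seconds
    before their source changed: a silent failure, since the data
    keeps serving but no longer reflects the source."""
    return sorted(
        name for name, src_ts in source_updated.items()
        if src_ts - derived_refreshed.get(name, 0.0) > max_lag
    )

# Illustrative epoch-second timestamps:
source_updated    = {"orders": 10000.0, "users": 10000.0}
derived_refreshed = {"orders": 9500.0,  "users": 5000.0}
```

Here "orders" is within the one-hour contract while "users" is not; the check turns a failure that compounds over weeks into one that pages someone today.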
Cross-cutting capabilities like governance and security usually cost less when they’re designed early, not retrofitted.