Quick Reference Cards
All five domain cheat sheets in one place. Print-friendly summaries of the most important concepts for the exam.
AGENTIC LOOP:
stop_reason == "tool_use" → execute tools, append results, continue
stop_reason == "end_turn" → done
ANTI-PATTERNS:
✗ Parse natural language ("I'm done")
✗ Arbitrary iteration caps as primary mechanism
✗ Check content[0].type == "text"
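The loop above can be sketched in Python against the Messages API response shape (a minimal sketch: `create_message` and `call_tool` are stand-in callables, not SDK names; responses are shown as plain dicts):

```python
def run_agent_loop(create_message, call_tool, messages, max_turns=20):
    """Drive the agentic loop on stop_reason, never on message text.

    create_message: callable(messages) -> response dict
    call_tool: callable(name, input) -> result string
    """
    for _ in range(max_turns):  # safety net only, NOT the primary stop mechanism
        response = create_message(messages)
        if response["stop_reason"] == "end_turn":
            return response  # model signalled completion
        if response["stop_reason"] == "tool_use":
            # Append the assistant turn, then one tool_result per tool_use block.
            messages.append({"role": "assistant", "content": response["content"]})
            results = [
                {"type": "tool_result",
                 "tool_use_id": block["id"],
                 "content": call_tool(block["name"], block["input"])}
                for block in response["content"]
                if block["type"] == "tool_use"  # don't assume content[0] is text
            ]
            messages.append({"role": "user", "content": results})
    raise RuntimeError("max_turns exceeded")
```

Note the loop never inspects message text for phrases like "I'm done", and the iteration cap is a backstop, not the stop condition.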
MULTI-AGENT:
✓ Hub-and-spoke (coordinator at centre)
✓ Subagents are isolated (no shared memory, no inherited history)
✓ All communication through coordinator
✓ Trace failures to root cause (usually coordinator decomposition)
ENFORCEMENT:
High stakes (financial/security/compliance) → Programmatic (hooks/gates)
Low stakes (style/format) → Prompt-based guidance
HOOKS:
PostToolUse → Normalise data after tool execution
PreToolUse → Block/redirect before tool execution
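In Claude Code, both hook types are wired up under `hooks` in `settings.json`; a minimal sketch (the script paths are hypothetical):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {"type": "command", "command": "./scripts/block-prod-writes.sh"}
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          {"type": "command", "command": "./scripts/normalise-output.sh"}
        ]
      }
    ]
  }
}
```

The pre-execution hook can block or redirect the tool call before it runs; the post-execution hook runs on the result.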
DECOMPOSITION:
Fixed pipeline → Predictable tasks
Dynamic adaptive → Open-ended investigation
Attention dilution → Split into per-file + cross-file passes
SESSION:
Resume → Context still valid
Fork → Divergent exploration
Fresh+summary → Stale context or degraded session
TOOL DESCRIPTIONS:
Include: purpose, inputs, examples, edge cases, boundaries
Fix misrouting with better descriptions FIRST
Check system prompt for keyword conflicts
ERROR CATEGORIES:
Transient → retry (timeout, unavailable)
Validation → fix + retry (bad input)
Business → alt workflow (policy violation)
Permission → escalate (access denied)
Access failure ≠ valid empty result
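The category → action mapping above can be made explicit in code (a sketch; the category and action names mirror this list, not any library's API):

```python
from enum import Enum

class ErrorCategory(Enum):
    TRANSIENT = "transient"      # timeout, service unavailable
    VALIDATION = "validation"    # bad input
    BUSINESS = "business"        # policy violation
    PERMISSION = "permission"    # access denied

def route_error(category: ErrorCategory) -> str:
    """Map an error category to its recovery action."""
    return {
        ErrorCategory.TRANSIENT: "retry",
        ErrorCategory.VALIDATION: "fix_and_retry",
        ErrorCategory.BUSINESS: "alternative_workflow",
        ErrorCategory.PERMISSION: "escalate",
    }[category]

def access_failure_result() -> dict:
    # An access failure must surface as an error,
    # never as a valid-looking empty result.
    return {"ok": False, "error": "permission_denied", "data": None}
```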
TOOL DISTRIBUTION:
4-5 tools per agent, scoped to role
tool_choice: auto / any / specific tool
Scoped cross-role tools for high-frequency simple ops
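The three `tool_choice` values map to these request shapes in the Messages API (shown as plain dicts; `get_weather` and the model id are placeholders):

```python
# Model decides whether to call a tool at all (default).
auto = {"type": "auto"}

# Model must call some tool, but picks which one.
any_tool = {"type": "any"}

# Model must call this specific tool.
specific = {"type": "tool", "name": "get_weather"}

# Passed alongside the tool definitions in the request:
request = {
    "model": "claude-sonnet-4-5",  # placeholder model id
    "tools": [{"name": "get_weather",
               "description": "Look up current weather for a city.",
               "input_schema": {"type": "object"}}],
    "tool_choice": specific,
}
```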
MCP:
.mcp.json = project (shared, version-controlled)
~/.claude.json = user (personal)
Community servers before custom builds
${ENV_VAR} for credentials
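A minimal `.mcp.json` putting these together (server name and package are hypothetical; note the `${ENV_VAR}` expansion keeping the credential out of version control):

```json
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "@example/issue-tracker-mcp"],
      "env": {
        "TRACKER_API_KEY": "${TRACKER_API_KEY}"
      }
    }
  }
}
```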
BUILT-IN TOOLS:
Grep = search contents
Glob = match file paths
Edit = targeted modification
Read + Write = fallback when Edit fails
CLAUDE.md HIERARCHY:
~/.claude/CLAUDE.md → personal, NOT shared
.claude/CLAUDE.md → project, shared via git
subdirectory/CLAUDE.md → directory-scoped
New hire missing instructions? → Probably in user-level config
PATH-SPECIFIC RULES:
.claude/rules/testing.md with paths: ["**/*.test.tsx"]
Loads only when editing matching files
Glob patterns match across entire codebase
COMMANDS & SKILLS:
.claude/commands/ → project (shared)
~/.claude/commands/ → personal
Skills: context: fork, allowed-tools, argument-hint
Skills = on-demand | CLAUDE.md = always-loaded
PLAN MODE:
Complex/multi-file/architectural → Plan mode
Simple/clear/single-file → Direct execution
Common: Plan first, then direct execution
ITERATIVE REFINEMENT:
Input/output examples > prose descriptions
Batch feedback when fixes interact
Sequential when issues are independent
CI/CD:
-p flag = non-interactive mode (MEMORISE THIS)
--output-format json for structured output
Independent review instance (not same session)
Include prior findings to prevent duplicate comments
EXPLICIT CRITERIA:
Specific categories > vague confidence instructions
Code examples for severity levels
Disable high false-positive categories temporarily
FEW-SHOT:
Most effective for consistency
2-4 examples with reasoning
Deploy when prose produces inconsistent results
STRUCTURED OUTPUT:
tool_use eliminates syntax errors, NOT semantic errors
Nullable fields prevent fabrication
"unclear" enum for ambiguous cases
"other" + detail string for extensibility
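The three patterns combine in a single tool `input_schema` (a sketch; the tool and field names are illustrative):

```python
record_invoice = {
    "name": "record_invoice",
    "description": "Record fields extracted from an invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            # Nullable: the model can report "not present" instead of fabricating.
            "po_number": {"type": ["string", "null"]},
            # "unclear" enum member for ambiguous cases.
            "currency": {"enum": ["USD", "EUR", "GBP", "unclear"]},
            # "other" + a detail string for extensibility.
            "category": {"enum": ["hardware", "software", "services", "other"]},
            "category_detail": {"type": ["string", "null"]},
        },
        "required": ["po_number", "currency", "category"],
    },
}
```

The schema guarantees only well-formed output; it cannot guarantee the values are semantically correct.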
VALIDATION-RETRY:
Effective: format, structural, placement errors
Ineffective: information absent from source
Send: original doc + failed extraction + specific error
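The retry message bundles all three pieces on each attempt (a sketch with stand-in `generate` and `validate` callables):

```python
def extract_with_retry(generate, validate, document, max_attempts=3):
    """Validation-retry: resend the original document, the failed
    extraction, and the specific error. Fixes format/structural/placement
    errors; cannot recover information absent from the source."""
    prompt = document
    last = None
    for _ in range(max_attempts):
        last = generate(prompt)
        error = validate(last)          # None means the output passed
        if error is None:
            return last
        prompt = (
            f"{document}\n\n"
            f"Your previous extraction failed validation:\n{last}\n\n"
            f"Specific error: {error}\nReturn a corrected extraction."
        )
    raise ValueError(f"still invalid after {max_attempts} attempts: {last}")
```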
BATCH API:
50% cost savings, up to 24h, no latency SLA
Blocking workflows → synchronous
Latency-tolerant → batch
No multi-turn tool calling in batch
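Batch submissions are a list of independent `{custom_id, params}` entries; a sketch of the payload construction only (actual submission goes through the Batches API; the model id is a placeholder):

```python
def build_batch_requests(documents, model="claude-sonnet-4-5"):
    """One independent single-turn request per document. custom_id ties
    results back to inputs, since batch results are not guaranteed to
    arrive in submission order."""
    return [
        {
            "custom_id": f"doc-{i}",
            "params": {
                "model": model,  # placeholder model id
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": f"Summarise:\n{doc}"}],
            },
        }
        for i, doc in enumerate(documents)
    ]
```

Each entry is self-contained because the batch cannot carry multi-turn tool-calling state between requests.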
MULTI-INSTANCE:
Same session = biased self-review
Independent instance = objective review
Per-file passes + cross-file integration pass
Confidence routing → human review for low-confidence
CONTEXT PRESERVATION:
Persistent case facts block → never summarise transactional data
Key findings at beginning (lost-in-middle effect)
Trim tool results to relevant fields
Full conversation history in every API request
ESCALATION:
Customer says "I want a human" → immediate, no investigation
Policy gap → escalate
Frustration alone → NOT a valid trigger
Self-reported confidence → NOT reliable
ERROR PROPAGATION:
Include: failure type, what was attempted, partial results
Never: silent suppression or full workflow termination
Access failure ≠ valid empty result
CODEBASE EXPLORATION:
Scratchpad files for persistent findings
Subagent delegation for investigations
/compact when context is bloated
Crash recovery via structured manifests
CONFIDENCE CALIBRATION:
Validate by document type AND field, not just aggregate
Stratified random sampling for ongoing verification
Field-level confidence → route low-confidence to humans
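Field-level routing can be a simple per-field threshold check (a sketch; the 0.9 threshold is an arbitrary example, not a recommended value):

```python
def route_extraction(fields, threshold=0.9):
    """fields: {name: (value, confidence)}. Any single low-confidence
    field routes the whole record to human review; an aggregate score
    can hide one badly-calibrated field."""
    low = {name for name, (value, conf) in fields.items() if conf < threshold}
    if low:
        return {"route": "human_review", "low_confidence_fields": sorted(low)}
    return {"route": "auto_accept", "low_confidence_fields": []}
```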
PROVENANCE:
Structured claim-source mappings through entire pipeline
Conflicting sources → annotate both, let consumer decide
Temporal differences ≠ contradictions
Content-appropriate rendering (tables, prose, lists)