Meta-Skill Feedback: domain-discovery
Library: typography.js
Agent: claude-code
Artifact quality: mixed
Rating: mixed
Interview quality: bad
What Worked
The concept inventory approach is sound — listing library name, repo, docs, scope, and concepts per domain produces a useful map. The domain_map.yaml + skill_spec.md artifacts are a clean separation of concerns.
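As a sketch of what that separation can look like, a single domain_map.yaml entry might carry the inventory while skill_spec.md carries per-skill detail. The field names below are illustrative, not the actual domain-discovery schema; the concepts are real typography.js options and APIs:

```yaml
# Hypothetical domain_map.yaml entry -- field names are illustrative,
# not the real domain-discovery schema.
library: typography.js
repo: https://github.com/KyleAMathews/typography.js
scope: vertical-rhythm and typographic theming for the web
domains:
  - name: theming
    concepts: [baseFontSize, baseLineHeight, scaleRatio, headerFontFamily]
  - name: rhythm
    concepts: [rhythm(), scale()]
```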
What Failed
- "Target audience" is a meaningless input for Intent skills: skills are consumed by AI coding agents, not end users.
- The scaffold prompt dumps all three phases at once, causing the AI to skip the maintainer interview and autonomously generate everything.
- No instruction to load the actual domain-discovery SKILL.md before starting; the scaffold gives a summary, but the full spec has critical format details.
- No success-criteria step before writing skills.
- No guidance on identifying cross-cutting concerns that should become cross-references across skills (skills should form a semi-lattice, not isolated trees).
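One way to make the semi-lattice concrete is to let each skill declare the shared concerns it links to. The `cross_references` field here is hypothetical, an illustration of the missing guidance rather than an existing spec field:

```yaml
# Hypothetical skill front matter -- cross_references is an
# illustration of the suggestion, not part of the current spec.
skill: theming
covers: [baseFontSize, scaleRatio, headerFontFamily]
cross_references:
  - skill: rhythm          # both skills depend on baseLineHeight
    shared: [baseLineHeight]
```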
Suggestions
- Present phases sequentially: only reveal Phase 2 after Phase 1 artifacts are confirmed.
- Replace "target audience" with "top use cases the AI should handle well".
- Add an explicit instruction: "Load node_modules/@tanstack/intent/meta/domain-discovery/SKILL.md before starting."
- Add a guardrail: "STOP after generating artifacts. Present to maintainer for review. Do NOT proceed until they confirm."
- Add a step to identify cross-cutting concerns that will need cross-references across all skills.
- Add a success-criteria step: define what good looks like before writing skills.
- Teach maintainers to start with one basic skill, try tasks against it, and iterate, rather than generating everything in one pass.
- The scaffold says to produce Markdown feedback but the CLI expects JSON; these are misaligned.
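To close the Markdown-vs-JSON gap, the scaffold could tell the agent to emit feedback as JSON from the start. A minimal sketch, assuming a schema mirroring the fields in this report (the exact schema the CLI expects is not documented here):

```json
{
  "skill": "domain-discovery",
  "library": "typography.js",
  "agent": "claude-code",
  "artifact_quality": "mixed",
  "interview_quality": "bad",
  "what_worked": ["concept inventory produces a useful domain map"],
  "what_failed": ["scaffold dumps all three phases at once"],
  "suggestions": ["present phases sequentially"]
}
```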