LLM Component Schema
A JSON schema, CLI, and governance toolkit that makes component intent machine-readable, so AI coding agents generate to spec, not from vibes. Two published npm packages, a CI drift-and-eval pipeline, and a reusable playbook.

How it works
Schema Definition
- Props & variant typing
- Accessibility contract
- Token bindings
- Generative rules
Agent Generation
- Schema-guided output
- Constrained prop APIs
- Token-aware styling
- ARIA-complete markup
Validation
- CLI schema validation
- Drift detection (CI)
- Fixture test corpus
- Agent eval harness
Governance
- Change-request templates
- Health-review templates
- Benchmark lineage
- Immutable release artifacts
Schema Anatomy
What a component contract defines
Props & variants. Every prop typed with allowed values, defaults, and constraints. Two style packs: base (styling-agnostic) and tailwind (class-name-aware). Same schema shape, different stylingSystem.
Accessibility contract. Required ARIA attributes, keyboard interaction patterns, focus management rules. Per variant, not per component.
Token bindings. Which tokens map to which visual properties. No ambiguity.
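A minimal sketch of what such a contract might look like, assuming a shape with typed props, a per-variant accessibility block, and token bindings. All field names here (`props`, `a11y`, `tokens`, `stylingSystem`) are illustrative assumptions, not the published schema:

```typescript
// Hypothetical contract shape — field names are assumptions, not the real schema.
interface ComponentContract {
  name: string;
  stylingSystem: "base" | "tailwind"; // two style packs, same schema shape
  props: Record<string, { type: string; allowed?: string[]; default?: unknown }>;
  a11y: Record<string, { requiredAria: string[]; keyboard: string[] }>; // keyed per variant
  tokens: Record<string, string>; // visual property -> design token
}

const button: ComponentContract = {
  name: "Button",
  stylingSystem: "base",
  props: {
    variant: { type: "string", allowed: ["primary", "secondary"], default: "primary" },
    disabled: { type: "boolean", default: false },
  },
  a11y: {
    primary: { requiredAria: ["aria-disabled"], keyboard: ["Enter", "Space"] },
    secondary: { requiredAria: ["aria-disabled"], keyboard: ["Enter", "Space"] },
  },
  tokens: { background: "color.action.primary", radius: "radius.md" },
};
```

The point of the shape: an agent consuming this contract has no room to improvise a prop name, skip an ARIA attribute, or hard-code a color.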
Published Packages
Contracts + CLI
@petritapanilahdelma/llm-component-cli ships four commands: validate (strict structure + JSON contract checks), drift-check (compare schema against component props and Storybook args), migrate (v1 → v2 schema upgrade), and init (scaffold a full component starter).
Both packages are versioned in lockstep via the npm workspace root. A tag-release workflow publishes immutable artifacts: schema pack archive, validator output, drift report, eval report, and dashboard snapshot.
Enforcement
Drift detection & agent evals
Drift detection is a failing build, not a warning. The CI workflow treats contract drift the same way Project Spine treats export drift. If the schema and the component diverge, the pipeline stops. The golden test corpus provides permanent regression coverage.
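The core of a drift check can be sketched as a set difference between what the contract declares and what the component actually exposes. This is an illustrative reduction, not the CLI's implementation; function and field names are assumptions:

```typescript
// Minimal drift check: compare prop names declared in the schema against
// prop names introspected from the component (names are illustrative).
function driftReport(schemaProps: string[], componentProps: string[]) {
  const schema = new Set(schemaProps);
  const component = new Set(componentProps);
  return {
    // Declared in the contract but missing from the component:
    missingInComponent: schemaProps.filter((p) => !component.has(p)),
    // Present on the component but never documented in the contract:
    undocumentedInSchema: componentProps.filter((p) => !schema.has(p)),
  };
}

const report = driftReport(["variant", "disabled"], ["variant", "disabled", "loading"]);
const hasDrift =
  report.missingInComponent.length > 0 || report.undocumentedInSchema.length > 0;
// In CI, any drift is a non-zero exit — a failing build, not a logged warning.
```

Here `loading` was added to the component without a schema change, so the report is non-empty and the pipeline stops.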
The eval harness scores agent output against the contract so the feedback loop is measurable. The failing-examples/ library provides intentionally bad fixtures as anti-pattern regression tests.
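One way to make that score concrete: treat each contract requirement as a boolean check over the agent's output and report the pass fraction plus the named failures. A sketch under those assumptions; the check names and scoring rule are illustrative, not the harness's actual rubric:

```typescript
// Score agent output as the fraction of contract checks that pass.
type Check = { name: string; pass: (output: string) => boolean };

function scoreOutput(output: string, checks: Check[]) {
  const failed = checks.filter((c) => !c.pass(output)).map((c) => c.name);
  return { score: (checks.length - failed.length) / checks.length, failed };
}

// Two example checks derived from a hypothetical Button contract:
const checks: Check[] = [
  { name: "uses-allowed-variant", pass: (o) => /variant="(primary|secondary)"/.test(o) },
  { name: "has-aria-disabled", pass: (o) => o.includes("aria-disabled") },
];

const result = scoreOutput(
  '<button variant="primary" aria-disabled="false">Save</button>',
  checks,
);
// result.score is 1 only when every contract check passes;
// fixtures from failing-examples/ are expected to score below 1.
```

Running the same checks over the failing-examples/ fixtures turns each anti-pattern into a regression test: a fixture that starts passing signals the checks themselves have weakened.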
Governance
Schema evolution as reviewed process
Schema change-request and system-health-review templates live in-repo so contract evolution is reviewed, not ambient. When a component needs a new prop or variant, the change goes through the governance template: documenting why, what breaks, and what the migration path is.
The contract pattern here is the ancestor of per-component .contract.json files in client projects. This repo is where the pattern was proven before being deployed in production design systems.
2 npm Packages
Contracts + CLI, versioned in lockstep. Workspace root at packages/.
4 CLI Commands
validate, drift-check, migrate, init. CI enforcement on every push.
2 Style Packs
Base (styling-agnostic) and Tailwind (class-name-aware). Same schema shape.
Eval Harness
Agent output scored against contracts. Failing-examples library as regression surface.
Positioning
Component-level context for agents
Where Project Spine compiles repo-level agent context, LLM Component Schema compiles component-level context. Both are repo-native. Both use hashed drift reports. Both address the core failure mode: generic agent prompts drift; explicit schemas + CI don't.
Interested in working together?
Let's discuss how design systems, AI and thoughtful UX can elevate your product.