Unit Testing (Pro): Packs and Running Tests
This page expands the “how to adopt tests gradually” workflow: packs, connection prerequisites, and running targeted subsets.
Packs (starter coverage)
Packs generate a sensible starter set of tests so you don’t have to write everything by hand.
Built-in packs currently available:
- Metadata Quality: broad documentation, naming, formatting, display-folder, and relationship-metadata checks.
- Documentation Baseline: focused description coverage for tables, columns, and measures.
- Presentation Hygiene: format-string and display-folder checks for visible fields.
- Referential Integrity: orphan-key checks plus a relationship governance sanity check.
- Relationship Governance: directionality, cardinality, and key-column visibility checks.
- Time Intelligence: date-table checks, including marked-date-table validation and continuity heuristics.
Prompts:
“List available packs and explain what each covers.”
“Preview the recommended starter packs for this model, show the generated tests, then apply only the ones we agree on.”
Practical note: `packs_apply` now behaves like a preview/apply workflow. The assistant can show the generated tests first, then apply them once you confirm the pack choice. Canonical requests use `spec.pack_id`.
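As a rough sketch of the preview/apply workflow described above: only `packs_apply` and `spec.pack_id` appear in this page, so the `preview` flag and the overall payload shape below are illustrative assumptions, not the server's actual schema.

```python
# Hypothetical payloads for the packs_apply tool.
# "spec.pack_id" is the documented field; "preview" and the rest of the
# shape are assumptions for illustration only.
preview_request = {
    "spec": {"pack_id": "metadata-quality"},
    "preview": True,   # show the generated tests without saving them
}

apply_request = {
    "spec": {"pack_id": "metadata-quality"},
    "preview": False,  # persist the tests once you confirm the pack choice
}
```

The two-step shape mirrors the prompt workflow: preview first, then apply only after you have reviewed the generated tests.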
Model connection (important)
Some operations require an active model connection (running tests, applying packs, managing baselines, checking capabilities).
Best practice prompt:
“Confirm which model we’re connected to. Then list packs, preview the best starter pack set, and apply it for this model.”
Where tests and baselines live
Tests and baselines are stored by the MCP server (not your chat client), so they persist across sessions and are shared across clients on the same machine. For details on state isolation and cross-client behavior, see Server state and isolation.
Recommended workflow (adopt tests gradually)
- Start with a pack (metadata + referential integrity are common baselines). A common sequence is `metadata-quality`, then `referential-integrity` or `relationship-governance`, depending on how much relationship risk the model has.
- Add a handful of critical measure assertions (top KPIs).
- Add one performance budget for your most important query.
- Run tests after changes and export results for review.
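To make the “measure assertions plus one performance budget” steps concrete, here is a minimal sketch. All IDs, field names, measure names, and the assertion shape are hypothetical; use whatever schema your server’s test spec actually defines.

```python
# Hypothetical test definitions: measure assertions for core KPIs
# plus one performance budget. Field names are illustrative assumptions.
kpi_tests = [
    {"id": "kpi-sales",  "type": "measure-assertion",
     "measure": "Total Sales",    "assert": {"op": ">", "value": 0}},
    {"id": "kpi-margin", "type": "measure-assertion",
     "measure": "Gross Margin %", "assert": {"op": "between", "value": [0, 1]}},
    {"id": "kpi-orders", "type": "measure-assertion",
     "measure": "Order Count",    "assert": {"op": ">=", "value": 1}},
]

# One performance budget for the model's most important query.
perf_budget = {
    "id": "perf-top-query",
    "type": "performance-budget",
    "query": "EVALUATE TOPN(10, 'Sales')",  # placeholder DAX query
    "budget_ms": 2000,                      # fail if slower than 2 seconds
}
```

Keeping the initial set this small makes the first few runs easy to triage; you can grow coverage once these pass reliably.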
Copy/paste prompt:
“Preview and apply `metadata-quality`, then add 3 measure assertion tests for our core KPIs. Run them and export results as Markdown.”
Running only what you need
You can ask the assistant to run:
- all enabled tests,
- only specific tests by ID,
- only certain tags (for example `smoke`, `critical`),
- only specific test types,
- and optionally stop early on first failure.
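The selection options above can be pictured as a single filtered run request. This is a sketch only: every field name here is an assumption, chosen to mirror the filters the page lists, not the server’s real request schema.

```python
# Hypothetical run-tests request combining the filters described above.
# Field names are illustrative assumptions.
run_request = {
    "test_ids": ["kpi-sales", "kpi-margin", "ri-sales-customer"],  # or omit to run all enabled tests
    "tags": ["smoke", "critical"],          # run only tests carrying these tags
    "types": ["measure-assertion",
              "performance-budget"],        # restrict by test type
    "stop_on_first_failure": True,          # bail out early when debugging
}
```

In practice you would set only the filters you need; the prompts below map one-to-one onto these options.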
Prompts:
“Run only tests tagged `smoke` and summarize failures first.”
“Run only measure assertions and performance budgets.”
“Run these test IDs only: kpi-sales, kpi-margin, ri-sales-customer.”
“Stop on first failure for this run so we can debug quickly.”