Limitations / Gotchas
Current behavioral constraints in the MVP implementation.
- Provider support: only `llm.provider = "openai"` is supported by the CLI today. The config loader has a `provider` field, but the CLI will reject anything else with a config error.
- Generated dir name: runtime forwarding for `@jaunt.magic` currently imports using `generated_dir="__generated__"` (hardcoded). If you set `paths.generated_dir` to something else, `jaunt build` may still write files there, but calling your `@jaunt.magic` functions will fail because the runtime decorator imports from `__generated__/`. Workaround: keep `__generated__`, or import the generated module directly (not recommended for "spec is the API" workflows).
- Prompt overrides: `prompts.*` are treated as file paths by the OpenAI backend. If you set them, those files must exist and contain the prompt text. If you want to tweak prompts, version the prompt files alongside your repo.
- Dependency context plumbing: the dependency DAG, ordering, and staleness propagation work, but the backend does not currently receive rich "dependency implementation source" context for dependents. You may need to be more explicit in docstrings (or add `prompt=`) when a spec depends on non-trivial behavior from another generated spec.
- Auto-generated PyPI skills (if enabled): this can add extra network calls (PyPI) and extra OpenAI calls during `jaunt build`. Failures warn and continue, so the build output can differ based on environment, connectivity, and what's installed in the active venv.
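Putting the provider and prompt-override gotchas together, here is a rough config sketch. Only `llm.provider`, `paths.generated_dir`, and `prompts.*` are named in the docs above; the filename `jaunt.toml`, the TOML format, and the specific `prompts.system` key are assumptions for illustration.

```toml
# Hypothetical jaunt.toml -- section/key names beyond those quoted above are guesses.
[llm]
provider = "openai"               # the only value the CLI accepts today

[paths]
generated_dir = "__generated__"   # keep the default; the runtime hardcodes it

[prompts]
# prompts.* values are treated as file paths by the OpenAI backend;
# these files must exist and contain the prompt text.
system = "prompts/system.txt"
```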
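To see why changing `paths.generated_dir` breaks calls, it helps to picture the forwarding the runtime decorator does. The sketch below is not jaunt's actual code -- the one-module-per-function layout and the `magic` implementation are assumptions -- but it shows the failure mode: the import path is fixed, so generated code written anywhere other than `__generated__/` raises `ModuleNotFoundError` at call time.

```python
import functools
import importlib

GENERATED_DIR = "__generated__"  # hardcoded in the MVP runtime (per the docs)

def magic(fn):
    """Hypothetical stand-in for @jaunt.magic: forward calls to generated code."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Fails with ModuleNotFoundError if jaunt build wrote the files
        # to a custom paths.generated_dir instead of __generated__/.
        mod = importlib.import_module(f"{GENERATED_DIR}.{fn.__name__}")
        return getattr(mod, fn.__name__)(*args, **kwargs)
    return wrapper
```

The spec function's own body never runs; calls are dispatched entirely to the generated module, which is why the hardcoded directory name matters.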
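For the dependency-context limitation, the practical fix is to restate, in the dependent spec itself, whatever behavior the backend cannot see. The shape below is illustrative: the stub `magic` decorator and the `normalize_scores` name are invented so the example is self-contained, and the `prompt=` keyword follows the docs above.

```python
def magic(fn=None, *, prompt=None):
    """Self-contained stub standing in for jaunt.magic (illustration only)."""
    def deco(f):
        return f
    return deco(fn) if fn is not None else deco

@magic(prompt="normalize_scores returns floats in [0, 1], sorted descending")
def rank_top(scores, k):
    """Return the k highest normalized scores.

    Assumes normalize_scores (another generated spec) yields floats in
    [0, 1] sorted descending -- spelled out here because the backend does
    not receive that spec's implementation source as context.
    """
```

The docstring repeats the contract of the dependency rather than referring to it, so the generator has the behavior in-prompt even without dependency source plumbing.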
Next: Writing Specs.