The basics
Create a prompt in the dashboard (Dashboard → Prompts → Create), give it a
name like customer-support-system, and write the prompt body in markdown,
using variables where you need dynamic values:
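A minimal sketch of the fetch-and-render flow. The `render` helper below is a stand-in for the SDK's `render()`, and the prompt body is inlined rather than fetched from the API, so all names here are illustrative:

```python
import re

# Stand-in for the body you'd normally fetch by name (e.g. "customer-support-system").
PROMPT_BODY = "Hello {{ customer_name }}, how can we help with {{ product }}?"

def render(body, **values):
    # Substitute each {{ variable_name }} placeholder with the supplied value.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(values[m.group(1)]), body)

rendered = render(PROMPT_BODY, customer_name="Ada", product="widgets")
```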
rendered is now a string ready to pass to your LLM.
How versioning works
Each prompt has multiple versions, and exactly one version is live at a time. Fetching via the API always returns the live version. Workflow:
- Create a new version (edits are automatically versioned)
- Preview it in the dashboard
- Mark it as live when ready
- Next API call picks up the change immediately — no redeploy
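The workflow above can be modeled in a few lines. This is an illustrative in-memory model of the versioning behavior, not the real API:

```python
class Prompt:
    """Illustrative model: many versions, exactly one live at a time."""

    def __init__(self, name):
        self.name = name
        self.versions = []       # version bodies, in creation order
        self.live_index = None   # index of the single live version

    def create_version(self, body):
        # Edits create a new version rather than overwriting the old one.
        self.versions.append(body)
        return len(self.versions) - 1

    def mark_live(self, index):
        # Switching is instant: the very next fetch returns this version.
        self.live_index = index

    def fetch(self):
        # Fetching via the API always returns the live version.
        return self.versions[self.live_index]

p = Prompt("customer-support-system")
v0 = p.create_version("You are a support agent.")
p.mark_live(v0)
v1 = p.create_version("You are a friendly support agent.")
p.mark_live(v1)   # rolling back would just be p.mark_live(v0)
```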
Environment scoping
Prompts are environment-scoped, same as files. A prompt named
customer-support-system in your staging environment is a completely
different resource from a prompt with the same name in production. This
lets you experiment in staging without risk.
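One way to picture the scoping: prompts live in a per-environment namespace, so the same name in two environments addresses two independent resources. A toy model (the store and `fetch` helper are illustrative):

```python
# Each environment has its own namespace, keyed by (environment, name).
prompts = {
    ("staging", "customer-support-system"): "Experimental v3 body",
    ("production", "customer-support-system"): "Stable v2 body",
}

def fetch(environment, name):
    # A client scoped to one environment only sees that environment's copy.
    return prompts[(environment, name)]
```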
Variables
Variables use {{ variable_name }} syntax (double curly braces). The SDK
detects them automatically when you fetch the prompt and expects you to
supply values in render().
If you pass a variable that isn’t in the prompt, it’s ignored. If you don’t
pass a variable the prompt expects, render() raises an error.
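These two rules (extras ignored, missing values rejected) can be sketched as follows; the regex-based detection and `ValueError` are assumptions standing in for whatever the SDK does internally:

```python
import re

TEMPLATE = "Order {{ order_id }}: current status is {{ status }}."

def expected_vars(body):
    # The SDK auto-detects {{ ... }} placeholders; this regex mimics that.
    return set(re.findall(r"\{\{\s*(\w+)\s*\}\}", body))

def render(body, **values):
    missing = expected_vars(body) - values.keys()
    if missing:
        # A variable the prompt expects but didn't receive is an error...
        raise ValueError(f"missing variables: {sorted(missing)}")
    # ...while extra values not in the prompt are silently ignored.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(values[m.group(1)]), body)

ok = render(TEMPLATE, order_id=42, status="shipped", extra="ignored")
try:
    render(TEMPLATE, order_id=42)   # no status supplied
    error = None
except ValueError as e:
    error = str(e)
```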
Use cases
- Prompt tuning without deploys — let prompt engineering happen in the dashboard, not in code
- A/B testing — run one version live, measure quality, switch versions
- Non-technical collaborators — let product/ops teams edit prompts
- Rollbacks — if a new prompt version tanks quality, revert in one click
API reference
GET /prompts/{name} — fetch the live version.