Memic’s prompt management lets you store prompts in the dashboard, version them, and fetch them at runtime via the API. This means you can iterate on prompts without shipping new code — useful for prompt engineering, A/B testing, or letting non-developers tune LLM behavior.

The basics

Create a prompt in the dashboard (Dashboard → Prompts → Create), give it a name like customer-support-system, and write the prompt body in markdown with variables:
You are a helpful customer support assistant for {{ company_name }}.

Answer the user's question using only the context provided below.
If you can't answer from the context, say so.

Context:
{{ context }}

Mark a version as live, then fetch it at runtime:
prompt = client.prompts.get("customer-support-system")

rendered = prompt.render(
    company_name="Acme Corp",
    context=search_results_as_string,
)

rendered is now a string ready to pass to your LLM.
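Conceptually, render() is template substitution: each {{ variable }} placeholder is replaced with the value you supply. A rough stand-in for the example above (illustration only; the real SDK fetches the template and validates variables for you):

```python
# The prompt body from the example above, as a plain string.
template = (
    "You are a helpful customer support assistant for {{ company_name }}.\n"
    "\n"
    "Answer the user's question using only the context provided below.\n"
    "If you can't answer from the context, say so.\n"
    "\n"
    "Context:\n"
    "{{ context }}"
)

def render(template: str, **values: str) -> str:
    # Substitute each {{ name }} placeholder with the supplied value.
    out = template
    for name, value in values.items():
        out = out.replace("{{ " + name + " }}", value)
    return out

rendered = render(
    template,
    company_name="Acme Corp",
    context="Order #123 shipped on Tuesday.",
)
print(rendered.splitlines()[0])
# -> You are a helpful customer support assistant for Acme Corp.
```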

How versioning works

Each prompt has multiple versions, and exactly one version is live at a time. Fetching via the API always returns the live version. Workflow:
  1. Create a new version (edits are automatically versioned)
  2. Preview it in the dashboard
  3. Mark it as live when ready
  4. Next API call picks up the change immediately — no redeploy
Old versions are kept indefinitely for rollback.
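The versioning rules above — many versions, exactly one live, reads always return the live one, old versions kept for rollback — can be sketched as a toy model (this is an illustration of the behavior, not Memic's implementation):

```python
class Prompt:
    """Toy model: a named prompt with many versions, exactly one live."""

    def __init__(self, name: str):
        self.name = name
        self.versions: list[str] = []   # all versions, oldest first; never deleted
        self.live_index: int | None = None

    def create_version(self, body: str) -> int:
        # New edits append a version; old ones are kept for rollback.
        self.versions.append(body)
        return len(self.versions) - 1

    def mark_live(self, index: int) -> None:
        # Takes effect immediately; the next read returns this version.
        self.live_index = index

    def get_live(self) -> str:
        # What an API fetch returns: always the live version.
        if self.live_index is None:
            raise LookupError("no live version")
        return self.versions[self.live_index]

p = Prompt("customer-support-system")
v1 = p.create_version("v1 body")
p.mark_live(v1)
v2 = p.create_version("v2 body")
p.mark_live(v2)        # switch to the new version -- no redeploy
p.mark_live(v1)        # rollback is just pointing back at an old version
print(p.get_live())
# -> v1 body
```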

Environment scoping

Prompts are environment-scoped, same as files. A prompt named customer-support-system in your staging environment is a completely different resource from a prompt with the same name in production. This lets you experiment in staging without risk.
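Environment scoping means a prompt is effectively keyed by (environment, name), not by name alone. A minimal sketch of that lookup rule (the store and bodies here are made up for illustration):

```python
# Same name, two environments -> two independent resources.
store: dict[tuple[str, str], str] = {
    ("staging", "customer-support-system"): "experimental body",
    ("production", "customer-support-system"): "stable body",
}

def get_prompt(environment: str, name: str) -> str:
    # Lookups never cross environments, so staging edits can't leak into prod.
    return store[(environment, name)]

assert get_prompt("staging", "customer-support-system") != get_prompt(
    "production", "customer-support-system"
)
```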

Variables

Variables use {{ variable_name }} syntax (double curly braces). The SDK detects them automatically when you fetch the prompt and expects you to supply values in render(). If you pass a variable that isn’t in the prompt, it’s ignored. If you don’t pass a variable the prompt expects, render() raises an error.
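The three rules — automatic detection, extras ignored, missing values raise — can be sketched with a regex (again, an illustration of the documented behavior, not the SDK's code):

```python
import re

# Detect {{ name }} placeholders, allowing whitespace inside the braces.
VAR = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render(template: str, **values: str) -> str:
    required = set(VAR.findall(template))
    missing = required - values.keys()
    if missing:
        # A variable the prompt expects but the caller didn't supply.
        raise ValueError(f"missing variables: {sorted(missing)}")
    # Keys in `values` that the template never references are simply ignored.
    return VAR.sub(lambda m: values[m.group(1)], template)

print(render("Hello {{ name }}", name="Ada", unused="ignored"))
# -> Hello Ada
# render("Hello {{ name }}") would raise ValueError
```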

Use cases

  • Prompt tuning without deploys — let prompt engineering happen in the dashboard, not in code
  • A/B testing — run one version live, measure quality, switch versions
  • Non-technical collaborators — let product/ops teams edit prompts
  • Rollbacks — if a new prompt version tanks quality, revert in one click

API reference

GET /prompts/{name} — fetch the live version of the named prompt in the current environment.
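If you're not using the SDK, the endpoint can be called directly. A hypothetical raw-HTTP sketch — the base URL and bearer-token auth header are assumptions, not documented above:

```python
import urllib.request

# Assumed base URL; substitute your actual API host.
BASE_URL = "https://api.memic.example/v1"

def build_request(name: str, api_key: str) -> urllib.request.Request:
    # GET /prompts/{name} returns the live version of the prompt.
    return urllib.request.Request(
        f"{BASE_URL}/prompts/{name}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("customer-support-system", "YOUR_API_KEY")
# response = urllib.request.urlopen(req)  # would fetch the live version
```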