language guide

Usage

A guided tour of the plum language, built up one feature at a time.

Install plum

plum runs .plum files locally. Three things need to be in place: the plum CLI, ollama, and at least one model.

Quickest path

Run the installer:

```shell
# one-line install
curl -fsSL https://plumlang.dev/install.sh | sh
```

The installer sets up everything you need: plum, uv, and ollama (with at least one model).

Hello plum

A .plum file is a Python file that may also contain plum's AI operators. Anything that's valid Python is valid plum.

hello.plum
```plum
# hello.plum
print("hello from plum")
```

Run it:

```shell
plum hello.plum
```

That just runs as Python. plum becomes interesting when you start calling models.

Calling a model: ?[prompt | model]

The simplest AI call is the prefix form. The ? sigil and the [ ... ] bracket mark an AI expression. Inside, the prompt comes first, then a |, then the model name.

summarize.plum
```plum
# summarize.plum
text = "The cat sat on the mat. It was a quiet afternoon."
summary = ?["Summarize in five words: " + text | llama3.2]
print(summary)
```

The prompt can be any Python expression that evaluates to a string. The result is whatever the model returns — a str.
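Anything that builds a string works as a prompt. A small sketch, reusing the llama3.2 model from the example above:

```plum
# the prompt is an ordinary expression, evaluated before the model call
complaints = ["cold food", "slow service", "broken chair"]
prompt = "Summarize these complaints in one sentence: " + "; ".join(complaints)
summary = ?[prompt | llama3.2]
print(summary)  # summary is a str
```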

A default model chain: use

Naming the model on every line gets repetitive. Declare a default with use at the top of the file:

review.plum
```plum
# review.plum
use llama3.2 | mistral | gemma3

review = "The food was cold and the service was slow."
sentiment = ?["What is the sentiment of this review? Answer positive or negative: " + review]
print(sentiment)
```

use sets the default model so you can drop the | model on every expression. Listing more than one model defines a fallback chain: plum tries llama3.2 first, falls back to mistral, then gemma3 if a model is unreachable. You can still override the chain on a single expression by writing ?["..." | gemma3] inline.
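A sketch showing the default chain and a one-off override side by side, using the models declared above:

```plum
use llama3.2 | mistral | gemma3

# uses the default chain: llama3.2, then mistral, then gemma3
mood = ?["One word for the mood of: a rainy Sunday"]

# overrides the chain for this call only: gemma3 is asked directly
haiku = ?["Write a haiku about rain" | gemma3]
print(mood)
print(haiku)
```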

Postfix shorthand: expr?

When the prompt is the expression itself, put ? right after it. It reads naturally: "f-string, AI it."

spam.plum
```plum
use llama3.2

email = "Click here to win a free iPhone now!!!"
verdict = f"Is this email spam? Answer yes or no: {email}"?
print(verdict)
```

The postfix form requires a use declaration — there's no inline model selector for it.

Typed results: -> Type

By default an AI expression returns a str. That's fine for printing, but dangerous in a conditional — any non-empty string is truthy, including "false". Use -> Type to coerce the output to bool, int, float, or str.

typed.plum
```plum
use llama3.2

email = "Click here to win a free iPhone now!!!"

if f"Is this spam? Answer true or false: {email}"? -> bool:
    quarantine(email)

count = ?["How many vowels in 'engineering'? Answer with just the number."] -> int
print(count + 1)
```

-> bool accepts only true/false (case-insensitive, whitespace stripped). -> int and -> float parse a clean number. Anything else raises PlumExecutionError with the raw model output so you can tighten the prompt.
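The coercion rules described above can be sketched in plain Python (which is also valid plum). The `coerce` helper here is hypothetical, for illustration only, not plum's actual implementation:

```python
# a sketch of plum's -> Type coercion rules (hypothetical helper)
class PlumExecutionError(Exception):
    """Raised when model output can't be coerced; carries the raw output."""

def coerce(raw: str, target: type):
    text = raw.strip()
    if target is str:
        return raw  # -> str passes the output through unchanged
    if target is bool:
        low = text.lower()
        if low == "true":
            return True
        if low == "false":
            return False
        raise PlumExecutionError(f"expected true/false, got: {raw!r}")
    try:
        return target(text)  # int("42") or float("3.5")
    except ValueError:
        raise PlumExecutionError(f"expected {target.__name__}, got: {raw!r}")

print(coerce(" True\n", bool))  # True
print(coerce("42", int) + 1)    # 43
```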

Example: spam classifier

A small program that exercises the use chain, postfix form, and type coercion together.

spam.plum
```plum
# spam.plum
use llama3.2 | mistral

emails = [
    "Click here to win a free iPhone!!!",
    "Hey, are we still on for lunch tomorrow?",
    "URGENT: your account has been compromised, click here",
]

for email in emails:
    is_spam = f"Is this email spam? Answer only 'true' or 'false':\n\n{email}"? -> bool
    label = "SPAM" if is_spam else "ok"
    print(f"[{label}] {email[:50]}...")
```

Importing other .plum files

A .plum file can import another .plum file just like a Python module:

helpers.plum
```plum
# helpers.plum
use llama3.2

def categorize(text: str) -> str:
    return f"Categorize in one word: {text}"?
```

main.plum
```plum
# main.plum
from helpers import categorize

print(categorize("a guide to making sourdough bread"))
```

The use chain is local to each file — main.plum doesn't inherit helpers.plum's chain. .plum files can import from regular .py files too; the other direction is not supported.
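A sketch of the supported direction, where textutils.py is a hypothetical plain-Python module:

textutils.py
```python
# textutils.py -- plain Python, no AI operators
def clean(text: str) -> str:
    return text.strip().lower()
```

main.plum
```plum
# main.plum
use llama3.2
from textutils import clean

category = ?["Categorize in one word: " + clean("  Sourdough Bread Guide  ")]
print(category)
```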