agent_runtime — the turn-by-turn planner

Imagine writing to Metnos: "read the file ~/notes/diary.md and tell me the last three lines". From that moment a turn begins. The planner, the module described here, receives the sentence, decides what to do, sets up the steps, and produces the answer.
To understand what "deciding what to do" means, let us follow a concrete example. The request is the one above. The planner has no hardcoded logic that says "if the user says 'read a file' call fs_read"; the idea is different:
The key point: useful behaviour emerges from composition, not from hardcoded rules. If tomorrow the user asks "fetch a page, save it to a file, and tell me how many bytes you wrote", the planner will compose web_fetch + fs_write + final_answer with no need for any special case. The same pipeline, a different request, a different sequence of steps.
Now let us see the same path more precisely. A turn always has this structure:
1. The catalog is loaded: the loader scans the executor directory (executors/); for each executor it verifies that the manifest is signed by the author's key and that the code digest matches. Those failing verification are discarded with a reason. The catalog lives in memory.
2. A turn_id is generated to identify this turn in the logs.
3. The pre-filter ranks the catalog by relevance to the user query and selects a subset, the top-K. Without a pre-filter we would have to pass the LLM all executors (potentially dozens or hundreds), and the quality of the choice would degrade together with the latency. The pre-filter solves the problem in a trivial yet effective way (see ch. 5).
At this point the turn enters a loop. For each step:
1. The LLM receives the history of the turn so far and proposes either a tool call or the final answer.
2. If the proposed args contain references of the form {{stepN.field}}, the runtime substitutes them with the real value (see ch. 7).
3. The executor runs and its observation ({ok, content?, metadata?, error?}) returns to the planner.

The turn ends in one of the following ways:
- final_kind = "answer" — the LLM produced the final answer.
- final_kind = "cap_steps" — the step limit was exceeded (default 5).
- final_kind = "cap_same_executor" — the same executor was called too many times (default cap 2).
- final_kind = "error" — an external component is unreachable (e.g. Ollama down), the catalog is empty, etc.
In every case, the planner writes a complete JSONL record of the turn (see ch. 11) and returns (log, final_message).
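To make the shape concrete, here is a minimal sketch of the loop in Python. Everything in it is illustrative: run_turn, propose and execute are not the real agent_runtime API; propose stands in for the LLM round-trip (closing over the user query and the pre-filtered candidates), and the validation, vaglio and sandbox live inside execute.

```python
# Illustrative sketch of the turn loop; not the real agent_runtime API.
def run_turn(propose, execute, cap_steps=5, cap_same=2):
    history, calls = [], {}
    for step in range(1, cap_steps + 1):
        proposal = propose(history)                  # one LLM round-trip per step
        if "final_answer" in proposal:               # no tool call: the turn ends
            return "answer", proposal["final_answer"]
        tool, args = proposal["tool"], proposal["args"]
        calls[tool] = calls.get(tool, 0) + 1
        if calls[tool] > cap_same:                   # anti-loop cap
            return "cap_same_executor", f"'{tool}' called too many times"
        obs = execute(tool, args)                    # observation: {ok, content?, ...}
        history.append({"step": step, "tool": tool, "args": args, "obs": obs})
    return "cap_steps", "step limit exceeded"
```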
A worked example. Query: "fetch https://httpbin.org/uuid and tell me only the UUID".
Step 1 — the LLM proposes web_fetch(url=https://httpbin.org/uuid). Execution: returns JSON {"uuid":"7c089d54-..."}.
Step 2 — the LLM reads the history, understands it has what it needs, produces final_answer: "7c089d54-...".
Total latency: 3.3 seconds (Qwen3:8b local). No pre-coding for "extract uuid": the LLM reasons over the JSON and extracts the field on its own.
The planner works in three modes, chosen by configuration. The main difference among the three is how many round-trips to the LLM are needed to complete a turn and where the LLM runs.
| Mode | Loop | Typical LLM | When it makes sense |
|---|---|---|---|
| local | multistep ReAct (one round-trip per step) | local (Ollama) | Default. Maximum privacy, zero cost, decent latency. |
| online | single-shot (one round-trip for the whole plan) | frontier (Anthropic, OpenAI) | Complex tasks worth the money spent. A frontier model succeeds in one call where a local one would have needed five. |
| hybrid | local by default; escalation to online for critical tasks | mixed | Balance of cost and quality. For domestic Metnos. |
The principle: the shape of the loop follows from the per-call cost. A free local LLM can afford to iterate; an expensive frontier LLM compresses everything into one call. Same planner, behaviour adapted to cost.
In v1.1 we tested the local mode; online is wired as scaffolding (Anthropic provider stub) but not exercised; hybrid has the ModeRouter in place, which in the POC always returns "local". Extending it will require configuring an online provider and enabling the routing rules in [runtime.hybrid].
Independently of the mode (which says where the LLM runs), the runtime exposes three tiers: three pointers to LLMs with different characteristics. Each runtime component picks the tier it needs.
| Tier | Characteristic | Candidate |
|---|---|---|
| fast & furious | Small, fast, always LOCAL, always available. | qwen3:8b with think=false |
| middle & trustable | Intermediate reasoning capacity. | gemma3:12b local, or claude-haiku-4-5 online. |
| slow & wise | Maximum reasoning capacity, accepts latency/cost. | claude-sonnet-4-6 online, or qwen3:32b with think=true on a powerful local. |
The standard planner uses fast. The constitutional vaglio (when real) will use middle. The synthesiser of new executors (synt) will use wise. When a section is missing from the configuration, the pointer aliases to the lower tier: middle → fast, wise → middle → fast. Never a crash, never an exception: the presence of fast alone guarantees operation.
Tiers need not mean distinct models: the same model can serve a higher tier with think=true and a prompt that asks for deep reflection. Same model, different roles, different outputs. The user obtains three points of view even with a single brain.
Minimal config schema:
```toml
[runtime.llm.fast]
provider = "ollama"
model = "qwen3:8b"
think = false

[runtime.llm.middle]
provider = "ollama"
model = "gemma3:12b"

[runtime.llm.wise]
provider = "anthropic"
model = "claude-sonnet-4-6"
```
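For illustration, a minimal sketch of the aliasing rule described above, assuming the config has been parsed into a flat dict; resolve_tier and the dict shape are assumptions, not the runtime's real API.

```python
# Sketch of tier aliasing: a missing tier falls back to the one below it,
# so the presence of 'fast' alone guarantees operation.
FALLBACK = {"wise": "middle", "middle": "fast", "fast": None}

def resolve_tier(config: dict, tier: str) -> dict:
    while tier is not None:
        section = config.get(f"runtime.llm.{tier}")
        if section:
            return section
        tier = FALLBACK[tier]                 # e.g. wise -> middle -> fast
    raise RuntimeError("no LLM tier configured, not even 'fast'")

cfg = {"runtime.llm.fast": {"provider": "ollama", "model": "qwen3:8b"}}
assert resolve_tier(cfg, "wise")["model"] == "qwen3:8b"
```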
In v1.1 the structure is implemented but not yet fully differentiated: in the POC the three pointers are identical (qwen3:8b) and there is a single prompt. Per-tier prompt differentiation is deferred until we have a second backend or a use case that requires it.
With a catalog of 30+ executors, passing them all as "tools" to the LLM blows up the prompt and dilutes the attention. The pre-filter ranks by relevance and passes to the LLM only the most promising ones.
In v1.1 the ranking is bag-of-words: it tokenises the user query, tokenises the affinity declared in each manifest, sums the matches (matches on the affinity count double those on the description). Extracting the tokens means: lowercase, alphanumeric words, no accents.
Query: "fetch https://httpbin.org/get". Tokens: {fetch, httpbin, org, get, https}.
Match against affinity of web_fetch: web, http, url, fetch, scarica, leggi, pagina, api, rest. Match: fetch. Score: 2.
Match against affinity of fs_read: fs, read, leggi, lettura, file, ... No match. Score: 0.
web_fetch wins with high confidence.
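A minimal sketch of the ranking as described: tokenise (lowercase, strip accents, alphanumeric words) and count matches, with affinity hits weighted double. Function names here are illustrative, not the POC's code.

```python
import re, unicodedata

def tokens(text: str) -> set:
    # lowercase, strip accents, keep alphanumeric words
    text = unicodedata.normalize("NFKD", text.lower())
    text = "".join(c for c in text if not unicodedata.combining(c))
    return set(re.findall(r"[a-z0-9]+", text))

def score(query: str, affinity: str, description: str) -> int:
    q = tokens(query)
    # matches on the affinity count double those on the description
    return 2 * len(q & tokens(affinity)) + len(q & tokens(description))

# one affinity hit ('fetch') -> score 2, as in the example above
assert score("fetch https://httpbin.org/get",
             "web http url fetch scarica leggi pagina api rest", "") == 2
```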
K (number of executors to pass to the LLM) is not fixed: it depends on the confidence with which the pre-filter distinguishes the top-1 from the others.
- When the top-1 clearly dominates the rest, K stays small (k_min=5). No point in passing zero-score candidates.
- When the scores are close, K grows (k_max=40). Let the LLM decide.

Measured in the POC: the pre-filter stays sub-millisecond up to 300+ executors; Qwen3:8b chooses correctly up to K=200; latency grows linearly with K above 40. Practical sweet spot: K=20-40 (~3 seconds of LLM latency).
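A sketch of the adaptive choice of K under those two rules. The concrete dominance test (top-1 at least twice the runner-up) is an assumption for illustration; the POC may use a different confidence measure.

```python
def choose_k(scores, k_min=5, k_max=40):
    """scores: pre-filter scores for the whole catalog."""
    positive = sorted((s for s in scores if s > 0), reverse=True)
    if not positive:
        return k_min                              # no signal: small default set
    if len(positive) == 1 or positive[0] >= 2 * positive[1]:
        return min(k_min, len(positive))          # confident: small K, no zero-score filler
    return min(k_max, len(positive))              # unsure: large K, let the LLM decide
```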
The variant with semantic embeddings (a local MiniLM model, ~100 MB) is deferred: bag-of-words proved sufficient for the real cases of the POC.
Modern LLMs (Anthropic, OpenAI from the start; Ollama for Qwen 2.5/3, Llama 3.1+, Mistral, Gemma) support a native tool-calling protocol. The runtime declares the available tools in the API, each with its JSON Schema taken from the manifest. When the LLM decides to call a tool, it returns a structured field:
```json
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "function": {
        "name": "fs_read",
        "arguments": {"path": "/tmp/note.txt", "tail_bytes": 100}
      }
    }
  ]
}
```
This arrives as a Python dict, already parsed at the HTTP layer. No regex on text, no markdown blocks to extract, no edge cases of "the LLM forgot to close the brackets". When the LLM is ready for the final answer, it simply does not call any tool and produces only text: the planner recognises this as the final_answer.
Each manifest already declares the argument schema ([args]), and that JSON Schema is passed directly to the provider as the tool's parameters. A single source of truth on the shape of the arguments, zero translation. See executor.html for the manifest details.
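A sketch of that zero-translation path, in the tool format used by OpenAI-style and Ollama chat APIs; the manifest dict shape here is an assumption for illustration.

```python
# Sketch: the manifest's [args] JSON Schema flows verbatim into the
# provider's tool declaration; the manifest shape is illustrative.
def manifest_to_tool(manifest: dict) -> dict:
    return {
        "type": "function",
        "function": {
            "name": manifest["name"],
            "description": manifest["description"],
            "parameters": manifest["args"],   # JSON Schema, passed untranslated
        },
    }

tool = manifest_to_tool({
    "name": "fs_read",
    "description": "Read a file from disk",
    "args": {"type": "object",
             "properties": {"path": {"type": "string"},
                            "tail_bytes": {"type": "integer"}},
             "required": ["path"]},
})
assert tool["function"]["name"] == "fs_read"
```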
{{stepN.field}}
In multistep, the LLM at step N+1 needs to refer to the output of step N. Classic example: "fetch X and save it in Y" — at step 2, fs_write must receive as content the body returned by web_fetch at step 1.
The syntax is {{stepN.field}}:
- N is the step number (1-indexed: step1 is the first).
- field is the key inside the observation of that step. It can be nested: {{step1.metadata.path}}, {{step2.content}}.

The runtime intercepts the args before invoking the executor, discovers the placeholders, retrieves the real value from the turn history, and substitutes it.
```
// Step 1
{ "tool": "web_fetch", "args": { "url": "https://httpbin.org/get" } }
// observation: {ok: true, content: "", metadata: {...}}

// Step 2 (proposed by the LLM)
{ "tool": "fs_write", "args": {
    "path": "/tmp/out.txt",
    "content": "{{step1.content}}"
} }

// The runtime substitutes and invokes:
{ "tool": "fs_write", "args": {
    "path": "/tmp/out.txt",
    "content": ""
} }
```
A reference can only point to a field that actually exists in the recorded observation: the LLM cannot invent keys such as {{step1.bytes_written}}. The planner's prompt explicitly instructs the LLM about this limit. A violation of this rule was one of the bugs caught by the POC.
The reference must be the sole value of the arg, not interpolated within longer strings (limit of v1.1; a future extension may allow "prefix {{step1.content}} suffix").
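A minimal sketch of the substitution step, assuming the observation of step n sits at history[n-1]; REF and resolve_refs are illustrative names. The fullmatch enforces the sole-value rule above.

```python
import re

# Sketch of {{stepN.field}} resolution; names are illustrative.
REF = re.compile(r"\{\{step(\d+)\.([A-Za-z0-9_.]+)\}\}")

def resolve_refs(args: dict, history: list) -> dict:
    """history[n-1] is the observation dict of step n."""
    out = {}
    for key, value in args.items():
        m = REF.fullmatch(value) if isinstance(value, str) else None
        if m:
            obs = history[int(m.group(1)) - 1]
            for part in m.group(2).split("."):   # walk nested fields
                obs = obs[part]                  # e.g. metadata.path
            out[key] = obs
        else:
            out[key] = value                     # not a sole-value reference
    return out

hist = [{"ok": True, "content": "hello", "metadata": {"path": "/tmp/x"}}]
assert resolve_refs({"content": "{{step1.content}}"}, hist) == {"content": "hello"}
```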
Why this syntax? It is the result of an architectural choice. When the POC started without any data-piping mechanism, the LLM tried to invent a shell-style syntax ($(web_fetch(...)['content'])) that the runtime could not interpret: the file ended up containing that literal string instead of the data. It was an open architectural tension: a mechanism was needed, chosen from five possible alternatives (verbatim copy, named variables, pipe primitives, defer multistep). The {{stepN.field}} syntax is the simplest one that works: 30 lines of template substitution in the runtime, a brief instruction in the prompt, and Qwen3:8b follows it zero-shot.
When an executor returns a lot of content (a 100 KB file, a long HTML page, a verbose API body), passing it whole into the LLM history blows up the context. Cutting at 1500 characters loses useful information.
The v1.1 solution: scratchpad. When an observation exceeds the threshold (4 KB of serialised JSON), the runtime saves it to a local SQLite (~/.local/share/metnos/scratchpad.db) and puts in the history a synthetic observation:
```
{
  "ok": true,
  "scratchpad_id": "eae04122bd704636",
  "size_bytes": 14144,
  "kind": "text",
  "summary": "hello this is a test note\n\n[... 13900 characters omitted ...]\n\nINFO 2026-04-26 23:59:59 LAST_CRITICAL_EVENT\n",
  "metadata": {"path": "/tmp/big_log.txt", "bytes": 14144, ...}
}
```
The summary is a smart truncation: the first 500 characters + a placeholder with the number of characters omitted + the last 500. This way the LLM sees the start and end of the content.
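A minimal sketch of that truncation, assuming a 500-character window on each side; the exact marker format in the real runtime may differ.

```python
def smart_summary(text: str, keep: int = 500) -> str:
    # first `keep` chars + omission marker + last `keep` chars, so the
    # LLM sees both the start and the end of a large observation
    if len(text) <= 2 * keep:
        return text
    omitted = len(text) - 2 * keep
    return f"{text[:keep]}\n\n[... {omitted} characters omitted ...]\n\n{text[-keep:]}"
```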
At the next step, seeing the summary, the LLM decides whether that is enough or whether to call scratchpad_read with a mode:

- full: the entire content (not recommended if very large).
- head: the first N characters (default 2000).
- tail: the last N characters.
- range: an interval [start, end).
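The four modes map to plain slicing. A sketch, with the SQLite lookup by scratchpad_id omitted; the signature is illustrative.

```python
def scratchpad_read(content: str, mode: str = "head",
                    n: int = 2000, start: int = 0, end: int = 0) -> str:
    # Sketch of the four read modes; the real builtin first fetches
    # `content` from SQLite by scratchpad_id (omitted here).
    if mode == "full":
        return content
    if mode == "head":
        return content[:n]
    if mode == "tail":
        return content[-n:]
    if mode == "range":
        return content[start:end]      # interval [start, end)
    raise ValueError(f"unknown mode: {mode}")
```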
scratchpad_read is a builtin: it lives in the runtime, has no manifest on disk, is added to the tool catalog dynamically when active scratchpad entries exist in the current turn.
Full details in the dedicated doc: scratchpad.html.
The vaglio is the constitutional evaluator: before a tool_call becomes action, it decides whether it is lawful. In multistep it runs between one step and the next; in single-shot it runs post-hoc on the entire plan. Since 27/4 the vaglio is real (no longer a stub) and works in two distinct phases, the guardia (guard) and the giudice (judge), as per ch. 11 of the Architecture.
The guardia blocks violations of the 4 Laws. In v1.1 the encoded rules are:

- Protected paths: ~/.ssh, /etc/passwd|shadow|sudoers, /root, /boot, /sys, /proc, /dev/sd*|nvme*, ~/.aws/credentials, ~/.config/*/credentials.env, ~/.gnupg. If even just MENTIONED in a tool argument, the action is denied.
- Dangerous commands: rm -rf /, rm -rf ~, mkfs, dd of=/dev/..., fork bombs, recursive chmod 7XX on the root. Matched only for executors with capability code:exec.
The list does not relax with the autonomy level: it is the "non-negotiable core" of ch. 5. If the guardia blocks, no score is computed: the verdict is blocked_by="guard".
If the guardia lets it through, the giudice measures the alignment of the action to the user's telos in [0, 1]. Below the threshold METNOS_JUDGE_THRESHOLD (default 0.30, configurable via env) the action is denied with blocked_by="judge". Above it, the action is approved.
In v1.1 the giudice is rule-based: local heuristics, microseconds, zero cost. Base score 0.7, a bonus if the intent mentions the executor name (a signal of explicit intent), a penalty for .. in a path (possible path traversal), a penalty for arg keys containing non-alphanumeric characters (an anomaly). The LLM giudice (middle tier, with a context separate from the proposer's to avoid self-confirmation) is deferred to v1.2: it requires a configured middle tier plus an explicit budget. The deontology/teleology split is already in the right place; the giudice's implementation can evolve without touching the guardia.
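A sketch of the rule-based scoring as just described. The 0.7 base and the threshold env var are from the text; the bonus and penalty magnitudes, and the tolerance of underscores in keys, are assumptions for illustration.

```python
import os, re

def judge_score(intent: str, executor: str, args: dict) -> float:
    score = 0.7                                      # base score
    if executor in intent.lower():
        score += 0.2                                 # explicit-intent bonus (assumed weight)
    for key, value in args.items():
        if isinstance(value, str) and ".." in value:
            score -= 0.3                             # possible path traversal (assumed weight)
        if not re.fullmatch(r"[a-z0-9_]+", key):
            score -= 0.2                             # anomalous arg key (assumed weight)
    return max(0.0, min(1.0, score))

THRESHOLD = float(os.environ.get("METNOS_JUDGE_THRESHOLD", "0.30"))
approved = judge_score("read /tmp/note.txt with fs_read", "fs_read",
                       {"path": "/tmp/note.txt"}) >= THRESHOLD
```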
The Verdict exposed by the vaglio module contains {approved, reason, score, blocked_by, judge_kind, ts}. The JSONL log at ~/.local/share/metnos/vaglio/YYYY-MM.jsonl records only the keys of args (not the values), for privacy.
The planner has three safety mechanisms against loops and ill-posed actions. They appeared in the POC as a response to real LLM behaviour, not as abstract concern.
| Mechanism | What it does | Default |
|---|---|---|
| cap_steps | Maximum number of steps per turn. | 5 |
| cap_same_executor | Limit on calls of the same executor in the turn. | 2 |
| guard duplicate read | If the LLM re-invokes fs_read/fs_write/web_fetch with the same path/url as a previous step, the runtime does not re-execute: it returns an observation that says "you already have this data at step X, formulate the final_answer". | active |
The guard duplicate read emerged from the POC: without it, small LLMs (Qwen3:8b in particular) tended to re-read the same file with slightly different args hoping for a better result, ending up in cap_same. The guard intercepts upstream and unblocks the formulation.
Exception: scratchpad_read is not subject to the guard, because calling it more than once with different mode/range on the same scratchpad_id is the normal use case.
For each turn the planner writes a JSONL line to ~/.local/share/metnos/turns/YYYY-MM-DD.jsonl with:
- turn_id — uuid of the turn.
- ts_start, ts_end — Unix timestamps.
- user_query — original user text.
- mode — chosen mode (local/online/hybrid).
- candidates — names of the executors passed to the LLM after the pre-filter.
- steps — list of steps with: number, LLM in/out tokens, latency, tool called, raw and resolved args, validation/sandbox/vaglio outcome, result.
- final_message — final text to the user.
- final_kind — one of answer | error | cap_steps | cap_same_executor.

Append-only JSONL, one file per day. No automatic rotation in v1.1 (with normal use ~3 MB/month, negligible).
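An illustrative record; the values and the abbreviated keys inside steps are invented for the example, the real field set is the one listed above.

```json
{"turn_id": "c2a1...", "ts_start": 1745704800, "ts_end": 1745704803,
 "user_query": "fetch https://httpbin.org/uuid and tell me only the UUID",
 "mode": "local", "candidates": ["web_fetch", "fs_read", "fs_write"],
 "steps": [{"n": 1, "tool": "web_fetch", "latency_ms": 1800, "ok": true}],
 "final_message": "7c089d54-...", "final_kind": "answer"}
```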
In parallel with the JSONL log, the planner updates the mnestome (SQLite, single file). Two simple hooks, activated only when there is observed piping between steps:
- Successful step (obs.ok = true): if the raw_args contained at least one resolved {{stepM.field}} reference and executor M was real (not a proto, not scratchpad), it invokes Mnestoma.record_passing(src=executor_M, dst=current_executor, dst_exists=True). The mnest grows, or is born with bootstrap weight; every future passage reinforces it.
- Step failed because the requested tool does not exist (nonexistent_executor): if the raw_args had references to previous steps, it invokes record_passing(src=executor_M, dst=desired_name, dst_exists=False, desired_signature=...). The desired signature is inferred conservatively from the requested tool name, the args and the turn context (build_desired_signature).

Without piping, no mnest: the isolated invocation of a single executor does not represent a "passage between A and B" in the sense of ch. 2 of mnest. The write is fail-safe: an error on the mnestome is logged (verbosely) but does not interrupt the turn.
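A sketch of the two hooks, assuming the list of source executors whose {{stepM.field}} references resolved in the current step has already been collected; names other than record_passing and its arguments are illustrative.

```python
import logging

log = logging.getLogger("agent_runtime")

# Sketch of the post-step mnestome hook; resolved_srcs and after_step
# are illustrative names.
def after_step(mnestoma, obs, tool, tool_exists, resolved_srcs,
               desired_signature=None):
    if not resolved_srcs:
        return                                # no piping: no mnest to record
    try:
        for src in resolved_srcs:
            if tool_exists and obs.get("ok"):
                mnestoma.record_passing(src=src, dst=tool, dst_exists=True)
            elif not tool_exists:
                mnestoma.record_passing(src=src, dst=tool, dst_exists=False,
                                        desired_signature=desired_signature)
    except Exception:
        log.exception("mnestome write failed")  # fail-safe: never interrupt the turn
```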
When a proto-mnest has just been registered (the case above: a nonexistent tool with piping from a previous step), the planner immediately attempts a reactive compose-only synthesis by calling Synt.react(req) with router=None (the generate mode stays reserved for the nightly scheduler and the introspective cycles of ch. 11.2). The outcome of this call, if positive, is added to the observation as a synt field:

- state == "composed": the composer found a chain of signed executors that closes the proto-mnest. The observation carries {strategy: "compose", state: "composed", chain: [...], first_hop: "X", suggestion: "Retry by invoking 'X' as the next step"}. The planner does not re-launch the first hop automatically: it lets the LLM decide whether to follow the suggestion at the next step (preserving ReAct discipline: the LLM remains the master of the sequence).
- state == "abandoned" or "rejected": the observation carries {state: "abandoned", suggestion: "There is no executor available for this need, look for another way"}. Non-retreat telos: the planner does not give up at the first error; it tells the LLM the road is closed so it can search for another.
Cost: one call to Composer.find_chain() (BFS on the mnestome, milliseconds) and a possible lock-check. No LLM, no budget. The call is fail-safe: any synt exception does not propagate and the observation remains the default one ("nonexistent executor: X"). See runtime/agent_runtime.py:_try_synt_compose.
The scheduler is a builtin of the runtime, not an executor: it has no capability, is not signed, has no sandbox. It is a cron-style loop that runs recurring system tasks without user input. Three schedule forms are supported:

- daily@HH:MM — every day at HH:MM (UTC), once.
- every_N_minutes — every N minutes since the last run (or immediately if never run).
- manual — only via scheduler run-now <task>.

State is persisted in workspace/.scheduler/state.sqlite (table tasks with last_run_at + last_status, table runs append-only). The due-time check is idempotent: two ticks in the same slot do not run the task twice.
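A sketch of the idempotent due check for the daily@HH:MM form: a task runs when today's slot has passed and the last run predates it, so two ticks in the same slot cannot fire twice. The reconstruction is illustrative, not the scheduler's real code.

```python
from datetime import datetime, timezone

def is_due_daily(hh, mm, last_run_at=None, now=None):
    """last_run_at: Unix timestamp of the last run, or None if never run."""
    now = now or datetime.now(timezone.utc)
    slot = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
    if now < slot:
        return False                          # today's slot not reached yet
    if last_run_at is None:
        return True                           # never run: run immediately
    last = datetime.fromtimestamp(last_run_at, timezone.utc)
    return last < slot                        # at most one run per slot
```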
Built-in tasks registered by default:
| Name | Schedule | What it does |
|---|---|---|
| apply_ager | daily@04:00 | Calls Mnestoma.apply_ager(): decay + demote + proto purge on the mnestome. |
| synt_suggest | daily@04:30 | For each recurring proto-mnest (uses≥3, weight≥0.30) calls Synt.react() in compose-only and logs the outcome. |
The daemon loop (scheduler daemon) is a single process that runs tick() every 60 s and then goes idle. No concurrency, no inter-process locking: the scheduler in v1.1 is a singleton on metnos-server. The error policy is "do not let the loop fall": every task exception is caught, recorded as last_status='error' with the traceback in last_output, and the tick continues with the remaining tasks.
The scheduler design as builtin is consistent with the decision in the Dialogue on executors: system maintenance (decay, nightly synthesis) is part of the runtime, not an executor that the system "decides to call". See also the memory builtin executors proposals for the future builtin triad (scheduler, ager, snapshot).
The POC verified five typical failure modes. They became permanent test cases of the test framework.
| What breaks | What happens |
|---|---|
| Ollama unreachable | The provider raises ProviderError; the planner ends the turn with final_kind=error and a clear message. |
| Executor crashes (uncaught Python exception) | The subprocess exits with stderr, the runtime returns {ok: false, error: "non-JSON output: …; stderr: …"}. |
| Executor code modified after signing | The loader rejects it at load time (digest mismatch). The executor never enters the catalog. |
| Executor returns non-JSON stdout | The runtime detects it and returns {ok: false, error: "non-JSON output"}. |
| Empty catalog | The turn ends cleanly with final_kind=error, message "(empty catalog)". |
The principle: do not let the system fall; always return a structured response, even when it is an error. The user understands, the LLM at the next step (in multistep) can correct, downstream executors do not see strange input.
| Feature | When it lands |
|---|---|
| Real online mode | When an Anthropic provider with API key is configured. |
| hybrid mode with real escalation rules | When the vocabulary of critical_capabilities is decided and auto-routing is desired. |
| Probabilistic vaglio (LLM-judge) | When constitution.html v1.1 exists as reference. |
| Auto-escalation of tier (fast → middle → wise if steps fail) | When at least two distinct tiers are configured and a use case motivates it. |
| Per-tier prompt differentiation (fast/middle/wise) even with the same model | When the use case requires multiple points of view on the same situation. |
| Parallel-tool-call within a turn | When a real case shows significant latency win. Binding constraint v1.2. |
| Async approval (deferred execution with durable per_target grants) | When channel + scheduler require asynchronous interaction. |
| Mnestome history-driven in the pre-filter (boost from history) | When the operational mnestome exists. |
| Embedding (local MiniLM) in the pre-filter | When bag-of-words shows practical limits (not surfaced in the POC). |
| Automatic replay of orphan turns from a crash | When executors are guaranteed idempotent. |
| Automatic JSONL rotation | When the volume becomes relevant. |
This document describes what the planner does today, validated by tests. The "deferred to v1.2" sections describe what we expect, not promises. When an extension lands, this doc will be updated exactly as POC v1.1 updated the previous version: rather than speculating on how it will be, we will write what works.