executor — anatomy of an executable capability

This document defines what an executor is, how it is built, how it is authenticated, how it is isolated, how it is born, and how it dies. It is the first of three canonical docs introduced by the Dialogue on executors and distributed memory (24 April 2026), together with mnest and mnestome. It replaces the old neuron.html: technically the same object, but the name «neuron» carried a biological metaphor that became cumbersome at the implementation level. Executor is more precise: it is executable code.
This document covers the executor object itself: anatomy, identity and signature, sandbox profile, lifecycle, synthesis, invocation and audit, and the remote case. It does not cover, and defers to other docs, the mnest and mnestome layers and the full detail of the synthesiser pipeline.
An executor is a unit of code that Metnos can run as a step of its own reasoning. A function with a public contract, a signed identity, an isolation profile, and a manifest that describes what it does and which resources it touches. Nothing more. Not an agent, not a microservice, not a plugin: a small executable block under the project's signature.
| Category | Description | Lives in | Signature |
|---|---|---|---|
| Seed | The ones Metnos finds at first boot: about twenty, hand-written in the repository, indispensable to basic operation. Examples: fs_read, fs_write, shell_exec, web_fetch, telegram_send. | workspace/executors/<name>/ | Ed25519 per-instance |
| Synthesised | Born over time from the synt (ch. 7) to fill a gap detected by use. They share the same anatomy as the seeds and are born through the same signing and approval pipeline. | workspace/executors/<name>/ | Ed25519 per-instance |
| Builtin | Part of the runtime, not workspace artefacts. They implement services that need the system clock, the gateway's async loop, or transversal persistent state (e.g. scheduler). They may carry capabilities that no user-level executor would ever obtain. | Metnos source code | release-signed |
The distinction between seed and synthesised is historical, not structural: a seed is generated manually before first boot, a synthesised one is generated automatically while the system is alive. Once signed, the gateway treats them in exactly the same way.
Builtins are different: they share the external contract
with seeds and synthesised (same run(args, ctx) -> dict, same
audit, same I/O schema), but they do not have a separate manifest.toml,
no per-instance profile.lock, no synthesis pipeline. They live in the
Metnos sources and are signed as part of the release. The synt
can never generate a builtin (ch. 7): they are foundational, not synthesisable.
An executor lives under workspace/executors/<name>/.
The directory contains six artefacts, in this reading order:
| Artefact | File | Description |
|---|---|---|
| Manifest | manifest.toml | Identity, version, author, description, declared sandbox profile, error contract. See ch. 4. |
| Signature | manifest.sig | Ed25519 signature over the manifest and the executable hash. Without a valid signature, the gateway refuses to invoke. |
| Code | main.py | The implementation. A single entry function exposed as def run(args: dict, ctx: ExecutorCtx) -> dict. No dynamic import, no eval, no exec. |
| I/O schema | schema.json | JSON Schema for input parameters and result. Validated both on entry (gateway) and on exit (executor host process). |
| Birth tests | tests/birth.py | A handful of cases that demonstrate the expected behaviour. The synthesiser (ch. 7) imposes them as a barrier for approving a new executor; the loader re-runs them at every boot to guarantee continuity. |
| Profile lock | profile.lock | A frozen hash of the effective sandbox profile (ch. 5) computed at signing time. If at runtime the concrete profile does not match the hash, the invocation fails: a guard against silent modifications. |
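To make the entry contract concrete, here is a minimal, hypothetical main.py skeleton following the run(args, ctx) -> dict shape described above. The ExecutorCtx stub and its fields are assumptions for illustration; the real context object is provided by the gateway host process.

```python
from dataclasses import dataclass


@dataclass
class ExecutorCtx:
    # Assumed shape, for illustration only: the root granted by the
    # profile and the trace id of the invocation.
    workspace: str = "workspace"
    trace_id: str = ""


def run(args: dict, ctx: ExecutorCtx) -> dict:
    # A single entry function that takes a dict and returns a dict,
    # as every executor must. No dynamic import, no eval, no exec.
    path = args.get("path")
    if not isinstance(path, str):
        # In the real system, schema validation would reject this
        # before run() is ever called; the guard is illustrative.
        return {"error": "InvalidInput", "detail": "path must be a string"}
    return {"path": path, "workspace": ctx.workspace}
```

In the real pipeline, the I/O schema (schema.json) guarantees the shape of both dicts before and after this function runs.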
The manifest is the identity document of an executor. Example for
fs_read:
```toml
[executor]
name = "fs_read"
version = "1.0.0"
created_at = 2026-04-25T08:12:00Z
created_by = "seed"  # or: "synt:<run_id>"
summary = "Read a file from the workspace and return its content."

[contract]
input_schema = "schema.json#/definitions/Input"
output_schema = "schema.json#/definitions/Output"
error_classes = ["NotFound", "PermissionDenied", "TooLarge"]
idempotent = true
side_effects = false

[sandbox]
profile = "fs-read-workspace"  # see sandbox.html
hash = "blake3:abc123..."      # fingerprint of the profile

[audit]
trace_topic = "executor.fs_read"
```
Three fields are normative:

- name and version form the identity the gateway uses to resolve every invocation. name is immutable for the entire life of the executor; version follows semver with one hard rule: any change to the code or the contract bumps the major.
- contract is the usage schema. You cannot add or remove an input field without bumping the version.
- sandbox.hash ties the identity to the profile. Changing profile means changing identity.
The file manifest.sig contains the Ed25519 signature (Curve25519; private key under ~/.config/metnos/keys/ with 600 permissions, never versioned) over these concatenated bytes:

```
blake3(manifest.toml) || blake3(main.py) || blake3(schema.json) || profile.lock
```
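The payload under the signature can be sketched as follows. Since the Python standard library has no BLAKE3, blake2b (32-byte digest) stands in here purely to show the shape of the concatenation; the real system uses blake3.

```python
import hashlib


def signing_payload(manifest: bytes, code: bytes, schema: bytes,
                    profile_lock: bytes) -> bytes:
    def digest(b: bytes) -> bytes:
        # Stand-in for blake3: blake2b with a 32-byte digest.
        return hashlib.blake2b(b, digest_size=32).digest()

    # blake3(manifest.toml) || blake3(main.py) || blake3(schema.json) || profile.lock
    return digest(manifest) + digest(code) + digest(schema) + profile_lock
```

The Ed25519 signature in manifest.sig is then computed over exactly these bytes, so a change to any of the three artefacts, or to the lock, invalidates it.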
The gateway loader, at boot and at every request, verifies the signature against the artefact hashes it covers. If even one check fails, the executor is rejected. Not loaded, not invoked, not deleted: it is moved to quarantine (ch. 6) and surfaced to the user.
A version of an executor is never modified «in place». To
change something, a new version is created (e.g. 1.0.0
→ 2.0.0) with its own signature. The gateway can keep
multiple versions loaded in parallel as long as there are mnests citing
them (see mnest.html). Promoting a new version
to default is an explicit, signed act: it is recorded in
workspace/executors/<name>/CURRENT.
The profile is the point where an executor's identity meets perimeter safety (Architecture ch. 5). The section is longer than usual because two distinctions must be pinned down, and the question «can the executor read outside the sandbox?» deserves an operational answer, not a slogan.
Two different things, coupled, often confused:
| Layer | What it is | Lives in |
|---|---|---|
| Declared profile | A declaration, written in the executor's manifest, of which resources the executor needs: which paths to read, which to write, which domains to reach, which binaries (if any) to execute, with which time and memory caps. | manifest.toml (static, signed). |
| Applied sandbox | The technical enforcement of those limits on the process running the executor's code. On metnos-server we use bubblewrap + landlock + mount namespace + seccomp + a dedicated network interface. That process, physically, cannot see or call anything the profile has not granted. |
Gateway runtime configuration, generated from the profile. |
The two layers are kept in sync by the profile lock
described in ch. 3: the hash of the concrete profile, computed at
signing time, is frozen in profile.lock. At every
invocation, the gateway loader recomputes the hash of the sandbox it
is about to apply and compares it with the lock; on mismatch, the
invocation fails. A silent profile change — by anyone —
is intercepted.
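The lock comparison can be sketched roughly like this; blake2b stands in for blake3 (absent from the standard library), canonical JSON stands in for the real profile serialisation, and all names are illustrative:

```python
import hashlib
import json


def profile_hash(profile: dict) -> str:
    # Canonicalise, then hash: the same profile always yields the
    # same fingerprint, whatever the key order.
    canonical = json.dumps(profile, sort_keys=True).encode()
    return "blake2b:" + hashlib.blake2b(canonical, digest_size=32).hexdigest()


def check_lock(concrete_profile: dict, lock: str) -> None:
    # The lock was frozen at signing time; mismatch means the sandbox
    # about to be applied is not the one that was signed.
    if profile_hash(concrete_profile) != lock:
        raise RuntimeError("profile.lock mismatch: invocation refused")
```

Any widening (or narrowing) of the concrete profile after signing changes the hash and trips the check.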
Why declare the profile in the manifest at all? (a) It enables policy validation in the gateway: a request outside the declared profile (e.g. fs_read on /etc/passwd) is rejected before any sandbox is opened. (b) It lets the synt generate minimal profiles at synthesis time (ch. 7). (c) It is living documentation: anyone reading the manifest sees at once what the executor demands, without having to read the runtime configuration.
| Dimension | Example policies |
|---|---|
| Filesystem read | none; workspace/inbox/; workspace/<sub> read-only; HOME read-only. |
| Filesystem write | none; workspace/<sub>; private temp directory. |
| Shell execution | forbidden; allowed only for listed binaries; free (discouraged). |
| Outbound network | none; allow-list of domains; free (discouraged). |
| Resources | max memory, max CPU, max duration. |
Short answer: no. What the profile does not grant, the executor's process does not see and cannot see. This is not a style recommendation, it is a kernel constraint.
Long answer, in three steps:
1. Mount namespace. bubblewrap opens the process in a private mount namespace where only the paths listed in the profile are visible (mounted read-only or read-write according to the field). The rest of the filesystem simply does not exist from the process's point of view. An open('/etc/passwd') returns ENOENT (file not found), as if the file had never been there.
2. landlock. On top of the namespace sits a landlock policy that further restricts: even paths mounted read-only cannot be opened for writing, even writable paths cannot be opened with dangerous flags. A violation returns EACCES.
3. seccomp. An executor with no network gets EPERM on connect(). An executor with allow-listed network goes through a local proxy that blocks anything outside the allow-list.

This is defence in depth: the first level (mount namespace) already suffices for unlisted paths; the second (landlock) handles read/write nuances; the third (seccomp) closes the network side. Three independent kernel mechanisms, configured by the profile. Bypassing them would require a kernel bug, not an executor bug.
fs_read called wrong
Suppose the gateway receives the request:
fs_read({path: "/etc/passwd"}). The executor
fs_read has profile fs-read-workspace which
grants read-only on workspace/. What happens?
| # | Component | What it does |
|---|---|---|
| 1 | Policy validation in the gateway | Before opening any sandbox, the gateway compares the path argument with the declared profile. /etc/passwd is not under workspace/: the call is rejected with a structured PolicyViolation error. No sandbox is even opened. |
| 2 | (Defence in depth) | If, by absurd hypothesis, the gateway had been confused and had opened the sandbox, the executor's process would have been in a mount namespace without /etc/. open('/etc/passwd') would have returned ENOENT. The executor would have returned NotFound. |
| 3 | (Further defence in depth) | If the path /etc/passwd had been mistakenly mounted, landlock would have rejected the read() with EACCES. |
| 4 | Audit | In all cases, the gateway writes to the audit log the request, the outcome (PolicyViolation, NotFound, PermissionDenied), the sender, the timestamp. |
The correct design is that the error is intercepted at step 1 (policy-time), not at step 2 or 3 (kernel-time): it is faster, more informative for the caller, and leaves no traces in the sandbox. Steps 2 and 3 are the safety net that exists if the first should fail because of a bug of ours.
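The step-1 policy check can be sketched as a lexical path-containment test. PolicyViolation and the helper name are hypothetical; a real gateway would also normalise «..» segments and resolve symlinks before comparing.

```python
from pathlib import PurePosixPath


class PolicyViolation(Exception):
    """Illustrative: the structured error returned at policy-time."""


def check_read_path(path: str, allowed_roots: list[str]) -> None:
    # Lexical containment only: the real check must first normalise
    # "..", symlinks and absolute/relative forms.
    candidate = PurePosixPath(path)
    for root in allowed_roots:
        try:
            candidate.relative_to(root)
            return  # inside a granted root: allowed
        except ValueError:
            continue  # not under this root, try the next
    raise PolicyViolation(f"{path} outside declared profile")
```

With profile fs-read-workspace granting only workspace/, the request for /etc/passwd fails here, before any sandbox is opened.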
Some paths no profile can grant, even if the manifest declared them. They are the core forbidden paths, hard-coded in the gateway's code, irreducible, modifiable only by rite (code change, review, release):
- ~/.ssh/, ~/.gnupg/ — the user's cryptographic keys;
- ~/.config/myclaw/, ~/.config/metnos/ — the project's secrets vault;
- workspace/.audit/ — the append-only ledger of actions; not even modifiable by the audit log writer itself, which opens the file in append mode via dedicated syscall;
- /etc/, /proc/, /sys/ (with explicit exceptions for innocuous files read by the Python runtime).
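A sketch of how the core list might mask even a wide grant, under the assumption that masking is modelled as deny-mounts layered over the profile's bind mounts (all names illustrative):

```python
# Core forbidden prefixes, mirroring the list above.
FORBIDDEN = ("~/.ssh/", "~/.gnupg/", "~/.config/myclaw/",
             "~/.config/metnos/", "workspace/.audit/")


def mount_plan(granted: list[str]) -> dict:
    # Grants become bind mounts; every forbidden prefix that falls
    # inside a grant becomes an explicit mask (e.g. an empty tmpfs
    # mounted on top), so even a "~/" grant never exposes ~/.ssh/.
    masks = [f for f in FORBIDDEN
             if any(f == g or f.startswith(g.rstrip("/") + "/")
                    for g in granted)]
    return {"bind": list(granted), "mask": masks}
```

A narrow grant such as workspace/inbox/ needs no masks at all; the wide «~/» grant of ch. 5.6 gets the key directories masked automatically.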
Conceptual difference: the profile is per-executor declared
modulation; the core forbidden paths are universal system
prohibitions. They live on two different planes of
axis 2 of perimeter
safety (Architecture ch. 5): the modulable shell (profiles) and
the hard core (forbidden paths). The Full autonomy level
can widen the shell, never the core.
A domain the profile does not declare is simply unreachable: api.openai.com is not approved until a manifest explicitly lists it. The synt, during synthesis (ch. 7), tries to generate the tightest profile that still satisfies the birth tests; if it cannot, it suspends synthesis and asks Roberto to narrow it manually.
Yes. Nothing in the design forbids a profile from declaring paths
outside workspace/. The profile is an explicit
declaration: one can declare ~/Pictures/,
~/Documents/work/, a network-mounted folder, a backup
partition. What the profile declares, the sandbox grants; what the
core forbids stays forbidden anyway (ch. 5.5).
The derogation, however, is a weighed act. It has three constraints:
1. It must be explicit: the manifest declares ~/Pictures/ and not just the workspace.
2. It is bound to the signature via the profile.lock hash; changing the profile means changing the version, and changing the version requires a new signature and a new approval. The sandbox cannot be widened in secret.
3. Even a profile with ~/ read+write cannot access ~/.ssh/, ~/.gnupg/, ~/.config/myclaw/: the core forbidden paths (ch. 5.5) are masked by the mount namespace even inside a «wide» profile.
A real case. The photos live in ~/Pictures/ (on the
server) or in C:\Users\Roberto\Pictures\ (on the laptop).
The two situations are distinct and are solved distinctly.
| Case | How |
|---|---|
| Tidy on metnos-server (photos in the server's ~/Pictures/) | Create an executor photo_organize with dedicated profile: fs-read-write on ~/Pictures/; no shell; no network; max duration 30s; bounded output. The profile is explicit, signed, subject to synt synthesis and to Roberto's explicit approval. The core continues to forbid ~/.ssh etc., so even a bug of the executor cannot leave ~/Pictures/. |
| Tidy on the laptop (photos in C:\Users\Roberto\Pictures\, off metnos-server) | The remote executor case: the process performing the action runs on the laptop, not on the server, because syncing gigabytes of photos just to tidy them does not make sense (see Architecture ch. 4 for the full motivation). Sandboxing logic changes: see ch. 12 of this doc. |
| State | Meaning | Outgoing transition |
|---|---|---|
| Seed | Executor present at first boot, hand-written in the repo. | → Active (after signature check). |
| Active | Loaded by the gateway, invocable, with at least one mnest citing it. | → Fused, Quarantined, Below threshold. |
| Fused | Two executors with overlapping traces have been merged by the synt into a new executor that covers both. The old ones stay loaded as long as residual mnests cite them, then archived. | → Archived. |
| Quarantined | Invalid signature, failed birth test at boot, profile lock mismatch. Not invocable, not deleted. Surfaced to Roberto with a warning. | → Active (after re-approval) or → Archived. |
| Below threshold | Usage in the last N days below the configured threshold. The ager proposes archival. Stays loaded until Roberto confirms. | → Active (use resumed) or → Archived. |
| Archived | Moved to workspace/executors/.archive/, not loaded by the gateway, but kept for historical trace and possible rehabilitation. | → Active (explicit rehabilitation). |
Almost every transition is reversible: archival only moves the directory to .archive/; defusing a fusion reloads the two original executors; a quarantined one returns to active after a new valid signature. The only thing that is irreversible is the trace in the audit log.
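The state machine above can be captured as an allowed-transition table; the guard below is a sketch using the states from the lifecycle table (names illustrative):

```python
# Allowed transitions, mirroring the lifecycle table above.
TRANSITIONS = {
    "seed": {"active"},
    "active": {"fused", "quarantined", "below_threshold"},
    "fused": {"archived"},
    "quarantined": {"active", "archived"},
    "below_threshold": {"active", "archived"},
    "archived": {"active"},  # explicit rehabilitation
}


def transition(state: str, target: str) -> str:
    # Refuse any move the table does not grant.
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Note that active never goes straight to archived: it must first pass through fused, quarantined, or below threshold, exactly as the table prescribes.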
The synt is the process that brings new executors to life when use demands it. The trigger source is a recurring proto-mnest: a trace of unsatisfied desire («the output of A was meant to go to B which does not yet exist») that recurs several times in the same month.
The pipeline has seven stages. Each stage may suspend and ask the user.
| # | Stage | What it does |
|---|---|---|
| 1 | Pattern detect | The ager counts recurring proto-mnests; if one crosses the recurrence threshold, it offers the synt a motivation card. |
| 2 | Specification | The synt drafts a textual specification (what the executor must do, which inputs, which outputs, which errors). Roberto approves or amends. |
| 3 | Skeleton | Generates main.py and schema.json by calling the LLM with the specification as context. |
| 4 | Profile | Computes the tightest sandbox profile compatible with the birth tests. If it cannot, it suspends and asks. |
| 5 | Birth tests | Generates or accepts from the user the cases in tests/birth.py. Runs them: all must pass with the profile from step 4. |
| 6 | Human approval | Roberto sees manifest, code, tests, profile. Approves, amends, or rejects. On approval, signing starts. |
| 7 | Sign and install | Computes the hashes, generates the Ed25519 signature, writes profile.lock, moves the executor into workspace/executors/<name>/ and marks it as Active. |
The split is sharp: synt proposes, the user approves. No synthesis without a human filter; this is the third of the six principles (ch. 14 of the Architecture). The detail of the pipeline lives in synthesizer.html (in Italian and pending rewrite).
The seven-stage pipeline above is the generation arm: it produces a brand-new executor. But it is not the first tool the synt reaches for when a proto-mnest recurs. The canonical sequence is a cascade:
| When | Strategy | What it does | Frontier-LLM cost |
|---|---|---|---|
| Reactive (user turn) | Compose | Scans the active pool and tries to build a chain of existing executors that closes the proto-mnest. If found, it proposes the chain as orchestration — no new executor, no signature to issue. It leaves a proto-mnest pointing at the composition: if it recurs, that becomes a candidate for promotion (an introspective step). | zero |
| Reactive (fallback) | Generate | Seven-stage pipeline (above). It fires only if no composition exists or if the chain is too long to remain readable. | ~1 Spec call + 1–2 Draft |
| Introspective (nightly homeostasis) | Merge | Two executors with overlapping traces and compatible profiles get fused into one. See lifecycle fused (ch. 6). | 1 Spec call for the fused |
| Introspective | Generalise | N specialised executors with the same shape get proposed as a single broader, parametric executor. Trigger: threshold of recurring proto-mnests on adjacent dimensions. | 1–2 calls |
| Introspective (rare) | Specialise | From a general executor, derive a specialised version for a hot case. Only with measurable benefit (not preemptive optimisation). | 1 call |
The three introspective steps (merge, generalise, specialise) do not
yet have a dedicated doc; the natural home is either an extra chapter
within the future rewritten synt, or a new
consolidator.html. Open decision.
The flow of an invocation from the gateway to an executor goes through eight steps. Here is the high-level sequence; the detail of intermediate steps (Policy, Vaglio, sandbox) lives in their respective docs, all pending rewrite.
1. The request arrives at the gateway as a JSON document: executor name X plus args {...}.
2. The gateway resolves X via workspace/executors/X/CURRENT; if there is no CURRENT or it is quarantined, it fails.
3. The input is validated against schema.json. If invalid, structured-error failure.
4. Policy validation: the arguments are checked against the declared profile (ch. 5).
5. Vaglio (detail in its own doc).
6. The sandbox is opened according to the profile.
7. The host process calls run(args, ctx). Time limit, memory limit, network limit.
8. The output is validated against schema.json. If invalid, failure.

Along the flow, the gateway updates two structures:
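A toy sketch of the gateway's resolve → validate → run → validate skeleton, with the Policy, Vaglio and sandbox steps stubbed out; registry and validators are illustrative stand-ins, not the real API:

```python
def invoke(request: dict, registry: dict, validators: dict) -> dict:
    name = request["executor"]
    # Resolve via CURRENT; missing or quarantined -> fail.
    entry = registry.get(name)
    if entry is None or entry.get("state") == "quarantined":
        return {"exit": "error", "class": "Unresolvable"}
    # Validate input against schema.json (stubbed as a predicate).
    if not validators["input"](request["args"]):
        return {"exit": "error", "class": "InvalidInput"}
    # Policy, Vaglio and sandbox omitted; run() is sandboxed in reality.
    result = entry["run"](request["args"])
    # Validate output against schema.json (stubbed as a predicate).
    if not validators["output"](result):
        return {"exit": "error", "class": "InvalidOutput"}
    return {"exit": "ok", "result": result}
```

The structured-error shape makes every failure mode distinguishable by the caller, which is what the audit log below records.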
Every invocation produces a JSONL line in
workspace/.audit/executors/YYYY-MM-DD.jsonl:
```json
{
  "ts": "2026-04-25T08:14:33.881Z",
  "trace_id": "01HW...",
  "turn_id": "01HX...",
  "executor": "fs_read",
  "version": "1.0.0",
  "caller": {"kind": "user", "sender": "roberto", "channel": "telegram"},
  "input": {"path": "workspace/notes/journal.md"},
  "output": {"size": 4231, "sha": "blake3:..."},
  "duration_ms": 18,
  "exit": "ok"
}
```
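Building one such line might look like this (a subset of the fields only; names follow the example above, the helper is illustrative):

```python
import json
from datetime import datetime, timezone


def audit_line(executor: str, version: str, caller: dict,
               exit_status: str, duration_ms: int) -> str:
    record = {
        # UTC timestamp in the Z-suffixed form shown above.
        "ts": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        "executor": executor,
        "version": version,
        "caller": caller,
        "duration_ms": duration_ms,
        "exit": exit_status,
    }
    # One JSON object per line, appended to YYYY-MM-DD.jsonl.
    return json.dumps(record)
```

Appending (never rewriting) one line per invocation is what makes workspace/.audit/ an append-only ledger.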
fs_read

Reads a file from the workspace. Profile: read-only filesystem on workspace/; no shell, no network; max duration 2 seconds; max output 4 MB. Errors: NotFound, PermissionDenied, TooLarge. Idempotent.

web_fetch

Fetches a web page and returns it as text or markdown. Profile: no filesystem, outbound network on HTTP/HTTPS toward an allow-list of domains (configurable per instance); max duration 15 seconds; max output 2 MB. Errors: DnsError, Timeout, Forbidden, TooLarge. Idempotent.

telegram_send

Sends a message to the user's trusted chat. Profile: no filesystem, outbound network only to api.telegram.org; max duration 5 seconds. Side effects: yes (sends a message). Errors: RateLimited, Unauthorized, Network. Not idempotent: two calls with the same input send two messages.
An executor with side_effects = true and idempotent = false is subject to reinforced Vaglio: the gateway asks for confirmation even within the Full autonomy level. Precautionary asymmetry (ch. 5 of the Architecture) is the foundation.
scheduler
scheduler is the first (and so far only) builtin
executor proposed: it invokes another executor (or chain) at a defined
cadence. It is the mechanism that makes «requests with
schedule» possible without inventing a separate design category.
It replaces the wrong idea of «agreed routines» as a
stand-alone object.
| Field | Type | Meaning |
|---|---|---|
| target_executor | str (or chain) | The executor to invoke at every firing. |
| args | dict | Arguments passed to the target. |
| schedule | str | Cron-like ("0 8 * * *") or NL ("every morning at 8") translated to cron at creation time. |
| delivery_channel | str | Channel for the result (Telegram, mail, voice, …). |
| count | int \| None | None = unlimited firings; 1 = one-shot; N ≥ 1 = exactly N firings. |
| expiry | timestamp \| None | Time-based expiration. The first of count and expiry closes the schedule. |
| on_failure | enum | skip / retry / notify / cancel. Default notify after 1 retry. |
| max_consecutive_failures | int \| None | Auto-cancel after N consecutive failures. Default 3, None to disable. |
| paused | bool | Manual pause without cancelling. Default false. |
Output: {schedule_id: ULID, next_fire: timestamp}.
Sibling operations, also builtin: scheduler.list
(list active schedules), scheduler.cancel (close a
schedule), scheduler.modify (change cadence, expiry, count
or channel). Every modification passes through the human gate if it
changes cadence or target_executor.
It relies on two privileged resources: workspace/.runtime/scheduler.sqlite (the schedule store) and the capability system:scheduler (access to the gateway's async loop, never grantable to a seed or synthesised).
Every firing emits a JSONL line like any other executor invocation,
with caller = {"kind": "scheduler", "schedule_id": "…"}.
A schedule's history is reconstructable by schedule_id:
creation, modifications, every firing, eventual closure.
scheduler,
like all builtins, does not pass through the state machine of ch. 6: it
has only active and disabled in config. Its
schedules, however (the rows in the schedule store), have their
own small lifecycle: active, paused,
completed (count or expiry reached), cancelled
(by user or by on_failure=cancel). Lifecycle internal to
the builtin, not reflected in the mnestome graph.
scheduler is the first builtin because it solves a recurring
case (mailbox monitoring, daily summary, periodic checks) with a
mechanism that composes naturally with the synt
cascade: a recurring schedule becomes a candidate for fusion (synt
ch. 5.1) into a single executor that incorporates the chain.
The previous microdesign used the term neuron (neuron.html, now deprecated and marked untrusted). The Dialogue on executors and distributed memory renamed it to executor: the biological metaphor became cumbersome at the implementation level, and executor names what the object actually is — executable code.
The biological term still works well to describe the emergent mnestome as a graph: that is a different layer of metaphor (graph as tissue), to which we return in the dedicated docs.
The topology
(Architecture ch. 4) states that some executors, in the future,
will not run on metnos-server but on the user's machines:
the laptop first of all. This chapter says what changes, in concrete
terms, for the executor microdesign when the process no longer lives
on the server.
The contract of a remote executor is identical to that of a
local one. Same anatomy (ch. 3): TOML manifest, Ed25519 signature,
main.py code, I/O schema, birth tests, profile lock. Same
synthesis pipeline (ch. 7). Same gateway-side identity (ch. 4). Same
lifecycle (ch. 6). Same central audit log, on metnos-server
(ch. 9): who calls a remote executor and what comes back is traced
exactly as for a local one.
The remote executor's manifest still lives on the server
metnos-server, under workspace/executors/<name>/.
The server is the canonical source of the «who» and the
«what». On the laptop only the execution lives: a
small confined process, launched and supervised from the server side
through the Headscale channel (see Architecture ch. 4).
| Dimension | On metnos-server | On the laptop (remote executor) |
|---|---|---|
| Technical sandbox | Strong: bubblewrap + landlock + seccomp + mount namespace. Three independent Linux kernel mechanisms. | Weaker: Windows has no direct equivalents. AppContainer / Job Object / NTFS permissions help but do not give the same boundaries. The sandbox is more a confined user-space process than a true kernel jail. |
| Core forbidden paths | Enforced by the system: ~/.ssh, ~/.gnupg, etc. | Enforced twice: once declared in the profile (server-side, part of the signature), once verified again by the small runtime on the laptop before every operation. We do not trust a single layer when the technical sandbox is weak. |
| Pairing | Implicit: the process runs on the same machine as the gateway, of which it is a child. | Explicit: the device (laptop) must be paired through a ceremony signed by the user from an already-trusted channel. A new laptop cannot run remote executors until it has been admitted. |
| Channel authentication | Not needed: all in-process. | mTLS inside the Headscale overlay, or WireGuard preshared key + per-call signed token. The gateway rejects unauthenticated calls. |
| Idempotency | Recommended, not strictly required. | Mandatory. The network between metnos-server and the laptop can drop mid-call. Every invocation carries a unique identifier; two invocations with the same id produce the same outcome (the second is a no-op). |
| Reversibility | Generic model (ch. 6). | «Before/after» model: before performing an operation that changes laptop state, the gateway writes on metnos-server a manifest of the «before» (list of touched paths, hashes, sizes). If Roberto says «undo», the same pattern is applied in reverse on the laptop. See Architecture ch. 4 sec. «The before/after pattern». |
| Audit | One JSONL line on metnos-server. | Two corresponding lines: one on the server (who called, when), one on the laptop (what was done on the filesystem, when). The two correlate by trace_id. |
| Version drift | One single directory workspace/executors/<name>/. | Two copies of the binary: one on the server (canonical), one on the laptop (cache). At every invocation, the gateway verifies that the binary hash on the laptop matches the signed manifest; if not, it suspends and requests update. |
photo_organize on the laptop
The canonical example from ch. 5.6: tidy a photo archive of ten
thousand files that live in C:\Users\Roberto\Pictures\.
Syncing them to the server, tidying them, syncing back implies
gigabytes of traffic, double disk usage, time windows in which the
same file exists in two states on two machines. An in-place
rename performed directly by the laptop solves it in one
second. But the process that performs that rename must
be trusted and traced like every other Metnos executor.
The flow, in concrete terms:
1. The synt creates photo_organize with a manifest declaring: input = list of sort rules, output = number of tidied files, profile = read-write on C:\Users\Roberto\Pictures\ (excluding reserved sub-folders), no shell, no network. Roberto approves, the manifest is signed on the server.
2. The gateway ships the signed code of photo_organize to the laptop, which verifies it against the manifest and writes it to local cache.
3. At invocation, the gateway generates a trace_id, writes on metnos-server the «before» manifest (list of paths that will be renamed, hashes), then sends the call to the laptop via Headscale + mTLS.
4. The laptop runtime performs the rename, returns the outcome.
5. Two audit lines are written: one on metnos-server (who called, when, outcome), one on the laptop (list of paths actually touched, hashes after). Correlated by trace_id.

One point remains open. The signing key lives only on the server, under ~/.config/metnos/keys/. When a new device must be allowed to sign new executors (e.g. Roberto's laptop on the road), how is it authorised? Device pairing (ch. 4 of the Architecture) plus time-bound delegation? Open.
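The before/after pattern in step 3 can be sketched as a rename plan whose «undo» is just the inversion — an in-memory illustration, with no real filesystem involved:

```python
def before_manifest(plan: dict[str, str]) -> dict:
    # plan maps old path -> new path; the manifest freezes the
    # "before" state on the server prior to any rename.
    return {"renames": dict(plan)}


def undo(manifest: dict) -> dict[str, str]:
    # Invert every recorded rename: new path -> old path. Replaying
    # this mapping on the laptop restores the "before" state.
    return {new: old for old, new in manifest["renames"].items()}
```

The real manifest also records hashes and sizes, so the gateway can verify that the files to be reverted are still the ones it renamed.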
Metnos — executor microdesign v1.1 — 2026-04-24
New canonical doc. Replaces neuron.html (deprecated).