# ludus ops — Infrastructure Changes

## What It Does
The `ludus ops` subcommand group modifies state on the ludus host. This includes file
sync, agent registration, sandbox configuration, cron management, image builds,
and intercom lifecycle.

Subcommands: `deploy`, `ssh`.
Sub-groups: `cron`, `images`, `intercom`, `sandbox`.

(Skipped: `ssh` — execs into an interactive session, not recordable.)
## Environment

- Working directory: `ludus/`
- Ludus host reachable via SSH
- OpenClaw gateway running on the ludus host
- `agents/agent-map.json` exists with agent definitions
- Container images already built on the ludus host (for most tests)
## 1. ludus ops deploy

### 1.1 Quick deploy (sync only)

```
ludus ops deploy --quick --dry-run
```
Verify
- Output shows each sync target with status
- No files are actually transferred
- Exit code 0
```
ludus --json ops deploy --quick --dry-run
```

Verify (JSON)

- Valid JSON with `sync` dict and `dry_run: true`
- Each key in `sync` is a directory label
- All values are `"ok"` (dry-run succeeded)
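The JSON checks above can be scripted with `jq` (assumed available). The payload below is a made-up stand-in matching the shape those checks describe, not captured output:

```shell
# Stand-in for: ludus --json ops deploy --quick --dry-run
out='{"sync":{"agents":"ok","config":"ok","images":"ok"},"dry_run":true}'

# dry_run must be true and every sync label must report "ok"
echo "$out" | jq -e '.dry_run == true and ([.sync[]] | all(. == "ok"))'
```

`jq -e` exits non-zero when the expression is false, so the pipeline itself doubles as the assertion.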
```
ludus ops deploy --quick
```
Verify
- Output shows each directory synced successfully
- Exit code 0
```
ludus --json ops deploy --quick
```

Verify (JSON)

- Valid JSON with `sync` dict
- `dry_run` is `false`
- All sync labels show `"ok"`
### 1.2 Full deploy (sync + register + configure)

```
ludus ops deploy
```
Verify (human)

- Output shows "Deploying to ..."
- Lists sync steps, registration, sandbox configuration
- Ends with "Deploy complete."
- Exit code 0
```
ludus --json ops deploy
```

Verify (JSON)

- Valid JSON with `deploy` array
- Each element has `step` and `ok` fields
- All steps show `ok: true`
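The `deploy` array contract can be sketched as a `jq` check. The step names in the sample payload are illustrative guesses; only the `step`/`ok` field names come from the checks above:

```shell
# Stand-in for: ludus --json ops deploy
out='{"deploy":[{"step":"sync","ok":true},{"step":"register","ok":true},{"step":"configure","ok":true}]}'

# every element needs step/ok fields, and every ok flag must be true
echo "$out" | jq -e '.deploy | all(has("step") and has("ok")) and ([.[].ok] | all)'
```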
## 2. ludus ops cron

Crontab management on the ludus host.

### 2.1 Deploy cron entries

```
ludus --json ops cron deploy
```
Verify (JSON)

- `success` is `true`
- Exit code 0

```
ludus --json info cron
```
Verify (round-trip)

- `entries` array has >= 1 entry
- At least one entry mentions `beads-watcher`
- At least one entry mentions `gh-event-poll`
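The round-trip check can also be expressed in `jq`. The cron schedules in the sample are invented; only `entries`, `beads-watcher`, and `gh-event-poll` come from the verification list:

```shell
# Stand-in for: ludus --json info cron
out='{"entries":["*/5 * * * * run beads-watcher","*/2 * * * * run gh-event-poll"]}'

echo "$out" | jq -e '(.entries | length) >= 1
  and any(.entries[]; contains("beads-watcher"))
  and any(.entries[]; contains("gh-event-poll"))'
```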
### 2.2 Remove cron entries (dry-run)

```
ludus ops cron remove
```
Verify
- Shows "Would remove" and lists the entries
- Does NOT actually remove them
- Exit code 0
## 3. ludus ops images build

Build container images on the ludus host.

```
ludus --json ops images build
```
Verify

- `success` is `true` (or `false` with error detail if rsync/build fails)
- Exit code 0 on success, 1 on failure
Note: This takes several minutes. Showboat should allow extended timeout.
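Both outcome shapes can be checked with `jq`. The `error` field name and message below are assumptions about what the failure payload carries:

```shell
# Assumed success and failure shapes for: ludus --json ops images build
ok='{"success":true}'
fail='{"success":false,"error":"rsync: connection refused"}'

echo "$ok"   | jq -e '.success == true'
echo "$fail" | jq -e '.success == false and ((.error | length) > 0)'
```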
## 4. ludus ops intercom

Shared intercom clone lifecycle on the ludus host.

### 4.1 Bootstrap

```
ludus --json ops intercom bootstrap
```
Verify
successistrueintercom_dirmatches expected path
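A minimal `jq` sketch of the bootstrap check; the directory in the sample payload is hypothetical, so substitute the real intercom location:

```shell
# Stand-in for: ludus --json ops intercom bootstrap
out='{"success":true,"intercom_dir":"/home/ludus/intercom"}'

echo "$out" | jq -e '.success == true and (.intercom_dir | endswith("/intercom"))'
```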
### 4.2 Sync to GitHub

```
ludus --json ops intercom sync
```
Verify

- `success` is `true`
- Exit code 0
### 4.3 Reset (dry-run)

```
ludus ops intercom reset
```
Verify
- Shows "Would reset" with description of actions
- Does NOT actually reset
- Exit code 0
```
ludus --json ops intercom reset
```

Verify (JSON)

- `dry_run` is `true`
## 5. ludus ops sandbox

Agent registration and sandbox configuration.

### 5.1 Register agents (idempotent)

```
ludus ops sandbox register
```
Verify
- Shows registration status for each agent
- Already-registered agents are skipped (not duplicated)
- Exit code 0
### 5.2 Configure sandbox

```
ludus ops sandbox configure
```
Verify
- Shows sandbox defaults being set
- Shows per-agent overrides (memory, cpus, network)
- Shows BD_ACTOR configuration
- Gateway restart confirmation
- Exit code 0
## Edge Cases

### Quick deploy with unreachable host

```
LUDUS_HOST=nonexistent ludus --json ops deploy --quick 2>&1 || true
```
Verify

- At least one sync target shows `"error"`
- No unhandled exception
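The error-surfacing check can be scripted the same way; the sample payload below assumes failed sync targets report `"error"` in place of `"ok"`:

```shell
# Stand-in for the payload when LUDUS_HOST is unreachable
out='{"sync":{"agents":"error","config":"error"},"dry_run":false}'

# at least one sync target must surface "error" rather than raising
echo "$out" | jq -e '[.sync[]] | any(. == "error")'
```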
### Deploy with `--full` flag

```
ludus --json ops deploy --full
```
Verify

- First step is "images build"
- Remaining steps are sync + register + configure
- All steps show `ok: true` (if images build succeeds)
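The ordering constraint can be asserted with `jq` as well. Apart from "images build", the later step names in the sample are illustrative guesses:

```shell
# Stand-in for: ludus --json ops deploy --full
out='{"deploy":[{"step":"images build","ok":true},{"step":"sync","ok":true},{"step":"register","ok":true},{"step":"configure","ok":true}]}'

# images build must come first, and every step must pass
echo "$out" | jq -e '.deploy[0].step == "images build" and ([.deploy[].ok] | all)'
```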