
Skill: ca-leash

Source: ludus/skills/ca-leash/SKILL.md


---
name: ca-leash
description: Spawn focused Claude Code sub-agents for research, analysis, and document creation
metadata:
  openclaw:
    emoji: "\U0001F9BE"
    requires:
      env: []
---

# ca-leash — Your Personal Sub-Agent

ca-leash lets you spawn a headless Claude Code agent as a background process for focused work. Think of it as hiring a specialist who has the tools they need, for as long as they need. You keep your context lean; the sub-agent handles the heavy lifting.

When to use ca-leash: Deep research, multi-file analysis, document drafting, or any work that produces output (code, PRDs, reviews, reports). See the decision table below.

When NOT to use ca-leash: Intercom (beads) is for work that requires another agent's role expertise. ca-leash is for work within your role's scope — just too large to do inline.

## Permissions

ca-leash runs in bypassPermissions mode by default. All tools are auto-accepted — no env vars needed. Role boundaries are enforced by your SOUL.md, not by tool restrictions.

## The Core Pattern

Read Conversation → Write Prompt File → Start → Monitor → Collect → Act on Result

1. Read the conversation (`intercom read <id>`) to understand the task.
2. Write a prompt file to `/workspace/prompts/<conversation-id>.md` — include scope, deliverable, constraints, and output location (the most important step).
3. Start ca-leash (detaches by default, returns the session ID immediately):

   ```shell
   SESSION=$(ca-leash --json start "$(cat /workspace/prompts/<conversation-id>.md)" --cwd /workspace | jq -r .session_id)
   ```

4. Monitor — poll status until complete. Optionally check the log for progress:

   ```shell
   ca-leash --json status $SESSION   # → status: RUNNING / STOPPED / ERROR
   ca-leash log $SESSION -n 5        # → last 5 output messages
   ```

5. Collect results — read the final output when status is STOPPED:

   ```shell
   ca-leash log $SESSION -n 3        # last messages = summary
   ```

6. Act — post results to the conversation, close it, or create follow-up conversations.

## Monitoring the Session

IMPORTANT: Do NOT use sleep or loops inside a single exec call — OpenClaw auto-backgrounds any exec that runs longer than 10 seconds, which breaks the polling pattern. Instead, make separate short exec calls for each status check:

```shell
# Step 1: Start (returns immediately)
SESSION=$(ca-leash --json start "$(cat /workspace/prompts/<id>.md)" --cwd /workspace | jq -r .session_id)

# Step 2: Check status (separate exec calls, each <1s)
ca-leash --json status $SESSION | jq -r .status   # → RUNNING
ca-leash --json status $SESSION | jq -r .status   # → RUNNING
ca-leash --json status $SESSION | jq -r .status   # → STOPPED ← done!

# Step 3: Read result
ca-leash log $SESSION -n 5
```

Check status every few turns. Between checks, you can do other work or simply check again. Each ca-leash status call is fast (<1s) so exec returns immediately.
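Each status check can branch directly on the `--json` payload. A minimal sketch, where the sample payload stands in for real `ca-leash --json status $SESSION` output (field names assumed from the examples above):

```shell
# Sample payload standing in for real `ca-leash --json status $SESSION` output.
STATUS_JSON='{"session_id":"abc123","status":"RUNNING"}'

STATUS=$(printf '%s' "$STATUS_JSON" | jq -r .status)
case "$STATUS" in
  RUNNING) echo "still running; check again in a later turn" ;;
  STOPPED) echo "done; collect results with ca-leash log" ;;
  *)       echo "unexpected status: $STATUS" ;;
esac
```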

You can also use ca-leash interrupt $SESSION or ca-leash stop $SESSION to intervene at any point.

## Why Prompt Files

Writing the prompt to a file before starting the session has three key advantages:

- Inspectable: you can re-read the prompt if something goes wrong
- Resumable: if ca-leash times out, re-run with the same prompt plus checkpoint notes
- Auditable: the prompt file persists in the workspace for debugging
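A minimal sketch of writing such a prompt file with a heredoc; the relative `prompts/` directory and the `ic-example` id are illustrative stand-ins for `/workspace/prompts/<conversation-id>.md`:

```shell
# Write the prompt file before starting the session.
# Directory and id are illustrative stand-ins.
mkdir -p prompts
cat > prompts/ic-example.md <<'EOF'
Task: Summarize error-handling patterns in repo X.
Deliverable: bullet list, under 200 words, written to stdout.
Constraints: read-only analysis; do NOT modify any files.
EOF
```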

## Quick Reference

| Command | Description |
|---|---|
| `ca-leash start "<prompt>" --cwd <path>` | Start session (detaches by default) |
| `ca-leash start -f "<prompt>"` | Start in foreground (blocks, streams output) |
| `ca-leash --json status <id>` | Check status: RUNNING, STOPPED, ERROR |
| `ca-leash log <id> -n 10` | Last 10 output messages |
| `ca-leash log <id> --offset N` | Read from message N (for incremental reads) |
| `ca-leash attach <id>` | Stream live output (blocks, Ctrl+C to detach) |
| `ca-leash send <id> "msg"` | Send a follow-up message to a running session |
| `ca-leash interrupt <id>` | Interrupt the current query (session stays alive) |
| `ca-leash stop <id>` | Gracefully stop the session |

All commands support --json for machine-readable output.

## Output Patterns

ca-leash start detaches by default and returns a session ID. Use ca-leash log <id> to read results after completion. Use ca-leash start -f if you want blocking foreground mode (not recommended inside OpenClaw — exec auto-backgrounds long-running commands).

### Pattern A — Short Results (read from log)

```shell
SESSION=$(ca-leash --json start "Analyze X. Summarize in under 200 words." --cwd /workspace | jq -r .session_id)
# ... poll until STOPPED ...
SUMMARY=$(ca-leash log $SESSION -n 1)
intercom post <bead-id> "FINDINGS: $SUMMARY"
```

### Pattern B — Sub-Agent Writes File (documents, reports)

For output > 50 lines, instruct the sub-agent to write a file. Reference the path in the bead.

```shell
SESSION=$(ca-leash --json start \
  "Analyze X and write the result to ./output.md. Do NOT modify any other files." \
  --cwd /workspace | jq -r .session_id)
# ... poll until STOPPED ...
intercom post <bead-id> "RESULT: written to /workspace/output.md"
```

## When to Use ca-leash vs. Direct Work

| Use ca-leash when... | Work directly when... |
|---|---|
| Need to read/analyze many files | Simple status update or lookup |
| Producing a document > 50 lines | Answering a quick question |
| Deep research across repos | Relaying a single finding |
| Task would consume many turns in your context | Task takes 1-2 turns |
| Any work that writes code, docs, or reports | Reading bead metadata |

## Role-Specific Routing Guide

### Forge — Implement Feature End-to-End

Full implementation cycle: clone → worktree → implement → test → commit → push → PR.

```shell
ca-leash start \
  "Task: <title from bead>
Context: <paste bead body>
Atlas design decision: <operator/interface choice>

Steps:
1. Clone if not present: git clone https://github.com/b4arena/<repo>.git /workspace/repos/<repo>
2. cd /workspace/repos/<repo> && git checkout main && git pull --ff-only && git worktree prune
3. Create worktree: git worktree add /workspace/repos/<repo>-wt/<bead-id> -b feat/<bead-id>-<slug> main
4. cd /workspace/repos/<repo>-wt/<bead-id>
5. Implement the feature. Run tests: <test command>. Fix until green.
6. git add -A && git commit -m 'feat(<scope>): <description>'
7. git push -u origin feat/<bead-id>-<slug>
8. gh pr create --repo b4arena/<repo> --title '<title>' --body '<description>'
9. Write summary to stdout: PR URL, test results, what changed.
STOP HERE — do NOT run gh pr merge. Atlas handles the merge after review." \
  --cwd /workspace
```

After ca-leash completes:

```shell
# Capture PR URL from stdout, then:
ATLAS_BEAD=$(intercom new @atlas "Review PR #<N> in b4arena/<repo>" \
  --thread <parent-bead-id> \
  --body "PR: <url>\nWhat changed: <summary>\nAtlas bead: this bead ID is the required review reference.")
intercom done <bead-id> "PR <url> created. Review requested from Atlas (bead: $ATLAS_BEAD)."
```

### Atlas — Review a PR (no local clone needed)

The gh CLI gives full PR access without a local checkout. Use this pattern instead of waiting for a human diff.

```shell
ca-leash start \
  "Review PR #<N> in b4arena/<repo>.

Gather context:
gh pr view <N> --repo b4arena/<repo> --json title,body,files,reviews,statusCheckRollup
gh pr diff <N> --repo b4arena/<repo>

Review criteria:
- Correctness: does the implementation match the bead spec?
- Tests: are edge cases covered? Do tests pass?
- Interface: does the change follow existing conventions?
- Security: any dangerous patterns (shell injection, unchecked input)?

Post your review:
gh pr review <N> --repo b4arena/<repo> --approve --body '<review notes>'
OR
gh pr review <N> --repo b4arena/<repo> --request-changes --body '<what to fix>'

Write to stdout: review verdict (APPROVED/CHANGES_REQUESTED) and key findings." \
  --cwd /workspace/artifacts/atlas
```

After ca-leash completes:

```shell
intercom done <bead-id> "PR #<N> reviewed: <APPROVED/CHANGES_REQUESTED>. <one-line summary>."
```

### Priya — Research, Write Artifacts, and Create PR

Full cycle: research → write docs → commit to repo → create PR for review.

```shell
ca-leash start \
  "Task: <title from bead>
Context from bead: <paste bead body>
Target repo: b4arena/<repo>

Steps:
1. Clone if not present: git clone https://github.com/b4arena/<repo>.git /workspace/repos/<repo>
2. cd /workspace/repos/<repo> && git checkout main && git pull --ff-only && git worktree prune
3. Create worktree: git worktree add /workspace/repos/<repo>-wt/<bead-id> -b docs/<bead-id>-<slug> main
4. cd /workspace/repos/<repo>-wt/<bead-id>
5. Read existing docs in the repo for context (ls docs/, read README.md, etc.)
6. Research the feature area (web search, read related repos, competitive analysis)
7. Write your artifacts (PRD, roadmap, user stories, etc.) to the docs/ directory
8. git add docs/ && git commit -m 'docs(<scope>): <description> (<bead-id>)'
9. git push -u origin docs/<bead-id>-<slug>
10. gh pr create --repo b4arena/<repo> --title 'docs: <title> (<bead-id>)' --body '<summary of artifacts>'
11. Write to stdout: PR URL and a summary of what was created.
STOP HERE — do NOT merge. Atlas reviews all PRs." \
  --cwd /workspace
```

After ca-leash completes:

```shell
# Capture PR URL from stdout, then request Atlas review:
ATLAS_BEAD=$(intercom new @atlas "Review PR #<N> in b4arena/<repo>" \
  --thread <parent-bead-id> \
  --body "PR: <url>\nWhat changed: <summary>\nAtlas bead: this bead ID is the required review reference.")
intercom done <bead-id> "Artifacts in PR <url>. Review requested from Atlas (bead: $ATLAS_BEAD)."
```

### Indago — Deep Research

```shell
ca-leash start \
  "Research task: <title from bead>
Question: <specific question to answer>
Scope: <what to search — web, specific repos, internal docs>

Steps:
1. Gather information (web search, read repos, read internal docs)
2. Synthesize findings
3. Write research report to /workspace/repos/research/<bead-id>-<slug>.md
   Format: Summary, Sources, Key Findings, Open Questions
4. git -C /workspace/repos/research add <file> && git -C /workspace/repos/research commit -m 'research(<bead-id>): <slug>'
5. git -C /workspace/repos/research push
6. Write to stdout: file path and 3-sentence summary." \
  --cwd /workspace/repos/research
```

### General — Clone + Read + Analyze

For any agent that needs to read repo content (not produce code):

```shell
ca-leash start \
  "Analyze: <what to look for>
Repo: https://github.com/b4arena/<repo>.git

Steps:
1. If not present: git clone https://github.com/b4arena/<repo>.git /workspace/repos/<repo>
2. Read: <specific files or patterns>
3. Analyze: <what to look for>
4. Write findings to stdout (under 300 words).
Do NOT modify any files." \
  --cwd /workspace
```

## Passing Context from Brain to ca-leash

The prompt is the only communication channel. Include everything the sub-agent needs:

  1. Task title — from the bead subject
  2. Bead body — paste verbatim (spec, constraints, context)
  3. Prior decisions — atlas bead ID + decision text if relevant
  4. Output location — exact file path OR "write to stdout"
  5. Constraints — what NOT to do (don't push, don't modify X, don't open issues)
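Assembling those five items can be done mechanically. A sketch with hypothetical bead fields (in practice the values come from `intercom read <id>`; the `prompts/` path and `ic-demo` id are illustrative):

```shell
# Hypothetical bead fields; in practice they come from `intercom read <id>`.
BEAD_ID="ic-demo"
TITLE="Add input validation to calc.sh"
BODY="Reject non-numeric arguments with exit code 2."

# Compose the prompt file from the fields (unquoted EOF so variables expand).
mkdir -p prompts
cat > "prompts/$BEAD_ID.md" <<EOF
Task: $TITLE
Context from bead: $BODY
Output: summary to stdout.
Constraints: do NOT merge PRs; do NOT close beads.
EOF
```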

Bad prompt (too vague):

```
Implement the factorial feature.
```

Good prompt (complete context):

```
Task: Add factorial operation to test-calculator.
Spec: calc.sh must support 'fact N' returning N! (0! = 1, negative = error).
Atlas decision (ic-okp.1): use 'fact' as the operator name (avoids shell conflicts).
Tests must pass: bash test.sh
Output: PR URL to stdout. Do not close any beads.
```

## Assessing ca-leash Output

When ca-leash finishes (status=STOPPED), read the log and evaluate:

| Signal | What It Means | Action |
|---|---|---|
| PR URL present | Code work completed | Post to bead, create atlas bead |
| "Tests pass: N/N" | Implementation correct | Proceed with PR flow |
| "Tests fail: X/N" | Partial — needs retry | Re-run with failure details in prompt |
| "Error: permission denied" | Tool or auth issue | Escalate via `intercom new @main` |
| Truncated / no summary | Session incomplete | Re-run with tighter scope |
| "Nothing to do" | Wrong scope | Check prompt — did it find the right repo/files? |

When output is ambiguous: Post it to the bead as-is with "ca-leash output (unverified):" prefix. Don't guess at success.
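The signal table above can be approximated as a small classifier over the log tail. A sketch, where the sample `LOG_TAIL` and the verdict labels are illustrative (in practice the tail comes from `ca-leash log $SESSION -n 5`):

```shell
# Sample log tail; in practice: LOG_TAIL=$(ca-leash log $SESSION -n 5)
LOG_TAIL="Tests pass: 12/12. PR: https://github.com/b4arena/example/pull/7"

if printf '%s' "$LOG_TAIL" | grep -Eq 'https://github\.com/.+/pull/[0-9]+'; then
  VERDICT="pr-created"    # post to bead, create atlas review bead
elif printf '%s' "$LOG_TAIL" | grep -qi 'permission denied'; then
  VERDICT="escalate"      # tool/auth issue: intercom new @main
elif printf '%s' "$LOG_TAIL" | grep -qi 'tests fail'; then
  VERDICT="retry"         # re-run with failure details in the prompt
else
  VERDICT="unverified"    # post output verbatim; do not guess
fi
echo "$VERDICT"
```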

## Error Handling

### Incomplete work (sub-agent stopped or timed out)

```shell
# Re-run with tighter scope, resuming from checkpoint
ca-leash start \
  "<original prompt>

NOTE: Previous run did not complete. Focus only on: <the remaining step>.
Skip steps already done: <what was completed>." \
  --cwd <same-cwd>
```

### Tool failure (auth, permission, network)

```shell
# Escalate immediately — don't retry indefinitely
intercom new @main "BLOCKED: ca-leash tool failure in <bead-id>" \
  --body "Tool: <which tool failed>\nError: <exact error message>\nContext: <what was being done>\nRecommendation: check <what>."
```

### Resuming from checkpoint

Check what was done (git log, file existence), then re-run ca-leash with a prompt that starts from the checkpoint:

```shell
ca-leash start \
  "Continue from checkpoint:
Already done: [clone, worktree created at /workspace/repos/test-calculator-wt/ic-abc]
Remaining: implement feature, run tests, commit, push, open PR.
<original spec>" \
  --cwd /workspace
```
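The "check what was done" step can be sketched as a tiny helper; the worktree path is illustrative and should match the bead's actual worktree:

```shell
# Decide whether a prior run left a worktree to resume from.
checkpoint_state() {
  if [ -d "$1" ]; then
    echo "resume"   # worktree exists; skip clone/worktree steps
  else
    echo "fresh"    # nothing on disk; start from step 1
  fi
}

# Illustrative path; match it to the bead's worktree.
STATE=$(checkpoint_state /workspace/repos/test-calculator-wt/ic-abc)
echo "checkpoint: $STATE"
```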

## Anti-Patterns

- No nested ca-leash — sub-agents must NOT spawn another ca-leash session
- No role boundary violations — stay within your role (e.g., Priya writes docs, not code; Forge implements, not designs)
- No trivial lookups — for 1-2 file reads, do it yourself; the ca-leash overhead is not worth it
- No orphaned sessions — always `ca-leash stop <id>` when done or on error
- No guessing at output — if the summary is unclear, post it verbatim to the bead

## Memory

After a successful ca-leash run, capture what worked by posting to the bead:

```shell
intercom post <bead-id> "LEARNING: ca-leash completed full PR cycle for test-calculator in single session"
```

This builds institutional knowledge so that you and other agents can refine prompts over time.