DeepAgent Runtime (LangGraph)

DeepAgentExecutor runs in-process agents via LangGraph’s createReactAgent. This is the default runtime for all agents defined in workspace/agents/*.yaml.

  • Type string: `deep-agent`
  • Package: `@langchain/langgraph` + `@langchain/openai`
  • Registered by: `AgentRuntimePlugin` — one executor per agent YAML file at `install()` time

  1. AgentRuntimePlugin scans workspace/agents/ on startup and creates one DeepAgentExecutor per YAML file
  2. Each executor is registered in ExecutorRegistry for the skills listed in the agent’s YAML
  3. When SkillDispatcherPlugin routes a SkillRequest to this executor, it:
    • Creates a LangGraph ReAct agent with ChatOpenAI pointed at the LiteLLM gateway
    • Injects the agent’s systemPrompt as a SystemMessage
    • Provides LangChain tools matching the tools whitelist from the agent YAML
    • Runs the agent loop with recursionLimit derived from maxTurns
    • Returns the final AI message as SkillResult.text
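The `recursionLimit` derivation in the last steps can be sketched as follows. In a LangGraph ReAct loop, each "turn" typically costs two graph steps (one model call, one tool execution), plus one step for the final answer, so a common mapping is `2 * maxTurns + 1`. The helper name and exact formula here are illustrative assumptions, not the verified workstacean implementation:

```typescript
// Sketch: deriving LangGraph's recursionLimit from an agent's maxTurns.
// Assumes each turn is one model call plus one tool execution (two graph
// steps), with one extra step for the final AI message. The function name
// and formula are illustrative, not the exact DeepAgentExecutor code.
function recursionLimitFromMaxTurns(maxTurns: number): number {
  return 2 * maxTurns + 1;
}

// ava.yaml sets maxTurns: 10, which under this mapping allows 21 graph steps.
console.log(recursionLimitFromMaxTurns(10)); // 21
```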
workspace/agents/ava.yaml

```yaml
name: ava
role: general
model: claude-sonnet-4-6
systemPrompt: |
  You are Ava, the chief-of-staff protoAgent...
tools:
  - chat_with_agent
  - delegate_task
  - get_world_state
  - manage_board
  - create_github_issue
  - manage_cron
maxTurns: 10
discordBotTokenEnvKey: DISCORD_BOT_TOKEN_AVA
skills:
  - name: chat
    description: Conversational hub with system visibility and delegation.
    keywords: []
```

See Agent Skills Reference for the full YAML schema and available tools.

Tools are defined as LangChain tools with zod schemas in DeepAgentExecutor. Each wraps an HTTP call to workstacean’s own API:

| Tool | API endpoint | Purpose |
| --- | --- | --- |
| `chat_with_agent` | `POST /api/a2a/chat` | Multi-turn A2A conversation |
| `delegate_task` | `POST /api/a2a/delegate` | Fire-and-forget dispatch |
| `get_world_state` | `GET /api/world-state` | System health snapshot |
| `manage_board` | `POST /api/board/features/*` | Board feature CRUD |
| `create_github_issue` | `POST /api/github/issues` | File GitHub issues |
| `manage_cron` | `POST /api/ceremonies/*` | Ceremony CRUD |
| `get_projects` | `GET /api/projects` | List projects |
| `get_ci_health` | `GET /api/ci-health` | CI pass rates |
| `get_pr_pipeline` | `GET /api/pr-pipeline` | Open PRs and CI status |
| `get_branch_drift` | `GET /api/branch-drift` | Dev vs main divergence |
| `get_incidents` | `GET /api/incidents` | Open incidents |
| `report_incident` | `POST /api/incidents` | File incident |
| `publish_event` | `POST /publish` | Raw bus event |
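The wrapping pattern behind this table can be sketched without the LangChain layer: each tool is a thin HTTP client around one workstacean endpoint. `makeHttpTool`, its signature, and the injectable fetch are illustrative; the real executor wraps these calls in LangChain tools with zod schemas:

```typescript
// Sketch of the tool-as-HTTP-wrapper pattern. makeHttpTool is a
// hypothetical helper, not the actual DeepAgentExecutor API; the fetch
// function is injected so the wrapper is easy to test with a stub.
type HttpFetch = (
  url: string,
  init?: { method?: string; body?: string },
) => Promise<{ json(): Promise<unknown> }>;

function makeHttpTool(
  name: string,
  method: "GET" | "POST",
  path: string,
  doFetch: HttpFetch,
  baseUrl = "http://localhost:3000", // assumed default, not from the docs
) {
  return {
    name,
    // POSTs serialize the input as a JSON body; GETs ignore the input.
    async invoke(input?: Record<string, unknown>): Promise<unknown> {
      const res = await doFetch(baseUrl + path, {
        method,
        body: method === "POST" ? JSON.stringify(input ?? {}) : undefined,
      });
      return res.json();
    },
  };
}
```

For example, `makeHttpTool("get_world_state", "GET", "/api/world-state", fetch)` would yield a tool whose `invoke()` returns the system health snapshot.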

Agents only get the tools listed in their tools: array — unlisted tools are not available.
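The whitelist step can be sketched as a simple filter over the full tool set. Whether unknown names are skipped or rejected is an assumption here (the real executor may throw on an unrecognized tool name):

```typescript
// Sketch: hand the LangGraph agent only the tools named in the YAML's
// `tools:` array. selectTools is illustrative; skipping unknown names
// silently is an assumption about the real behavior.
function selectTools<T>(allTools: Record<string, T>, whitelist: string[]): T[] {
  return whitelist
    .filter((name) => name in allTools) // drop names with no registered tool
    .map((name) => allTools[name]);
}
```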

All LLM calls route through LiteLLM at LLM_GATEWAY_URL (or OPENAI_BASE_URL). The executor creates a ChatOpenAI instance with the gateway as baseURL and OPENAI_API_KEY for auth. Model aliases (e.g. claude-sonnet-4-6) are resolved by the gateway.
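The base-URL resolution described above can be sketched as a small helper. The precedence (`LLM_GATEWAY_URL` first, then `OPENAI_BASE_URL`) is an assumption based on the wording "(or OPENAI_BASE_URL)":

```typescript
// Sketch: pick the gateway base URL for the ChatOpenAI client.
// Precedence of LLM_GATEWAY_URL over OPENAI_BASE_URL is assumed.
function resolveGatewayUrl(
  env: Record<string, string | undefined>,
): string | undefined {
  return env.LLM_GATEWAY_URL ?? env.OPENAI_BASE_URL;
}
```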

correlationId from the bus message is propagated through the LangGraph invocation. When LANGFUSE_* env vars are set, the LangChain callback handler traces every LLM call and tool invocation to Langfuse.
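One way to thread the correlationId through is via the run config passed to the graph invocation, where observability callbacks can read it. The exact option shape below is an assumption modeled on LangChain's `RunnableConfig` (`metadata` plus `callbacks`), not the verified workstacean code:

```typescript
// Sketch: build the per-invocation config that carries the bus message's
// correlationId as run metadata, alongside any tracing callbacks (e.g. a
// Langfuse handler when LANGFUSE_* is configured). Shape is an assumption.
function buildInvokeConfig(correlationId: string, callbacks: unknown[] = []) {
  return {
    metadata: { correlationId },
    callbacks,
  };
}
```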

DeepAgentExecutor replaces the previous ProtoSdkExecutor which spawned @protolabsai/sdk CLI subprocesses. Key differences:

  • No subprocess — runs in-process via LangGraph, faster startup
  • No verification prompts — the SDK injected coding-agent verification steps inappropriate for conversational agents
  • Standard LangChain tools — zod schemas, same as the rest of the LangChain ecosystem
  • ChatOpenAI — native OpenAI-compatible client, works with any gateway

Use DeepAgentExecutor for any agent that should run inside the workstacean process with direct access to bus tools. This is the right choice for most agents.

Use A2A instead when the agent lives in a separate service (Quinn, protoMaker team, protoContent) or needs its own resource isolation.