Architecture

ArchAgent is an orchestration platform for multi-agent AI systems. It sits above agent runtimes (OpenClaw, Mastra, Custom) and provides the infrastructure they lack: multi-agent coordination, channel adapters, an integration proxy, deployment, and governance.

System Overview

Monorepo Layout

The codebase is a pnpm workspace monorepo with six packages:

| Package | Purpose | Deploys to |
| --- | --- | --- |
| `shared/` | Types, Firestore paths, topology builders | Bundled by consumers |
| `functions/` | Express API on Cloud Functions v2 | Firebase Cloud Functions |
| `web/` | React SPA (Vite + Tailwind) | Firebase Hosting (`web` target) |
| `bridge/` | Agent runtime container -- runs agent loops, channel adapters | Cloud Run via Cloud Build |
| `cli/` | `@archagent/cli` npm package -- terminal management tool | npm registry |
| `admin/` | Internal admin dashboard (React SPA) | Firebase Hosting (`admin` target) |

Dependency Graph

shared <-- functions
shared <-- bridge
shared <-- web
shared <-- cli (bundled by tsup)
shared <-- admin

shared has zero runtime dependencies and must be built first (pnpm build:shared).

Deployment Model

Firebase (Platform)

The API runs as a single Express app exported as a Cloud Functions v2 HTTP function. It handles all CRUD operations for instances, agents, channels, integrations, billing, and copilot chat.

Firebase Hosting serves three targets:

  • web -- consumer dashboard
  • admin -- internal admin dashboard
  • docs -- documentation site (VitePress)

Cloud Run (Agent Runtime)

Each workspace (called "instance" in code) gets its own Cloud Run service. The bridge container:

  1. Starts a health server on port 8080
  2. Reads instance config from Firestore and loads all agents
  3. Loads user API keys from encrypted secrets
  4. Starts the OpenClaw gateway singleton (if any OpenClaw agents exist)
  5. Starts agent loops per runtime type, each watching its agent doc via onSnapshot
  6. Starts channel adapters per agent via idempotent convergence (ensureChannelAdapter)
  7. Watches for new/modified agents -- hot-starts loops without container restart
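Step 7 above can be sketched as a pure convergence function over document changes; the names (`DocChange`, `runningLoops`) are illustrative, not the real bridge API, and in practice the changes come from a Firestore `onSnapshot` listener:

```typescript
// Shape of a change event, mirroring Firestore's docChanges() categories.
type DocChange = { type: "added" | "modified" | "removed"; id: string };

// agentId -> loop handle (stubbed as a plain object here)
const runningLoops = new Map<string, { startedAt: number }>();

function converge(changes: DocChange[]): void {
  for (const change of changes) {
    if (change.type === "removed") {
      runningLoops.delete(change.id); // stop and forget the loop
    } else if (!runningLoops.has(change.id)) {
      // Hot-start: a new (or newly seen) agent gets a loop without a
      // container restart. Re-running for a known agent is a no-op,
      // which is what makes the convergence idempotent.
      runningLoops.set(change.id, { startedAt: Date.now() });
    }
  }
}

converge([{ type: "added", id: "agent-a" }]);
converge([{ type: "added", id: "agent-a" }]); // idempotent: still one loop
```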

The bridge Docker image is built via GCP Cloud Build (cloudbuild.yaml) using Kaniko.

Channel Adapter Pattern

Channel adapters are runtime-agnostic. They write incoming messages to agents/{id}/messages in Firestore, and the agent loop (any runtime) picks them up. Responses are written back to the same message doc as responseContent.
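The adapter/agent-loop contract above can be sketched as a shared document shape. Only `responseContent` is named in this doc; the other fields are assumptions:

```typescript
// Document written to agents/{id}/messages (field names are illustrative).
interface AgentMessage {
  channel: "slack" | "discord" | "telegram" | "whatsapp";
  content: string;
  createdAt: number;
  responseContent?: string; // written back by the agent loop
}

// Adapter side: turn an incoming platform message into a message doc.
function toMessageDoc(
  channel: AgentMessage["channel"],
  content: string
): AgentMessage {
  return { channel, content, createdAt: Date.now() };
}

// Agent-loop side: any runtime replies by merging responseContent into
// the same doc; the adapter observes the doc and relays the reply.
function withResponse(msg: AgentMessage, reply: string): AgentMessage {
  return { ...msg, responseContent: reply };
}
```

Because both sides agree only on this document shape, adapters never need to know which runtime is answering.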

Four adapters are supported:

| Channel | Protocol | Notes |
| --- | --- | --- |
| Slack | Socket Mode | Requires both `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` |
| Discord | Gateway | Standard bot token |
| Telegram | Long polling | Standard bot token |
| WhatsApp | Webhook | Runs on separate port (8081) |

Channel health is event-driven: one Firestore write on connect, one on disconnect/error. No heartbeats.
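A minimal sketch of that health model, with assumed field names (the doc only specifies one write per connect and one per disconnect/error):

```typescript
// The only two events that produce a Firestore write -- no heartbeats.
type HealthEvent =
  | { kind: "connect" }
  | { kind: "disconnect"; error?: string };

interface HealthDoc {
  connected: boolean;
  lastError?: string;
  updatedAt: number;
}

// Maps a transition to the single document write it causes.
function healthWrite(event: HealthEvent): HealthDoc {
  return event.kind === "connect"
    ? { connected: true, updatedAt: Date.now() }
    : { connected: false, lastError: event.error, updatedAt: Date.now() };
}
```

Writing only on transitions keeps Firestore costs flat regardless of how long a channel stays connected.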

Integration Proxy

The integration proxy sits between agents and external APIs (currently GitHub). Agents see a GITHUB_API_BASE environment variable and make standard HTTP requests. The proxy holds real credentials, enforces per-agent access policies, and logs all requests.

Key properties:

  • Transparent: agents do not know they are talking to a proxy
  • Zero credential exposure: real tokens never reach the agent container
  • Policy-enforced: per-agent rules (read-only, docs-only, full-access, or custom)
  • Auditable: every request logged to agents/{id}/integrationAudit
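The policy check at the heart of the proxy can be sketched as below. The policy names mirror this doc; the path rules are illustrative assumptions, not the real enforcement logic:

```typescript
type Policy = "read-only" | "docs-only" | "full-access";

// Decide whether a proxied request is allowed under an agent's policy.
function isAllowed(policy: Policy, method: string, path: string): boolean {
  switch (policy) {
    case "full-access":
      return true;
    case "read-only":
      return method === "GET";
    case "docs-only":
      // Hypothetical rule: read-only access limited to docs content.
      return method === "GET" && path.includes("/contents/docs");
  }
}

// In the real proxy, every decision (allow or deny) would also be
// appended to agents/{id}/integrationAudit before forwarding upstream
// with the real credentials attached server-side.
```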

Agent Runtimes

Three runtimes are supported, each with distinct strengths:

| Runtime | Communication | Key Features |
| --- | --- | --- |
| OpenClaw | HTTP to gateway child process | SOUL.md personality, ClawHub skills, model switching, thinking levels |
| Mastra | HTTP to scaffolded server | MCP server integrations, thread memory, generated TypeScript agents |
| Custom | Direct Anthropic SDK | Shell access, structured tasks, full control |

All runtimes share the same message flow through Firestore and the same channel adapter infrastructure.

Multi-Agent Topology

Agents in the same workspace collaborate via shared context channels, not direct messaging. The topology defines roles, permissions, and relationships:

| Topology | Pattern |
| --- | --- |
| `single` | One agent, no coordination |
| `supervisor` | Coordinator delegates via tasks, reviews via context |
| `sequential` | Pipeline stages with handoffs |
| `group_chat` | Peers coordinate via context, avoid duplicate work |
| `monitor` | Observer reads status channel, creates corrective tasks |

Context channels (instances/{id}/context) carry typed entries: artifact, status, decision, handoff. Each agent's topology role defines which channels it can read and write.
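The four entry types can be modeled as a discriminated union. The `type` values come from this doc; the payload fields are assumptions for illustration:

```typescript
// Entries written to instances/{id}/context (payload fields hypothetical).
type ContextEntry =
  | { type: "artifact"; agentId: string; name: string; uri: string }
  | { type: "status"; agentId: string; state: string }
  | { type: "decision"; agentId: string; summary: string }
  | { type: "handoff"; agentId: string; toAgentId: string; taskId: string };

// Narrowing helper: e.g. a sequential pipeline stage only reacts to
// handoff entries addressed to it.
function isHandoff(
  entry: ContextEntry
): entry is Extract<ContextEntry, { type: "handoff" }> {
  return entry.type === "handoff";
}
```

A discriminated union lets each topology role's read/write permissions be checked per entry type at compile time as well as at runtime.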

CI/CD

| Trigger | Action |
| --- | --- |
| Push to `main` | Deploy to dev (selective per package via `dorny/paths-filter`) |
| Push tag `v*` | Deploy to production + publish CLI to npm |
| Cloud Build | Bridge Docker image built and pushed to Cloud Run |

Environments:

  • Dev: Firebase project amlcloud-monitor-dev
  • Prod: Firebase project agent-coder-ai