Build AI-powered CLI tools from a single JSON definition.

cargo-ai™

Declarative AI agents. Built locally. Shareable in minutes.

  • Use text, images, or files as inputs.
  • Require the AI to return a specific output structure.
  • Use that result to decide which actions run next.
  • Send an email, run a command, call a child agent, and more.

MIT licensed and fully inspectable.

Single JSON definition

{
  "inputs": [...],
  "agent_schema": {...},
  "actions": [...]
}

Hatch locally

cargo ai hatch agent_x

Run anywhere

macOS, Linux, Windows

Why cargo-ai

Open Source and Fully Auditable

Generate code your team can read, review, and approve for production use.

Handles Real Inputs

Work with text, images, URLs, and common files instead of limiting agents to a single prompt string.

Real Actions, Not Just Prompts

Run local commands, call child agents, pass command-line arguments, and send email follow-ups.

Supports Advanced Logic

Add conditions and follow-up behavior without hand-building a custom app or orchestration layer.
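The conditions use a JSON Logic-style format, as the full market_alert example further down shows. As a rough illustration of how such a gate evaluates, here is a minimal hand-rolled evaluator; this is a sketch for intuition, not Cargo AI's actual engine, and it supports only the handful of operators the example uses.

```python
# Minimal evaluator for a JSON Logic-style condition, for illustration only
# (Cargo AI's real engine may differ). Supports just "and", "==", ">", "var".

def evaluate(rule, data):
    if not isinstance(rule, dict):
        return rule                        # literal value
    op, args = next(iter(rule.items()))
    if op == "var":
        return data.get(args)              # look up a field from the agent's output
    vals = [evaluate(a, data) for a in args]
    if op == "and":
        return all(vals)
    if op == "==":
        return vals[0] == vals[1]
    if op == ">":
        return vals[0] > vals[1]
    raise ValueError(f"unsupported op: {op}")

# A gate like the one in a market-alert style action:
logic = {
    "and": [
        {"==": [{"var": "category"}, "market_update"]},
        {">": [{"var": "confidence_level"}, 3]},
    ]
}

print(evaluate(logic, {"category": "market_update", "confidence_level": 5}))  # True
print(evaluate(logic, {"category": "other", "confidence_level": 5}))          # False
```

Because the condition is plain data rather than code, you can review exactly when an action fires before you ever run the agent.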

No Extra Token Plumbing Required

Use your existing Codex workflow when it fits, or bring your own model access when you want direct provider control.

Built to Grow With You

Start with one clear definition, then add child agents, richer actions, and shared workflows as your needs grow.

Choose the model path that fits your workflow: GPT-5, or local Ollama models such as Mistral, Qwen2.5, and Qwen2.5-VL.

Share, hatch, repeat

Cargo-AI is not just about local execution. Create a free account to receive email alerts and publish your definitions in minutes. Keep them private, or make them public when you want other people to hatch them locally on their own machines.

cargo ai account register you@example.com
cargo ai account agents hatch weather_test --owner-handle alice

Built for AI-assisted iteration

Keep the agent readable, diffable, and easy to improve with tools like Codex without losing trust in what it does.

From Definition to Executables in Minutes with Codex Support

Below is a real example of how you can work with Codex to build a Cargo AI agent. Once it is built, you can inspect the JSON file directly and confirm that the agent is doing exactly what you want.

Build your agent

cargo ai add guidance --style codex

me > "Hey, Codex, can you build me a Cargo AI agent?"

me > "It should read the attached file and return three things: summary, category, and the confidence_level associated with that category."

me > "Make confidence_level one to five."

me > "For the categories, do market_update, financial_result, regulatory_notice, or other."

me > "And if the category is market_update with a confidence greater than 3, send me an email."

codex > "Market alert agent built and checked."

Inspect agent file

vim market_alert.json
{
  "version": "2026-03-11.r1",
  "inputs": [
    {
      "type": "text",
      "text": "Read the attached PDF."
    },
    {
      "type": "file",
      "path": "./document.pdf"
    }
  ],
  "agent_schema": {
    "type": "object",
    "properties": {
      "summary": {
        "type": "string",
        "description": "1-2 sentence plain-English summary of the document's main message."
      },
      "category": {
        "type": "string",
        "description": "Single best-fit category based on the document's primary purpose.",
        "enum": ["market_update", "financial_result", "regulatory_notice", "other"]
      },
      "confidence_level": {
        "type": "integer",
        "description": "Confidence in the category choice from 1 to 5, where 5 means the category is very clear.",
        "minimum": 1,
        "maximum": 5
      }
    }
  },
  "actions": [
    {
      "name": "email_market_update",
      "logic": {
        "and": [
          { "==": [{ "var": "category" }, "market_update"] },
          { ">": [{ "var": "confidence_level" }, 3] }
        ]
      },
      "run": [
        {
          "kind": "email_me",
          "subject": ["Document category: ", { "var": "category" }],
          "text": [
            "Summary: ",
            { "var": "summary" },
            "\nConfidence level: ",
            { "var": "confidence_level" }
          ]
        }
      ]
    }
  ]
}
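The agent_schema above acts as an output contract: the model's response must be an object whose fields satisfy the declared types, enum, and range. As a rough sketch of the kind of check that contract implies, here is a hand-rolled validator for this specific schema; the `check_output` helper is hypothetical, and Cargo AI's actual enforcement may use full JSON Schema validation instead.

```python
# Illustrative output-contract check for the market_alert agent_schema.
# Hand-rolled sketch; Cargo AI's real validation may differ.

SCHEMA = {
    "summary": {"type": str},
    "category": {"type": str,
                 "enum": ["market_update", "financial_result",
                          "regulatory_notice", "other"]},
    "confidence_level": {"type": int, "minimum": 1, "maximum": 5},
}

def check_output(output):
    """Return a list of contract violations (empty means the output passes)."""
    errors = []
    for field, rules in SCHEMA.items():
        if field not in output:
            errors.append(f"missing field: {field}")
            continue
        value = output[field]
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: wrong type")
            continue
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{field}: not in enum")
        if "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{field}: below minimum")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"{field}: above maximum")
    return errors

good = {"summary": "Indexes mostly lower.", "category": "market_update",
        "confidence_level": 5}
bad = {"summary": "A note.", "category": "stocks", "confidence_level": 9}

print(check_output(good))  # []
print(check_output(bad))   # ['category: not in enum', 'confidence_level: above maximum']
```

A structured, checkable output is what makes the action logic safe to automate: the gate only ever sees fields that passed the contract.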

Hatch your agent

cargo ai hatch market_alert

Run your agent

./market_alert
Using default profile 'my_open_ai'
✅ Email sent to bob@gmail.com.

If the file matches your action logic, Cargo AI will email you the results immediately.

SUBJECT: Document category: market_update

BODY:
Summary: A daily market update reporting that major U.S. stock indexes were mostly lower on March 11, 2026.
Confidence level: 5

Run it again and again with different inputs, or switch to a private local model when you want more control over sensitive work.

./market_alert --input-text "Summarize and classify this report." --input-file "./another-report.docx"
./market_alert --input-text "Summarize and classify this chart." --input-image "./oil-chart.png"
./market_alert --input-text "Summarize and classify this page." --input-url "https://example.com/market-update"
./market_alert --server ollama --model mistral \
  --input-text "Summarize and classify this internal market note."

1. Review market_alert.json

Check the agent definition, schema, and action logic.

2. Hatch market_alert

Turn the JSON definition into a local agent you can run directly.

3. Run market_alert

Run it locally with real inputs and contract-enforced outputs.

If you want to track the work, star the project on GitHub. If you have an agent idea or a workflow you want to automate, reach out on LinkedIn or open an issue on GitHub.