Open Source · MIT License · Java 21+

Orchestrate
teams of AI agents
that work together.

A powerful Java framework for building multi-agent systems that collaborate to accomplish complex tasks. Built on LangChain4j, works with OpenAI, Anthropic, Ollama, and more.

Java 21 Required
MIT License
7+ LLM Providers
10+ Built-in Tools
var model = OpenAiChatModel.builder()
    .apiKey(System.getenv("OPENAI_API_KEY"))
    .modelName("gpt-4o-mini")
    .build();

// Zero ceremony -- agents synthesized from task descriptions
EnsembleOutput output = Ensemble.run(model,
    Task.of("Research the latest AI agent frameworks"),
    Task.of("Write a concise technical summary"),
    Task.of("Generate actionable recommendations"));

System.out.println(output.getRaw());
Why AgentEnsemble?

The right choice for
Java multi-agent work

Existing solutions are either Python-first, require you to build orchestration yourself on top of raw LangChain4j primitives, or both. AgentEnsemble is the missing production-ready layer for Java teams.

Hand-rolled vs Framework

AgentEnsemble vs hand-rolled LangChain4j orchestration

LangChain4j gives you excellent building blocks. But stitching multiple agents together yourself means writing the same boilerplate every time: prompt assembly, context threading, error recovery, retry logic, and delegation plumbing. AgentEnsemble is that layer, already built and battle-tested.

  • Three lines instead of hundreds

    A working multi-agent pipeline runs with a single Ensemble.run(model, task1, task2, task3) call. Sequential, hierarchical, parallel, and MapReduce strategies come built-in.

  • Workflow strategies that compose

    SEQUENTIAL, HIERARCHICAL (manager delegates to workers), PARALLEL (DAG-based concurrent execution via virtual threads), and MapReduce for large-context workloads. Switching between them is one enum value.

  • Production concerns handled for you

    Memory across runs, review gates for human-in-the-loop approval, input/output guardrails, structured output with automatic retry, delegation guards and lifecycle events — none of this has to be invented from scratch.

  • Full observability out of the box

    Every run produces token counts, LLM latency, tool timing, and a complete execution trace. Export to JSON, stream to a live browser dashboard, or push to Micrometer. Zero configuration required.
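To make the Micrometer path concrete, here is a minimal sketch. The SimpleMeterRegistry usage is standard Micrometer API; the metrics(...) builder hook is a hypothetical name used for illustration only — check the AgentEnsemble docs for the actual wiring method.

```java
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Standard Micrometer registry (real API)
var registry = new SimpleMeterRegistry();

// HYPOTHETICAL: the exact builder hook for attaching a registry is
// an assumption here, not confirmed by this page.
EnsembleOutput output = Ensemble.builder()
    .task(Task.of("Summarize the release notes"))
    .metrics(registry)
    .build()
    .run();

// Inspecting collected meters afterwards is plain Micrometer
registry.getMeters().forEach(m -> System.out.println(m.getId()));
```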
Built for Java

Why JVM teams need a production-minded agent framework

Python agent frameworks are not designed for Java engineering constraints. AgentEnsemble is written in Java 21, distributed as standard Maven/Gradle artifacts, and fits directly into the toolchains, testing practices, and deployment pipelines that JVM teams already use.

  • Idiomatic Java 21

    Fluent builders, records for structured output, sealed interfaces, and Java virtual threads for concurrent execution. No reflection tricks, no annotation processors, no runtime surprises.

  • Gradle and Maven with a BOM

    Add the BOM and pull the modules you need. Versions align automatically. The same dependency management your team uses for every other library.

  • Plugs into your existing stack

    Micrometer metrics integrate with Prometheus and Grafana. SLF4J logging works with Logback and Log4j2. The live dashboard is a plain embedded WebSocket server — no Docker, no npm, no sidecar process.

  • Type-safe from input to output

    Declare outputType(MyRecord.class) on a task and receive a fully typed, schema-validated Java object. Parse failures trigger automatic correction prompts before any exception is thrown.
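The virtual-thread point above is plain JDK 21, not framework magic. A minimal, framework-free sketch of the executor that concurrent agent execution builds on:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain JDK 21: each submitted task runs on its own cheap virtual
// thread via the virtual-thread-per-task executor.
public class VirtualThreadsDemo {

    static String runBoth() throws Exception {
        // ExecutorService is AutoCloseable; close() waits for tasks
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> research = executor.submit(() -> "research done");
            Future<String> summary  = executor.submit(() -> "summary done");
            return research.get() + " / " + summary.get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runBoth()); // prints "research done / summary done"
    }
}
```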
JVM vs Python runtime

Why AgentEnsemble instead of Python-first agent frameworks

Frameworks like LangChain and CrewAI are excellent within the Python ecosystem. Bringing them into a Java service means a Python runtime, an HTTP sidecar or subprocess, serialization overhead, and two languages to test and deploy. AgentEnsemble runs on the same JVM as your service.

  • No Python runtime or interop tax

    Deploy as a library JAR. No subprocess management, no inter-process serialization, no latency from crossing a process boundary on every agent call.

  • LLM-agnostic via LangChain4j

    OpenAI, Anthropic, Ollama, Azure OpenAI, Amazon Bedrock, Google Vertex AI — and any provider LangChain4j adds in the future. Switching providers is a one-line change.

  • Feature parity with Python frameworks

    Sequential, hierarchical, and parallel workflows. MapReduce for large workloads. Multi-level memory. Tool pipelines. Human-in-the-loop review gates. Delegation with guards. Structured typed output. All in Java.

  • One language to test and deploy

    Unit tests with JUnit, integration tests with your existing test containers, CI with the same Gradle tasks. No Python virtualenv to maintain, no separate test suite to keep in sync.
Capabilities

Everything you need to build
production-grade AI systems

From simple two-agent pipelines to complex hierarchical workflows with memory, observability, and human-in-the-loop review.

  • Zero-Setup Agent Synthesis

    Run a multi-agent ensemble in three lines. No agent declarations required — personas are synthesized automatically from task descriptions.

  • Flexible Workflow Patterns

    Sequential, hierarchical, parallel, and MapReduce execution strategies. Build manager-led teams, fan-out pipelines, and adaptive workflows.

  • Persistent Memory

    Short-term, long-term, entity, and embedding-based memory stores. Persist context across tasks and ensemble runs using pluggable MemoryStore SPI.

  • Rich Tool Ecosystem

    10+ built-in tools including web search, web scraping, HTTP, file I/O, calculator, and subprocess execution. Simple APIs for custom tools and cross-language remote tools.

  • Live Execution Dashboard

    Real-time WebSocket dashboard streams task and tool events to a browser. Supports browser-based human-in-the-loop review gates without blocking your JVM.

  • LLM Agnostic

    Works with any ChatModel from LangChain4j — OpenAI, Anthropic, Ollama, Azure OpenAI, Amazon Bedrock, Google Vertex AI, and Mistral. Mix models per task.

  • Structured Output

    Define Java records or classes as expected output schemas. Agents automatically produce and parse typed JSON — no manual parsing code required.

  • Metrics & Observability

    Micrometer integration with counters, timers, and task-level spans. Export metrics to Prometheus, Datadog, CloudWatch, or any Micrometer-compatible backend.
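As a concrete illustration of the structured-output capability, here is a sketch combining a record-as-schema with the outputType(...) call described earlier on this page. The record name and fields are invented for illustration; only outputType(...) and Task.builder() appear in the surrounding text.

```java
// Hypothetical schema -- record name and fields are invented for
// illustration, not taken from the framework.
public record Recommendation(String title, String rationale, int priority) {}

// Per the description above: the agent produces JSON matching the
// record, and the framework parses it into a typed object, retrying
// with a correction prompt on parse failure.
Task task = Task.builder()
    .description("Generate one actionable recommendation")
    .outputType(Recommendation.class)
    .build();
```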

API

A simple, expressive
Java-native API

From zero-ceremony ensembles to full hierarchical teams with memory, the API scales with your needs without boilerplate.

Simple Ensemble

Run multiple agents in sequence with zero configuration. Personas and roles are automatically synthesized from task descriptions.

Read the quickstart
var model = OpenAiChatModel.builder()
    .apiKey(System.getenv("OPENAI_API_KEY"))
    .modelName("gpt-4o-mini")
    .build();

EnsembleOutput output = Ensemble.run(model,
    Task.of("Research the latest trends in distributed systems"),
    Task.of("Write a 500-word technical summary of the findings"),
    Task.of("Generate three actionable recommendations"));

// All task outputs in order
output.getTaskOutputs().forEach(t ->
    System.out.printf("[%s] %s%n",
        t.getAgentRole(),
        // guard against outputs shorter than 100 characters
        t.getRaw().substring(0, Math.min(100, t.getRaw().length()))));

System.out.printf("Total duration: %s%n", output.getTotalDuration());

Hierarchical Team

A manager agent dynamically creates and delegates to specialist worker agents based on the task at hand.

Read the quickstart
var manager = Agent.builder()
    .role("Engineering Manager")
    .goal("Coordinate a team to deliver high-quality software solutions")
    .background("10 years leading cross-functional engineering teams.")
    .llm(model)
    .build();

EnsembleOutput output = Ensemble.builder()
    .manager(manager)
    .maxDelegations(8)
    .task(Task.builder()
        .description("Design and document a REST API for a task management app")
        .expectedOutput("OpenAPI 3.0 spec and implementation guide")
        .build())
    .workflow(Workflow.HIERARCHICAL)
    .build()
    .run();

System.out.println(output.getRaw());

Memory Across Runs

Agents share and persist knowledge across multiple ensemble runs using typed memory stores.

Read the quickstart
var memoryStore = InMemoryStore.create();

var memory = EnsembleMemory.builder()
    .shortTerm(ShortTermMemory.create())
    .longTerm(LongTermMemory.withStore(memoryStore))
    .build();

// First run -- researches and stores findings
Ensemble.builder()
    .task(Task.of("Research Java concurrency best practices"))
    .task(Task.of("Store key findings for future reference"))
    .memory(memory)
    .build()
    .run();

// Second run -- recalls previous findings
EnsembleOutput output = Ensemble.builder()
    .task(Task.of("Using our previous research, write a guide"))
    .memory(memory)  // same store -- agents recall prior context
    .build()
    .run();

System.out.println(output.getRaw());
How It Works

From idea to running agents
in minutes

Define Tasks

Describe what each agent should do and what output you expect. Tasks can depend on each other, carry tools, and use different LLM models.

Task.of("...") or Task.builder()...
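The two forms side by side, using only calls that appear elsewhere on this page (the description and expectedOutput values are examples):

```java
// Shorthand: description only; persona and role are synthesized
Task quick = Task.of("Research the latest AI agent frameworks");

// Builder form: spell out the expected output as well
Task detailed = Task.builder()
    .description("Design and document a REST API for a task management app")
    .expectedOutput("OpenAPI 3.0 spec and implementation guide")
    .build();
```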

Configure the Ensemble

Choose a workflow pattern: sequential, hierarchical, parallel, or MapReduce. Attach memory, review gates, callbacks, and guardrails as needed.

Workflow.SEQUENTIAL | HIERARCHICAL | PARALLEL
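Switching strategies really is one enum value, as the builder calls shown in the examples above suggest. A sketch (task descriptions are placeholders):

```java
// Same ensemble, different strategy: only the workflow(...) value changes.
EnsembleOutput output = Ensemble.builder()
    .task(Task.of("Summarize module A"))
    .task(Task.of("Summarize module B"))
    .workflow(Workflow.PARALLEL)  // or SEQUENTIAL, HIERARCHICAL
    .build()
    .run();
```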

Run & Observe

Call .run() and collect structured results. Stream events to the live dashboard, export execution traces for visualization, and collect metrics.

EnsembleOutput output = ensemble.run();
Agent → Task → Ensemble → LLM → Output
Get Started

Add one dependency.
Start building.

AgentEnsemble is available on Maven Central. Import the BOM to keep all module versions in sync, then add only the modules you need.

  • agentensemble-core Framework core — always required
  • agentensemble-memory Persistent memory stores
  • agentensemble-review Human-in-the-loop review gates
  • agentensemble-web Live execution dashboard
  • agentensemble-tools-* 10+ built-in tools
dependencies {
    implementation(platform("net.agentensemble:agentensemble-bom:2.3.0"))
    implementation("net.agentensemble:agentensemble-core")
    implementation("dev.langchain4j:langchain4j-open-ai:1.11.0")
}

Latest version: check Maven Central