
The Agent Is an Implementation Detail: Task-First Orchestration in Java

Most agent frameworks start with the agent.

You define a role, write a goal, add a backstory, choose a model. Then you write a task and wire the agent to it. Then you pass both to a crew or ensemble. Three objects defined and wired together before the actual work is even described.

It works. But for a lot of use cases, that agent definition is accidental complexity.

You’re not thinking “I need a Researcher persona with a carefully tuned goal statement.” You’re thinking “I need this research done, then I need a report written from it.” The agent is an implementation detail. The task is the actual unit of work.

That’s the central insight behind the task-first design in AgentEnsemble.

The v1.x API required explicit agent definitions. Here’s a two-task research-writer pipeline:

Agent researcher = Agent.builder()
    .role("Senior Researcher")
    .goal("Find comprehensive information about {{topic}}")
    .background("Expert at synthesizing information from multiple sources")
    .build();

Agent writer = Agent.builder()
    .role("Technical Writer")
    .goal("Write clear, engaging content")
    .background("Skilled at making complex topics accessible")
    .build();

Task researchTask = Task.builder()
    .description("Research {{topic}} thoroughly")
    .expectedOutput("Detailed research notes")
    .agent(researcher)
    .build();

Task writeTask = Task.builder()
    .description("Write an article based on the research")
    .expectedOutput("A polished article")
    .agent(writer)
    .context(List.of(researchTask))
    .build();

Ensemble.builder()
    .agents(researcher, writer)
    .tasks(researchTask, writeTask)
    .chatLanguageModel(model)
    .inputs(Map.of("topic", "WebAssembly"))
    .build()
    .run();

Five object definitions for a two-task pipeline. The agent persona fields — role, goal, background — look important, but in many cases a sensible default would work just as well.

In v2, agents are optional. When a task has no explicit agent, the framework synthesizes one:

Task researchTask = Task.builder()
    .description("Research {{topic}} thoroughly")
    .expectedOutput("Detailed research notes")
    .build();

Task writeTask = Task.builder()
    .description("Write an article based on the research")
    .expectedOutput("A polished article")
    .context(List.of(researchTask))
    .build();

Ensemble.builder()
    .chatLanguageModel(model)
    .tasks(researchTask, writeTask)
    .inputs(Map.of("topic", "WebAssembly"))
    .build()
    .run();

Same pipeline, no agent definitions. For the simplest case, it collapses further:

EnsembleOutput output = Ensemble.run(model,
    Task.of("Research {{topic}}", "Detailed research notes"),
    Task.of("Write an article based on the research", "A polished article"));

The framework uses an AgentSynthesizer to derive role and goal from the task description. The default is template-based — a verb-to-role lookup applied to the first word of the description:

First verb                  Synthesized role
Research / Investigate      Researcher
Write / Draft / Compose     Writer
Analyze / Evaluate          Analyst
Build / Implement           Developer
Summarize                   Summarizer
Review                      Reviewer
Plan                        Planner
(anything else)             Agent

The goal is set to the full task description. No extra LLM call is made. The synthesized agent is ephemeral — it exists for the duration of one task execution and is discarded.
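The lookup described above can be sketched in plain Java. This is an illustrative sketch only, with hypothetical names (TemplateSynthesis, roleFor), not the framework's actual internals:

```java
import java.util.Map;

// Hypothetical sketch of the template-based role lookup described above;
// AgentEnsemble's real AgentSynthesizer may differ in detail.
public class TemplateSynthesis {

    private static final Map<String, String> VERB_TO_ROLE = Map.ofEntries(
            Map.entry("research", "Researcher"),
            Map.entry("investigate", "Researcher"),
            Map.entry("write", "Writer"),
            Map.entry("draft", "Writer"),
            Map.entry("compose", "Writer"),
            Map.entry("analyze", "Analyst"),
            Map.entry("evaluate", "Analyst"),
            Map.entry("build", "Developer"),
            Map.entry("implement", "Developer"),
            Map.entry("summarize", "Summarizer"),
            Map.entry("review", "Reviewer"),
            Map.entry("plan", "Planner"));

    /** Derives a role from the first word of the task description. */
    public static String roleFor(String description) {
        String firstWord = description.trim().split("\\s+")[0].toLowerCase();
        return VERB_TO_ROLE.getOrDefault(firstWord, "Agent");
    }

    public static void main(String[] args) {
        System.out.println(roleFor("Research {{topic}} thoroughly")); // Researcher
        System.out.println(roleFor("Translate the document"));        // Agent
    }
}
```

Because the lookup is a deterministic map over the first word, synthesis adds no latency and no token cost; the tradeoff is that descriptions starting with an unmapped verb fall back to the generic "Agent" role.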

For higher-quality personas, an LLM-based synthesizer is available as an opt-in:

Ensemble.builder()
    .chatLanguageModel(model)
    .agentSynthesizer(AgentSynthesizer.llmBased())
    .tasks(researchTask, writeTask)
    .build()
    .run();

This makes one additional LLM call per agentless task to generate a tailored role, goal, and background. The cost is a few extra tokens per task; the benefit is a more domain-specific system prompt going into the LLM call that actually does the work.

Per-Task Configuration Without Explicit Agents


The zero-ceremony path does not mean zero configuration. Task-level LLM, tools, and iteration limits still work:

Task researchTask = Task.builder()
    .description("Research {{topic}} using recent web sources")
    .expectedOutput("Research notes with citations")
    .chatLanguageModel(gpt4o)
    .tools(List.of(new WebSearchTool()))
    .maxIterations(15)
    .build();

Task summaryTask = Task.builder()
    .description("Write a concise executive summary")
    .expectedOutput("A 200-word summary")
    .chatLanguageModel(gpt4oMini)
    .build();

The task-level chatLanguageModel takes precedence over the ensemble default. Different tasks can use different models without declaring separate agent objects.

This is the typical production configuration: a powerful model for complex or expensive tasks, a cheaper model for simpler ones, all without any agent persona boilerplate.
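The precedence rule itself is a one-line fallback. A minimal sketch with stand-in types (ChatModel, TaskConfig are illustrative, not the framework's real classes):

```java
// Stand-in types for illustration; AgentEnsemble's real Task and model
// classes have richer APIs than this sketch.
public class ModelResolution {
    record ChatModel(String name) {}
    record TaskConfig(String description, ChatModel model) {}

    /** Task-level model wins; otherwise fall back to the ensemble default. */
    static ChatModel resolve(TaskConfig task, ChatModel ensembleDefault) {
        return task.model() != null ? task.model() : ensembleDefault;
    }

    public static void main(String[] args) {
        ChatModel gpt4o = new ChatModel("gpt-4o");
        ChatModel mini = new ChatModel("gpt-4o-mini");
        TaskConfig research = new TaskConfig("Research sources", gpt4o);
        TaskConfig summary = new TaskConfig("Write summary", null);
        System.out.println(resolve(research, mini).name()); // gpt-4o
        System.out.println(resolve(summary, mini).name());  // gpt-4o-mini
    }
}
```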

Typed output is a task concern, not an agent concern:

record ResearchReport(String title, List<String> findings, String conclusion) {}

Task task = Task.builder()
    .description("Research AI adoption trends in healthcare")
    .expectedOutput("A structured research report")
    .chatLanguageModel(model)
    .outputType(ResearchReport.class)
    .build();

EnsembleOutput result = Ensemble.run(model, task);
ResearchReport report = result.getTaskOutputs().get(0)
    .getParsedOutput(ResearchReport.class);

The synthesized agent handles structured output exactly as an explicit agent would. The JSON schema is derived from the record, injected into the prompt, and the response is deserialized and validated.
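Deriving a schema from a record is possible with plain reflection, since record components expose their names and types in declaration order. The following is an illustrative sketch (RecordSchema and schemaFor are hypothetical names), not AgentEnsemble's actual schema generator:

```java
import java.lang.reflect.RecordComponent;
import java.util.List;
import java.util.StringJoiner;

// Illustrative sketch of schema derivation from record components;
// AgentEnsemble's real generator handles nesting and validation too.
public class RecordSchema {
    record ResearchReport(String title, List<String> findings, String conclusion) {}

    /** Maps a Java type to a rough JSON-schema type name. */
    static String jsonType(Class<?> c) {
        if (c == String.class) return "string";
        if (c == int.class || c == Integer.class || c == double.class) return "number";
        if (c == boolean.class || c == Boolean.class) return "boolean";
        if (List.class.isAssignableFrom(c)) return "array";
        return "object";
    }

    /** Builds a minimal schema description from a record's components. */
    static String schemaFor(Class<? extends Record> recordClass) {
        StringJoiner props = new StringJoiner(", ", "{", "}");
        for (RecordComponent rc : recordClass.getRecordComponents()) {
            props.add("\"" + rc.getName() + "\": \"" + jsonType(rc.getType()) + "\"");
        }
        return props.toString();
    }

    public static void main(String[] args) {
        System.out.println(schemaFor(ResearchReport.class));
        // {"title": "string", "findings": "array", "conclusion": "string"}
    }
}
```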

The task-first approach is the default, not the only option. Explicit agents make sense when you need:

A crafted persona with domain-specific background. The background field is the highest-leverage part of agent configuration — a well-written backstory shapes how the LLM reasons about the task. For specialized work (healthcare analysis, legal review, financial modeling), a carefully written background can make a measurable difference:

Agent seniorAnalyst = Agent.builder()
    .role("Senior Healthcare Analyst")
    .goal("Provide rigorous, evidence-based analysis")
    .background("You are a CFA-certified analyst specializing in clinical-stage "
        + "biotech with 20 years of experience evaluating Phase II/III trial data.")
    .llm(anthropicModel)
    .build();

Shared agent identity across tasks. When the same persona handles multiple related tasks and persona continuity matters, an explicit agent can be bound to each:

Task firstAnalysis = Task.builder()
    .description("Analyze Q1 performance")
    .agent(analystAgent)
    .build();

Task secondAnalysis = Task.builder()
    .description("Compare Q1 against Q2")
    .agent(analystAgent) // same agent, same persona
    .context(List.of(firstAnalysis))
    .build();

Verbose logging or custom response format. Agent-level configuration fields like verbose and responseFormat require an explicit agent:

Agent debuggable = Agent.builder()
    .role("Researcher")
    .goal("Research the topic thoroughly")
    .verbose(true) // logs prompts and responses at INFO level
    .llm(model)
    .build();

Hierarchical delegation. The allowDelegation field is an agent property:

Agent manager = Agent.builder()
    .role("Research Director")
    .goal("Coordinate research across multiple domains")
    .allowDelegation(true)
    .llm(managerModel)
    .build();

The key point is that explicit agents are an opt-in power-user path, not the required starting point.

The honest tradeoff here is between convenience and persona quality.

Template-based synthesis is fast (no extra LLM call), predictable (the lookup is deterministic), and sufficient for most workloads. For tasks where the first verb maps cleanly to a role — research, write, analyze, summarize — the synthesized persona is functionally equivalent to a manually declared one.

For specialized domains, a handwritten background field can outperform synthesis. The LLM’s reasoning is shaped by its system prompt; domain-specific context in the background gives it better framing for complex tasks. If you’re running healthcare analysis or legal review, the extra time writing an explicit agent background is likely worth it.

The AgentSynthesizer interface is pluggable for teams that want something in between:

AgentSynthesizer domainSynthesizer = (task, ctx) -> Agent.builder()
    .role(derivedRole(task))
    .goal(task.getDescription())
    .background(loadBackgroundTemplate(task))
    .llm(ctx.model())
    .build();

Ensemble.builder()
    .agentSynthesizer(domainSynthesizer)
    .tasks(...)
    .build()
    .run();

The question to ask when designing a task pipeline is not “what agents do I need?” but “what work needs to be done, and in what order?”

For the majority of cases, that question is answered entirely by the task definitions: what each task is asked to do, what output it should produce, what context it needs from prior tasks, and what tools or model it should use. The agent emerges from those decisions, not the other way around.

Explicit agents remain available for cases where persona quality, shared identity, or per-agent configuration matter. But the default path is simpler: define the tasks, wire the dependencies, run.




AgentEnsemble is MIT-licensed and available on GitHub.