Agentic Workflows: Design Patterns for Autonomous AI Systems

The four patterns of agentic workflows — Reflection, Tool Use, Planning, Multi-Agent — and how to choose the right one. Move beyond pipelines to autonomous AI execution.

by AnyCap

Most software workflows are pipelines: input in, steps execute in order, output out. They're predictable, debuggable, and brittle. When a step fails — an API is down, a page layout changed, the data doesn't look like you expected — the workflow stops and waits for a human.

Agentic workflows change this. Instead of a fixed sequence of steps, you give an AI agent a goal and let it decide how to get there — adapting in real time based on what it finds. The shift isn't just technical; it changes what's possible to automate.

This guide covers the design patterns, decision frameworks, and capability requirements for building agentic workflows that actually work in production.


What Makes a Workflow "Agentic"?

A workflow becomes agentic when it delegates decisions to the AI rather than pre-scripting every branch. The key word is autonomy.

Traditional workflow vs. agentic workflow:

  • Logic: traditional pre-codes every branch ("If X, do Y. If A, do B."); agentic states the goal ("Achieve goal G. You have tools T. Figure out the path.")
  • Unexpected input: traditional fails on it; agentic adapts to it
  • Debugging: traditional means checking the logs for the broken step; agentic means checking why the agent made that decision
  • Scaling: traditional adds more branch conditions; agentic gives the agent better tools

The core insight from Andrew Ng's work on agentic design: an agentic workflow isn't a better pipeline. It's a different category of automation — one where the system makes choices during execution.


The Four Patterns of Agentic Workflows

Every agentic workflow is built from four fundamental patterns, used individually or in combination.

Pattern 1: Reflection

The agent produces output, then critiques its own work and improves it.

Write code → Review the code → Find bugs → Fix them → Review again

This is the simplest pattern and often the highest-ROI. Even a basic "review your own work and improve it" loop catches errors that a single-pass generation would miss. LLMs are better at critiquing output than producing perfect output on the first try — reflection harnesses that asymmetry.
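The loop above can be sketched in a few lines. This is an illustrative sketch, not AnyCap's implementation: `generate`, `critique`, and `revise` are hypothetical stand-ins for LLM calls, stubbed out here so the control flow is visible.

```python
# Reflection loop sketch: generate a draft, critique it, revise until the
# critic finds no more issues or the round budget runs out.

def generate(task: str) -> str:
    return f"draft for: {task}"

def critique(draft: str) -> list[str]:
    # A real critic would be a model call returning found issues; this stub
    # flags one issue on the first pass and none after a revision.
    return ["missing error handling"] if "revised" not in draft else []

def revise(draft: str, issues: list[str]) -> str:
    return f"revised {draft} (fixed: {', '.join(issues)})"

def reflect(task: str, max_rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:          # critic is satisfied: stop early
            break
        draft = revise(draft, issues)
    return draft
```

The round cap matters in practice: without it, a critic that always finds something to complain about loops forever.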

Pattern 2: Tool Use

The agent invokes external tools to gather information or take action beyond text generation.

"What's the current price of X?" → call search tool → "The price is $Y" → continue

Tools turn an agent from a reasoning engine into an actor. Without tools, the agent can only think. With tools — search, crawl, generate, store, publish — it can affect the world.

This is where AnyCap becomes the agent's capability layer. Instead of the agent saying "I wish I could search the web," it runs:

anycap search --prompt "What is the current price of NVIDIA stock?"

The tool executes. The agent reads the result. The workflow continues.
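That execute-read-continue cycle needs a dispatch layer on the runtime side. A minimal sketch, with stub lambdas standing in for real tool backends (the tool names echo the AnyCap commands, but the implementations here are hypothetical):

```python
# Tool dispatch sketch: the agent names a tool and an argument; the runtime
# executes it and returns the result as text the agent can read.

TOOLS = {
    "search": lambda prompt: f"search results for {prompt!r}",
    "crawl":  lambda url: f"page content of {url}",
}

def run_tool(name: str, arg: str) -> str:
    if name not in TOOLS:
        # Surface errors back to the agent as text instead of crashing,
        # so it can choose a different tool on the next step.
        return f"error: unknown tool {name!r}"
    return TOOLS[name](arg)
```

Returning errors as readable text rather than raising is a deliberate choice: it keeps the agent in the loop and lets it recover.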

Pattern 3: Planning

The agent breaks a complex goal into sub-tasks, executes them in order, and adjusts the plan as it learns.

Goal: "Write a market report on AI video"
→ Plan: (1) Search for key players (2) Crawl their pricing pages
  (3) Compare features (4) Write report (5) Publish
→ Execute step 1 → Discover new player → Revise plan → Continue

Planning is where agentic workflows diverge most dramatically from traditional pipelines. A pipeline has a fixed plan. An agentic workflow has a plan that evolves based on what the agent discovers during execution.
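The "discover → revise plan → continue" behavior can be modeled as a step queue the agent edits while executing it. A sketch under stated assumptions: `execute` is a stub that "discovers" a new competitor during the search step, mirroring the example above.

```python
# Evolving-plan sketch: steps live in a queue; executing a step may push
# newly discovered steps to the front of the remaining plan.

from collections import deque

def execute(step: str) -> list[str]:
    # Returns any new steps discovered while executing this one.
    if step == "search for key players":
        return ["crawl NewPlayer pricing page"]   # discovery revises the plan
    return []

def run_plan(steps: list[str]) -> list[str]:
    plan, done = deque(steps), []
    while plan:
        step = plan.popleft()
        for new_step in execute(step):
            plan.appendleft(new_step)             # insert before later steps
        done.append(step)
    return done
```

A fixed pipeline would be the same loop without the `appendleft` line; that one line is the difference between the two categories.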

Pattern 4: Multi-Agent Collaboration

Multiple agents with different specialties work on different parts of a task, coordinated by an orchestrator.

Research Agent: finds sources
Writer Agent: produces the report
Reviewer Agent: checks for errors and gaps
Publisher Agent: deploys the final page

Multi-agent systems add complexity but enable specialization. A research agent can be optimized for thoroughness while a writer agent is optimized for clarity — different system prompts, different tools, different priorities.
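In its simplest form, the orchestrator just routes each specialist's output to the next. The sketch below uses plain functions as hypothetical agents; real agents would be separate LLM sessions with their own system prompts and tools.

```python
# Minimal orchestrator sketch: each "agent" has a narrow specialty, and the
# orchestrator passes each agent's output as the next agent's input.

def researcher(topic: str) -> str:
    return f"sources on {topic}"

def writer(sources: str) -> str:
    return f"report based on {sources}"

def reviewer(report: str) -> str:
    return report + " [reviewed]"

def orchestrate(topic: str) -> str:
    pipeline = [researcher, writer, reviewer]
    result = topic
    for agent in pipeline:
        result = agent(result)   # hand off to the next specialist
    return result
```

Real orchestrators add branching (the reviewer can send work back to the writer) and shared state, but the hand-off structure is the same.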


Capabilities: What Your Agent Needs to Execute Workflows

A workflow pattern without tools is just a diagram. The agent needs actual capabilities to execute:

  • Reflection — requires generation followed by review; covered by LLM self-critique
  • Tool Use — requires search, crawl, generate, store, publish; covered by anycap search, anycap crawl, anycap image generate, anycap drive, anycap page
  • Planning — requires all of the above, plus state management; covered by the full AnyCap toolkit
  • Multi-Agent — requires all of the above, plus message passing; covered by an orchestrator plus AnyCap per agent

The quality of an agentic workflow is directly proportional to the quality of the tools available to it. An agent with only a search tool produces search results. An agent with search + crawl + generate + store + publish produces finished, delivered work.


The Orchestration Decision: When to Use Which Pattern

Not every task needs a multi-agent planning system. The decision framework:

Is the task path predictable?
  → Yes: Traditional pipeline is fine. Don't over-engineer.
  → No: Use Tool Use or Planning pattern.

Does the task benefit from self-critique?
  → Yes: Add Reflection.
  → No: Skip it.

Is the task too large for one agent?
  → Yes: Consider Multi-Agent.
  → No: One agent is simpler and more reliable.

The most common mistake: jumping to multi-agent before exhausting what a single well-tooled agent can do.
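The decision tree above is small enough to encode directly. The inputs are judgment calls a designer makes about the task; the function names and structure here are illustrative, not part of any framework.

```python
# The pattern-selection framework as a function: three yes/no judgments in,
# a suggested starting point out.

def choose_pattern(predictable: bool,
                   benefits_from_critique: bool,
                   too_large_for_one_agent: bool) -> list[str]:
    if predictable:
        return ["traditional pipeline"]       # don't over-engineer
    patterns = ["tool use / planning"]
    if benefits_from_critique:
        patterns.append("reflection")
    if too_large_for_one_agent:
        patterns.append("multi-agent")        # only after a single agent falls short
    return patterns
```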


Production Considerations

Cost Management

Agentic workflows can be expensive. Every tool call costs credits; every planning step burns tokens. Mitigations:

  • Cap the number of steps per workflow execution
  • Use cheaper models for simple subtasks (reflection, formatting)
  • Cache common tool results (don't search for the same thing twice)
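Two of those mitigations, a per-run step budget and result caching, fit in one small wrapper. A sketch with hypothetical names; the cache key is the tool name plus its arguments, so identical calls are never paid for twice.

```python
# Cost-control sketch: every tool call consumes one step from a fixed
# budget, except cache hits, which are free.

class BudgetExceeded(Exception):
    pass

class ToolRunner:
    def __init__(self, max_steps: int = 20):
        self.max_steps = max_steps
        self.steps = 0
        self.cache: dict[tuple, str] = {}

    def call(self, tool, *args) -> str:
        key = (tool.__name__, args)
        if key in self.cache:
            return self.cache[key]       # cache hit: no step consumed
        if self.steps >= self.max_steps:
            raise BudgetExceeded(f"step cap of {self.max_steps} reached")
        self.steps += 1
        result = tool(*args)
        self.cache[key] = result
        return result
```

Raising on budget exhaustion (rather than silently stopping) forces the calling workflow to decide what to do with partial results.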

Failure Handling

Agentic workflows fail differently than pipelines. A pipeline fails at a specific step with a specific error. An agentic workflow might go down a wrong path for several steps before realizing the error.

Design for this:

  • Timeouts: If the workflow exceeds N steps or T minutes, return partial results
  • Checkpoints: Save intermediate state so the agent can resume, not restart
  • Human-in-the-loop: For high-stakes actions (publishing, sending), require approval
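The timeout and checkpoint ideas combine naturally: cap the steps per run, record progress after each one, and resume from the checkpoint on the next run. A sketch assuming an in-memory checkpoint store; production code would persist it.

```python
# Checkpointed execution sketch: a run that hits its step cap returns
# partial results, and the next run resumes where it left off.

checkpoints: dict[str, int] = {}

def run(workflow_id: str, steps: list, max_steps: int = 100):
    start = checkpoints.get(workflow_id, 0)       # resume, don't restart
    checkpoints[workflow_id] = start
    for i in range(start, min(len(steps), start + max_steps)):
        steps[i]()                                # execute the step
        checkpoints[workflow_id] = i + 1          # checkpoint after each step
    done = checkpoints[workflow_id] >= len(steps)
    return ("complete" if done else "partial", checkpoints[workflow_id])
```

A wall-clock timeout would wrap the same loop with a deadline check; the return-partial-and-resume shape stays identical.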

Observability

You can't debug what you can't see. Log every decision: what tool was called, with what parameters, what result came back, and what the agent decided to do next. Without this, you're debugging a black box.
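A decision log like that can be as simple as a wrapper that records every call alongside the agent's stated reason. An illustrative sketch; the trace format here is an assumption, not a standard.

```python
# Observability sketch: wrap every tool call so the trace records what was
# called, with what parameters, what came back, and why.

import json
import time

trace: list[dict] = []

def logged_call(tool, params: dict, reason: str):
    result = tool(**params)
    trace.append({
        "ts": time.time(),
        "tool": tool.__name__,
        "params": params,
        "result": str(result)[:200],   # truncate large results in the log
        "reason": reason,              # the agent's stated justification
    })
    return result

def dump_trace() -> str:
    # One JSON object per line, convenient for grepping a run afterwards.
    return "\n".join(json.dumps(entry) for entry in trace)
```

The `reason` field is the part pipelines never needed: it captures the decision, not just the action, which is exactly what you inspect when an agent goes down a wrong path.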


From Theory to Practice

Agentic workflows are not a future concept. They're running in production today — automating research, generating content, managing data pipelines, and delivering finished work.

The barrier isn't the patterns. It's the tool access. The patterns are well-documented. What's been missing is a unified way for agents to actually execute them — to search, crawl, generate, store, and publish without integrating a dozen separate APIs.

AnyCap provides that unified capability layer. One CLI. Every tool. The agent focuses on decisions; the runtime handles execution.