Skills
Full media production workflow
for Claude Code, Cursor, and Codex
anycap-media-production is an installable skill that extends Claude Code, Cursor, Codex, and similar coding agents with a structured media production workflow. It covers iterative image generation, video generation, AI-driven refinement, and human annotation feedback loops so the agent can move from a rough visual brief to a polished output without leaving the coding environment.

The skill teaches the agent when to call AnyCap for generation, when to request a refinement pass, and how to collect structured feedback from a human reviewer before continuing. This is useful for teams building media-heavy products, content pipelines, or any automated creative workflow where iteration quality and review checkpoints matter.

The install path works through skills.sh, the AnyCap CLI, or manual placement. Once installed, the agent treats media production as a first-class workflow step rather than an ad-hoc prompt experiment. It knows the right command flags, understands the refinement loop structure, and can integrate human feedback before committing a final asset.
Install time
< 5 min
Supported agents
Claude Code · Cursor · Codex
Platform
macOS · Linux · Windows
Install this skill
npx -y skills add anycap-ai/anycap -s 'anycap-media-production' -g -y
How the workflow changes after install
BEFORE
Agent generates one image per prompt. No review loop. No refinement. Human must re-prompt manually to iterate.
AFTER
Agent generates, reviews with AnyCap image read, refines prompt, regenerates — and repeats until the brief is met. Human feedback can be injected at any checkpoint.
1. Receive brief
User describes the visual goal — style, subject, format, intended platform.
2. Generate base asset
Agent calls `anycap image generate` or `anycap video generate` with an initial prompt derived from the brief.
3. Review output
Agent calls `anycap image read` to analyze the generated asset against the brief criteria.
4. Refine and regenerate
If the review identifies gaps, agent adjusts the prompt and generates again. Continues until quality criteria are met.
5. Human checkpoint (optional)
If human annotation is configured, agent pauses and requests sign-off before delivering the final asset.
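The generate–review–refine loop in steps 2–4 can be sketched in pseudocode. This is a minimal illustration, not the skill's actual implementation: the `generate` and `review` stubs below are hypothetical stand-ins for shelling out to `anycap image generate` and `anycap image read`, and the stopping condition and prompt-refinement format are assumptions.

```python
MAX_ITERATIONS = 4  # cap the refinement loop so it always terminates

def generate(prompt: str, version: int) -> str:
    # Hypothetical stand-in for `anycap image generate "<prompt>"`.
    return f"hero-v{version}.png"

def review(asset: str, brief: str) -> list[str]:
    # Hypothetical stand-in for `anycap image read`: returns gaps vs. the brief.
    # Here we pretend the first draft has a composition issue and later drafts pass.
    return ["composition too centered"] if asset.endswith("v1.png") else []

def produce(brief: str) -> str:
    prompt = brief
    asset = ""
    for version in range(1, MAX_ITERATIONS + 1):
        asset = generate(prompt, version)
        gaps = review(asset, brief)
        if not gaps:
            return asset  # brief met — deliver, or pause for human sign-off
        # Fold the identified gaps back into the prompt and regenerate.
        prompt = f"{prompt}; fix: {', '.join(gaps)}"
    return asset  # iteration budget exhausted; return the best attempt
```

The key design point is that review feedback is injected back into the prompt before regenerating, rather than re-prompting from scratch each round.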
SAMPLE OUTPUT
$ anycap image generate "product hero, dark minimal, no text"
✓ Generated: hero-v1.png (1920×1080)
[Agent] Reviewing output against brief...
→ Gap: composition too centered, shadow too heavy
[Agent] Refining prompt and regenerating...
✓ Refined: hero-v2.png
✓ Committed: hero-v2.png → ./output/
Capabilities used by this skill
Supported agents
Frequently asked questions
- What does anycap-media-production do that direct prompting doesn't?
- Direct prompting gives you one-shot generation. This skill teaches the agent to run a full review-refine loop: generate, analyze the output with AnyCap image understanding, adjust the prompt, and regenerate — repeating until the brief is met.
- Can this skill generate video as well as images?
- Yes. The skill covers image generation, video generation, and iterative refinement for both asset types.
- Do I need an AnyCap account?
- Yes. The skill uses the AnyCap CLI as its execution layer. Install the CLI and run `anycap login` before your first session.
- How is this different from MCP?
- MCP is a protocol that describes how agents communicate with tools. This skill is an instruction file that tells the agent how to use AnyCap's specific commands, flags, and workflow patterns. You can use both together.