Models
Last updated April 5, 2026
Choose the right model for the agent job.
AnyCap exposes multimodal models through one capability runtime and one CLI. This page helps teams choose the right model for a given agent workflow instead of treating every image or video request the same way.
Answer-first summary
The current public AnyCap model catalog includes image generation models for first-pass output and revision loops, video generation models for premium or production-friendly motion work, and a prompt-based music model for soundtrack drafts. The right choice usually comes down to three questions: does the job start from a blank prompt or an existing asset, how much polish does the first pass need, and how much do speed or cost efficiency matter in the workflow?
How to choose the right model
- Start with the output type: image, video, or music.
- Then decide whether the task needs a polished first pass, faster iteration, or revision from an existing asset.
- Use the model guide pages when the choice depends on motion style, editing workflow, or cost tradeoffs.
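The selection steps above can be sketched as a small lookup. This is an illustrative routing helper, not a real AnyCap API: the function name and priority labels are assumptions, while the model names come from the catalog on this page.

```python
# Hypothetical routing sketch based on the guidance above.
# Only the model names are from the AnyCap catalog; everything else
# (function name, priority labels) is illustrative.

def pick_model(output_type: str, priority: str) -> str:
    """Map (output type, workflow priority) to a catalog model."""
    catalog = {
        ("image", "polished_first_pass"): "Seedream 5",
        ("image", "revise_existing_asset"): "Nano Banana Pro",
        ("image", "fast_iteration"): "Nano Banana 2",
        ("video", "premium_first_pass"): "Veo 3.1",
        ("video", "cinematic_motion"): "Kling 3.0",
        ("video", "production_default"): "Seedance 1.5 Pro",
        ("music", "soundtrack_draft"): "ElevenLabs Music",
    }
    return catalog[(output_type, priority)]

print(pick_model("image", "fast_iteration"))  # Nano Banana 2
```

In a real agent workflow this lookup would sit behind whatever task-classification step the agent already performs; the point is that the model choice is a function of output type plus workflow priority, not of the prompt alone.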
Visual guide

This illustration is a quick visual map of the current catalog: image models on one side, video models on another, and music generation as a separate capability lane inside the same agent runtime. It was generated with Nano Banana 2 to keep the page's visual language aligned with the model catalog itself.
Current model comparison
These are the current public models exposed through AnyCap. Credit ranges come from the same pricing inventory used on the pricing page, so the hub and pricing page stay aligned.
Image generation
Charged per call. Supports text-to-image and image-to-image modes.
| Model | Modes | Credits / call | Best for |
|---|---|---|---|
| Nano Banana Pro | text-to-image, image-to-image | ~7 | Targeted image editing and revision loops from an existing visual. |
| Nano Banana 2 | text-to-image, image-to-image | ~4 | Fast, scalable image generation and high-volume iteration. |
| Seedream 5 | text-to-image, image-to-image | ~2 | Polished first-pass image generation from a text prompt. |
Video generation
Charged per second of generated output. Supports text-to-video and image-to-video modes.
| Model | Modes | Credits / sec | Best for |
|---|---|---|---|
| Veo 3.1 | text-to-video, image-to-video | ~20 | Premium text-to-video output when the first pass needs to look stronger. |
| Seedance 1.5 Pro | text-to-video, image-to-video | ~14 | Steady production-friendly video workflows and repeatable image-to-video jobs. |
| Kling 3.0 | text-to-video, image-to-video | ~9 | Cinematic motion and flexible image-to-video workflows. |
Music generation
Charged per second of generated audio.
| Model | Modes | Credits / sec | Best for |
|---|---|---|---|
| ElevenLabs Music | text-to-music | ~1 | Prompt-based soundtrack drafts inside the same agent runtime. |
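Because image models are billed per call while video and music models are billed per second, estimating a workflow's cost is simple arithmetic over the approximate rates in the tables above. This is a back-of-the-envelope sketch; the rates are the "~" figures from this page, so treat any result as an estimate rather than a quote.

```python
# Approximate credit rates copied from the comparison tables above.
IMAGE_CREDITS_PER_CALL = {
    "Nano Banana Pro": 7,
    "Nano Banana 2": 4,
    "Seedream 5": 2,
}
CREDITS_PER_SECOND = {
    "Veo 3.1": 20,
    "Seedance 1.5 Pro": 14,
    "Kling 3.0": 9,
    "ElevenLabs Music": 1,
}

def estimate_credits(image_calls=None, seconds=None):
    """Sum estimated credits for a mixed workflow.

    image_calls: {model: number of calls} for per-call image models.
    seconds: {model: generated seconds} for per-second video/music models.
    """
    total = 0
    for model, calls in (image_calls or {}).items():
        total += IMAGE_CREDITS_PER_CALL[model] * calls
    for model, secs in (seconds or {}).items():
        total += CREDITS_PER_SECOND[model] * secs
    return total

# Example: 10 draft images on Nano Banana 2, then an 8-second Kling 3.0 clip.
print(estimate_credits(image_calls={"Nano Banana 2": 10},
                       seconds={"Kling 3.0": 8}))  # 10*4 + 8*9 = 112
```

The same arithmetic is what keeps this hub aligned with the pricing page: both draw on the same per-call and per-second credit inventory.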
FAQ
How do I choose between Seedream 5, Nano Banana Pro, and Nano Banana 2?
Use Seedream 5 when the workflow needs a stronger first-pass image from a prompt, Nano Banana Pro when the job starts from an existing image and needs revisions, and Nano Banana 2 when speed, throughput, or repeated iteration matters more.
How do I choose between Veo 3.1, Kling 3.0, and Seedance 1.5 Pro?
Use Veo 3.1 when the first video pass needs to look more premium from a text brief, Kling 3.0 when the workflow leans more on cinematic motion or flexible image-to-video work, and Seedance 1.5 Pro when the team wants a steadier production-oriented default.
Do all AnyCap models use the same CLI and auth flow?
Yes. AnyCap exposes these models through the same capability runtime, CLI, and auth flow, so teams do not need a separate provider integration path for each model page listed here.