Capabilities
Last updated April 5, 2026
Video Generation
AnyCap video generation gives agents one CLI for text-to-video and image-to-video workflows. Agents can generate short cinematic clips from prompts, animate still images into motion, and run video creation tasks through one interface instead of stitching together separate video generation APIs. The result is a cleaner video generation layer for Claude Code, Cursor, Codex, and similar agent products.
Answer-first summary
Use Veo 3.1 when a text brief needs a premium-looking first-pass clip, Kling 3.0 when the workflow leans on cinematic motion or more exploratory image-to-video work, and Seedance 1.5 Pro when the team wants a steady, production-friendly default.
How to choose among video models
Premium first pass
Veo 3.1
Best when the text brief needs to become a more polished teaser or concept clip on the first generation.
Open model guide →
Cinematic motion
Kling 3.0
Best when the workflow calls for a stronger motion style or more flexible image-to-video iteration.
Open model guide →
Repeatable production
Seedance 1.5 Pro
Best when the team values steadier image-to-video output and a more repeatable production workflow.
Open model guide →
Supported models
| Model | Modes | Best fit |
|---|---|---|
| Veo 3.1 | text-to-video, image-to-video | High-end cinematic output and premium video generation workflows |
| Kling 3.0 | text-to-video, image-to-video | Realistic motion and cinematic scene generation |
| Seedance 1.5 Pro | text-to-video, image-to-video | Reliable image-to-video output and production-friendly motion quality |
CLI usage
Text-to-video
```shell
anycap video generate \
  --prompt "a drone shot flying over a mountain range at sunset" \
  --model veo-3.1 \
  -o hero.mp4
```
Image-to-video
```shell
anycap video generate \
  --prompt "animate this still into a subtle camera move" \
  --model seedance-1.5-pro \
  --mode image-to-video \
  --param reference_image_urls='["https://example.com/frame.jpg"]' \
  -o animated.mp4
```
Discover models
```shell
anycap video models
```
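The commands above can be combined into a small batch script, for example to draft the same prompt across all three models and compare first passes. This is a minimal sketch that uses only the flags documented on this page (`--prompt`, `--model`, `-o`); the Veo and Seedance model IDs come from the examples above, while `kling-3.0` is an assumed ID — confirm real IDs with `anycap video models`. With `DRY_RUN=1` (the default here) it prints each command instead of calling the CLI, so the loop can be checked without spending generation credits.

```shell
#!/bin/sh
# Batch first-pass sketch: one prompt, several models.
# DRY_RUN=1 prints commands; set DRY_RUN=0 to actually generate.
PROMPT="a drone shot flying over a mountain range at sunset"
DRY_RUN=${DRY_RUN:-1}
CMDS=""

# "kling-3.0" is an assumed model ID; verify with `anycap video models`.
for model in veo-3.1 kling-3.0 seedance-1.5-pro; do
  cmd="anycap video generate --prompt \"$PROMPT\" --model $model -o draft-$model.mp4"
  CMDS="$CMDS$cmd
"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
done
```

Running the script with `DRY_RUN=1` first is a cheap way to review exactly what the agent would execute before letting it spend quota.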
When agents need video generation
Demo videos
Generate launch clips, walkthroughs, and product demos through one command.
Storyboards to motion
Turn design stills, screenshots, or reference frames into animated video drafts.
Social content
Produce short-form clips for campaigns, changelog announcements, and creator workflows.
Rapid prototyping
Explore visual concepts in motion before committing to a larger production pass.
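The storyboards-to-motion flow above can be sketched as a loop over reference frames, again using only the flags shown in the CLI section (`--mode image-to-video`, `--param reference_image_urls`, `-o`). The frame URLs and output naming here are illustrative placeholders, not a documented convention, and `DRY_RUN=1` prints the commands rather than invoking the CLI.

```shell
#!/bin/sh
# Sketch: animate a set of storyboard stills into video drafts.
# DRY_RUN=1 prints commands; set DRY_RUN=0 to actually generate.
DRY_RUN=${DRY_RUN:-1}
CMDS=""

# Placeholder frame URLs -- replace with real storyboard stills.
for url in \
  "https://example.com/frames/scene-01.jpg" \
  "https://example.com/frames/scene-02.jpg"
do
  name=$(basename "$url" .jpg)   # e.g. scene-01 -> scene-01.mp4
  cmd="anycap video generate --prompt \"animate this still into a subtle camera move\" --model seedance-1.5-pro --mode image-to-video --param reference_image_urls='[\"$url\"]' -o $name.mp4"
  CMDS="$CMDS$cmd
"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
done
```

One clip per still keeps each generation independent, so a failed frame can be retried without re-running the whole storyboard.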
Related models and guides
Model
Veo 3.1
See when Veo 3.1 is the right choice for premium agent-led video generation.
Agent page
For Claude Code
See how video generation fits into the broader Claude Code capability story.
Related capability
Image Generation
Pair image generation with video workflows when the agent starts from stills or concept frames.
FAQ
What does AnyCap video generation let agents do?
It gives agents one command surface for text-to-video and image-to-video workflows. Teams can generate new clips, animate reference images, and run repeatable video creation tasks without separate provider-specific integrations.
Which video models are available through AnyCap today?
The current public video generation surface includes Veo 3.1, Kling 3.0, and Seedance 1.5 Pro. Each model is available through the same AnyCap video generation API, CLI, and auth flow.
Why does this page include image-to-video as well as text-to-video?
Agent workflows often start from a screenshot, design frame, or product still rather than a prompt alone. AnyCap treats both text-to-video and image-to-video as one video generation capability so the workflow stays consistent.
Is this page about a video generation API or a CLI workflow?
It is both. Teams often search for a text-to-video API or video generation API, while agent execution usually happens through the AnyCap CLI.