Video Generation

HiAPI supports multiple video generation models for text-to-video and image-to-video tasks.

| Model | Provider | Max Duration | Resolution | Credits |
| --- | --- | --- | --- | --- |
| Sora | OpenAI | 60s | | 100 |
| Kling | Kuaishou | | | 80 |
| Seedance 1.5 Pro | ByteDance | 12s | 1080p | Dynamic |
| Wan 2.7 T2V | Alibaba | 15s | 1080p | 100 |
| Wan 2.7 I2V | Alibaba | 15s | 1080p | 100 |
Generate a video from a text prompt using Wan 2.7 T2V:

```shell
curl -X POST https://api.hiapi.ai/v1/videos \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan2.7-t2v",
    "prompt": "A Shiba Inu chasing butterflies under cherry blossoms",
    "size": "1920*1080",
    "seconds": 5
  }'
```

Animate a still image into a video using Wan 2.7 I2V:

```shell
curl -X POST https://api.hiapi.ai/v1/videos \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "wan2.7-i2v",
    "prompt": "The scene comes alive with gentle wind",
    "input_image": "https://example.com/photo.jpg",
    "seconds": 5
  }'
```
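The same image-to-video request can be made from Python with `requests`. This is a sketch that mirrors the curl example above; `build_i2v_payload` and `submit` are illustrative helper names, not part of any official SDK, and it assumes `requests` is installed.

```python
import requests

API_KEY = "YOUR_API_KEY"

def build_i2v_payload(image_url, prompt, seconds=5):
    # Field names mirror the curl example above
    return {
        "model": "wan2.7-i2v",
        "prompt": prompt,
        "input_image": image_url,
        "seconds": seconds,
    }

def submit(payload):
    # Submit the task; returns the task_id used for polling
    resp = requests.post(
        "https://api.hiapi.ai/v1/videos",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["task_id"]
```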

Video generation is asynchronous. The API returns a task_id — poll for the result:

```python
import time
import requests

# 1. Submit task
response = requests.post(
    "https://api.hiapi.ai/v1/videos",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json={"model": "wan2.7-t2v", "prompt": "sunset timelapse", "seconds": 5},
)
task_id = response.json()["data"]["task_id"]

# 2. Poll for result
while True:
    result = requests.get(
        f"https://api.hiapi.ai/v1/videos/{task_id}",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    ).json()
    if result["data"]["status"] == "completed":
        print("Video URL:", result["data"]["url"])
        break
    time.sleep(5)
```
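The loop above waits forever if a task never completes. A more defensive sketch adds a timeout and a terminal-failure check; the `failed` status value is an assumption about the API's status vocabulary, and the injectable `fetch` callable is an illustrative design, not part of the API:

```python
import time

def wait_for_video(task_id, fetch, interval=5, timeout=600):
    """Poll until the task finishes, fails, or the timeout expires.

    fetch: callable mapping a task_id to the parsed JSON of
    GET /v1/videos/{task_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch(task_id)["data"]
        if data["status"] == "completed":
            return data["url"]
        if data["status"] == "failed":  # assumed terminal status
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not finished after {timeout}s")
```

Injecting `fetch` keeps the HTTP client out of the retry logic, so the loop can be exercised without network access.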
| Parameter | Values | Models |
| --- | --- | --- |
| seconds | 3, 5, 8, 10, 12, 15 | Wan 2.7, Seedance |
| size | `1920*1080`, `1080*1920`, `1280*720` | Wan 2.7 |
| duration | 5, 10 | Sora, Kling |
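Since each model family accepts a different parameter set, it can help to validate a request before submitting it. This sketch transcribes the table above into a lookup; matching model IDs to families by prefix (e.g. `wan2.7-t2v` to `wan2.7`) is an assumption:

```python
# Allowed values transcribed from the parameter table above
ALLOWED = {
    "wan2.7": {
        "seconds": {3, 5, 8, 10, 12, 15},
        "size": {"1920*1080", "1080*1920", "1280*720"},
    },
    "seedance": {"seconds": {3, 5, 8, 10, 12, 15}},
    "sora": {"duration": {5, 10}},
    "kling": {"duration": {5, 10}},
}

def validate_params(model, **params):
    """Raise ValueError if a parameter is unsupported or out of range."""
    family = next((k for k in ALLOWED if model.startswith(k)), None)
    if family is None:
        raise ValueError(f"unknown model: {model}")
    for key, value in params.items():
        allowed = ALLOWED[family].get(key)
        if allowed is None:
            raise ValueError(f"{model} does not accept '{key}'")
        if value not in allowed:
            raise ValueError(f"{key}={value!r} not in {sorted(map(str, allowed))}")
```

Catching a bad `seconds` or `size` locally avoids burning a round trip (and possibly credits) on a request the API would reject.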