Midjourney has launched its AI video generation model V1, which allows users to transform static images—whether generated or uploaded—into short video clips ranging from 5 to 20 seconds in length.
Seedance 1.0 is an advanced video generation model launched by Volcano Engine, ByteDance's cloud services platform, designed to generate high-quality video content from text and image inputs.
Veo 3, a video generation model launched by Google, not only produces high-quality videos but also automatically generates sound effects, background noise, and dialogue that match the video content.
MAGI-1, developed by Sand AI, is billed as the world's first autoregressive video generation model; it produces high-quality, smooth, and natural video by autoregressively predicting a sequence of video chunks, each conditioned on the chunks generated before it.
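The chunk-wise autoregressive idea behind MAGI-1 can be illustrated with a minimal sketch. Everything here is hypothetical (the function names, the chunk size, and the dummy "model") and is not MAGI-1's actual API; it only shows the control flow: each new chunk of frames is predicted from all frames generated so far, which is what lets such models stream output and extend a video incrementally.

```python
# Conceptual sketch of chunk-wise autoregressive video generation.
# All names and values are hypothetical, not MAGI-1's real interface.

CHUNK_FRAMES = 24  # frames generated per chunk (illustrative value)

def predict_next_chunk(context):
    """Stand-in for a learned model: returns the next chunk of frames
    conditioned on all previously generated frames in `context`."""
    start = len(context)
    return list(range(start, start + CHUNK_FRAMES))  # dummy frame IDs

def generate_video(num_chunks):
    frames = []
    for _ in range(num_chunks):
        # Each chunk depends only on earlier frames, so generation
        # proceeds left-to-right and can continue indefinitely.
        frames.extend(predict_next_chunk(frames))
    return frames

video = generate_video(3)
print(len(video))  # 3 chunks of 24 frames -> 72
```

In a real model, `predict_next_chunk` would be a denoising or transformer forward pass over latent video tokens rather than a list of frame IDs, but the causal, chunk-at-a-time loop is the defining feature of the autoregressive approach.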
Vidu Q1, launched by Shengshu Technology, is billed as the industry's first highly controllable AI video model.
Runway has launched its latest AI video generation model, Gen-4, designed to generate high-quality video content through natural language prompts.
Step-Video-TI2V is an advanced text-driven image-to-video generation model capable of producing videos up to 102 frames based on text descriptions and image inputs.
HunyuanVideo-I2V is an advanced open-source image-to-video generation framework developed by Tencent, designed to transform static images into dynamic video content.
Wan2.1 is Alibaba Cloud's latest open-source video generation model, offering strong performance for its size; its lighter variant can run on consumer-grade personal computers, and the family supports a range of video generation tasks.