Midjourney has launched V1, its AI video generation model, which turns static images, whether generated in Midjourney or uploaded by the user, into short video clips of 5 to 20 seconds.
MiniMax-M1 is an open-source, large-scale reasoning model that combines a Mixture-of-Experts (MoE) architecture with a hybrid attention mechanism.
Seedance 1.0 is an advanced video generation model launched on Volcano Engine, ByteDance's cloud services platform, designed to generate high-quality video from text and image inputs.
Krea 1 is a text-to-image model launched by KREA AI, designed to deliver high-quality, photorealistic image generation.
Magistral is the first reasoning model released by Mistral AI, designed for multi-step reasoning on complex tasks.
OpenAudio S1 is the latest text-to-speech (TTS) model launched by Fish Audio. Trained on over 2 million hours of audio data, it aims to deliver a highly natural speech synthesis experience.
FLUX.1 Kontext is an advanced image generation and editing model developed by Black Forest Labs, designed to enable more flexible and intuitive image editing by combining text and image inputs.
Claude 4 is a family of AI models released by Anthropic, featuring significant improvements in natural language processing, programming, and complex-task reasoning.
Veo 3, a video generation model launched by Google, not only produces high-quality video but also automatically generates sound effects, ambient audio, and dialogue that match the video content.