Wan: Open and Advanced Large-Scale Video Generative Models 👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation. 👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
Video-R1: Reinforcing Video Reasoning in MLLMs - GitHub Video-R1 significantly outperforms previous models across most benchmarks. Notably, on VSI-Bench, which focuses on spatial reasoning in videos, Video-R1-7B achieves a new state-of-the-art accuracy of 35.8%, surpassing the proprietary GPT-4o while using only 32 frames and 7B parameters.
HunyuanVideo: A Systematic Framework For Large Video . . . - GitHub We present HunyuanVideo, a novel open-source video foundation model whose video generation performance is comparable to, if not superior to, that of leading closed-source models. To train the HunyuanVideo model, we adopt several key technologies for model learning, including data
DepthAnything Video-Depth-Anything - GitHub This work presents Video Depth Anything, based on Depth Anything V2, which can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with other diffusion-based models, it enjoys faster inference speed, fewer parameters, and higher
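The "arbitrarily long videos" claim above implies some form of windowed inference. A toy sketch of that general pattern, purely illustrative (the function names, window sizes, and overlap-averaging scheme here are assumptions, not Video Depth Anything's actual method), shows how a per-window model call can be applied over a long frame sequence while smoothing the seams between windows:

```python
# Toy sketch: run a per-window model over a long frame sequence in
# overlapping windows, averaging predictions in the overlap region so
# window boundaries stay consistent. `infer` stands in for a model call.

def process_long_video(frames, infer, window=32, overlap=8):
    """Apply `infer` to fixed-size overlapping windows of `frames`,
    blending the overlap between consecutive windows by averaging."""
    out = []
    step = window - overlap
    i = 0
    while i < len(frames):
        preds = infer(frames[i:i + window])
        if out and i < len(out):
            # This window overlaps the previous one: average the shared
            # frames, then append only the genuinely new predictions.
            n_ov = len(out) - i
            for j in range(n_ov):
                out[i + j] = (out[i + j] + preds[j]) / 2
            out.extend(preds[n_ov:])
        else:
            out.extend(preds)
        i += step
    return out[:len(frames)]

# With an identity-like "model", output length always matches input:
depths = process_long_video(list(range(100)), lambda w: [f * 2 for f in w])
print(len(depths))  # -> 100
```

The key property is that memory and per-call compute depend only on `window`, not on the total video length; the overlap averaging is one simple way to keep adjacent windows from disagreeing at their boundary.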
Lightricks LTX-Video: Official repository for LTX-Video - GitHub LTX-Video is the first DiT-based video generation model that can generate high-quality videos in real time. It can generate 30 FPS videos at 1216×704 resolution, faster than it takes to watch them.
GitHub - lllyasviel FramePack: Lets make video diffusion practical! FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively. FramePack compresses input contexts to a constant length so that the generation workload is invariant to video length. FramePack can process a very large number of frames with 13B
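The constant-length context idea above can be illustrated with a toy sketch (illustrative only; the function name and the average-pooling scheme are assumptions, not FramePack's actual compression): keep the most recent frames intact and pool older ones more aggressively, so the context handed to the model never grows with video length:

```python
# Toy sketch: compress a growing frame history into a fixed-size
# context so per-step generation work stays constant. Recent frames
# are kept as-is; older frames are average-pooled into fewer slots.

def compress_context(frames, budget=8):
    """Return exactly `budget` entries (or fewer if the history is
    still short), merging older frames by averaging."""
    if len(frames) <= budget:
        return list(frames)
    recent = frames[-budget // 2:]      # newest frames, kept untouched
    older = frames[:-budget // 2]       # everything else gets pooled
    slots = budget - len(recent)
    chunk = len(older) / slots
    pooled = []
    for i in range(slots):
        seg = older[int(i * chunk):int((i + 1) * chunk)] or [older[-1]]
        pooled.append(sum(seg) / len(seg))  # average-pool this segment
    return pooled + recent

# Context size is constant regardless of history length:
print(len(compress_context(list(range(100)))))   # -> 8
print(len(compress_context(list(range(1000)))))  # -> 8
```

This is the property the README describes: because the model always sees a fixed-size context, generating frame 1,000 costs the same as generating frame 10.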
GitHub - Tencent-Hunyuan HunyuanVideo-Avatar HunyuanVideo-Avatar supports various downstream tasks and applications. For instance, the system generates talking avatar videos, which can be applied to e-commerce, online streaming, social media video production, etc. In addition, its multi-character animation feature broadens applications such as video content creation and editing.
Create your first video in Google Vids Optional: To make changes to your video clip, click Edit prompt. To add the video clip to your Vid: Hover over the generated video. Click Insert. This adds the video clip to your canvas. At the bottom of the Vids window, in the timeline, the video clip has its own object track. Learn more about generating video clips. Start by recording a video.
Lightricks ComfyUI-LTXVideo: LTX-Video Support for ComfyUI - GitHub Sequence Conditioning – Allows motion interpolation from a given frame sequence, enabling video extension from the beginning, end, or middle of the original video. Prompt Enhancer – A new node that helps generate prompts optimized for the best model performance. See the Example Workflows section for more details.
GitHub - kijai ComfyUI-WanVideoWrapper ReCamMaster: WanVideo2_1_recammaster.mp4 TeaCache (with the old temporary WIP naive version, I2V): Note that with the new version the threshold values should be 10x higher.