Wan: Open and Advanced Large-Scale Video Generative Models. 👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation. 👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
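A minimal text-to-video sketch of how such a model is typically driven from Python. It assumes the Hugging Face diffusers integration (WanPipeline) and the Wan-AI/Wan2.1-T2V-1.3B-Diffusers checkpoint id; those names, and the default frame count and resolution, are assumptions to verify against the official repo.

```python
# Text-to-video sketch for Wan2.1. Assumes the Hugging Face diffusers
# integration (WanPipeline) and the Diffusers-format checkpoint id below;
# verify both against the official repo before relying on them.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

frames = pipe(
    prompt="A cat walking on grass, cinematic lighting",
    num_frames=81,  # assumed default clip length
    height=480,
    width=832,
).frames[0]

export_to_video(frames, "wan_t2v.mp4", fps=16)
```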
Video-R1: Reinforcing Video Reasoning in MLLMs - GitHub. Video-R1 significantly outperforms previous models across most benchmarks. Notably, on VSI-Bench, which focuses on spatial reasoning in videos, Video-R1-7B achieves a new state-of-the-art accuracy of 35.8%, surpassing GPT-4o, a proprietary model, while using only 32 frames and 7B parameters.
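Feeding a fixed frame budget like this to a multimodal LLM usually starts with uniform temporal sampling. Below is a generic sketch of picking 32 evenly spaced frames with OpenCV; it illustrates the idea and is not Video-R1's own data loader.

```python
# Uniformly sample a fixed frame budget (e.g. 32 frames) from a video,
# the standard preprocessing step before feeding clips to a multimodal LLM.
import cv2
import numpy as np

def sample_frames(path: str, num_frames: int = 32) -> list:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced indices across the whole clip.
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

clip = sample_frames("example.mp4")  # 32 RGB frames for the model
```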
HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation. Customized video generation aims to produce videos featuring specific subjects under flexible user-defined conditions, yet existing methods often struggle with identity consistency and limited input modalities. In this paper, we propose HunyuanCustom, a multi-modal customized video generation framework.
Lightricks LTX-Video: Official repository for LTX-Video - GitHub. LTX-Video is the first DiT-based video generation model that can generate high-quality videos in real time. It can generate 30 FPS videos at 1216×704 resolution, faster than it takes to watch them.
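The "faster than it takes to watch them" claim has a simple arithmetic reading: generation wall-clock time must be below playback duration. A small sketch with illustrative (not measured) numbers:

```python
# A generator is "real-time" if producing a clip takes less time
# than playing it back. Numbers below are illustrative, not measured.
def is_realtime(num_frames: int, fps: float, gen_seconds: float) -> bool:
    playback_seconds = num_frames / fps
    return gen_seconds < playback_seconds

# A 5-second, 30 FPS clip (150 frames) generated in 4 seconds
# plays for 5 seconds, so generation beats playback.
print(is_realtime(num_frames=150, fps=30, gen_seconds=4.0))  # True
```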
DepthAnything Video-Depth-Anything - GitHub. This work presents Video Depth Anything, based on Depth Anything V2, which can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with other diffusion-based models, it enjoys faster inference speed, fewer parameters, and higher accuracy.
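The generic pattern behind applying a depth model to arbitrarily long videos is chunked inference with overlap at the seams. In the sketch below, estimate_depth is a hypothetical stand-in for the model call, not the repository's actual API:

```python
# Chunked inference over an arbitrarily long video. `estimate_depth`
# is a hypothetical placeholder for a per-chunk model call, not the
# Video-Depth-Anything API.
import numpy as np

def estimate_depth(frames: np.ndarray) -> np.ndarray:
    """Hypothetical model call: (T, H, W, 3) uint8 -> (T, H, W) float32."""
    return frames.mean(axis=-1).astype(np.float32)  # placeholder

def depth_for_long_video(frames: np.ndarray, chunk: int = 32, overlap: int = 4):
    """Run inference chunk by chunk. Overlapping frames give the model
    temporal context at the seams; the overlapped outputs are discarded
    here, while real systems blend or align them."""
    outputs, start = [], 0
    while start < len(frames):
        end = min(start + chunk, len(frames))
        depth = estimate_depth(frames[max(start - overlap, 0):end])
        outputs.append(depth[-(end - start):])  # keep only the new frames
        start = end
    return np.concatenate(outputs, axis=0)
```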
【EMNLP 2024】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection - GitHub. [2024.09.25] 🔥🔥🔥 Our Video-LLaVA has been accepted at EMNLP 2024! We earned a meta score of 4. [2024.07.27] 🔥🔥🔥 A fine-tuned Video-LLaVA focuses on theme exploration, narrative analysis, and character dynamics.
Store & play video in Google Drive. The video may be corrupted or uploaded in a format that won't work; try to upload the video again or in a different format. "Video is still processing. Try again later.": The video is not ready to be played. If the file is large, it may take a while before your video is ready; try again in a while. "This video is currently unavailable.":
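When a video keeps failing to play after upload, re-encoding it to a widely supported container and codec (MP4 with H.264 video and AAC audio) is a common fix. The sketch below shells out to the real ffmpeg CLI; ffmpeg must be installed, and the file names are examples:

```python
# Re-encode a problem video to H.264/AAC in MP4, a broadly playable
# combination. Calls the real ffmpeg CLI; file names are examples.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "broken_upload.mov",  # source that refuses to play
        "-c:v", "libx264",          # H.264 video
        "-c:a", "aac",              # AAC audio
        "-movflags", "+faststart",  # index up front for streaming playback
        "converted.mp4",
    ],
    check=True,
)
```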
Lightricks ComfyUI-LTXVideo: LTX-Video Support for ComfyUI - GitHub. Sequence Conditioning – allows motion interpolation from a given frame sequence, enabling video extension from the beginning, end, or middle of the original video. Prompt Enhancer – a new node that helps generate prompts optimized for the best model performance. See the Example Workflows section for more details.
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. This is the repo for the Video-LLaMA project, which is working on empowering large language models with video and audio understanding capabilities. Video-LLaMA is built on top of BLIP-2 and MiniGPT-4. It is composed of two core components: (1) a Vision-Language (VL) Branch and (2) an Audio-Language (AL) Branch.
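A simplified sketch of that two-branch design: each modality encoder's features are projected into the LLM's embedding space and prepended as soft prompts. Module names and dimensions below are illustrative, not Video-LLaMA's actual code:

```python
# Two-branch adapter sketch: vision and audio features are each projected
# into the LLM embedding space. Sizes and names are illustrative only.
import torch
import torch.nn as nn

class TwoBranchAdapter(nn.Module):
    def __init__(self, vision_dim=1408, audio_dim=768, llm_dim=4096):
        super().__init__()
        # Each branch maps modality features to LLM token embeddings.
        self.vl_proj = nn.Linear(vision_dim, llm_dim)  # Vision-Language branch
        self.al_proj = nn.Linear(audio_dim, llm_dim)   # Audio-Language branch

    def forward(self, vision_feats, audio_feats):
        # (B, Tv, vision_dim) and (B, Ta, audio_dim) -> soft prompts
        # to prepend to the text embeddings of a frozen LLM.
        return torch.cat(
            [self.vl_proj(vision_feats), self.al_proj(audio_feats)], dim=1
        )

adapter = TwoBranchAdapter()
tokens = adapter(torch.randn(1, 32, 1408), torch.randn(1, 8, 768))
print(tokens.shape)  # torch.Size([1, 40, 4096])
```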