- Wan: Open and Advanced Large-Scale Video Generative Models
In this repository, we present Wan2.1, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. Wan2.1 offers several key features; a brief usage sketch follows this list.
- GitHub - k4yt3x/video2x: A machine learning-based video super resolution and frame interpolation framework
A machine learning-based video super resolution and frame interpolation framework. Est. Hack the Valley II, 2018.
- GitHub - Lightricks/LTX-Video: Official repository for LTX-Video
Official repository for LTX-Video.
- DepthAnything/Video-Depth-Anything - GitHub
This work presents Video Depth Anything, based on Depth Anything V2, which can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with other diffusion-based models, it enjoys faster inference speed, fewer parameters, and higher …
- GitHub - stepfun-ai/Step-Video-T2V
Step-Video-T2V exhibits robust performance in inference settings, consistently generating high-fidelity and dynamic videos. However, our experiments reveal that variations in inference hyperparameters can have a substantial effect on the trade-off between video fidelity and dynamics.
- GitHub - veo-3/veo-3
Veo 3 is Google DeepMind’s latest AI-powered video generation model, introduced at Google I/O 2025. It enables users to create high-quality, 1080p videos from simple text or image prompts, integrating realistic audio elements such as dialogue, sound effects, and ambient noise.
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation
HunyuanCustom supports multimodal video customization, accepting inputs in the form of text, images, audio, and video. Specifically, it can handle single or multiple image inputs to enable customized video generation for one or more subjects. Additionally, it can incorporate extra audio inputs to drive the subject to speak the corresponding audio.
- Create your first video in Google Vids
You can use "Help me create" to generate a first-draft video with Gemini in Google Vids. All you need to do is enter a description; Gemini then generates a draft for the video, including a script, AI voiceover, scenes, and content. You can then edit the draft as needed. To get started, open Google Vids on your computer.
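For a sense of how open text-to-video models such as Wan2.1 are typically driven from code, below is a minimal sketch that generates a short clip through the Hugging Face Diffusers integration. The pipeline class, checkpoint name, resolution, and sampling parameters are assumptions drawn from the public Diffusers documentation rather than the Wan repository itself; consult the repository for the canonical usage.

```python
# Minimal sketch (assumptions: a recent Diffusers release with WanPipeline,
# a CUDA GPU, and the "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" checkpoint on the Hub).
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"  # assumed checkpoint name

# Keep the VAE in float32 for numerical stability; run the transformer in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="A cat walking through tall grass at sunset, cinematic lighting",
    negative_prompt="blurry, distorted, low quality",
    height=480,        # 480p output for the smaller 1.3B model
    width=832,
    num_frames=81,     # roughly five seconds at 16 fps
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "wan_t2v_sample.mp4", fps=16)
```

The same pattern (load a pipeline, pass a prompt plus frame count and guidance scale, export the frames) applies to other Diffusers-integrated models in this list, such as LTX-Video, with their own checkpoints and default resolutions.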