Official implementation of "Self-Improving Video Generation"
☆77 · Apr 25, 2025 · Updated 11 months ago
Alternatives and similar repositories for VideoAgent
Users interested in VideoAgent are comparing it to the repositories listed below.
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆61 · May 4, 2025 · Updated 11 months ago
- Official implementation of "LOCATEdit: Graph Laplacian Optimized Cross Attention for Localized Guided Image Editing" ☆16 · May 27, 2025 · Updated 10 months ago
- Official implementation of SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation ☆116 · Nov 26, 2024 · Updated last year
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆26 · Apr 14, 2025 · Updated 11 months ago
- Official repository for "Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models" ☆13 · Dec 4, 2023 · Updated 2 years ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences ☆250 · Apr 25, 2024 · Updated last year
- MCP prompt tool applying Chain-of-Draft (CoD) reasoning - BYOLLM ☆18 · Sep 8, 2025 · Updated 7 months ago
- Benchmarking physical understanding in generative video models ☆273 · Updated this week
- Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models such as VideoCrafter, OpenSora, ModelScope… ☆312 · Mar 12, 2025 · Updated last year
- ☆78 · May 23, 2025 · Updated 10 months ago
- DiT for VAE (and Video Generation) ☆35 · Sep 2, 2024 · Updated last year
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025 accepted) ☆18 · Apr 11, 2025 · Updated last year
- Code for the paper Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance, accepted to CoRL 2023 as an… ☆35 · Jul 15, 2025 · Updated 8 months ago
- A custom node extension for ComfyUI that integrates Google's Veo 2 text-to-video generation capabilities ☆32 · Apr 12, 2025 · Updated 11 months ago
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆276 · Jun 19, 2025 · Updated 9 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆502 · Jan 22, 2025 · Updated last year
- Code for "Evaluating Robot Policies in a World Model" ☆87 · Nov 6, 2025 · Updated 5 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆238 · Oct 28, 2025 · Updated 5 months ago
- Official PyTorch implementation of the paper Transformer-Based Image Generation from Scene Graphs, https://arxiv.org/abs/2303.04634 ☆19 · Jan 30, 2024 · Updated 2 years ago
- Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers" ☆204 · Sep 18, 2025 · Updated 6 months ago
- [ACM Multimedia 2025 Datasets Track] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing ☆140 · Aug 2, 2025 · Updated 8 months ago
- Jupyter notebooks for PuLID face transfer with Flux.1 dev. Able to run on the Google Colab free tier ☆18 · Dec 18, 2024 · Updated last year
- Code release for: Controllable Layer Decomposition for Reversible Multi-Layer Image Generation ☆46 · Dec 7, 2025 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆82 · Dec 12, 2024 · Updated last year
- LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations ☆27 · May 21, 2025 · Updated 10 months ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open-vocabulary visuolinguomotor learning.