yangluo7 / CAME
[ACL 2023] The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization"
☆93 · Updated 7 months ago
Alternatives and similar repositories for CAME
Users that are interested in CAME are comparing it to the libraries listed below
- Official codebase for "Margin-aware Preference Optimization for Aligning Diffusion Models without Reference" (MaPO). ☆82 · Updated last year
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year
- ☆49 · Updated last year
- PyTorch implementation of "Parallel Sampling of Diffusion Models", NeurIPS 2023 Spotlight ☆148 · Updated 2 years ago
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ☆144 · Updated 4 months ago
- Minimal Differentiable Image Reward Functions ☆97 · Updated 2 months ago
- [ICML 2025] LoRA fine-tuning directly on quantized models. ☆36 · Updated 11 months ago
- Code for the NeurIPS 2023 paper "Restart Sampling for Improving Generative Processes" ☆151 · Updated last year
- The official implementation of "Distribution Backtracking Distillation for One-step Diffusion Models" ☆33 · Updated 9 months ago
- TerDiT: Ternary Diffusion Models with Transformers ☆71 · Updated last year
- A WebUI for side-by-side comparison of media (images/videos) across multiple folders ☆24 · Updated 8 months ago
- Official repository for the paper "VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization" ☆34 · Updated last year
- [ICLR 2025] Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stit…" ☆103 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 8 months ago
- ☆87 · Updated last year
- Official implementation of our paper "Scaling Diffusion Transformers Efficiently via μP". ☆90 · Updated 4 months ago
- Official PyTorch implementation of the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆54 · Updated 9 months ago
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training