Tele-AI / TeleTron
To pioneer training long-context multi-modal transformer models
★58 · Updated last month
Alternatives and similar repositories for TeleTron
Users interested in TeleTron are comparing it to the libraries listed below.
- Code for Draft Attention ★90 · Updated 4 months ago
- A Unified Cache Acceleration Toolbox for 🤗Diffusers: FLUX.1, Qwen-Image-Edit, Qwen-Image, HunyuanImage-2.1, Wan 2.1/2.2, etc. ★317 · Updated this week
- ★175 · Updated 8 months ago
- A parallel VAE that avoids OOM for high-resolution image generation ★78 · Updated last month
- A high-performance inference engine for diffusion models ★91 · Updated 2 weeks ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling (a minimal step-caching sketch appears after this list). ★48 · Updated last year
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ★159 · Updated 10 months ago
- [ICML 2025] Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity ★430 · Updated 3 weeks ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ★35 · Updated 2 weeks ago
- Patch convolution to avoid large GPU memory usage of Conv2D ★92 · Updated 7 months ago
- Official implementation of paper "VMoBA: Mixture-of-Block Attention for Video Diffusion Models" ★45 · Updated 2 months ago
- A lightweight and highly efficient training framework for accelerating diffusion tasks ★48 · Updated last year
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ★113 · Updated last year
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ★45 · Updated 9 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ★508 · Updated this week
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ★38 · Updated 2 weeks ago
- Official implementation of DiCache: Let Diffusion Model Determine Its Own Cache ★37 · Updated 3 weeks ago
- The official repo for the paper "Accelerating Parallel Sampling of Diffusion Models", Tang et al., ICML 2024, https://openreview.net… ★16 · Updated last year
- The official implementation of "Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers" (arXiv … ★44 · Updated 3 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ★62 · Updated last year
- mllm-npu: training multimodal large language models on Ascend NPUs ★92 · Updated last year
- A sparse attention kernel supporting mixed sparse patterns (a toy block-sparse attention sketch appears after this list) ★296 · Updated 7 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ★119 · Updated 6 months ago
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training ★183 · Updated 7 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ★23 · Updated 7 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ★229 · Updated 2 months ago
- ★425 · Updated last month
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching ★45 · Updated 2 months ago
- [ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ★283 · Updated last month
- An open-source implementation of Regional Adaptive Sampling (RAS), a novel diffusion model sampling strategy that introduces regional var… ★140 · Updated 2 months ago
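
Several of the entries above (FORA, Learning-to-Cache, Adaptive Caching, SmoothCache, TaylorSeers, DiCache, FastCache) share one idea: features computed at one denoising step change slowly across neighboring steps, so a block's output can be cached and reused instead of recomputed. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; `CachedBlock`, `reuse_every`, and the fixed modulo schedule are assumptions for illustration, not the API or scheduling policy of any listed project.

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Toy step-caching wrapper (illustration only, not any repo's API)."""

    def __init__(self, block: nn.Module, reuse_every: int = 2):
        super().__init__()
        self.block = block
        self.reuse_every = reuse_every
        self._cache = None  # output from the last "fresh" step

    def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
        # Recompute only every `reuse_every`-th denoising step; in between,
        # return the stale cached features, trading a little accuracy for a
        # proportional reduction in compute.
        if self._cache is None or step % self.reuse_every == 0:
            self._cache = self.block(x)
        return self._cache

# Usage: wrap a stand-in block and drive it across 4 denoising steps.
blk = CachedBlock(nn.Linear(64, 64), reuse_every=2)
x = torch.randn(1, 16, 64)
outs = [blk(x, step) for step in range(4)]  # steps 1 and 3 reuse 0 and 2
```

A fixed modulo schedule is the simplest possible policy; the learned and adaptive variants listed above replace the `step % reuse_every == 0` test with a learned or error-driven decision about when the cache has drifted too far.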
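
Likewise, the sparse-attention entries (Sparse VideoGen, VMoBA, XAttention, the mixed-pattern sparse kernel) all restrict attention to a subset of key/value blocks chosen by some cheap importance estimate. Below is a minimal, hypothetical block-sparse attention sketch assuming mean-pooled block scores and top-k selection; real kernels fuse this into custom GPU code and use different scoring (e.g., antidiagonal scoring in XAttention).

```python
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block=64, keep=0.25):
    """Toy block-sparse attention (illustration only, not a real kernel).

    q, k, v: (heads, seq, dim) with seq divisible by `block`. Each query
    block attends only to the `keep` fraction of key/value blocks that
    score highest under a mean-pooled similarity estimate.
    """
    h, n, d = q.shape
    nb = n // block
    # Cheap block importance: similarity of mean-pooled queries and keys.
    qb = q.view(h, nb, block, d).mean(dim=2)            # (h, nb, d)
    kb = k.view(h, nb, block, d).mean(dim=2)            # (h, nb, d)
    scores = qb @ kb.transpose(-1, -2)                  # (h, nb, nb)
    topk = max(1, int(keep * nb))
    idx = scores.topk(topk, dim=-1).indices             # (h, nb, topk)

    out = torch.zeros_like(q)
    for head in range(h):
        for qi in range(nb):
            # Gather the token rows of the selected key/value blocks.
            rows = torch.cat([torch.arange(int(b) * block, (int(b) + 1) * block)
                              for b in idx[head, qi]])
            qs = q[head, qi * block:(qi + 1) * block]   # (block, d)
            att = F.softmax(qs @ k[head, rows].T / d ** 0.5, dim=-1)
            out[head, qi * block:(qi + 1) * block] = att @ v[head, rows]
    return out

# Usage on random tensors: 2 heads, 256 tokens, keep half the KV blocks.
q, k, v = (torch.randn(2, 256, 32) for _ in range(3))
out = block_sparse_attention(q, k, v, block=64, keep=0.5)  # (2, 256, 32)
```

The dense per-block loop here is for readability; the speedup in the listed projects comes from never materializing the dropped blocks inside a fused kernel, and from smarter block-scoring than the mean-pooling assumed above.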