TencentARC / mllm-npu
mllm-npu: training multimodal large language models on Ascend NPUs
☆91 · Updated last year
Alternatives and similar repositories for mllm-npu
Users interested in mllm-npu are comparing it to the repositories listed below
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆257 · Updated last week
- ☆173 · Updated 7 months ago
- ☆413 · Updated 3 weeks ago
- ☆174 · Updated 6 months ago
- Pruning the VLLMs ☆103 · Updated 8 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆156 · Updated 10 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆139 · Updated 4 months ago
- A Unified Cache Acceleration Toolbox for 🤗Diffusers: FLUX.1, Qwen-Image-Edit, Qwen-Image, Wan2.1/2.2, etc. ☆244 · Updated this week
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆36 · Updated last month
- minisora-DiT, a DiT reproduction based on XTuner from the open source community MiniSora ☆40 · Updated last year
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆240 · Updated last year
- A lightweight and highly efficient training framework for accelerating diffusion tasks. ☆49 · Updated 11 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆311 · Updated 3 months ago
- A parallelized VAE that avoids OOM for high-resolution image generation ☆76 · Updated last month
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆236 · Updated this week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆210 · Updated 7 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆141 · Updated 3 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆402 · Updated last week
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache) ☆143 · Updated 3 weeks ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 8 months ago
- ☆119 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆189 · Updated 2 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆89 · Updated 2 months ago
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆179 · Updated this week
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆382 · Updated 4 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆234 · Updated 3 weeks ago
- 青稞Talk ☆139 · Updated this week
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆112 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*