TencentARC / mllm-npu
mllm-npu: training multimodal large language models on Ascend NPUs
☆95 · Updated last year
Alternatives and similar repositories for mllm-npu
Users interested in mllm-npu often compare it to the libraries listed below.
- ☆188 · Updated 11 months ago
- ☆441 · Updated 4 months ago
- Pruning the VLLMs ☆105 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆146 · Updated 8 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆212 · Updated last year
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching ☆52 · Updated 2 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆329 · Updated 7 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆299 · Updated 4 months ago
- ☆187 · Updated 11 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆248 · Updated last year
- A parallelized VAE that avoids OOM for high-resolution image generation ☆84 · Updated 5 months ago
- Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers" ☆163 · Updated last year
- An industrial extension library for PyTorch to accelerate large-scale model training ☆56 · Updated 4 months ago
- A lightweight and highly efficient training framework for accelerating diffusion tasks ☆51 · Updated last year
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab ☆276 · Updated 3 months ago
- The SAIL-VL2 model series developed by the Bytedance Douyin Content Group ☆76 · Updated 3 months ago
- 青稞Talk ☆181 · Updated this week
- To pioneer training long-context multi-modal transformer models ☆64 · Updated 5 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆191 · Updated last month
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆148 · Updated 3 weeks ago
- minisora-DiT, a DiT reproduction based on XTuner from the open source community MiniSora ☆40 · Updated last year
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆267 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆256 · Updated 4 months ago
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency ☆237 · Updated last week
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- ☆124 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 · Updated last month
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 · Updated last year
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year