NVlabs / DoRA
[ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
☆865 · Updated last year
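To orient the comparisons below: DoRA reparameterizes each pretrained weight into a learnable magnitude and a LoRA-updated direction, then trains only the low-rank factors and the magnitude. The following is a minimal PyTorch sketch of that decomposition, not the official implementation; the `DoRALinear` class name, rank default, and initialization scale are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Illustrative DoRA layer: W' = m * (W0 + B @ A) / ||W0 + B @ A||_row."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        # Frozen pretrained weight W0.
        self.weight = nn.Parameter(base.weight.detach(), requires_grad=False)
        self.bias = base.bias
        # LoRA factors: B is zero-initialized so training starts exactly at W0.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Learnable magnitude vector m, initialized to the per-row norm of W0.
        self.m = nn.Parameter(self.weight.norm(p=2, dim=1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = self.weight + self.B @ self.A                      # direction part
        merged = merged / merged.norm(p=2, dim=1, keepdim=True)    # unit-norm rows
        return F.linear(x, self.m * merged, self.bias)
```

Wrapping an existing `nn.Linear` this way trains only `A`, `B`, and `m`, which is the parameter-efficiency argument shared by most of the LoRA descendants listed below.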
Alternatives and similar repositories for DoRA
Users interested in DoRA are comparing it to the libraries listed below.
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆574 · Updated 8 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆439 · Updated 5 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆856 · Updated 3 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆353 · Updated 2 years ago
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆358 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,235 · Updated last week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,879 · Updated last year
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆772 · Updated 2 months ago
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆810 · Updated 4 months ago
- Implementation of DoRA ☆304 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆1,888 · Updated 3 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆352 · Updated 11 months ago
- Helpful tools and examples for working with flex-attention ☆1,020 · Updated this week
- Dream 7B, a large diffusion language model ☆1,018 · Updated 3 weeks ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,610 · Updated 11 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆436 · Updated last year
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆581 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆886 · Updated last month
- Muon is Scalable for LLM Training ☆1,336 · Updated 2 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆363 · Updated last year
- Next-Token Prediction is All You Need ☆2,216 · Updated 7 months ago
- A Framework of Small-scale Large Multimodal Models ☆910 · Updated 5 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities (arXiv:2408.07666) ☆572 · Updated this week
- [NeurIPS 2025] MMaDA: Open-Sourced Multimodal Large Diffusion Language Models ☆1,434 · Updated last week
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ☆850 · Updated last year
- This repo contains the code for 1D tokenizer and generator ☆1,052 · Updated 7 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆923 · Updated 8 months ago
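For trying DoRA alongside the other adapter methods above, DoRA is also exposed through Hugging Face PEFT via the `use_dora` flag on `LoraConfig`. The snippet below is a hedged usage sketch: the checkpoint name, rank, and target module names are placeholders to adapt to your model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder checkpoint and target modules; adjust for your model.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,  # switch the LoRA update to DoRA's magnitude/direction form
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only adapter parameters are trainable
```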