NVlabs / DoRA
[ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
★ 837 · Updated 11 months ago
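For context on what this repo implements: DoRA decomposes each pretrained weight into a magnitude vector and a direction matrix, fine-tunes the direction with a standard LoRA update, re-normalizes the columns, and rescales by the trainable magnitude. Below is a minimal sketch of that decomposition, not the official NVlabs API; the `DoRALinear` name, `rank` default, and initialization choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Minimal sketch of DoRA's weight decomposition (hypothetical wrapper,
    not the official implementation).

    The frozen pretrained weight W0 is treated as magnitude * direction:
    a LoRA update (B @ A) is added to W0, each column is re-normalized to
    unit norm, and a trainable magnitude vector m rescales the result.
    """

    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        # Frozen pretrained weight W0, shape (out_features, in_features).
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = linear.bias
        out_f, in_f = self.weight.shape
        # Trainable magnitude, initialized to the column norms of W0.
        self.m = nn.Parameter(self.weight.norm(p=2, dim=0, keepdim=True))  # (1, in)
        # Standard LoRA factors; B starts at zero so training begins at W0.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight + self.lora_B @ self.lora_A   # direction update, (out, in)
        w = w / w.norm(p=2, dim=0, keepdim=True)      # unit-norm columns
        return F.linear(x, self.m * w, self.bias)     # rescale by magnitude
```

Under these assumptions, one would wrap an existing projection, e.g. `DoRALinear(model.q_proj, rank=16)`, and train only `m`, `lora_A`, and `lora_B` while `weight` stays frozen.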
Alternatives and similar repositories for DoRA
Users interested in DoRA are comparing it to the libraries listed below.
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ★ 378 · Updated 2 months ago
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ★ 571 · Updated 6 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ★ 435 · Updated 3 months ago
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ★ 358 · Updated last year
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ★ 802 · Updated last month
- Implementation of DoRA ★ 301 · Updated last year
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ★ 797 · Updated 2 months ago
- Muon is an optimizer for hidden layers in neural networks ★ 1,640 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ★ 432 · Updated last year
- ★ 223 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ★ 347 · Updated 2 years ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ★ 1,851 · Updated last year
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★ 1,248 · Updated last year
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★ 421 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ★ 845 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ★ 1,591 · Updated 10 months ago
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ★ 1,196 · Updated 2 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ★ 735 · Updated 3 weeks ago
- When do we not need larger vision models? ★ 407 · Updated 6 months ago
- Helpful tools and examples for working with flex-attention ★ 951 · Updated 2 weeks ago
- ★ 208 · Updated 10 months ago
- Dream 7B, a large diffusion language model ★ 950 · Updated 2 weeks ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ★ 124 · Updated last year
- A collection of AWESOME things about mixture-of-experts ★ 1,197 · Updated 8 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★ 908 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ★ 629 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ★ 359 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ★ 349 · Updated 9 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ★ 536 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need ★ 349 · Updated this week