Guitaricet / relora
Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
☆463 · Updated last year
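ReLoRA trains a network through a sequence of low-rank (LoRA-style) updates, periodically merging each update into the frozen base weights and restarting it, so the accumulated update can reach high rank even though each individual update is low-rank. Below is a minimal sketch of that cycle, assuming a PyTorch setup; the class and helper names are illustrative, not the official repo's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

    @torch.no_grad()
    def merge_and_reinit(self):
        # Fold the current low-rank update into the frozen weight, then
        # restart from a fresh update so the next cycle can add new
        # directions; the sum over cycles gives a higher-rank total update.
        self.base.weight += self.scale * (self.lora_B @ self.lora_A)
        self.lora_A.normal_(std=0.01)
        self.lora_B.zero_()

def relora_reset(model: nn.Module, optimizer: torch.optim.Optimizer,
                 step: int, reset_every: int = 2000) -> None:
    """Call after each optimizer step: periodically merge-and-restart."""
    if (step + 1) % reset_every != 0:
        return
    for module in model.modules():
        if isinstance(module, LoRALinear):
            module.merge_and_reinit()
    # The paper pairs each reset with a partial optimizer-state prune and a
    # jagged learning-rate schedule; a full state clear is the crudest form.
    optimizer.state.clear()
```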
Alternatives and similar repositories for relora
Users interested in relora are comparing it to the libraries listed below.
- Official PyTorch implementation of QA-LoRA ☆140 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year
- Implementation of DoRA ☆301 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆652 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆399 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆630 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆425 · Updated last year
- batched loras ☆345 · Updated 2 years ago
- Implementation of Recurrent Memory Transformer (NeurIPS 2022) in PyTorch ☆414 · Updated 8 months ago
- Codebase for Merging Language Models (ICML 2024) ☆849 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- A simple and effective LLM pruning approach. ☆799 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 · Updated last year
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss (see the SLERP sketch after this list). ☆137 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆341 · Updated 2 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆249 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆704 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆711 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- DSIR: a large-scale data selection framework for language model training ☆258 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated last year
- A bagel, with everything. ☆324 · Updated last year
- A repository for research on medium-sized language models. ☆510 · Updated 3 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆380 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆309 · Updated last year
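Several entries above are weight-merging tools (Merging Language Models, the spherical-merge repo). A hedged sketch of spherical linear interpolation (SLERP) applied per weight tensor, the idea behind the spherical-merge entry; this is an illustrative implementation, not that repo's code:

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate a fraction t of the way from w0 to w1 along the sphere."""
    a, b = w0.flatten().float(), w1.flatten().float()
    cos = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos)        # angle between the two weight vectors
    if omega.abs() < 1e-4:         # nearly parallel: plain lerp is stable
        merged = (1 - t) * a + t * b
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return merged.reshape(w0.shape).to(w0.dtype)

# Usage on two checkpoints' state dicts (hypothetical names sd0, sd1):
# merged = {name: slerp(sd0[name], sd1[name], t=0.5) for name in sd0}
```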