princeton-nlp / LLM-Shearing
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
☆598 · Updated last year
Alternatives and similar repositories for LLM-Shearing:
Users interested in LLM-Shearing are comparing it to the libraries listed below.
- Official PyTorch implementation of QA-LoRA ☆131 · Updated last year
- A simple and effective LLM pruning approach; see the pruning-score sketch after this list. ☆737 · Updated 8 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆715 · Updated 6 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,240 · Updated last month
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning; see the noise-injection sketch after this list. ☆395 · Updated 11 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆999 · Updated 6 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ☆816 · Updated 11 months ago
- ☆219 · Updated 10 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆545 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆403 · Updated 6 months ago
- Code for compression methods for transformers, accompanying our publications ☆424 · Updated 3 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆873 · Updated 2 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆334 · Updated 5 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,183 · Updated this week
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆386 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆547 · Updated 4 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆319 · Updated 6 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆321 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆686 · Updated 8 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆628 · Updated 9 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); see the DPO-loss sketch after this list. ☆833 · Updated last week
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆800 · Updated 6 months ago
- Distributed trainer for LLMs ☆572 · Updated 11 months ago
- Official repository for ORPO ☆448 · Updated 10 months ago
- ☆255 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆254 · Updated 4 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆407 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆450 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆956 · Updated 4 months ago
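For context on what the pruning entries above compute, here is a minimal sketch of an activation-aware pruning score in the style of Wanda (the "simple and effective LLM pruning approach" above scores each weight by its magnitude times the norm of the input feature it multiplies). The function name, the sparsity default, and the calling convention are illustrative assumptions, not that repository's API.

```python
import torch

def wanda_style_prune_(weight: torch.Tensor, acts: torch.Tensor,
                       sparsity: float = 0.5) -> None:
    """Zero the lowest-scoring weights in each output row, in place.

    weight: [out_features, in_features], e.g. linear.weight.data
    acts:   [num_tokens, in_features] calibration inputs to the layer
    """
    # Per-input-channel L2 norm over the calibration tokens.
    act_norm = acts.norm(p=2, dim=0)              # [in_features]
    # Score each weight by |W_ij| * ||X_j||_2.
    score = weight.abs() * act_norm.unsqueeze(0)  # [out, in]
    k = int(weight.shape[1] * sparsity)
    if k > 0:
        # Indices of the k smallest scores per row; zero those weights.
        idx = torch.topk(score, k, dim=1, largest=False).indices
        weight.scatter_(1, idx, 0.0)
```

Call it with `layer.weight.data` so the in-place write bypasses autograd; structured-pruning methods like LLM-Shearing and LLM-Pruner remove whole heads or channels instead of individual weights, but the scoring-then-masking pattern is the same.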
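The NEFTune entry above adds uniform noise to token embeddings during finetuning, scaled by alpha / sqrt(seq_len * hidden_dim). A minimal sketch, assuming a model that exposes `get_input_embeddings()` as in Hugging Face `transformers`; the helper name and alpha default are illustrative.

```python
import math
import torch

def attach_neftune(model, alpha: float = 5.0):
    """Register a forward hook that perturbs embeddings during training only."""
    emb = model.get_input_embeddings()

    def hook(module, inputs, output):
        if not module.training:
            return output
        seq_len, dim = output.shape[-2], output.shape[-1]
        scale = alpha / math.sqrt(seq_len * dim)
        # Uniform(-1, 1) noise, scaled as in the NEFTune paper.
        return output + torch.zeros_like(output).uniform_(-1.0, 1.0) * scale

    return emb.register_forward_hook(hook)
```

The returned handle's `.remove()` detaches the hook; the `module.training` check already makes evaluation and generation noise-free.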
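Several entries above (SimPO, ORPO, the HALOs library) are variations on pairwise preference losses, the canonical one being DPO. A minimal sketch of the DPO loss from summed per-response log-probabilities; the function name and batch convention are assumptions for illustration, not the HALOs API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input is log pi(y|x) summed over response tokens, shape [batch]."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * implicit reward margin), averaged over the batch.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

SimPO drops the frozen reference model and length-normalizes the log-probabilities instead, which is why it is described above as having a reference-free reward.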