AlanAnsell / peft
☆20, updated last year
Alternatives and similar repositories for peft
Users interested in peft are comparing it to the libraries listed below.
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training (☆22, updated last year)
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" (☆105, updated last month)
- ☆128, updated last year
- [ICLR'25] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" (☆78, updated last year)
- Official code repository for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" (☆24, updated 7 months ago)
- ☆35, updated last year
- The original Backpack Language Model implementation, a fork of FlashAttention (☆69, updated 2 years ago)
- Long Context Extension and Generalization in LLMs (☆62, updated last year)
- Language models scale reliably with over-training and on downstream tasks (☆100, updated last year)
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" (☆76, updated 6 months ago)
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models (☆143, updated 3 years ago)
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" (☆62, updated 3 months ago)
- The HELMET Benchmark (☆186, updated 3 months ago)
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" (☆38, updated last year)
- ☆75, updated last year
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) (☆63, updated last year)
- ☆101, updated 2 years ago
- ☆95, updated last year
- ☆34, updated 2 years ago
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) (☆136, updated 2 years ago)
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… (☆56, updated 2 years ago)
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models (☆47, updated 2 years ago)
- PASTA: Post-hoc Attention Steering for LLMs (☆129, updated last year)
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs (☆61, updated last year)
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning (☆40, updated 2 years ago)
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) (☆81, updated 2 years ago)
- Repository for "PMET: Precise Model Editing in a Transformer" (☆55, updated 2 years ago)
- ☆142, updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation; dramatic speed-up with better task performance… (☆157, updated 7 months ago)
- ☆88, updated last year