AlanAnsell / peft
☆18 · Updated last year
Alternatives and similar repositories for peft
Users interested in peft are comparing it to the libraries listed below.
- ☆13 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- The code for the paper: "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models" ☆54 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆79 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- SILO Language Models code repository ☆81 · Updated last year
- ☆15 · Updated 3 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 2 months ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- ☆32 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated 2 weeks ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Scaling Sparse Fine-Tuning to Large Language Models ☆16 · Updated last year
- ☆95 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- ☆36 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆78 · Updated this week
- Demonstration that finetuning a RoPE model on longer sequences than the pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- ☆127 · Updated last year
- ☆82 · Updated 6 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 11 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆99 · Updated 2 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- ☆27 · Updated 11 months ago
- ☆68 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆133 · Updated 6 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆91 · Updated 2 months ago