Guitaricet / my_pefty_llama
Minimal implementation of multiple PEFT methods for LLaMA fine-tuning
☆13 · Updated 2 years ago
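For context on what the repository above implements: PEFT (parameter-efficient fine-tuning) methods adapt a large frozen model by training only a small number of extra parameters. A minimal NumPy sketch of one such method, a LoRA-style low-rank update to a linear layer, is shown below; the function name, shapes, and hyperparameters are illustrative, not taken from the repo:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass of a LoRA-adapted linear layer (illustrative sketch).

    W : frozen pretrained weight, shape (d_out, d_in)
    A : trainable down-projection, shape (r, d_in)
    B : trainable up-projection,   shape (d_out, r)
    Only A and B -- r * (d_in + d_out) parameters -- would be updated
    during fine-tuning; W stays frozen.
    """
    scaling = alpha / r
    return x @ W.T + scaling * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 4
x = rng.normal(size=(2, d_in))
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in)) * 0.01  # small random init, as is common
B = np.zeros((d_out, r))               # zero init: adapter starts as a no-op

y = lora_forward(x, W, A, B, r=r)
# With B = 0 the adapted layer reproduces the frozen layer exactly.
assert np.allclose(y, x @ W.T)
```

The zero-initialized `B` is the usual trick that makes the adapted model identical to the base model at step 0, so fine-tuning starts from the pretrained behavior.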
Alternatives and similar repositories for my_pefty_llama
Users interested in my_pefty_llama are comparing it to the libraries listed below.
- SILO Language Models code repository ☆81 · Updated last year
- ☆44 · Updated 9 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆116 · Updated 2 years ago
- Embedding Recycling for Language models ☆39 · Updated 2 years ago
- Datasets collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated last month
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆120 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆94 · Updated 2 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea… ☆75 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆163 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated 11 months ago
- ☆75 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 2 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆132 · Updated last year
- ☆66 · Updated 2 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆215 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- ☆39 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer ☆135 · Updated 7 months ago
- Transformers at any scale ☆41 · Updated last year
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆51 · Updated 2 months ago
- ☆72 · Updated 2 years ago
- Hierarchical Attention Transformers (HAT) ☆58 · Updated last year
- Truly flash T5 implementation! ☆70 · Updated last year