Guitaricet / my_pefty_llama
Minimal implementation of multiple PEFT methods for LLaMA fine-tuning
☆13 · Updated 2 years ago
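For context, here is a minimal sketch of the LoRA idea, one of the PEFT methods a repository like this typically implements. It is written against plain PyTorch; the class name, rank `r`, and scaling `alpha` below are illustrative assumptions, not code taken from my_pefty_llama.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear and add a trainable low-rank update (LoRA sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)        # freeze pretrained bias
        self.lora_a = nn.Linear(base.in_features, r, bias=False)   # down-projection
        self.lora_b = nn.Linear(r, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)              # update starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen pretrained path + scaled low-rank correction
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Illustrative usage (hypothetical attribute names): wrap one attention projection, e.g.
# block.attn.q_proj = LoRALinear(block.attn.q_proj, r=8, alpha=16)
```

Only the small `lora_a`/`lora_b` matrices receive gradients, which is what keeps parameter-efficient fine-tuning cheap relative to full fine-tuning.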
Alternatives and similar repositories for my_pefty_llama
Users interested in my_pefty_llama are comparing it to the libraries listed below
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- ☆44 · Updated 11 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆115 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆96 · Updated 2 years ago
- ☆76 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- SILO Language Models code repository ☆83 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 3 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆188 · Updated 3 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- A categorical archive of ChatGPT failures ☆64 · Updated 2 years ago
- ☆26 · Updated 8 months ago
- Observe the slow deterioration of my mental sanity in the GitHub commit history ☆12 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" ☆75 · Updated last year
- Interpretable unified language safety checking with large language models ☆31 · Updated 2 years ago
- Easy ModernBERT fine-tuning and multi-task learning ☆61 · Updated 4 months ago
- ☆55 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆135 · Updated last year
- ☆27 · Updated last year
- [ICLR 2023] PyTorch code of Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees ☆24 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models ☆36 · Updated last month
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 4 months ago