Cohere-Labs-Community / parameter-efficient-moe
☆271 · Updated 2 years ago
Alternatives and similar repositories for parameter-efficient-moe
Users interested in parameter-efficient-moe are comparing it to the libraries listed below.
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆443 · Updated last year
- DSIR large-scale data selection framework for language model training ☆265 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆218 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆441 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆523 · Updated 9 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆109 · Updated 8 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆402 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆318 · Updated last year
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆266 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆477 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆166 · Updated last year
- ☆196 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆236 · Updated 7 months ago
- Project for the paper "Instruction Tuning for Large Language Models: A Survey" ☆190 · Updated 3 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆121 · Updated last year
- ☆197 · Updated 6 months ago
- ☆314 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated last month
- ☆163 · Updated last year
- A Survey on Data Selection for Language Models ☆252 · Updated 6 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆391 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆175 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆192 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆574 · Updated 11 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆237 · Updated last month
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆181 · Updated 4 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆176 · Updated 5 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆365 · Updated last year