Cohere-Labs-Community / parameter-efficient-moe
☆274 · Updated 2 years ago
Alternatives and similar repositories for parameter-efficient-moe
Users interested in parameter-efficient-moe are comparing it to the libraries listed below.
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆324 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆453 · Updated last year
- DSIR large-scale data selection framework for language model training ☆268 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆410 · Updated last year
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆532 · Updated 11 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆482 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 10 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆226 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆248 · Updated 10 months ago
- ☆204 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆444 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- The repo for In-context Autoencoder ☆162 · Updated last year
- Project for the paper `Instruction Tuning for Large Language Models: A Survey` ☆223 · Updated 5 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆350 · Updated 2 years ago
- ☆320 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆198 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆243 · Updated 4 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆184 · Updated 6 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- A Survey on Data Selection for Language Models ☆254 · Updated 8 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆253 · Updated 2 years ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆218 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- ☆173 · Updated last year
- An Extensible Continual Learning Framework Focused on Language Models (LMs) ☆292 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year