WENGSYX / LMTuner
LMTuner: Make the LLM Better for Everyone
☆38 · Updated 2 years ago
Alternatives and similar repositories for LMTuner
Users interested in LMTuner are comparing it to the libraries listed below.
- FuseAI Project ☆87 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- Official implementation for 'Extending LLMs' Context Window with 100 Samples' ☆81 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- Reformatted Alignment ☆111 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated last year
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆73 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated last year
- ☆48 · Updated last year
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆31 · Updated last year
- Cascade Speculative Drafting ☆32 · Updated last year
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Code for ICML'25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 · Updated 7 months ago
- Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262) ☆63 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Code for paper titled "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆81 · Updated last year
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆152 · Updated 10 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- ☆95 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year