GeneZC / MiniMA
Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models"
☆100 · Updated 10 months ago
Alternatives and similar repositories for MiniMA
Users interested in MiniMA are comparing it to the repositories listed below
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆202 · Updated last year
- ☆100 · Updated 8 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆78 · Updated last year
- FuseAI Project ☆87 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 8 months ago
- Unofficial implementation of AlpaGasus ☆91 · Updated last year
- Reformatted Alignment ☆114 · Updated 8 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 4 months ago
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 3 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆123 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- Experiments on speculative sampling with Llama models ☆126 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆142 · Updated 7 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 10 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- ☆258 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆111 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 11 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆183 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 9 months ago
- ☆67 · Updated 2 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- ☆49 · Updated last year