timinar / BabyLlama
Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge.
☆85 · Updated 2 years ago
Alternatives and similar repositories for BabyLlama
Users interested in BabyLlama are comparing it to the libraries listed below.
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆250 · Updated 10 months ago
- ☆235 · Updated last year
- ☆128 · Updated 2 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the sketch after this list) ☆106 · Updated last year
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆152 · Updated 10 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (routing sketched after this list) ☆176 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" (decomposition sketched after this list) ☆124 · Updated last year
- ☆273 · Updated 2 years ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated 9 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆445 · Updated last year
- ☆85 · Updated 2 months ago
- Low-bit optimizers for PyTorch ☆138 · Updated 2 years ago
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… (a generic pruning sketch follows this list) ☆32 · Updated 3 weeks ago
- Explorations into some recent techniques surrounding speculative decoding ☆299 · Updated last year
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆113 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆77 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆97 · Updated 2 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆205 · Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆214 · Updated 11 months ago
- ☆203 · Updated last year
- ☆125 · Updated last year
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆103 · Updated 7 months ago
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- Unofficial implementation of AlpaGasus ☆94 · Updated 2 years ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 11 months ago
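
The speculative-sampling entry above is the easiest of these techniques to show in code: a small draft model proposes a few tokens, and the large target model accepts or resamples them so the output distribution matches ordinary sampling from the target. Below is a minimal sketch; `draft_probs` and `target_probs` are hypothetical callables returning a next-token distribution for a prefix, not the listed repo's API.

```python
# Minimal sketch of speculative sampling (DeepMind, 2023). Assumes
# draft_probs/target_probs map a 1-D token prefix to a (vocab,) distribution.
import torch

def speculative_step(prefix, draft_probs, target_probs, k=4):
    """One round: draft k tokens cheaply, then verify with the target model."""
    draft_tokens, q = [], []
    ctx = prefix.clone()
    for _ in range(k):                        # cheap autoregressive drafting
        dist = draft_probs(ctx)               # (vocab,) probability vector
        tok = torch.multinomial(dist, 1)
        draft_tokens.append(tok)
        q.append(dist)
        ctx = torch.cat([ctx, tok])
    accepted = []
    for i, tok in enumerate(draft_tokens):
        # For clarity the target is queried per position; the paper scores
        # all k drafted positions in one forward pass.
        p = target_probs(torch.cat([prefix, *accepted]))
        if torch.rand(()) < (p[tok] / q[i][tok]).clamp(max=1.0):
            accepted.append(tok)              # accept: target agrees often enough
        else:
            resid = (p - q[i]).clamp(min=0)   # reject: resample from the residual
            accepted.append(torch.multinomial(resid / resid.sum(), 1))
            break
    # The bonus token sampled after k straight acceptances is omitted here.
    return torch.cat([prefix, *accepted])
```

The same acceptance rule underlies the Draft & Verify entry, except there the "draft model" is the target model itself with some layers skipped.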
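
The Mixture-of-Depths routing mentioned above also fits in a few lines: a scalar router scores every token, only the top-k tokens per sequence pass through the block, and the rest ride the residual stream unchanged. A rough sketch, where `MoDBlock`, `block`, and the capacity value are illustrative assumptions rather than the listed repo's code:

```python
# Rough sketch of Mixture-of-Depths routing: per layer, only a top-k subset
# of tokens receives compute; scaling by the router weight keeps the routing
# decision differentiable. Illustrative, not the paper authors' code.
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.125):
        super().__init__()
        self.block = block                # any (B, T, D) -> (B, T, D) layer
        self.router = nn.Linear(d_model, 1)
        self.capacity = capacity          # fraction of tokens that get compute

    def forward(self, x):                 # x: (B, T, D)
        B, T, D = x.shape
        k = max(1, int(T * self.capacity))
        scores = self.router(x).squeeze(-1)           # (B, T) routing scores
        topk = scores.topk(k, dim=-1).indices         # tokens that get compute
        idx = topk.unsqueeze(-1).expand(-1, -1, D)    # (B, k, D) gather index
        selected = x.gather(1, idx)
        w = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        out = x.clone()                   # skipped tokens keep their residual
        out.scatter_(1, idx, selected + w * self.block(selected))
        return out
```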
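
DoRA, also listed above, reparameterizes a frozen weight as a learned per-output magnitude times a LoRA-updated direction. A compact sketch of that decomposition around an `nn.Linear` (this mirrors the paper's formula, not the official repo's code):

```python
# Compact DoRA sketch: W = m * (W0 + B @ A) / ||W0 + B @ A||, with W0 frozen,
# A/B the low-rank update, and m initialized to the per-output norms of W0.
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.weight = base.weight                     # frozen pretrained W0
        self.weight.requires_grad_(False)
        self.bias = base.bias
        out_f, in_f = self.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))   # B @ A = 0 at init
        self.m = nn.Parameter(self.weight.norm(dim=1, keepdim=True).clone())

    def forward(self, x):
        w = self.weight + self.B @ self.A             # low-rank direction update
        w = self.m * w / w.norm(dim=1, keepdim=True)  # renormalize, rescale
        return nn.functional.linear(x, w, self.bias)
```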
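
Finally, the Qwen vocabulary-trimming entry rests on a simple idea: if a deployment only ever sees a subset of the 151,936-token vocabulary, the unused rows of the embedding and LM-head matrices can be dropped. A generic sketch of that operation; the choice of `keep_ids` and all names here are assumptions for illustration, not that project's actual procedure:

```python
# Generic vocabulary-pruning sketch: keep only the token ids observed in a
# target corpus and slice the embedding / LM-head rows accordingly.
import torch
import torch.nn as nn

def prune_vocab(embedding: nn.Embedding, lm_head: nn.Linear, keep_ids: torch.Tensor):
    """Return resized layers plus an old->new id map for re-tokenization."""
    keep_ids = torch.unique(keep_ids)                 # sorted, de-duplicated
    new_emb = nn.Embedding(len(keep_ids), embedding.embedding_dim)
    new_emb.weight.data = embedding.weight.data[keep_ids].clone()
    new_head = nn.Linear(lm_head.in_features, len(keep_ids),
                         bias=lm_head.bias is not None)
    new_head.weight.data = lm_head.weight.data[keep_ids].clone()
    if lm_head.bias is not None:
        new_head.bias.data = lm_head.bias.data[keep_ids].clone()
    old_to_new = {int(o): n for n, o in enumerate(keep_ids.tolist())}
    return new_emb, new_head, old_to_new
```

The tokenizer then has to be remapped with `old_to_new` so that input ids index the shrunken tables.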