LLM360 / k2-train
☆52 · Updated last year
Alternatives and similar repositories for k2-train
Users interested in k2-train are comparing it to the libraries listed below.
- This is the official repository for Inheritune. ☆115 · Updated 9 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433 ☆110 · Updated 11 months ago
- ☆88 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- ☆88 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆79 · Updated 7 months ago
- ☆39 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 11 months ago
- ☆124 · Updated 9 months ago
- ☆48 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆92 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆48 · Updated 8 months ago
- ☆78 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 7 months ago
- ☆85 · Updated last week
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆66 · Updated 7 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Evaluating LLMs with fewer examples ☆168 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated last month
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆69 · Updated 6 months ago
- ☆108 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- ☆75 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆85 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆122 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆156 · Updated 7 months ago