LLM360 / k2-train
☆52 · Updated last year
Alternatives and similar repositories for k2-train
Users interested in k2-train are comparing it to the repositories listed below
- ☆48 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- This is the official repository for Inheritune. ☆115 · Updated 10 months ago
- ☆39 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆112 · Updated last year
- ☆79 · Updated 3 weeks ago
- A repository for research on medium sized language models. ☆78 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆65 · Updated last year
- ☆93 · Updated 2 weeks ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- ☆89 · Updated last year
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 8 months ago
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 7 months ago
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆93 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆72 · Updated 7 months ago
- ☆70 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆220 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆111 · Updated 7 months ago
- Exploration of automated dataset selection approaches at large scales. ☆50 · Updated 9 months ago
- The code for the paper: "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models" ☆55 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- ☆125 · Updated 9 months ago
- MatFormer repo ☆66 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year