analokmaus / kaggle-aimo2-fast-math-r1
Kaggle AIMO2 solution with token-efficient reasoning LLM recipes
☆42 · Updated 5 months ago
Alternatives and similar repositories for kaggle-aimo2-fast-math-r1
Users interested in kaggle-aimo2-fast-math-r1 are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- List of papers on Self-Correction of LLMs. ☆80 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 · Updated 7 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆76 · Updated 8 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆116 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated last year
- ☆82 · Updated last year
- ☆52 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Simple repository for training small reasoning models ☆48 · Updated 11 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- ☆16 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆31 · Updated 7 months ago
- ☆53 · Updated 11 months ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆209 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper. ☆80 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 11 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated 2 years ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 6 months ago
- [ACL 2025 Findings] Autonomous Data Selection with Zero-shot Generative Classifiers for Mathematical Texts (As Huggingface Daily Papers: … ☆90 · Updated 2 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 8 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year