analokmaus / kaggle-aimo2-fast-math-r1
Kaggle AIMO2 solution with token-efficient reasoning LLM recipes
☆34 · Updated 3 weeks ago
Alternatives and similar repositories for kaggle-aimo2-fast-math-r1
Users interested in kaggle-aimo2-fast-math-r1 are comparing it to the libraries listed below.
- ☆48 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 11 months ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆41 · Updated last month
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆81 · Updated 9 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated 2 weeks ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- DPO, but faster 🚀 ☆44 · Updated 8 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆22 · Updated 6 months ago
- ☆65 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆62 · Updated 3 months ago
- ☆76 · Updated last year
- List of papers on Self-Correction of LLMs ☆74 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆54 · Updated 6 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆61 · Updated 10 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated last month
- Long Context Extension and Generalization in LLMs ☆58 · Updated 11 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆13 · Updated 4 months ago
- WideSearch: Benchmarking Agentic Broad Info-Seeking ☆80 · Updated 2 weeks ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆51 · Updated 2 months ago
- ☆19 · Updated 7 months ago
- ☆11 · Updated 2 years ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 9 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax ☆75 · Updated last year
- 6th place solution code for the Kaggle LLM Science Exam competition ☆23 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 7 months ago
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation ☆43 · Updated 2 weeks ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference ☆84 · Updated last month