RZFan525 / Awesome-ScalingLaws
A curated list of awesome resources dedicated to Scaling Laws for LLMs
☆77 · Updated 2 years ago
Alternatives and similar repositories for Awesome-ScalingLaws
Users who are interested in Awesome-ScalingLaws are comparing it to the repositories listed below
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆53 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆219 · Updated 5 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- ☆68 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆185 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆65 · Updated 10 months ago
- [ICLR'25] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆77 · Updated 9 months ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- ☆39 · Updated last year
- [NeurIPS'24] Official code for 🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving ☆112 · Updated 8 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 9 months ago
- [ACL 2025] ScaleQuest: a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆64 · Updated 9 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆114 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆80 · Updated 2 months ago
- ☆96 · Updated last year
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆91 · Updated last month
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆170 · Updated 3 months ago
- ☆104 · Updated last month
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 2 months ago
- ☆75 · Updated last year
- A method of ensemble learning for heterogeneous large language models ☆58 · Updated last year