HIT-SCIR / Abacus
Abacus (珠算) Code LLM
☆57 · Updated last year
Alternatives and similar repositories for Abacus
Users interested in Abacus are comparing it to the libraries listed below.
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆60 · Updated last year
- IKEA: Reinforced Internal-External Knowledge Synergistic Reasoning for Efficient Adaptive Search Agent ☆66 · Updated 6 months ago
- ☆83 · Updated last year
- ☆51 · Updated 8 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 6 months ago
- The official GitHub page for the survey paper "A Survey on Data Augmentation in Large Model Era" ☆131 · Updated last year
- ☆60 · Updated last year
- Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models ☆119 · Updated last month
- The demo, code and data of FollowRAG ☆75 · Updated 4 months ago
- The implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee…" ☆38 · Updated last year
- ☆49 · Updated last year
- [COLM 2025] An Open Math Pre-training Dataset with 370B Tokens. ☆107 · Updated 7 months ago
- A curated list of cutting-edge research papers and resources on Long Chain-of-Thought (CoT) Reasoning with Tools. ☆41 · Updated 4 months ago
- ☆21 · Updated last year
- Trending projects & awesome papers about data-centric LLM studies. ☆38 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 8 months ago
- ☆172 · Updated 6 months ago
- ☆102 · Updated last year
- The official repository of the Omni-MATH benchmark. ☆88 · Updated 10 months ago
- ☆86 · Updated 3 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- ☆51 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- A minimal reproduction of R1-style results on small models, illustrating the core essence shared by O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆44 · Updated 9 months ago
- Reformatted Alignment ☆112 · Updated last year
- ☆118 · Updated this week
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- Official implementation of "Training on the Benchmark Is Not All You Need". ☆37 · Updated 10 months ago
- The code for the paper "Decoupled Planning and Execution: A Hierarchical Reasoning Framework for Deep Search" ☆62 · Updated 4 months ago
- ☆125 · Updated last year