Infini-AI-Lab / GRESO
☆70 · Updated 5 months ago
Alternatives and similar repositories for GRESO
Users interested in GRESO are comparing it to the repositories listed below.
- Kinetics: Rethinking Test-Time Scaling Laws ☆84 · Updated 5 months ago
- ☆17 · Updated 4 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆123 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- ☆45 · Updated 2 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated 2 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆107 · Updated 2 months ago
- ☆108 · Updated 3 months ago
- ☆51 · Updated 10 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- ☆19 · Updated 11 months ago
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆144 · Updated 3 months ago
- ☆60 · Updated 6 months ago
- Reinforcing General Reasoning without Verifiers ☆92 · Updated 6 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆50 · Updated 5 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆80 · Updated 2 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆56 · Updated 10 months ago
- ☆53 · Updated 10 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 6 months ago
- RL with Experience Replay ☆51 · Updated 4 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆31 · Updated 4 months ago
- Exploration of automated dataset selection approaches at large scales. ☆51 · Updated 9 months ago
- Official implementation of ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 8 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆148 · Updated 5 months ago
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆17 · Updated last year
- A holistic benchmark for LLM abstention ☆67 · Updated 3 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆55 · Updated 2 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆28 · Updated last year
- ☆68 · Updated 6 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆89 · Updated last year