hkust-nlp / llm-compression-intelligence
Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024]
☆130 · Updated 6 months ago
Alternatives and similar repositories for llm-compression-intelligence:
Users interested in llm-compression-intelligence are comparing it to the repositories listed below:
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆53 · Updated 5 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆73 · Updated 9 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆55 · Updated 3 months ago
- GenRM-CoT: Data release for verification rationales ☆51 · Updated 5 months ago
- ☆143 · Updated 3 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- The official repository of the Omni-MATH benchmark ☆74 · Updated 3 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆98 · Updated 3 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆71 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 9 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆45 · Updated 4 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆136 · Updated last week
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆101 · Updated this week
- ☆61 · Updated 4 months ago
- ☆81 · Updated last year
- Code for creating the iGSM datasets in the papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces…" ☆40 · Updated 2 months ago
- Self-Alignment with Principle-Following Reward Models ☆156 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆61 · Updated 4 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 4 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated last month
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆70 · Updated 3 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆179 · Updated 7 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆96 · Updated 2 weeks ago
- ☆136 · Updated 4 months ago