thunlp / APB
Official implementation of APB (ACL 2025 Main, Oral)
☆32 · Updated 10 months ago
Alternatives and similar repositories for APB
Users interested in APB are comparing it to the repositories listed below.
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆40 · Updated 2 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆56 · Updated 10 months ago
- The official implementation of SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction · ☆52 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization · ☆80 · Updated this week
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation · ☆118 · Updated 7 months ago
- [COLM 2024] SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning · ☆32 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts · ☆40 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs · ☆20 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization · ☆43 · Updated 10 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications · ☆52 · Updated last month
- Ring: a reasoning MoE LLM open-sourced by InclusionAI, derived from Ling · ☆107 · Updated 4 months ago
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" · ☆67 · Updated 8 months ago
- DPO, but faster 🚀 · ☆46 · Updated last year
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" · ☆28 · Updated last year
- [ACL 2025 Oral] SCOPE: Optimizing KV Cache Compression in Long-context Generation · ☆33 · Updated 6 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? · ☆119 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models · ☆138 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? · ☆32 · Updated 4 months ago
- FuseAI Project · ☆87 · Updated 11 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆35 · Updated 9 months ago