Red-Hat-AI-Innovation-Team / SQuat
☆ 16 · Updated last month
Alternatives and similar repositories for SQuat
Users interested in SQuat are comparing it to the repositories listed below.
- Reinforcing General Reasoning without Verifiers · ☆ 71 · Updated 3 weeks ago
- Work in progress. · ☆ 70 · Updated 2 weeks ago
- Unofficial Implementation of Selective Attention Transformer · ☆ 17 · Updated 8 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆ 33 · Updated 3 months ago
- [ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. · ☆ 35 · Updated 2 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆ 33 · Updated 4 months ago
- The official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) · ☆ 104 · Updated last year
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling · ☆ 99 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale · ☆ 103 · Updated 2 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… · ☆ 47 · Updated 2 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" · ☆ 63 · Updated 3 months ago
- AnchorAttention: Improved attention for LLMs long-context training · ☆ 208 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs · ☆ 83 · Updated 3 weeks ago
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More · ☆ 31 · Updated 2 months ago
- The official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) · ☆ 35 · Updated last week
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆ 117 · Updated last year
- Make reasoning models scalable · ☆ 40 · Updated last month
- Replicating O1 inference-time scaling laws · ☆ 89 · Updated 7 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. · ☆ 139 · Updated this week
- Code for "Reasoning to Learn from Latent Thoughts"☆112Updated 3 months ago
- A repo for open research on building large reasoning models☆71Updated this week