Kinetics: Rethinking Test-Time Scaling Laws
☆86, updated Jul 11, 2025
Alternatives and similar repositories for Kinetics
Users interested in Kinetics are comparing it to the repositories listed below.
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" (☆16, updated Feb 4, 2025)
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" (☆15, updated Mar 6, 2025)
- Vortex: A Flexible and Efficient Sparse Attention Framework (☆49, updated Jan 21, 2026)
- Code for "Reasoning to Learn from Latent Thoughts" (☆125, updated Mar 28, 2025)
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" (☆13, updated Apr 17, 2025)
- Quantize transformers to any learned arbitrary 4-bit numeric format (☆53, updated Jan 25, 2026)
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely (☆24, updated Jun 26, 2024)
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling (☆53, updated Jul 15, 2025)
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen (☆17, updated Sep 7, 2024)
- [ECCV 2022 Oral] AutoMix: Unveiling the Power of Mixup for Stronger Classifiers (☆18, updated Apr 25, 2023)
- Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders (☆18, updated May 23, 2025)
- Official implementation of APB (ACL 2025 main Oral) and Spava (☆35, updated Jan 30, 2026)
- ArcherCodeR is an open-source initiative enhancing code reasoning in large language models through scalable, rule-governed reinforcement … (☆45, updated Aug 6, 2025)
- LLM Inference with Microscaling Format (☆34, updated Nov 12, 2024)
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… (☆21, updated Sep 10, 2024)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆62, updated Mar 25, 2025)
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation (☆251, updated Dec 16, 2024)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆376, updated Jul 10, 2025)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (☆272, updated Jul 6, 2025)
- Cascade Speculative Drafting (☆33, updated Apr 2, 2024)
- Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity (☆22, updated Aug 28, 2025)
- [EMNLP 2025] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… (☆17, updated Dec 17, 2025)
- Code for the EMNLP 2025 paper "UltraIF: Advancing Instruction Following from the Wild" (☆21, updated Apr 3, 2025)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆283, updated May 1, 2025)
- Research repository for "Measuring Thinking Efficiency in Reasoning Models" (☆39, updated Dec 2, 2025)
- [ICLR 2026] PSFT is a trust-region–inspired fine-tuning objective that views SFT as a policy gradient method with constant advantages, co… (☆36, updated Sep 9, 2025)
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" (☆72, updated Jul 8, 2025)
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" (☆43, updated Aug 14, 2024)
- Official implementation of the paper "A deeper look at depth pruning of LLMs" (☆15, updated Jul 24, 2024)