xichen-fy / Fira
Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?
☆112 · Updated 8 months ago
Alternatives and similar repositories for Fira
Users interested in Fira are comparing it to the repositories listed below.
- ☆104 · Updated 3 weeks ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆85 · Updated this week
- ☆85 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆233 · Updated 2 weeks ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆81 · Updated 6 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆71 · Updated 3 weeks ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" ☆113 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models ☆75 · Updated this week
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆45 · Updated 7 months ago
- ☆80 · Updated 5 months ago
- ☆51 · Updated 3 months ago
- ☆84 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆75 · Updated 3 weeks ago
- ☆292 · Updated last week
- ☆58 · Updated this week
- Official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆46 · Updated 8 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆156 · Updated 3 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆166 · Updated this week
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆54 · Updated 3 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [arXiv '25] ☆39 · Updated last month
- ☆17 · Updated 5 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆51 · Updated 4 months ago
- ☆109 · Updated 3 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆191 · Updated last week
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆61 · Updated 2 months ago
- ☆203 · Updated 4 months ago
- ☆45 · Updated last week
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs ☆84 · Updated 7 months ago