weigao266 / Awesome-Efficient-Arch
Speed Always Wins: A Survey on Efficient Architectures for Large Language Models
☆368 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-Efficient-Arch
Users interested in Awesome-Efficient-Arch are comparing it to the repositories listed below.
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆725 · Updated last week
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆196 · Updated 2 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight; see the MLA sketch after this list) ☆413 · Updated 2 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆460 · Updated 6 months ago
- ☆439 · Updated 3 months ago
- ☆207 · Updated last month
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆279 · Updated last month
- Efficient Mixture of Experts for LLM Paper List ☆145 · Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆145 · Updated 7 months ago
- 青稞Talk (Qingke Talk) ☆169 · Updated 2 weeks ago
- Efficient LLM Inference over Long Sequences ☆392 · Updated 5 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (see the head-split sketch after this list) ☆510 · Updated 9 months ago
- The official repo of "One RL to See Them All: Visual Triple Unified Reinforcement Learning" ☆328 · Updated 6 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache) ☆186 · Updated 3 weeks ago
- A highly capable, lightweight 2.4B LLM trained on only 1T tokens of pre-training data, with all details released. ☆221 · Updated 4 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆223 · Updated last month
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 training. ☆289 · Updated 3 weeks ago
- ☆344 · Updated this week
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆240 · Updated 3 weeks ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆90 · Updated 3 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆205 · Updated 2 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆256 · Updated 5 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆276 · Updated 9 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆209 · Updated 2 weeks ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆326 · Updated 7 months ago
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆180 · Updated 3 months ago
- qwen-nsa ☆84 · Updated last month
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 3 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆282 · Updated last month
- HuggingFace conversion and training library for Megatron-based models ☆250 · Updated this week
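
Two of the entries above (TransMLA and the DeepSeek MLA adapter) center on Multi-Head Latent Attention (MLA), which caches a single low-rank latent per token instead of full per-head keys and values. Below is a minimal sketch of that core idea; the dimensions, module names, and the omission of RoPE and the decoupled-query path are assumptions for illustration, not code from either repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLASketch(nn.Module):
    """Minimal Multi-Head Latent Attention sketch: cache one small latent per
    token instead of per-head K/V (hypothetical sizes; RoPE handling omitted)."""

    def __init__(self, d_model=1024, n_heads=8, d_head=128, d_latent=256):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.q_proj = nn.Linear(d_model, n_heads * d_head, bias=False)
        # Down-projection: only this d_latent-wide output is cached.
        self.kv_down = nn.Linear(d_model, d_latent, bias=False)
        # Up-projections reconstruct per-head keys/values from the latent.
        self.k_up = nn.Linear(d_latent, n_heads * d_head, bias=False)
        self.v_up = nn.Linear(d_latent, n_heads * d_head, bias=False)
        self.o_proj = nn.Linear(n_heads * d_head, d_model, bias=False)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        c_kv = self.kv_down(x)                        # (B, T, d_latent)
        if latent_cache is not None:                  # decoding: extend cache
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        S = c_kv.size(1)
        k = self.k_up(c_kv).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c_kv).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        # Causal masking is only needed for multi-token prefill.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=latent_cache is None)
        out = out.transpose(1, 2).reshape(B, T, self.n_heads * self.d_head)
        return self.o_proj(out), c_kv                 # c_kv is the whole KV cache
```

With these illustrative sizes, standard multi-head attention caches 2 × 8 × 128 = 2048 values per token, while the latent cache holds only 256, an 8× reduction; the up-projections recompute K and V from the latent at attention time.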
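
The DuoAttention entry splits heads into "retrieval" heads, which keep the full KV cache, and "streaming" heads, which only need a few attention-sink tokens plus a recent window. A rough illustration of that head-split attention pattern follows; the learned head classification is replaced by a hypothetical boolean flag per head, and a real implementation evicts the streaming heads' cache rather than masking it.

```python
import torch

def duo_attention_key_mask(seq_len: int, is_streaming: torch.Tensor,
                           n_sink: int = 4, window: int = 256) -> torch.Tensor:
    """Per-head key mask: True where a head may attend.

    is_streaming: (n_heads,) bool — hypothetical stand-in for DuoAttention's
    learned retrieval-vs-streaming head classification.
    """
    mask = torch.ones(is_streaming.numel(), seq_len, dtype=torch.bool)
    cutoff = max(n_sink, seq_len - window)
    for h, streaming in enumerate(is_streaming.tolist()):
        if streaming:
            mask[h, n_sink:cutoff] = False  # drop the middle of the context
    return mask  # (n_heads, seq_len)

# Example: heads 0-1 retrieve over the full context; heads 2-3 stream.
mask = duo_attention_key_mask(1024, torch.tensor([False, False, True, True]))
print(mask.sum(dim=1))  # tensor([1024, 1024,  260,  260])
```

Masking only shows the pattern, not the memory win; the saving in the paper comes from actually dropping the masked keys and values for streaming heads, so their cache stays constant-size as the context grows.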