keli-wen / AGI-StudyLinks
Blogs, reading reports, and code examples for AGI/LLM-related knowledge.
☆44Updated 7 months ago
Alternatives and similar repositories for AGI-Study
Users interested in AGI-Study are comparing it to the libraries listed below.
- ☆143Updated 2 months ago
- Efficient Mixture of Experts for LLM Paper List☆118Updated this week
- 青稞Talk☆139Updated this week
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length☆108Updated 4 months ago
- ☆198Updated 4 months ago
- qwen-nsa☆74Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference☆327Updated last month
- ☆146Updated 6 months ago
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems☆176Updated last week
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec…☆26Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding**☆201Updated 6 months ago
- Multi-Candidate Speculative Decoding☆36Updated last year
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning☆103Updated last week
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library☆43Updated 2 weeks ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm…☆61Updated last week
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting☆16Updated 6 months ago
- A Comprehensive Survey on Long Context Language Modeling☆180Updated last month
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.☆467Updated last year
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference☆90Updated 2 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training☆234Updated 3 weeks ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs☆147Updated 3 weeks ago
- Unveiling Super Experts in Mixture-of-Experts Large Language Models☆22Updated last month
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference☆136Updated 3 months ago
- A survey of long-context LLMs from four perspectives: architecture, infrastructure, training, and evaluation☆56Updated 5 months ago
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling☆41Updated last month
- "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" repository☆63Updated last week
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆298Updated last week
- Repository of LV-Eval Benchmark☆70Updated last year
- Implementation of FP8/INT8 Rollout for RL training without performance drop.☆184Updated this week
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings)☆309Updated 4 months ago