JerryYin777 / PaperHelperLinks
PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References
☆17 · Updated last year
Alternatives and similar repositories for PaperHelper
Users interested in PaperHelper are comparing it to the repositories listed below.
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆60 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆118 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆24 · Updated 5 months ago
- Official Implementation of APB (ACL 2025 main Oral) ☆31 · Updated 6 months ago
- Manages the vllm-nccl dependency ☆17 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆56 · Updated 9 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- MiroRL is an MCP-first reinforcement learning framework for deep research agents ☆141 · Updated this week
- ☆74 · Updated 2 weeks ago
- LLMem: GPU Memory Estimation for Fine-Tuning Pre-Trained LLMs ☆22 · Updated 3 months ago
- ☆117 · Updated 2 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆52 · Updated 9 months ago
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆23 · Updated 10 months ago
- ☆69 · Updated 2 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated last week
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆31 · Updated 6 months ago
- A primer and resource roundup on reproducing DeepSeek-R1 ☆22 · Updated 5 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆61 · Updated last week
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆16 · Updated last year
- Implementation from scratch in C of the multi-head latent attention used in the DeepSeek-V3 technical paper ☆19 · Updated 7 months ago
- Code for the paper "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines" ☆12 · Updated 10 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆48 · Updated 10 months ago
- ☆33 · Updated 6 months ago
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- ☆67 · Updated 3 months ago
- PyTorch implementation of our paper accepted at ICML 2024: CaM: Cache Merging for Memory-efficient LLMs Inference ☆42 · Updated last year
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆52 · Updated 3 months ago