SJTU-ReArch-Group / Paper-Reading-List
☆143 (updated last month)
Alternatives and similar repositories for Paper-Reading-List
Users interested in Paper-Reading-List are comparing it to the repositories listed below.
- ☆218 (updated 2 months ago)
- Large Language Model (LLM) Serving Paper and Resource List (☆24, updated 8 months ago)
- LLM serving cluster simulator (☆132, updated last year)
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators (☆120, updated 3 years ago)
- Welder, a deep-learning compiler (OSDI 2023) (☆31, updated 2 years ago)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) (☆56, updated last year)
- ☆44 (updated last year)
- ☆48 (updated last year)
- WaferLLM: Large Language Model Inference at Wafer Scale (☆83, updated 2 weeks ago)
- Summary of some awesome work for optimizing LLM inference (☆163, updated last month)
- ☆12 (updated last year)
- ☆18 (updated last year)
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores (☆91, updated 3 years ago)
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing (☆109, updated last year)
- LLM inference analyzer for different hardware platforms (☆99, updated last month)
- TileFlow, a performance-analysis tool based on Timeloop for fusion dataflows (☆66, updated last year)
- ☆64 (updated 6 months ago)
- ☆25 (updated last year)
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" (☆107, updated 8 months ago)
- ☆32 (updated last year)
- ☆113 (updated 2 years ago)
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) (☆70, updated 8 months ago)
- ☆166 (updated last year)
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale (☆173, updated 6 months ago)
- A Row Decomposition-based Approach for Sparse Matrix Multiplication on GPUs (☆28, updated 2 years ago)
- GitHub repository of the HPCA 2025 paper "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures" (☆18, updated last month)
- ☆31 (updated 9 months ago)
- ☆17 (updated last year)
- From Minimal GEMM to Everything (☆95, updated 3 weeks ago)
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) (☆173, updated last year)