SJTU-ReArch-Group / Paper-Reading-List
☆140 · Updated last week
Alternatives and similar repositories for Paper-Reading-List
Users interested in Paper-Reading-List are comparing it to the repositories listed below.
- ☆210 · Updated last month
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆118 · Updated 3 years ago
- ☆12 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆77 · Updated last month
- Large Language Model (LLM) Serving Paper and Resource List ☆24 · Updated 6 months ago
- ☆41 · Updated last year
- ☆26 · Updated last year
- ☆45 · Updated last year
- ☆112 · Updated 2 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆104 · Updated last year
- TileFlow is a performance analysis tool based on Timeloop for fusion dataflows ☆63 · Updated last year
- ☆32 · Updated last year
- LLM serving cluster simulator ☆125 · Updated last year
- LLM inference analyzer for different hardware platforms ☆97 · Updated last week
- ☆12 · Updated 11 months ago
- ☆13 · Updated 3 years ago
- Welder (OSDI 2023), a deep-learning compiler ☆28 · Updated 2 years ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" ☆106 · Updated 7 months ago
- A Row Decomposition-based Approach for Sparse Matrix Multiplication on GPUs ☆27 · Updated 2 years ago
- ☆18 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆146 · Updated 2 weeks ago
- An analytical framework that models hardware dataflow of tensor applications on spatial architectures using the relation-centric notation… ☆87 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆162 · Updated 4 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆55 · Updated 2 years ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆68 · Updated 7 months ago
- ☆115 · Updated last year
- ☆18 · Updated last year
- ☆162 · Updated last year