SJTU-ReArch-Group / Paper-Reading-List
☆117 · Updated 2 weeks ago
Alternatives and similar repositories for Paper-Reading-List
Users interested in Paper-Reading-List are comparing it to the repositories listed below.
- ☆175 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) · ☆53 · Updated last year
- ☆33 · Updated last year
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators · ☆114 · Updated 2 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing · ☆89 · Updated last year
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores · ☆89 · Updated 2 years ago
- TileFlow, a performance analysis tool based on Timeloop for fusion dataflows · ☆61 · Updated last year
- WaferLLM: Large Language Model Inference at Wafer Scale · ☆41 · Updated 3 weeks ago
- Large Language Model (LLM) Serving Paper and Resource List · ☆24 · Updated 2 months ago
- ☆77 · Updated last year
- LLM inference analyzer for different hardware platforms · ☆82 · Updated last month
- ☆48 · Updated last month
- ☆23 · Updated last year
- ☆11 · Updated 10 months ago
- Open-source framework for the HPCA 2024 paper "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators" · ☆85 · Updated 3 months ago
- Welder, a deep learning compiler (OSDI 2023) · ☆21 · Updated last year
- ☆107 · Updated last year
- ☆42 · Updated last year
- LLM serving cluster simulator · ☆108 · Updated last year
- GitHub repository for the HPCA 2025 paper "UniNDP: A Unified Compilation and Simulation Tool for Near DRAM Processing Architectures" · ☆13 · Updated 8 months ago
- ☆12 · Updated 3 years ago
- Framework for the ISCA 2023 paper "Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators" · ☆69 · Updated 4 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale · ☆127 · Updated 3 weeks ago
- ☆18 · Updated last year
- HyFiSS: A Hybrid Fidelity Stall-Aware Simulator for GPGPUs · ☆36 · Updated 8 months ago
- An optimizing framework on MLIR for efficient FPGA-based accelerator generation · ☆50 · Updated last year
- A row-decomposition-based approach for sparse matrix multiplication on GPUs · ☆22 · Updated last year
- Artifact for the ASPLOS 2025 paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference" · ☆82 · Updated 3 months ago
- ☆28 · Updated last year
- ☆150 · Updated last year