AIS-SNU / Smart-Infinity
[HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System
☆41 · Updated last year
Alternatives and similar repositories for Smart-Infinity:
Users interested in Smart-Infinity are comparing it to the repositories listed below.
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 8 months ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆27 · Updated last year
- UPMEM LLM Framework: profiles PyTorch layers and functions and simulates them with a given hardware profile ☆21 · Updated last month
- NeuPIMs Simulator ☆75 · Updated 8 months ago
- The Artifact of NeoMem: Hardware/Software Co-Design for CXL-Native Memory Tiering ☆43 · Updated 7 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆93 · Updated 2 weeks ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆49 · Updated 9 months ago
- A cycle-level simulator for M2NDP ☆24 · Updated 3 months ago
- LLM inference analyzer for different hardware platforms ☆54 · Updated last week
- ONNXim: a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆97 · Updated last month
- mNPUsim: A Cycle-accurate Multi-core NPU Simulator (IISWC 2023) ☆48 · Updated 3 months ago
- An analytical framework that models hardware dataflow of tensor applications on spatial architectures using the relation-centric notation… ☆83 · Updated 10 months ago
- MultiPIM: A Detailed and Configurable Multi-Stack Processing-In-Memory Simulator ☆53 · Updated 3 years ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆112 · Updated 8 months ago