sunkx109 / GPUs-Specs
Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs
☆39 · Updated last month
Alternatives and similar repositories for GPUs-Specs:
Users interested in GPUs-Specs are comparing it to the repositories listed below.
- Summary of some awesome work for optimizing LLM inference ☆69 · Updated 3 weeks ago
- ☆60 · Updated last month
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆70 · Updated last week
- High performance Transformer implementation in C++. ☆120 · Updated 3 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆99 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆115 · Updated last month
- ☆66 · Updated 2 weeks ago
- A GPU-optimized system for efficient long-context LLMs decoding with low-bit KV cache. ☆34 · Updated 2 weeks ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆26 · Updated 5 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 11 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆181 · Updated 3 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆74 · Updated last month
- ☆39 · Updated 11 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆43 · Updated last month
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆50 · Updated 11 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆34 · Updated last week
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆34 · Updated last month
- ☆57 · Updated last week
- A lightweight design for computation-communication overlap. ☆67 · Updated last week
- nnScaler: Compiling DNN models for Parallel Training ☆109 · Updated last week
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆248 · Updated 2 months ago
- DeeperGEMM: crazy optimized version ☆68 · Updated this week
- Examples of CUDA implementations by Cutlass CuTe ☆170 · Updated 3 months ago
- Triton to TVM transpiler. ☆19 · Updated 6 months ago
- Assembler and Decompiler for NVIDIA (Maxwell, Pascal, Volta, Turing, Ampere) GPUs. ☆78 · Updated 2 years ago
- [EuroSys'25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization ☆12 · Updated this week
- LLM theoretical performance analysis tool supporting params, FLOPs, memory, and latency analysis. ☆87 · Updated 4 months ago
- ☆28 · Updated 9 months ago
- ☆72 · Updated 3 years ago
- Stateful LLM Serving ☆65 · Updated last month
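Several of the listed repositories (the roofline-based hardware comparison and the theoretical performance analysis tool) rest on the roofline model. A minimal sketch of that model, using assumed A100-class peak numbers purely for illustration (the specific constants are not taken from any repo above):

```python
# Assumed peak specs, roughly A100-class (illustrative, not authoritative):
PEAK_FLOPS = 312e12  # ~312 TFLOP/s dense BF16 tensor-core throughput
PEAK_BW = 2.0e12     # ~2.0 TB/s HBM bandwidth

def roofline_attainable_flops(arithmetic_intensity: float) -> float:
    """Attainable throughput under the roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity in FLOP/byte)."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Ridge point: the intensity above which a kernel stops being memory-bound.
ridge = PEAK_FLOPS / PEAK_BW  # FLOP/byte

# Single-token decode of a dense layer is roughly a GEMV: ~2*N*K FLOPs
# while streaming ~2 bytes per weight, i.e. intensity near 1 FLOP/byte.
decode_intensity = 1.0

print(f"ridge point: {ridge:.0f} FLOP/byte")
print(f"decode attainable: {roofline_attainable_flops(decode_intensity) / 1e12:.1f} TFLOP/s")
```

With these assumed numbers, decode at ~1 FLOP/byte attains only ~2 TFLOP/s of the 312 TFLOP/s peak, which is why per-token decode is memory-bandwidth-bound and why low-bit KV caches and batching help.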