kooyunmo / cuda-uvm-gpt2
PyTorch-UVM on super-large language models.
☆17 · Updated 4 years ago
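Since the repository's premise is running models larger than GPU memory by relying on CUDA Unified Virtual Memory, here is a minimal sketch of that mechanism for orientation. It uses only the standard cudaMallocManaged API; the kernel, sizes, and names are illustrative assumptions, not code from cuda-uvm-gpt2.

```cuda
// Hypothetical, minimal illustration of CUDA Unified Virtual Memory (UVM):
// cudaMallocManaged returns one pointer usable from both host and device,
// and pages migrate on demand, which is what allows allocations to exceed
// the GPU's physical memory (oversubscription).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* x, size_t n, float a) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const size_t n = 1ull << 26;                 // 64M floats (~256 MB); purely illustrative
    float* x = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));    // managed allocation shared by CPU and GPU
    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;  // first touch on the host
    scale<<<(unsigned)((n + 255) / 256), 256>>>(x, n, 2.0f);  // pages migrate to the GPU on access
    cudaDeviceSynchronize();
    std::printf("x[0] = %f\n", x[0]);            // pages migrate back to the host as needed
    cudaFree(x);
    return 0;
}
```

The tradeoff is that demand paging can stall kernels on page faults; that overhead is the subject of several of the UVM-focused entries below, such as the SC21 reproducibility repository.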
Alternatives and similar repositories for cuda-uvm-gpt2
Users interested in cuda-uvm-gpt2 are comparing it to the repositories listed below.
- ☆24 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- ☆37 · Updated last year
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆15 · Updated 4 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆122 · Updated 3 weeks ago
- This serves as a repository for reproducibility of the SC21 paper "In-Depth Analyses of Unified Virtual Memory System for GPU Accelerated… ☆33 · Updated last year
- GVProf: A Value Profiler for GPU-based Clusters ☆51 · Updated last year
- ☆37 · Updated 2 weeks ago
- ☆42 · Updated 3 weeks ago
- LLM serving cluster simulator ☆107 · Updated last year
- ☆49 · Updated 6 months ago
- ☆51 · Updated 6 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆52 · Updated last year
- ☆75 · Updated 4 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated last year
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆40 · Updated last year
- ☆79 · Updated 2 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆27 · Updated 5 months ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last year
- Source code of the simulator used in the Mosaic paper from MICRO 2017: "Mosaic: A GPU Memory Manager with Application-Transparent Support… ☆49 · Updated 6 years ago
- ☆23 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆141 · Updated 5 months ago
- ☆39 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 3 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆95 · Updated 2 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆66 · Updated 2 years ago
- Sharing the codebase and steps for artifact evaluation/reproduction for MICRO 2024 paper ☆9 · Updated 10 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- Artifact for PPoPP22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆30 · Updated 3 years ago