Jack47 / hack-SysML
The road to hacking SysML and becoming a systems expert
☆475 · Updated 6 months ago
Alternatives and similar repositories for hack-SysML:
Users interested in hack-SysML are comparing it to the repositories listed below.
- how to learn PyTorch and OneFlow ☆417 · Updated last year
- optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆471 · Updated last year
- papers and their code for AI systems ☆283 · Updated 2 months ago
- ☆605 · Updated 9 months ago
- ☆324 · Updated 2 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆463 · Updated this week
- A self-learning tutorial for CUDA high-performance programming. ☆505 · Updated 3 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆517 · Updated 7 months ago
- learning how CUDA works ☆226 · Updated 3 weeks ago
- GLake: optimizing GPU memory management and IO transmission. ☆449 · Updated last week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆678 · Updated 2 months ago
- A baseline repository for auto-parallelism in training neural networks ☆143 · Updated 2 years ago
- Puzzles for learning Triton; play with minimal environment configuration! ☆267 · Updated 3 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆295 · Updated 2 weeks ago
- ☆562 · Updated 3 weeks ago
- Deep learning framework performance profiling toolkit ☆285 · Updated 3 years ago
- A curated collection of noteworthy MLSys bloggers (algorithms/systems) ☆207 · Updated 2 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- FlagScale is a large-model toolkit based on open-source projects. ☆257 · Updated this week
- A PyTorch-like deep learning framework. Just for fun. ☆148 · Updated last year
- Since the emergence of ChatGPT in 2022, accelerating large language models has become increasingly important. Here is a list of papers… ☆238 · Updated 3 weeks ago
- flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆308 · Updated 2 months ago
- Zero Bubble Pipeline Parallelism ☆375 · Updated 3 weeks ago
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… ☆968 · Updated last year
- A PyTorch-native LLM training framework ☆763 · Updated 3 months ago
- A CUDA tutorial for learning CUDA programming from scratch ☆222 · Updated 8 months ago
- A model compilation solution for various hardware ☆416 · Updated 2 weeks ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆257 · Updated 9 months ago
- how to optimize some algorithms in CUDA ☆2,053 · Updated this week
- Yinghan's Code Sample ☆316 · Updated 2 years ago