Jack47 / hack-SysML
The road to hack SysML and become a system expert
☆465 · Updated 4 months ago
Alternatives and similar repositories for hack-SysML:
Users interested in hack-SysML are comparing it to the repositories listed below.
- How to learn PyTorch and OneFlow ☆393 · Updated 10 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆468 · Updated 6 months ago
- Papers and code for AI systems ☆272 · Updated 3 weeks ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆142 · Updated 2 years ago
- GLake: optimizing GPU memory management and IO transmission. ☆431 · Updated 2 months ago
- ☆598 · Updated 8 months ago
- ☆314 · Updated last month
- FlagScale is a large model toolkit based on open-source projects. ☆223 · Updated this week
- A self-learning tutorial for CUDA high-performance programming. ☆369 · Updated 2 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆284 · Updated last month
- Deep learning framework performance profiling toolkit ☆283 · Updated 2 years ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers… ☆220 · Updated 2 months ago
- Learning how CUDA works ☆201 · Updated 6 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆469 · Updated 11 months ago
- A PyTorch Native LLM Training Framework ☆732 · Updated last month
- ☆538 · Updated 5 months ago
- MegCC is a deep learning model compiler with an extremely lightweight runtime that is efficient and easy to port ☆474 · Updated 3 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism ☆51 · Updated 6 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆421 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆290 · Updated this week
- A collection of noteworthy MLSys bloggers (algorithms/systems) ☆179 · Updated last month
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interest, … ☆107 · Updated last year
- A tutorial for CUDA & PyTorch ☆126 · Updated last month
- A curated list of awesome projects and papers for distributed training or inference ☆216 · Updated 4 months ago
- ☆127 · Updated last month
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆629 · Updated last month
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆129 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆266 · Updated last year
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆296 · Updated 3 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆316 · Updated 5 months ago