Jack47 / hack-SysMLLinks
The road to hacking SysML and becoming a systems expert
☆498 · Updated 11 months ago
Alternatives and similar repositories for hack-SysML
Users interested in hack-SysML are comparing it to the libraries listed below.
- ☆615 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆318 · Updated last month
- How to learn PyTorch and OneFlow. ☆449 · Updated last year
- Papers and code for AI systems. ☆323 · Updated 2 weeks ago
- Disaggregated serving system for Large Language Models (LLMs). ☆675 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆477 · Updated 5 months ago
- A baseline repository of auto-parallelism in training neural networks. ☆144 · Updated 3 years ago
- ☆608 · Updated 3 months ago
- Deep learning framework performance profiling toolkit. ☆287 · Updated 3 years ago
- A PyTorch-like deep learning framework. Just for fun. ☆156 · Updated last year
- Materials for learning SGLang. ☆549 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton language. ☆655 · Updated this week
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆61 · Updated last year
- Code base and slides for ECE408: Applied Parallel Programming on GPU. ☆133 · Updated 4 years ago
- A collection of noteworthy MLSys bloggers (algorithms/systems). ☆269 · Updated 7 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- ☆485 · Updated 3 weeks ago
- A curated list of awesome projects and papers for distributed training and inference. ☆241 · Updated 10 months ago
- Since the emergence of ChatGPT in 2022, accelerating large language models has become increasingly important. Here is a list of pap… ☆266 · Updated 5 months ago
- A self-learning tutorial for CUDA high-performance programming. ☆718 · Updated 2 months ago
- A PyTorch-native LLM training framework. ☆861 · Updated last month
- Learning how CUDA works. ☆304 · Updated 5 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism. ☆421 · Updated 3 months ago
- A simple deep learning framework that supports automatic differentiation and GPU acceleration. ☆59 · Updated 2 years ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆840 · Updated last month
- Distributed compiler based on Triton for parallel systems. ☆1,056 · Updated last week
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS. ☆408 · Updated 3 months ago
- Puzzles for learning Triton; play with minimal environment configuration! ☆497 · Updated 8 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆120 · Updated last year