xuqifan897 / Optimus
☆28 · Updated 4 years ago
Alternatives and similar repositories for Optimus
Users that are interested in Optimus are comparing it to the libraries listed below
- ☆77 · Updated 4 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆68 · Updated 8 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆43 · Updated 3 years ago
- ☆80 · Updated 6 months ago
- ☆145 · Updated 10 months ago
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆224 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆122 · Updated last year
- ☆113 · Updated last year
- ☆42 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆218 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆169 · Updated last month
- GitHub mirror of the triton-lang/triton repo. ☆98 · Updated last week
- nnScaler: Compiling DNN models for Parallel Training ☆120 · Updated 2 months ago
- A schedule language for large model training ☆151 · Updated 3 months ago
- ☆102 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- System for automated integration of deep learning backends. ☆47 · Updated 3 years ago
- ☆88 · Updated 3 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆272 · Updated 4 months ago
- Python package for rematerialization-aware gradient checkpointing ☆26 · Updated 2 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆123 · Updated last year
- ☆122 · Updated last year