gnovack / distributed-training-and-deepspeed
☆17 · Updated 2 years ago
Alternatives and similar repositories for distributed-training-and-deepspeed
Users interested in distributed-training-and-deepspeed are comparing it to the libraries listed below.
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆147 · Updated last year
- ☆121 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated this week
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆147 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆276 · Updated 3 years ago
- Experiments with inference on llama ☆103 · Updated last year
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated 2 years ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 5 months ago
- ☆19 · Updated 2 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated last month
- ML/DL Math and Method notes ☆64 · Updated last year
- ☆91 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Distributed preprocessing and data loading for language datasets ☆39 · Updated last year
- ☆174 · Updated last year
- Make triton easier ☆48 · Updated last year
- Context Manager to profile the forward and backward times of PyTorch's nn.Module ☆82 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆91 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- PyTorch DTensor native training library for LLMs/VLMs with OOTB Hugging Face support ☆141 · Updated this week
- Experiments on speculative sampling with Llama models ☆125 · Updated 2 years ago
- ☆225 · Updated 2 weeks ago
- Load compute kernels from the Hub ☆316 · Updated this week
- ☆121 · Updated last year