DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
☆171 (updated Sep 26, 2025)
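As a quick illustration of what DeepSpeed (and forks such as DeeperSpeed) consume, a minimal JSON training config might look like the sketch below. This is an assumed example for orientation only; the specific values (batch size, learning rate, ZeRO stage) are placeholders, not taken from this listing.

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 },
  "optimizer": {
    "type": "Adam",
    "params": { "lr": 0.0001 }
  }
}
```

Such a file is typically passed to `deepspeed.initialize(...)` or via the `--deepspeed_config` flag of the `deepspeed` launcher.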
Alternatives and similar repositories for DeeperSpeed
Users interested in DeeperSpeed are comparing it to the libraries listed below.
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,395, updated Feb 3, 2026)
- ☆78 (updated Dec 7, 2023)
- ☆24 (updated Dec 11, 2024)
- URL downloader supporting checkpointing and continuous checksumming (☆19, updated Nov 29, 2023)
- Work towards creating a common JSON-based format for compact network specification (☆14, updated Jan 6, 2026)
- Lightweight piece tokenization library (☆12, updated Apr 15, 2024)
- RWKV model implementation (☆37, updated Jul 15, 2023)
- OSLO: Open Source for Large-scale Optimization (☆175, updated Sep 9, 2023)
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit (☆63, updated Jun 21, 2023)
- ☆64 (updated Apr 9, 2024)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆217, updated this week)
- ☆39 (updated Jul 25, 2024)
- Implementation of Influence Function approximations for differently sized ML models, using PyTorch (☆16, updated Sep 15, 2023)
- Full knowledge and control of the train state (☆19, updated Sep 23, 2020)
- Code for the paper "Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm" (https://arxiv.org/abs/2007.14966) (☆61, updated Feb 7, 2022)
- Pile Deduplication Code (☆18, updated May 15, 2023)
- Ongoing research training transformer language models at scale, including BERT & GPT-2 (☆1,436, updated Mar 20, 2024)
- Large-scale distributed model training strategy with Colossal AI and Lightning AI (☆56, updated Sep 1, 2023)
- ☆39 (updated Oct 3, 2022)
- Understanding how features learned by neural networks evolve throughout training (☆41, updated Oct 24, 2024)
- ☆2,946 (updated Jan 15, 2026)
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis (☆147, updated Jul 26, 2021)
- ☆163 (updated Mar 5, 2021)
- ☆19 (updated Jan 27, 2021)
- Model-parallel transformers in JAX and Haiku (☆6,361, updated Jan 21, 2023)
- ☆22 (updated Dec 15, 2023)
- The hub for EleutherAI's work on interpretability and learning dynamics (☆2,740, updated Nov 15, 2025)
- ☆94 (updated Jul 16, 2022)
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" (☆397, updated Feb 24, 2024)
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation (☆24, updated May 1, 2022)
- Efficiently computing and storing token n-grams from large corpora (☆26, updated Oct 6, 2024)
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification (☆11, updated Aug 12, 2023)
- Code for reviewers (☆12, updated Oct 8, 2024)
- Companion repository to "Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models" (☆14, updated May 31, 2023)
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (☆40, updated Nov 11, 2024)
- Formalization of Statement of Local Langlands Correspondence for Tori (☆12, updated Dec 18, 2018)
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ tensor class (☆10, updated Feb 10, 2022)
- [ACL 2023] Code and data for our paper "Measuring Progress in Fine-grained Vision-and-Language Understanding" (☆13, updated Jun 11, 2023)
- See https://github.com/cuda-mode/triton-index/ instead! (☆11, updated May 8, 2024)