The-AI-Summer / pytorch-ddp
Code for the DDP tutorial
☆32 Updated 2 years ago
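Since the repository is tutorial code for PyTorch DistributedDataParallel, here is a minimal single-node DDP sketch for context. The model, data, and hyperparameters are placeholders and this is not the repository's actual script:

```python
# Minimal single-node DDP sketch (assumed setup, not the tutorial's actual code).
# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl" if torch.cuda.is_available() else "gloo")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

    # Toy data and model, standing in for whatever the tutorial trains.
    dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    model = nn.Linear(32, 2).to(device)
    model = DDP(model, device_ids=[local_rank] if torch.cuda.is_available() else None)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients during backward
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```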
Alternatives and similar repositories for pytorch-ddp:
Users interested in pytorch-ddp are comparing it to the repositories listed below
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆71 Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆37 Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆51 Updated last year
- ☆37 Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 Updated 2 years ago
- Code for the PAPA paper ☆27 Updated 2 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 Updated 2 years ago
- A simple implementation of a deep linear Pytorch module ☆19 Updated 4 years ago
- ☆21 Updated 2 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 Updated 4 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆44 Updated 3 weeks ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 Updated 5 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆37 Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 Updated 2 years ago
- Recycling diverse models ☆44 Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 Updated last year
- ☆15 Updated 7 months ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆45 Updated 4 years ago
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆32 Updated 10 months ago
- PyTorch, PyTorch Lightning framework for trying knowledge distillation in image classification problems ☆32 Updated 7 months ago
- Fine-tuning large language models with huggingface transformers and deepspeed ☆30 Updated last year
- Stochastic Weight Averaging Tutorials using pytorch. ☆33 Updated 4 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆42 Updated 2 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆15 Updated 4 months ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch ☆70 Updated 4 years ago
- several types of attention modules written in PyTorch for learning purposes ☆46 Updated 5 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 Updated last year
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 Updated 2 years ago