knagrecha / hydra
Execution framework for multi-task model parallelism. Enables the training of arbitrarily large models with a single GPU, with linear speedups for multi-GPU multi-task execution.
☆20 · Updated 2 years ago
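As a rough illustration of hydra's core idea, here is a minimal sketch of layer-wise model spilling in plain PyTorch. This is not hydra's actual API; the class and names below are hypothetical. Parameters stay resident on the CPU, and each sub-module is staged onto the GPU only while it executes, so a model larger than GPU memory can still run.

```python
# Hypothetical sketch of "model spilling" in plain PyTorch -- NOT hydra's API.
import torch
import torch.nn as nn

class SpilledSequential(nn.Module):
    """Run a sequence of sub-modules, staging each onto the GPU
    only for the duration of its own forward pass."""

    def __init__(self, *stages: nn.Module, device: str = "cuda"):
        super().__init__()
        self.stages = nn.ModuleList(stages)  # weights stay resident on the CPU
        self.device = device

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.to(self.device)
        for stage in self.stages:
            stage.to(self.device)   # page this stage's weights in
            x = stage(x)
            stage.to("cpu")         # ...and back out, freeing GPU memory
        return x

# Peak GPU memory is roughly one stage plus activations, not the whole model:
model = SpilledSequential(*[nn.Linear(4096, 4096) for _ in range(8)])
if torch.cuda.is_available():
    out = model(torch.randn(2, 4096))
```

The linear multi-task speedup claimed above would then come from scheduling independent tasks' stages onto different GPUs concurrently, per the repo description.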
Alternatives and similar repositories for hydra
Users interested in hydra are comparing it to the libraries listed below:
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- ☆27 · Updated 4 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆39 · Updated 3 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆86 · Updated last month
- High performance pytorch modules ☆18 · Updated 2 years ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆89 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 4 years ago
- ☆29 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆126 · Updated 5 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Updated 4 years ago
- Script and models for clustering LAION-400m CLIP embeddings. ☆26 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated 2 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch (see the chunked-attention sketch after this list) ☆184 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆116 · Updated 2 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆157 · Updated last year
- An open source implementation of CLIP. ☆33 · Updated 3 years ago
- Re-implementation of 'Grokking: Generalization beyond overfitting on small algorithmic datasets' ☆39 · Updated 4 years ago
- Implementation of Metaformer, but in an autoregressive manner ☆26 · Updated 3 years ago
- Implementation of Feedback Transformer in Pytorch ☆108 · Updated 4 years ago
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
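The Memory Efficient Attention entry above names a concrete algorithmic idea worth spelling out. Below is a minimal illustrative sketch in plain PyTorch, not the listed repo's code: keys and values are processed in chunks with a numerically stable running softmax, so the full n×n score matrix is never materialized. The O(sqrt(n)) figure (presumably after Rabe & Staats, arXiv:2112.05682) additionally chunks the queries; this sketch chunks only keys/values for brevity.

```python
# Illustrative chunked attention with a streaming softmax -- a sketch of the
# technique, not the listed repository's implementation.
import math
import torch

def chunked_attention(q, k, v, chunk_size=64):
    """q, k, v: (n, d) tensors; returns softmax(q @ k.T / sqrt(d)) @ v
    without ever holding the full (n, n) score matrix."""
    n, d = q.shape
    scale = 1.0 / math.sqrt(d)
    # Running statistics for a numerically stable streaming softmax.
    max_score = torch.full((n, 1), float("-inf"))
    numer = torch.zeros(n, v.shape[-1])
    denom = torch.zeros(n, 1)
    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]      # (c, d)
        v_c = v[start:start + chunk_size]      # (c, d)
        scores = (q @ k_c.T) * scale           # (n, c): one chunk of scores only
        chunk_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(max_score, chunk_max)
        # Rescale previously accumulated sums to the new max, then add this chunk.
        correction = torch.exp(max_score - new_max)
        p = torch.exp(scores - new_max)        # (n, c)
        numer = numer * correction + p @ v_c
        denom = denom * correction + p.sum(dim=-1, keepdim=True)
        max_score = new_max
    return numer / denom

# Sanity check against the naive quadratic-memory implementation:
q, k, v = (torch.randn(256, 32) for _ in range(3))
ref = torch.softmax((q @ k.T) / math.sqrt(32), dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-4)
```

The rescaling by `correction` is what lets chunks be combined exactly: each partial numerator and denominator is kept relative to the running maximum, so the final division reproduces the ordinary softmax.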