r0mainK / outperformer
Code for scaling Transformers
☆26 · Updated 4 years ago
Alternatives and similar repositories for outperformer:
Users that are interested in outperformer are comparing it to the libraries listed below
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- PyTorch implementation of GLOM ☆21 · Updated 2 years ago
- Your fruity companion for transformers ☆14 · Updated 2 years ago
- TPU support for the fastai library ☆13 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated last year
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis. ☆145 · Updated 3 years ago
- A Python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- High-performance PyTorch modules ☆18 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- ☆28 · Updated last year
- ☆45 · Updated 5 years ago
- ☆27 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Large dataset storage format for PyTorch ☆45 · Updated 3 years ago
- A GPT, made only of MLPs, in JAX ☆57 · Updated 3 years ago
- Pretrained TorchVision models on the CIFAR-10 dataset (with weights) ☆24 · Updated 4 years ago
- Implements EvoNorms B0 and S0 as proposed in Evolving Normalization-Activation Layers. ☆11 · Updated 4 years ago
- Standalone pre-training recipe with JAX+Flax ☆31 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆48 · Updated last year
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆117 · Updated 3 years ago
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 3 years ago
- Implementation of Feedback Transformer in PyTorch ☆105 · Updated 4 years ago
- A generative modelling toolkit for PyTorch. ☆70 · Updated 3 years ago
- ☆14 · Updated 5 years ago
- An open-source implementation of CLIP. ☆32 · Updated 2 years ago
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 datasets with Apache Beam and Data… ☆24 · Updated 2 years ago
- GPT, but made only out of MLPs ☆88 · Updated 3 years ago
- ☆64 · Updated 4 years ago