ofirpress / attention_with_linear_biases
Code for the ALiBi (Attention with Linear Biases) method for transformer language models, from "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation" (ICLR 2022)
☆497 · Updated 10 months ago
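The method itself is simple: instead of adding position embeddings, ALiBi biases each attention logit by a head-specific negative multiple of the query–key distance, so the score for query position i and key position j receives an extra term of -m·|i - j|, with the slope m following a geometric sequence across heads. Below is a minimal PyTorch sketch of that bias, assuming a power-of-two head count (the paper interpolates the slope sequence otherwise); this is not the repository's own code, and the tensors q and k in the usage comment are illustrative:

```python
import math
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Geometric sequence of head-specific slopes from the ALiBi paper:
    # for 8 heads these are 1/2, 1/4, ..., 1/256. Assumes n_heads is a
    # power of two; the paper interpolates the sequence otherwise.
    start = 2 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Bias added to the attention logits: -slope * |i - j| for query
    # position i and key position j (a causal mask hides j > i anyway).
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).abs()         # (seq, seq)
    slopes = alibi_slopes(n_heads)                         # (heads,)
    return -slopes[:, None, None] * distance[None, :, :]  # (heads, seq, seq)

# Hypothetical usage with q, k of shape (batch, heads, seq, dim):
# scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
# scores = scores + alibi_bias(q.size(1), q.size(2)).to(q.device)
```

Because the bias depends only on relative distance, models trained on short sequences can be evaluated on longer ones, which is the extrapolation result the paper reports.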
Related projects:
- Sequence modeling with Mega ☆296 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch (see the rotary sketch after this list) ☆528 · Updated last week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆615 · Updated this week
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆391 · Updated 7 months ago
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆457 · Updated 6 months ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆342 · Updated 5 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Google Brain, in PyTorch ☆278 · Updated 3 months ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆850 · Updated 10 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆222 · Updated last week
- Recurrent Memory Transformer ☆147 · Updated last year
- Transformers with Arbitrarily Large Context ☆613 · Updated last month
- Long Range Arena for Benchmarking Efficient Transformers ☆711 · Updated 9 months ago
- An implementation of local windowed attention for language modeling ☆368 · Updated last week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆610 · Updated last year
- Root Mean Square Layer Normalization (see the RMSNorm sketch after this list) ☆204 · Updated last year
- Rectified Rotary Position Embeddings ☆329 · Updated 3 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆424 · Updated last year
- Research code for pixel-based encoders of language (PIXEL) ☆329 · Updated 6 months ago
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆508 · Updated 9 months ago
- Rotary Transformer ☆782 · Updated 2 years ago
- Diffusion-LM ☆1,031 · Updated last month
- ☆322 · Updated 5 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆456 · Updated last year
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆614 · Updated 8 months ago
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible ☆173 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆202 · Updated 3 weeks ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆355 · Updated last year
- Task-based datasets, preprocessing, and evaluation for sequence models ☆552 · Updated this week
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆711 · Updated this week
- Library for 8-bit optimizers and quantization routines ☆713 · Updated 2 years ago
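For reference, the rotary embeddings item above admits a short sketch: each pair of query/key channels is rotated by an angle equal to the token position times a per-pair frequency. This is a minimal illustration in the "rotate-half" channel layout common to PyTorch implementations; the names apply_rope and rotate_half are hypothetical, not the listed repository's API:

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Pairs channel d with channel d + dim/2 ("rotate-half" layout).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (..., seq_len, dim) with dim even. Each channel pair is rotated
    # by angle = position * inv_freq for a per-pair frequency.
    seq_len, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.arange(seq_len).float()[:, None] * inv_freq[None, :]
    angles = torch.cat((angles, angles), dim=-1)  # (seq_len, dim)
    return x * angles.cos() + rotate_half(x) * angles.sin()
```

Applying the same rotation to queries and keys makes their dot product depend only on relative position, which is the property the RoFormer paper exploits.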
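Similarly, the Root Mean Square Layer Normalization item describes a LayerNorm variant that drops mean-centering and the bias term, rescaling activations by their root mean square and a learned gain. A minimal sketch, assuming a trailing feature dimension (the exact eps placement varies across implementations):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.gain = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square over the feature dimension;
        # no mean subtraction and no bias, unlike standard LayerNorm.
        rms = x.pow(2).mean(dim=-1, keepdim=True).sqrt()
        return x / (rms + self.eps) * self.gain
```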