rasbt / b3-basic-batchsize-benchmark
Experiments for the blog post "No, We Don't Have to Choose Batch Sizes As Powers Of 2"
☆20 · Updated 3 years ago
Alternatives and similar repositories for b3-basic-batchsize-benchmark
Users who are interested in b3-basic-batchsize-benchmark are comparing it to the libraries listed below.
- ☆31 · Updated last month
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- AdamW optimizer for bfloat16 models in pytorch 🔥 ☆35 · Updated last year
- A generative modelling toolkit for PyTorch ☆70 · Updated 3 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago
- Implementation of N-Grammer in Flax ☆17 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · Updated 2 years ago
- A deep learning library based on Pytorch focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- AdaCat ☆49 · Updated 3 years ago
- a lightweight transformer library for PyTorch ☆72 · Updated 3 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation ☆49 · Updated 3 years ago
- ☆15 · Updated 4 years ago
- Helper scripts and notes that were used while porting various NLP models ☆46 · Updated 3 years ago
- Latent Diffusion Language Models ☆69 · Updated last year
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆33 · Updated last year
- bumble bee transformer ☆14 · Updated 4 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆39 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- A collection of building blocks for fine-tunable metric learning models ☆32 · Updated 4 months ago
- Large dataset storage format for Pytorch ☆45 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last week
- A (possibly/eventually annotated?) collection of resources (books, demos, lectures, etc.) that I personally like for various topics in mac… ☆32 · Updated 6 years ago