OptimalFoundation / nadir
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability!
⭐ 14 · Updated last year
Alternatives and similar repositories for nadir
Users interested in nadir are comparing it to the libraries listed below.
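Before the list, a note on what "composable PyTorch optimizers" means in practice: libraries in this space typically subclass `torch.optim.Optimizer`. Below is a minimal, generic sketch of that interface in plain PyTorch; `PlainSGD` is a made-up illustration, not nadir's actual API.

```python
# Generic torch.optim.Optimizer subclass; illustrative only, not nadir's API.
import torch
from torch.optim import Optimizer

class PlainSGD(Optimizer):
    """Bare SGD, showing the hooks a composable optimizer library builds on."""
    def __init__(self, params, lr=1e-3):
        super().__init__(params, defaults={"lr": lr})

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()  # re-evaluate the model if a closure is given
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-group["lr"])  # p <- p - lr * grad
        return loss

model = torch.nn.Linear(4, 2)
opt = PlainSGD(model.parameters(), lr=0.1)
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()
```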
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ⭐ 85 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile · ⭐ 116 · Updated 2 years ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. · ⭐ 50 · Updated last year
- Experiments with generating open-source language model assistants · ⭐ 97 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ⭐ 93 · Updated 2 years ago
- ⭐ 34 · Updated 2 years ago
- An instruction-based benchmark for text improvements. · ⭐ 141 · Updated 2 years ago
- Minimal PyTorch implementation of BM25 (with sparse tensors); see the BM25 sketch after this list · ⭐ 103 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` · ⭐ 45 · Updated last year
- Amos optimizer with JEstimator lib. · ⭐ 82 · Updated last year
- ⭐ 49 · Updated last year
- ⭐ 92 · Updated last year
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) · ⭐ 61 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch · ⭐ 229 · Updated 10 months ago
- Gzip and nearest neighbors for text classification; see the gzip kNN sketch after this list · ⭐ 57 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ⭐ 50 · Updated last year
- Experiments for efforts to train a new and improved T5 · ⭐ 76 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day · ⭐ 256 · Updated last year
- Multi-Domain Expert Learning · ⭐ 67 · Updated last year
- Code repository for the c-BTM paper · ⭐ 106 · Updated last year
- Minimal code to train a Large Language Model (LLM). · ⭐ 170 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention · ⭐ 105 · Updated 4 months ago
- ⭐ 67 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM ("Scaling Language Modeling with Pathways") in JAX (Equinox framework) · ⭐ 187 · Updated 3 years ago
- Code for cleaning benchmark data out of your training data to help combat data snooping. · ⭐ 25 · Updated 2 years ago
- Fast bare-bones BPE for modern tokenizer training; see the BPE sketch after this list · ⭐ 159 · Updated 3 weeks ago
- Highly commented implementations of Transformers in PyTorch · ⭐ 136 · Updated last year
- ⭐ 166 · Updated 2 years ago
- Like picoGPT but for BERT. · ⭐ 50 · Updated 2 years ago
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… · ⭐ 42 · Updated last year
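The BM25 entry above advertises a sparse-tensor PyTorch implementation; the scoring function itself is compact enough to sketch. Below is a plain-Python sketch of standard Okapi BM25 with common defaults (k1=1.5, b=0.75) and a toy corpus, not that repository's code.

```python
# Plain-Python BM25 scoring sketch; toy corpus, illustrative only.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    toks = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in toks) / N          # average document length
    df = Counter()                                  # document frequency per term
    for t in toks:
        df.update(set(t))
    scores = []
    for t in toks:
        tf = Counter(t)                             # term frequency in this doc
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = ["sparse tensors in pytorch",
        "bm25 ranks documents by term statistics",
        "pytorch implements sparse bm25"]
print(bm25_scores("bm25 pytorch", docs))
```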
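The gzip + nearest-neighbors entry refers to the compression-based classifier popularized by Jiang et al. (Findings of ACL 2023): normalized compression distance (NCD) plus a kNN vote, no training. Here is a from-scratch sketch with a made-up toy dataset, not that repository's code.

```python
# Gzip + kNN text classification via normalized compression distance.
import gzip
from collections import Counter

def clen(s: str) -> int:
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    # NCD(a, b) = (C(ab) - min(C(a), C(b))) / max(C(a), C(b))
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query, train, k=3):
    # train: list of (text, label); vote among the k compression-nearest texts
    nearest = sorted(train, key=lambda tl: ncd(query, tl[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [("the team won the match", "sports"),
         ("stocks fell sharply today", "finance"),
         ("goalkeeper saved a penalty", "sports"),
         ("the central bank raised rates", "finance")]
print(classify("markets rallied after the rate decision", train, k=3))
```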
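The bare-bones BPE entry points at the classic byte-pair-encoding training loop (Sennrich et al., 2016): count adjacent symbol pairs, merge the most frequent, repeat. A toy sketch of that core loop follows; real trainers add byte-level pre-tokenization, priority queues, and parallelism for speed.

```python
# Toy BPE training loop: repeatedly merge the most frequent adjacent pair.
from collections import Counter

def train_bpe(corpus, num_merges):
    # Represent each word as a tuple of symbols, starting from characters.
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)    # most frequent adjacent pair
        merges.append(best)
        merged = Counter()
        for word, freq in words.items():    # apply the merge to every word
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] += freq
        words = merged
    return merges

print(train_bpe("low lower lowest low low", num_merges=3))
```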