OptimalFoundation / nadir
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability!
☆14 · Updated last year
Alternatives and similar repositories for nadir
Users interested in nadir are comparing it to the libraries listed below.
- Like picoGPT but for BERT. ☆50 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data. ☆30 · Updated 9 months ago
- Experiments for efforts to train a new and improved T5. ☆77 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training. ☆50 · Updated last year
- ☆67 · Updated 2 years ago
- Experiments with generating open-source language model assistants. ☆97 · Updated 2 years ago
- Embedding Recycling for Language Models. ☆38 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.). ☆32 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆49 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile. ☆115 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit. ☆63 · Updated 2 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*. ☆84 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence. ☆59 · Updated 3 years ago
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- This repository contains code for cleaning benchmark data out of your training data to help combat data snooping. ☆25 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs. ☆36 · Updated 2 years ago
- Using short models to classify long texts. ☆21 · Updated 2 years ago
- NanoGPT-speedrunning for the poor T4 enjoyers. ☆66 · Updated 2 months ago
- microjax: a JAX-like function transformation engine, but micro. ☆32 · Updated 8 months ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- Various handy scripts to quickly set up new Linux and Windows sandboxes, containers, and WSL. ☆40 · Updated 2 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 3 months ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- PyTorch implementation for MRL. ☆18 · Updated last year
- ☆53 · Updated last year
- ☆27 · Updated 11 months ago
- BPE modification that implements removal of intermediate tokens during tokenizer training. ☆25 · Updated 7 months ago