OptimalFoundation / nadir
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability!
⭐ 14 · Updated last year
Alternatives and similar repositories for nadir
Users interested in nadir are comparing it to the libraries listed below.
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ⭐ 189 · Updated 3 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐ 87 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ⭐ 82 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ⭐ 68 · Updated 3 years ago
- Minimal code to train a Large Language Model (LLM). ⭐ 172 · Updated 3 years ago
- Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ⭐ 54 · Updated last year
- ⭐ 66 · Updated 3 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐ 51 · Updated last year
- Like picoGPT but for BERT. ⭐ 51 · Updated 2 years ago
- Experiments for efforts to train a new and improved T5 ⭐ 76 · Updated last year
- Genalog is an open source, cross-platform python package allowing generation of synthetic document images with custom degradations and te… ⭐ 44 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ⭐ 116 · Updated 2 years ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ⭐ 104 · Updated last month
- Various transformers for FSDP research ⭐ 38 · Updated 3 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ⭐ 87 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ⭐ 96 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ⭐ 63 · Updated 2 years ago
- ⭐ 63 · Updated 3 years ago
- ⭐ 20 · Updated 2 years ago
- ⭐ 50 · Updated last year
- Scripts to convert datasets from various sources to Hugging Face Datasets. ⭐ 57 · Updated 3 years ago
- ML/DL math and method notes ⭐ 65 · Updated 2 years ago
- Experiments with generating open-source language model assistants ⭐ 97 · Updated 2 years ago
- ⭐ 94 · Updated 2 years ago
- Supercharge Hugging Face transformers with model parallelism. ⭐ 77 · Updated 5 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ⭐ 231 · Updated last year
- Inference code for LLaMA models in JAX ⭐ 120 · Updated last year
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ⭐ 61 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development ⭐ 217 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ⭐ 259 · Updated 2 years ago