OptimalFoundation / nadir
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability! 🔥🚀💻
☆14 · Updated last year
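For context on where a library like nadir fits, the sketch below shows a minimal PyTorch training step. It uses `torch.optim.AdamW` as a stand-in; the assumption that a nadir optimizer is constructed and stepped the same way (i.e. follows the standard `torch.optim.Optimizer` interface) is not confirmed here.

```python
# Minimal PyTorch training step; torch.optim.AdamW is a stand-in.
# Assumption: a nadir optimizer would be constructed and stepped the same way.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # swap in a nadir optimizer here (assumed API)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 16), torch.randn(32, 1)
for _ in range(10):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass and loss
    loss.backward()                # backpropagate
    optimizer.step()               # apply the optimizer update
```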
Alternatives and similar repositories for nadir
Users interested in nadir are comparing it to the libraries listed below.
- Exploring finetuning of public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆86 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework) ☆190 · Updated 3 years ago
- A library to create and manage configuration files, especially for machine learning projects. ☆79 · Updated 3 years ago
- ☆63 · Updated 3 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆49 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- An instruction-based benchmark for text improvements. ☆142 · Updated 3 years ago
- Like picoGPT but for BERT. ☆51 · Updated 2 years ago
- Functional local implementations of main model parallelism approaches ☆95 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆104 · Updated 3 months ago
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- ☆167 · Updated 2 years ago
- ☆34 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- ☆94 · Updated 2 years ago
- Resources from the EleutherAI Math Reading Group ☆54 · Updated 11 months ago
- Scripts to convert datasets from various sources to Hugging Face Datasets. ☆57 · Updated 3 years ago
- Helper scripts and notes that were used while porting various NLP models ☆49 · Updated 3 years ago
- git extension for {collaborative, communal, continual} model development ☆217 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- MinT: Minimal Transformer Library and Tutorials ☆260 · Updated 3 years ago