EleutherAI / pyfra
Python Research Framework
☆106 · Updated 2 years ago
Alternatives and similar repositories for pyfra:
Users interested in pyfra are comparing it to the libraries listed below.
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆237 · Updated last year
- ☆38 · Updated 2 years ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework) ☆184 · Updated 2 years ago
- A GPT, made only of MLPs, in JAX ☆57 · Updated 3 years ago
- GPT, but made only out of MLPs ☆88 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- Contrastive Language-Image Pretraining ☆142 · Updated 2 years ago
- ☆153 · Updated 4 years ago
- Code for scaling Transformers ☆26 · Updated 4 years ago
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- Re-implementation of 'Grokking: Generalization beyond overfitting on small algorithmic datasets' ☆38 · Updated 3 years ago
- Amos optimizer with the JEstimator library ☆81 · Updated 8 months ago
- ☆64 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload ☆125 · Updated 2 years ago
- A collection of optimizers, some arcane, others well known, for Flax ☆29 · Updated 3 years ago
- Your fruity companion for transformers ☆14 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- GPU tester that detects broken and slow GPUs in a cluster ☆67 · Updated last year
- Learned Hyperparameter Optimizers ☆58 · Updated 3 years ago
- A library to create and manage configuration files, especially for machine learning projects. ☆76 · Updated 2 years ago
- Babysit your preemptible TPUs ☆85 · Updated 2 years ago
- ☆108 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Named tensors with first-class dimensions for PyTorch ☆322 · Updated last year
- Implementation of Feedback Transformer in PyTorch ☆105 · Updated 3 years ago