huggingface / optimum-graphcore
Blazing fast training of 🤗 Transformers on Graphcore IPUs
⭐ 85 · Updated last year
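For context, here is a minimal sketch of what fine-tuning looks like with optimum-graphcore, assuming the `IPUConfig` / `IPUTrainer` / `IPUTrainingArguments` API that mirrors `transformers.Trainer`; the toy dataset, model choice, and hyperparameters are illustrative only, and actually running it requires Graphcore IPU hardware with the Poplar SDK installed.

```python
# Hedged sketch: fine-tuning a 🤗 Transformers model on IPUs with optimum-graphcore.
# The dataset and hyperparameters below are illustrative, not a recommended recipe.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tiny toy dataset so the example is self-contained.
raw = Dataset.from_dict(
    {"text": ["great library", "did not work for me"], "label": [1, 0]}
)
train_dataset = raw.map(
    lambda ex: tokenizer(ex["text"], padding="max_length", max_length=32, truncation=True)
)

# The IPUConfig describes how the model is pipelined across IPUs;
# "Graphcore/bert-base-ipu" is one of the configs published on the Hugging Face Hub.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

args = IPUTrainingArguments(
    output_dir="./outputs",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```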
Alternatives and similar repositories for optimum-graphcore:
Users interested in optimum-graphcore are comparing it to the libraries listed below.
- ⭐ 67 · Updated 2 years ago
- Inference code for LLaMA models in JAX · ⭐ 118 · Updated 11 months ago
- JAX implementation of the Llama 2 model · ⭐ 218 · Updated last year
- ⭐ 186 · Updated last week
- Implementation of Flash Attention in Jax · ⭐ 206 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … · ⭐ 235 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. · ⭐ 167 · Updated last month
- ⭐ 349 · Updated last year
- ⭐ 60 · Updated 3 years ago
- This repository contains the experimental PyTorch native float8 training UX · ⭐ 224 · Updated 9 months ago
- Implementation of a Transformer, but completely in Triton · ⭐ 264 · Updated 3 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) · ⭐ 116 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) · ⭐ 187 · Updated 2 years ago
- Training material for IPU users: tutorials, feature examples, simple applications · ⭐ 86 · Updated 2 years ago
- Experiment of using Tangent to autodiff triton · ⭐ 78 · Updated last year
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) · ⭐ 186 · Updated this week
- Train very large language models in Jax. · ⭐ 204 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ⭐ 491 · Updated last week
- ⭐ 297 · Updated this week
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ⭐ 82 · Updated last year
- JAX-Toolbox · ⭐ 301 · Updated this week
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes · ⭐ 238 · Updated last year
- Pipeline for pulling and processing online language model pretraining data from the web · ⭐ 177 · Updated last year
- jax-triton contains integrations between JAX and OpenAI Triton · ⭐ 391 · Updated this week
- git extension for {collaborative, communal, continual} model development · ⭐ 211 · Updated 5 months ago
- ⭐ 251 · Updated 9 months ago
- Google TPU optimizations for transformers models · ⭐ 109 · Updated 3 months ago
- Amos optimizer with JEstimator lib. · ⭐ 82 · Updated 11 months ago
- Torch Distributed Experimental · ⭐ 115 · Updated 9 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile · ⭐ 115 · Updated 2 years ago