huggingface / optimum-graphcore
Blazing fast training of 🤗 Transformers on Graphcore IPUs
⭐85 · Updated last year
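For context on what the library provides, here is a minimal fine-tuning sketch, assuming the `IPUConfig` / `IPUTrainingArguments` / `IPUTrainer` API that optimum-graphcore exposes as a drop-in counterpart to 🤗 Transformers' `Trainer`; the checkpoint, dataset, and IPU config names are illustrative only, and running it requires a Graphcore IPU system with the Poplar SDK installed:

```python
# Minimal fine-tuning sketch with optimum-graphcore (illustrative, not the
# repo's own example). IPUTrainer/IPUTrainingArguments mirror transformers'
# Trainer/TrainingArguments; IPUConfig describes how the model is pipelined
# across IPUs.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# IPU execution config; "Graphcore/bert-base-ipu" is used here as an example
# of the configs Graphcore publishes on the Hugging Face Hub.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

# Pad to a fixed length: IPU compilation expects static tensor shapes.
dataset = load_dataset("glue", "sst2")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)
dataset = dataset.map(tokenize, batched=True)

training_args = IPUTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```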
Alternatives and similar repositories for optimum-graphcore
Users interested in optimum-graphcore are comparing it to the libraries listed below.
- ⭐67 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ⭐168 · Updated last month
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ⭐187 · Updated 3 years ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ⭐190 · Updated this week
- JAX implementation of the Llama 2 model ⭐219 · Updated last year
- ⭐186 · Updated last month
- Inference code for LLaMA models in JAX ⭐118 · Updated last year
- Implementation of Flash Attention in Jax ⭐213 · Updated last year
- Techniques used to run BLOOM at inference in parallel ⭐37 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐85 · Updated last year
- Training material for IPU users: tutorials, feature examples, simple applications ⭐86 · Updated 2 years ago
- Train very large language models in Jax. ⭐204 · Updated last year
- The package used to build the documentation of our Hugging Face repos ⭐122 · Updated this week
- ⭐142 · Updated this week
- [WIP] A 🔥 interface for running code in the cloud ⭐85 · Updated 2 years ago
- ⭐19 · Updated 2 years ago
- ⭐230 · Updated this week
- Implementation of a Transformer, but completely in Triton ⭐270 · Updated 3 years ago
- ⭐61 · Updated 3 years ago
- Google TPU optimizations for transformers models ⭐114 · Updated 5 months ago
- git extension for {collaborative, communal, continual} model development ⭐214 · Updated 8 months ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ⭐117 · Updated 3 years ago
- ⭐358 · Updated last year
- ⭐251 · Updated 11 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ⭐405 · Updated 3 weeks ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ⭐230 · Updated 10 months ago
- Amos optimizer with JEstimator lib. ⭐82 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ⭐513 · Updated last week
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ⭐235 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ⭐178 · Updated last year