huggingface / optimum-graphcore
Blazing fast training of 🤗 Transformers on Graphcore IPUs
⭐86 · Updated last year
Alternatives and similar repositories for optimum-graphcore
Users interested in optimum-graphcore are comparing it to the libraries listed below.
- ⭐67 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax (Equinox framework) · ⭐188 · Updated 3 years ago
- JAX implementation of the Llama 2 model · ⭐219 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective · ⭐168 · Updated last month
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) · ⭐194 · Updated this week
- Inference code for LLaMA models in JAX · ⭐119 · Updated last year
- Implementation of Flash Attention in Jax · ⭐216 · Updated last year
- ⭐361 · Updated last year
- ⭐188 · Updated 2 weeks ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ⭐241 · Updated 2 years ago
- Techniques used to run BLOOM at inference in parallel · ⭐37 · Updated 2 years ago
- ⭐61 · Updated 3 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ⭐87 · Updated last year
- git extension for {collaborative, communal, continual} model development · ⭐216 · Updated 10 months ago
- [WIP] A 🔥 interface for running code in the cloud · ⭐85 · Updated 2 years ago
- ⭐146 · Updated last month
- Train very large language models in Jax · ⭐208 · Updated last year
- ⭐252 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day · ⭐256 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ⭐535 · Updated 2 weeks ago
- Implementation of a Transformer, but completely in Triton · ⭐274 · Updated 3 years ago
- ⭐19 · Updated 2 years ago
- Minimal library to train LLMs on TPU in JAX with pjit() (see the sketch after this list) · ⭐299 · Updated last year
- ⭐171 · Updated 7 months ago
- OSLO: Open Source for Large-scale Optimization · ⭐175 · Updated 2 years ago
- Torch Distributed Experimental · ⭐117 · Updated last year
- Google TPU optimizations for transformers models · ⭐120 · Updated 7 months ago
- Experiment of using Tangent to autodiff Triton · ⭐81 · Updated last year
- ⭐131 · Updated 3 years ago
- Training material for IPU users: tutorials, feature examples, simple applications · ⭐87 · Updated 2 years ago
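
Several of the JAX repositories above (for example, the minimal pjit()-based TPU trainer) are built on the same sharded-compilation API. Below is a minimal sketch of that pattern, not code from any listed repository: it assumes JAX >= 0.4 (where pjit() was merged into jax.jit), a batch size divisible by the device count, and illustrative names throughout.

```python
# Minimal data-parallel training step in JAX (sketch; assumes JAX >= 0.4,
# where pjit() has been merged into jax.jit). All names are illustrative.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# 1-D device mesh: e.g. 8 TPU cores on a pod slice, or one CPU device locally.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

batch_sharding = NamedSharding(mesh, P("data"))  # split the batch across devices
replicated = NamedSharding(mesh, P())            # keep the params on every device

@jax.jit  # shardings propagate from the inputs; XLA inserts the collectives
def train_step(w, x, y):
    def loss_fn(w):
        return jnp.mean((x @ w - y) ** 2)
    loss, grad = jax.value_and_grad(loss_fn)(w)
    return w - 0.1 * grad, loss  # one SGD step

w = jax.device_put(jnp.zeros((128, 1)), replicated)
x = jax.device_put(jnp.ones((8, 128)), batch_sharding)  # batch axis sharded
y = jax.device_put(jnp.ones((8, 1)), batch_sharding)
w, loss = train_step(w, x, y)
print(loss)
```

Adding a second mesh axis (e.g. "model") and sharding the weight matrices along it turns the same code into tensor parallelism, which is the direction the larger JAX trainers in this list take.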