huggingface / optimum-graphcore
Blazing fast training of 🤗 Transformers on Graphcore IPUs
☆86 · Updated last year
Alternatives and similar repositories for optimum-graphcore
Users interested in optimum-graphcore are comparing it to the libraries listed below.
- ☆66 · Updated 3 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. · ☆169 · Updated last week
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) · ☆189 · Updated 3 years ago
- Implementation of Flash Attention in Jax · ☆219 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ☆87 · Updated last year
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) · ☆198 · Updated last week
- JAX implementation of the Llama 2 model · ☆218 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud · ☆85 · Updated 2 years ago
- ☆189 · Updated last week
- Inference code for LLaMA models in JAX · ☆119 · Updated last year
- Implementation of a Transformer, but completely in Triton · ☆275 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- ☆363 · Updated last year
- ☆253 · Updated last year
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) · ☆117 · Updated 3 years ago
- Train very large language models in Jax. · ☆209 · Updated last year
- Techniques used to run BLOOM at inference in parallel · ☆37 · Updated 2 years ago
- Training material for IPU users: tutorials, feature examples, simple applications · ☆87 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web · ☆177 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton · ☆81 · Updated last year
- ☆20 · Updated 2 years ago
- Torch Distributed Experimental · ☆117 · Updated last year
- Minimal code to train a Large Language Model (LLM). · ☆172 · Updated 3 years ago
- Amos optimizer with JEstimator lib. · ☆82 · Updated last year
- Various transformers for FSDP research · ☆38 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development · ☆215 · Updated 10 months ago
- The package used to build the documentation of our Hugging Face repos · ☆130 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… · ☆161 · Updated 2 weeks ago
- Google TPU optimizations for transformers models · ☆120 · Updated 8 months ago