huggingface / optimum-graphcore
Blazing fast training of 🤗 Transformers on Graphcore IPUs
⭐85 · Updated last year
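As a quick orientation, here is a minimal sketch of what training a 🤗 Transformers model on IPUs with optimum-graphcore can look like, assuming the library's `IPUConfig` / `IPUTrainingArguments` / `IPUTrainer` interface; the model checkpoint, IPU config name, and dataset below are illustrative placeholders rather than recommendations.

```python
# Minimal sketch: fine-tune a 🤗 Transformers model on IPUs with optimum-graphcore.
# Assumes the IPUConfig/IPUTrainingArguments/IPUTrainer API; model, IPU config
# checkpoint, and dataset are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# IPU-specific execution options (replication, pipelining, etc.) live in an IPUConfig.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = IPUTrainingArguments(
    output_dir="./sst2-ipu",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

# IPUTrainer mirrors transformers.Trainer but compiles and runs the model on IPUs.
trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```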
Alternatives and similar repositories for optimum-graphcore
Users interested in optimum-graphcore are comparing it to the libraries listed below.
- ⭐67 · Updated 3 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ⭐168 · Updated 2 weeks ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ⭐187 · Updated 3 years ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ⭐191 · Updated this week
- Inference code for LLaMA models in JAX ⭐118 · Updated last year
- JAX implementation of the Llama 2 model ⭐219 · Updated last year
- Implementation of Flash Attention in Jax ⭐215 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐86 · Updated last year
- Train very large language models in Jax. ⭐206 · Updated last year
- ⭐187 · Updated last week
- ⭐61 · Updated 3 years ago
- Accelerated inference of 🤗 models using FuriosaAI NPU chips. ⭐26 · Updated last month
- ⭐361 · Updated last year
- Implementation of a Transformer, but completely in Triton ⭐273 · Updated 3 years ago
- [WIP] A 🔥 interface for running code in the cloud ⭐85 · Updated 2 years ago
- ⭐130 · Updated 3 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ⭐292 · Updated last year
- git extension for {collaborative, communal, continual} model development ⭐217 · Updated 8 months ago
- Google TPU optimizations for transformers models ⭐117 · Updated 6 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ⭐412 · Updated last month
- Various transformers for FSDP research ⭐37 · Updated 2 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ⭐241 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ⭐116 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ⭐256 · Updated last year
- ⭐251 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ⭐311 · Updated 2 years ago
- Techniques used to run BLOOM at inference in parallel ⭐37 · Updated 2 years ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ⭐522 · Updated this week
- Amos optimizer with JEstimator lib. ⭐82 · Updated last year
- Training material for IPU users: tutorials, feature examples, simple applications ⭐86 · Updated 2 years ago