octoml / Apple-M1-BERT
3X speedup over Apple's TensorFlow plugin by using Apache TVM on M1
☆137 · Updated 3 years ago
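As a rough illustration of the approach the repository name and description point to (compiling BERT with Apache TVM for Apple Silicon instead of running it through the TensorFlow Metal plugin), here is a minimal sketch that imports a Hugging Face BERT checkpoint into TVM Relay via TorchScript and builds it for an arm64 CPU target. The model name, input shapes, and target string are assumptions for illustration; this is not the repository's actual benchmark pipeline.

```python
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor
from transformers import BertModel, BertTokenizer

# Sketch only: bert-base-uncased and a fixed input shape are assumptions,
# not the repository's actual scripts.
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True).eval()
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("hello world", return_tensors="pt")

# Trace with TorchScript so TVM's PyTorch frontend can import the graph.
scripted = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))

# Import into Relay; input shapes are fixed at compile time.
shape_list = [
    ("input_ids", list(inputs["input_ids"].shape)),
    ("attention_mask", list(inputs["attention_mask"].shape)),
]
mod, params = relay.frontend.from_pytorch(scripted, shape_list)

# Compile for Apple Silicon CPU cores via LLVM.
target = tvm.target.Target("llvm -mtriple=arm64-apple-darwin")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module with the graph executor.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input_ids", tvm.nd.array(inputs["input_ids"].numpy()))
module.set_input("attention_mask", tvm.nd.array(inputs["attention_mask"].numpy()))
module.run()
out = module.get_output(0)
```

Swapping the target string for "metal" and the device for `tvm.metal(0)` should aim the same pipeline at the M1 GPU; reaching the advertised speedups additionally depends on schedule tuning.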
Alternatives and similar repositories for Apple-M1-BERT
Users interested in Apple-M1-BERT are comparing it to the libraries listed below.
- Nod.ai's version of IREE. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository… ☆106 · Updated 7 months ago
- benchmarking some transformer deployments ☆26 · Updated 2 years ago
- Training material for IPU users: tutorials, feature examples, simple applications ☆87 · Updated 2 years ago
- Tutorial on how to convert machine learned models into ONNX ☆16 · Updated 2 years ago
- Example code and applications for machine learning on Graphcore IPUs ☆329 · Updated last year
- Torch Distributed Experimental ☆117 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆168 · Updated last month
- ☆67 · Updated 3 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆86 · Updated last year
- ☆87 · Updated 3 years ago
- Make Thinc faster on macOS by calling into Apple's native Accelerate library ☆99 · Updated 2 months ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- Evaluation suite for large-scale language models. ☆127 · Updated 4 years ago
- Accelerate PyTorch models with ONNX Runtime ☆363 · Updated 6 months ago
- The official page of ROCm/PyTorch will contain information that is always confusing. On this page we will endeavor to describe accurate i… ☆87 · Updated 4 years ago
- Implementation of a Transformer, but completely in Triton ☆273 · Updated 3 years ago
- ☆52 · Updated 3 years ago
- PyTorch interface for the IPU ☆180 · Updated last year
- ☆90 · Updated 3 years ago
- Fast sparse deep learning on CPUs ☆55 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆236 · Updated 2 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated this week
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆241 · Updated 2 years ago
- Helper scripts and notes that were used while porting various nlp models ☆47 · Updated 3 years ago
- ☆251 · Updated last year