octoml / Apple-M1-BERT
3X speedup over Apple's TensorFlow plugin by using Apache TVM on M1
☆136 · Updated 3 years ago
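The headline claim can be made concrete with a short sketch of the TVM flow it refers to: importing a traced PyTorch model into Relay and compiling it for Apple Silicon. This is an illustrative sketch, not the repo's actual benchmark code; the stand-in Linear model, input shape, and target string are assumptions for brevity (the repo itself targets BERT).

```python
# Minimal sketch of compiling a traced PyTorch model with Apache TVM
# for an Apple M1 CPU. The tiny Linear stand-in, input shape, and
# target string are illustrative assumptions, not the repo's own code.
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = torch.nn.Linear(768, 768).eval()
example = torch.randn(1, 768)
scripted = torch.jit.trace(model, example)

# Import the TorchScript graph into Relay.
mod, params = relay.frontend.from_pytorch(scripted, [("input", (1, 768))])

# Compile for the M1's ARM64 CPU; a "metal" target would use the GPU instead.
target = "llvm -mtriple=arm64-apple-darwin"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module through the graph executor.
dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("input", tvm.nd.array(example.numpy()))
rt.run()
print(rt.get_output(0).shape)
```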
Alternatives and similar repositories for Apple-M1-BERT:
Users interested in Apple-M1-BERT are comparing it to the libraries listed below.
- benchmarking some transformer deployments ☆26 · Updated 2 years ago
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ☆106 · Updated 3 months ago
- Training material for IPU users: tutorials, feature examples, simple applications ☆86 · Updated 2 years ago
- Fast sparse deep learning on CPUs ☆53 · Updated 2 years ago
- Home for OctoML PyTorch Profiler ☆112 · Updated last year
- Torch Distributed Experimental ☆115 · Updated 8 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated 4 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- Python bindings for ggml ☆140 · Updated 7 months ago
- 🔮 Make Thinc faster on macOS by calling into Apple's native Accelerate library ☆94 · Updated 6 months ago
- Example code and applications for machine learning on Graphcore IPUs ☆320 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 3 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 3 years ago
- ☆118 · Updated last year
- Productionize machine learning predictions, with ONNX or without ☆65 · Updated last year
- Inference code for LLaMA models in JAX ☆117 · Updated 11 months ago
- A library for syntactically rewriting Python programs, pronounced (sinner). ☆69 · Updated 3 years ago
- ☆87 · Updated 2 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆156 · Updated 4 months ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ☆56 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆238 · Updated last year
- Tutorial on how to convert machine learned models into ONNX ☆16 · Updated 2 years ago
- GPTQ inference Triton kernel ☆299 · Updated last year
- experiments with inference on llama ☆104 · Updated 10 months ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- ☆69 · Updated 2 years ago
- ☆56 · Updated 2 years ago