octoml / Apple-M1-BERT
3X speedup over Apple's TensorFlow plugin by using Apache TVM on M1
☆135 · Updated 2 years ago
Alternatives and similar repositories for Apple-M1-BERT:
Users interested in Apple-M1-BERT are comparing it to the libraries listed below.
- Nod.ai's SHARK. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository… ☆106 · Updated last month
- Torch Distributed Experimental ☆115 · Updated 6 months ago
- benchmarking some transformer deployments ☆26 · Updated last year
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- Home for OctoML PyTorch Profiler ☆107 · Updated last year
- ☆182 · Updated 2 weeks ago
- Implementation of a Transformer, but completely in Triton ☆257 · Updated 2 years ago
- Tutorial on how to convert machine learned models into ONNX ☆16 · Updated last year
- Customized matrix multiplication kernels ☆53 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆235 · Updated last year
- The Triton backend for PyTorch TorchScript models. ☆143 · Updated this week
- Distributed preprocessing and data loading for language datasets ☆39 · Updated 10 months ago
- Accelerate PyTorch models with ONNX Runtime ☆358 · Updated 5 months ago
- Training material for IPU users: tutorials, feature examples, simple applications ☆86 · Updated last year
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated 11 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆178 · Updated 2 months ago
- ☆50 · Updated 3 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆237 · Updated last year
- ☆128 · Updated 2 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆154 · Updated 2 months ago
- A library for syntactically rewriting Python programs, pronounced (sinner). ☆70 · Updated 2 years ago
- Fast sparse deep learning on CPUs ☆52 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆186 · Updated 2 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- FlashAttention (Metal Port) ☆436 · Updated 4 months ago
- ☆65 · Updated 2 years ago
- This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as… ☆193 · Updated 2 years ago
- State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio). ☆85 · Updated last year
- PyTorch RFCs (experimental) ☆131 · Updated 5 months ago
- PyTorch implementation of L2L execution algorithm ☆107 · Updated 2 years ago