octoml / Apple-M1-BERT
3X speedup over Apple's TensorFlow plugin by using Apache TVM on M1
☆135 · Updated 2 years ago
Alternatives and similar repositories for Apple-M1-BERT:
Users interested in Apple-M1-BERT are comparing it to the libraries listed below
- benchmarking some transformer deployments ☆26 · Updated last year
- Nod.ai version. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ☆106 · Updated 2 months ago
- This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as … ☆193 · Updated 2 years ago
- Fast sparse deep learning on CPUs ☆52 · Updated 2 years ago
- Torch Distributed Experimental ☆115 · Updated 7 months ago
- ☆67 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆259 · Updated 2 years ago
- Distributed preprocessing and data loading for language datasets ☆39 · Updated 11 months ago
- ☆55 · Updated last year
- Home for OctoML PyTorch Profiler ☆107 · Updated last year
- Evaluation suite for large-scale language models. ☆124 · Updated 3 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆154 · Updated 3 months ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated last year
- Make Thinc faster on macOS by calling into Apple's native Accelerate library ☆93 · Updated 5 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆179 · Updated 3 months ago
- Amos optimizer with JEstimator lib. ☆81 · Updated 9 months ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆60 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆236 · Updated last year
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime. ☆126 · Updated 4 years ago
- Productionize machine learning predictions, with ONNX or without ☆65 · Updated last year
- ☆87 · Updated 2 years ago
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-… ☆67 · Updated last year
- ☆246 · Updated 7 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆689 · Updated 2 weeks ago
- ☆410 · Updated last year
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ☆56 · Updated last year
- ☆117 · Updated 10 months ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year