pytorch / executorch
On-device AI across mobile, embedded and edge for PyTorch
☆3,151 · Updated this week
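For context, the core workflow executorch provides is ahead-of-time export of a PyTorch model into a `.pte` program that its on-device runtime can execute. Below is a minimal sketch of that export flow; the `TinyModel` module and input shapes are placeholders, and the `to_edge`/`to_executorch` calls assume a recent executorch release, so treat this as illustrative rather than canonical.

```python
import torch
from executorch.exir import to_edge

# Placeholder model; any torch.export-compatible nn.Module works here.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the model as an ExportedProgram with torch.export.
exported = torch.export.export(model, example_inputs)

# 2. Lower to the Edge dialect, then to an ExecuTorch program.
edge_program = to_edge(exported)
et_program = edge_program.to_executorch()

# 3. Serialize the .pte file that the on-device runtime loads.
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```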
Alternatives and similar repositories for executorch
Users interested in executorch are comparing it to the libraries listed below.
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆742 · Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,251 · Updated this week
- Run PyTorch LLMs locally on servers, desktop and mobile ☆3,605 · Updated last week
- LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're exp… ☆714 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,168 · Updated 10 months ago
- TinyChatEngine: On-Device LLM Inference Library ☆884 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,315 · Updated 4 months ago
- Generative AI extensions for onnxruntime ☆797 · Updated this week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,541 · Updated last week
- The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.)… ☆770 · Updated last week
- ☆993 · Updated 6 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,472 · Updated last week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,388 · Updated last week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,044 · Updated 4 months ago
- An Extensible Deep Learning Library ☆2,227 · Updated last week
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,868 · Updated this week
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,285 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,207 · Updated last month
- A PyTorch native platform for training generative AI models ☆4,240 · Updated last week
- A simple, performant and scalable Jax LLM! ☆1,867 · Updated this week
- Low-bit LLM inference on CPU/NPU with lookup table ☆840 · Updated 2 months ago
- A PyTorch quantization backend for Optimum ☆984 · Updated last month
- Fast Multimodal LLM on Mobile Devices