pytorch / executorch
On-device AI across mobile, embedded and edge for PyTorch
☆4,098 · Updated this week
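For context on the project the list below compares against: the core ExecuTorch workflow captures a PyTorch module with torch.export, lowers it to ExecuTorch's edge dialect, and serializes a `.pte` program for the on-device runtime. Here is a minimal sketch assuming the `executorch` pip package; `TinyModel` and the output filename are illustrative, and the exact `exir` entry points may shift between releases.

```python
# Minimal ExecuTorch export sketch (assumes the `executorch` package is installed).
import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):  # illustrative placeholder model
    def forward(self, x):
        return torch.nn.functional.relu(x)

model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)

exported = torch.export.export(model, example_inputs)  # capture a full graph
edge = to_edge(exported)                               # lower to the edge dialect
et_program = edge.to_executorch()                      # produce a serializable program

with open("tiny_model.pte", "wb") as f:                # loaded by the C++/mobile runtime
    f.write(et_program.buffer)
```

The resulting `.pte` file is what the mobile and embedded runtimes load; delegation to hardware backends such as XNNPACK happens through additional lowering passes not shown here.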
Alternatives and similar repositories for executorch
Users interested in executorch are comparing it to the libraries listed below.
- Run PyTorch LLMs locally on servers, desktop and mobile ☆3,622 · Updated 3 months ago
- PyTorch native quantization and sparsity for training and inference (see the torchao sketch after this list) ☆2,611 · Updated this week
- Supporting PyTorch models with the Google AI Edge TFLite runtime. ☆890 · Updated this week
- LiteRT, successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via e… ☆1,213 · Updated this week
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024. ☆1,402 · Updated 8 months ago
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) an… ☆874 · Updated 3 weeks ago
- An Extensible Deep Learning Library ☆2,310 · Updated this week
- A PyTorch native platform for training generative AI models ☆4,924 · Updated this week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,434 · Updated this week
- A modern model graph visualizer and debugger ☆1,361 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,170 · Updated last year
- TinyChatEngine: On-Device LLM Inference Library ☆935 · Updated last year
- PyTorch native post-training library ☆5,646 · Updated this week
- Efficient Triton Kernels for LLM Training ☆6,002 · Updated this week
- Generative AI extensions for onnxruntime ☆922 · Updated this week
- ☆1,025 · Updated 11 months ago
- Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs. ☆2,219 · Updated this week
- Tile primitives for speedy kernels ☆3,038 · Updated this week
- A simple, performant and scalable Jax LLM! ☆2,072 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,880 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,770 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,561 · Updated this week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,663 · Updated this week
- A lightweight library for portable low-level GPU computation using WebGPU. ☆3,933 · Updated 3 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,410 · Updated 5 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,055 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,237 · Updated 2 weeks ago
- CUDA Python: Performance meets Productivity ☆3,126 · Updated this week
- Sparsity-aware deep learning inference runtime for CPUs ☆3,160 · Updated 7 months ago
- A PyTorch quantization backend for optimum ☆1,020 · Updated last month
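As flagged in the quantization-and-sparsity entry above (torchao), here is a minimal post-training, weight-only int8 sketch. It assumes a torchao release that exposes `quantize_` and `int8_weight_only`; recent releases have been migrating toward config-object names, so treat the exact identifiers as assumptions rather than the canonical API.

```python
# Hedged torchao sketch: in-place, weight-only int8 post-training quantization.
# Assumes `quantize_` and `int8_weight_only` exist in the installed release.
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(128, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

quantize_(model, int8_weight_only())  # rewrites Linear modules to use int8 weights

with torch.no_grad():
    out = model(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10]); same interface, smaller weights
```

Weight-only schemes like this keep activations in floating point, which is one reason they compose cleanly with export flows such as the ExecuTorch sketch near the top of this page.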