pytorch / executorch
On-device AI across mobile, embedded and edge for PyTorch
☆2,807 · Updated this week
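executorch consumes a graph captured with torch.export and lowers it to a `.pte` program that the on-device runtime loads. The snippet below is a minimal sketch of that documented export flow, assuming a recent ExecuTorch release; the toy module, shapes, and output file name are illustrative only, and exact `exir` entry points may differ between versions.

```python
# Hypothetical toy example: export an nn.Module to a .pte file with ExecuTorch.
import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the model as an exported graph.
exported = torch.export.export(model, example_inputs)

# 2. Lower to the edge dialect, then to an ExecuTorch program.
edge_program = to_edge(exported)
et_program = edge_program.to_executorch()

# 3. Serialize the flatbuffer that the on-device runtime loads.
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```

On device, the ExecuTorch runtime (C++ or a mobile binding) then loads the `.pte` file and runs it without a Python dependency.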
Alternatives and similar repositories for executorch:
Users interested in executorch are comparing it to the libraries listed below.
- Run PyTorch LLMs locally on servers, desktop, and mobile ☆3,576 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆2,015 · Updated this week
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,292 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,984 · Updated 3 weeks ago
- PyTorch native post-training library ☆5,154 · Updated this week
- Supporting PyTorch models with the Google AI Edge TFLite runtime ☆566 · Updated this week
- A PyTorch quantization backend for Optimum ☆928 · Updated 2 weeks ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,172 · Updated 7 months ago
- FlashInfer: Kernel Library for LLM Serving ☆2,788 · Updated this week
- A PyTorch native library for large-scale model training ☆3,665 · Updated this week
- LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-… ☆375 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,436 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,393 · Updated this week
- TinyChatEngine: On-Device LLM Inference Library ☆843 · Updated 10 months ago
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,818 · Updated last month
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization, https://arxiv.org/pdf/2401.06118.p… ☆1,253 · Updated this week
- Thunder gives you PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and … ☆1,337 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference ☆2,145 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,157 · Updated this week
- Generative AI extensions for onnxruntime ☆703 · Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python ☆5,940 · Updated 3 weeks ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,387 · Updated last week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,965 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,135 · Updated this week
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 9 months ago
- Training LLMs with QLoRA + FSDP ☆1,476 · Updated 6 months ago
- nvidia-modelopt is a unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculat… ☆900 · Updated last week
- A modern model graph visualizer and debugger ☆1,175 · Updated this week