ShishirPatil / poet
ML model training for edge devices
☆162 · Updated last year
Alternatives and similar repositories for poet:
Users interested in poet are comparing it to the repositories listed below.
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆107 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton ☆267 · Updated this week
- A schedule language for large model training ☆145 · Updated 9 months ago
- ☆93 · Updated 2 years ago
- 🏙 Interactive performance profiling and debugging tool for PyTorch neural networks. ☆59 · Updated 2 months ago
- AI and Memory Wall ☆213 · Updated last year
- ☆101 · Updated 7 months ago
- Flexible simulator for mixed precision and number formats in LLMs and vision transformers. ☆48 · Updated last year
- ☆157 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆271 · Updated last year
- Reorder-based post-training quantization for large language models ☆185 · Updated last year
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆401 · Updated 3 weeks ago
- ☆248 · Updated 8 months ago
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆137 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆680 · Updated 7 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
- ☆141 · Updated 2 years ago
- GPTQ inference Triton kernel ☆298 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆196 · Updated last year
- A curated list of awesome projects and papers for distributed training or inference ☆223 · Updated 5 months ago
- An efficient open-source deep learning framework/compiler, written in Python. ☆692 · Updated last month
- ☆145 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆240 · Updated 4 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆113 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆299 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆189 · Updated last week
- ☆62 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆336 · Updated 7 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago