quic / efficient-transformers
This library lets users seamlessly port pretrained models and checkpoints from the Hugging Face (HF) Hub (built with the HF transformers library) into inference-ready formats that run efficiently on Qualcomm Cloud AI 100 accelerators.
☆84 · Updated this week
Alternatives and similar repositories for efficient-transformers
Users interested in efficient-transformers are comparing it to the libraries listed below.
- ☆34 · Updated 5 months ago
- Model compression for ONNX ☆99 · Updated last year
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆81 · Updated 3 weeks ago
- C++ implementations of various tokenizers (SentencePiece, tiktoken, etc.). ☆43 · Updated this week
- ☆166 · Updated 2 years ago
- Mobile App Open ☆64 · Updated this week
- Nsight Systems in Docker ☆20 · Updated last year
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆166 · Updated this week
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆54 · Updated 3 weeks ago
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ☆20 · Updated 3 years ago
- ☆207 · Updated 4 years ago
- Dynamic Neural Architecture Search Toolkit ☆31 · Updated last year
- ☆15 · Updated 6 months ago
- ☆159 · Updated 2 years ago
- torch::deploy (MultiPy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆182 · Updated 3 months ago
- Step-by-step implementation of a fast softmax kernel in CUDA ☆58 · Updated 11 months ago
- Sandbox for TVM and playing around! ☆22 · Updated 3 years ago
- The Qualcomm Cloud AI SDK (Platform and Apps) enables high-performance deep learning inference on Qualcomm Cloud AI platforms, delivering high … ☆70 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated 2 weeks ago
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated GPU memory, and energy consumption ☆109 · Updated 2 years ago
- ☆76 · Updated last year
- Llama INT4 CUDA inference with AWQ ☆55 · Updated 10 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆111 · Updated last year
- A fast and customizable CUDA INT4 tensor-core GEMM ☆14 · Updated last year
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- ONNX Command-Line Toolbox ☆35 · Updated last year
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆72 · Updated 3 years ago
- ☆69 · Updated 3 years ago
- PyTorch interface for the IPU ☆181 · Updated 2 years ago
- High-speed GEMV kernels, up to 2.7× speedup over the PyTorch baseline. ☆123 · Updated last year