abdelfattah-lab / nitro
Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs
☆23, updated 9 months ago
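As a rough illustration of what the tagline describes, the sketch below shows the stock OpenVINO runtime call that such a wrapper would build on to target the NPU device. This is the plain OpenVINO API, not nitro's own interface, and the model path is hypothetical.

```python
# Minimal sketch using the stock OpenVINO runtime API (not nitro's wrapper).
# "model.xml" is a hypothetical OpenVINO IR file exported ahead of time.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on supported hardware

# Compile the model for the NPU plugin; a wrapper like nitro would hide this
# device selection and the surrounding tokenize/generate loop.
model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="NPU")
```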
Alternatives and similar repositories for nitro
Users interested in nitro are comparing it to the libraries listed below.
- Beyond KV Caching: Shared Attention for Efficient LLMs (☆19, updated last year)
- ☆94, updated 3 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆127, updated 9 months ago)
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry (☆42, updated last year)
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters (☆48, updated last year)
- Framework to reduce autotune overhead to zero for well-known deployments. (☆82, updated this week)
- Compression for Foundation Models (☆35, updated 2 months ago)
- ☆33, updated last year
- ☆150, updated 3 months ago
- QuIP quantization (☆59, updated last year)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆43, updated 2 months ago)
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization (☆111, updated 11 months ago)
- Official implementation for Training LLMs with MXFP4 (☆91, updated 4 months ago)
- ☆50, updated 4 months ago
- Quantized Attention on GPU (☆44, updated 10 months ago)
- ☆14, updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity (☆82, updated last year)
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts (☆40, updated last year)
- Repository for CPU Kernel Generation for LLM Inference (☆26, updated 2 years ago)
- vLLM performance dashboard (☆34, updated last year)
- ☆96, updated 4 months ago
- Make SGLang go brrr (☆30, updated last week)
- Work in progress. (☆72, updated 2 months ago)
- 16-fold memory access reduction with nearly no loss (☆105, updated 5 months ago)
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM (☆168, updated last year)
- ☆42, updated 4 months ago
- ☆57, updated 4 months ago
- ☆37, updated 2 weeks ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference (☆52, updated 10 months ago)
- ☆74, updated 5 months ago