microsoft / glinthawk
An LLM inference engine, written in C++
☆13 · Updated 3 months ago
Alternatives and similar repositories for glinthawk:
Users interested in glinthawk are comparing it to the libraries listed below.
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation · ☆29 · Updated 5 months ago
- A minimal implementation of vLLM · ☆39 · Updated 9 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … · ☆26 · Updated 4 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators · ☆18 · Updated this week
- Compression for Foundation Models · ☆31 · Updated last month
- ☆45 · Updated 10 months ago
- A curated list of Efficient Large Language Models · ☆11 · Updated last year
- Official Implementation of "CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks" · ☆17 · Updated this week
- Beyond KV Caching: Shared Attention for Efficient LLMs · ☆18 · Updated 9 months ago
- How much energy do GenAI models consume? · ☆42 · Updated 6 months ago
- Implementation of the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" · ☆19 · Updated last month
- ☆22 · Updated 2 months ago
- Stateful LLM Serving · ☆63 · Updated last month
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆35 · Updated last week
- TensorRT LLM Benchmark Configuration · ☆13 · Updated 9 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆115 · Updated 4 months ago
- Curie: Automated and Rigorous Scientific Experimentation with AI Agents · ☆77 · Updated this week
- ☆11 · Updated 8 months ago
- An Attention Superoptimizer · ☆21 · Updated 3 months ago
- ☆55 · Updated 2 weeks ago
- LLM Serving Performance Evaluation Harness · ☆77 · Updated 2 months ago
- ☆12 · Updated 10 months ago
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo · ☆18 · Updated 2 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing · ☆82 · Updated last week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving · ☆34 · Updated last week
- ☆31 · Updated this week
- ☆59 · Updated 10 months ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs · ☆14 · Updated 4 months ago
- Repository for CPU Kernel Generation for LLM Inference · ☆26 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable · ☆154 · Updated 7 months ago