microsoft / glinthawk
An LLM inference engine, written in C++
☆15 · Updated 4 months ago
Alternatives and similar repositories for glinthawk
Users interested in glinthawk are comparing it to the libraries listed below
- A lightweight, user-friendly data-plane for LLM training. ☆16 · Updated last month
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆29 · Updated 6 months ago
- A resilient distributed training framework ☆95 · Updated last year
- Compression for Foundation Models ☆31 · Updated 2 months ago
- A minimal implementation of vLLM. ☆41 · Updated 10 months ago
- ☆46 · Updated 11 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆25 · Updated 5 months ago
- Official Implementation of "CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks" ☆19 · Updated this week
- How much energy do GenAI models consume? ☆42 · Updated 3 weeks ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated 10 months ago
- ☆34 · Updated last week
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 6 months ago
- ☆30 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆11 · Updated 8 months ago
- Bamboo-7B Large Language Model ☆93 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆158 · Updated 8 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 6 months ago
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆18 · Updated last week
- Training hybrid models for dummies. ☆21 · Updated 4 months ago
- Explore training for quantized models ☆18 · Updated last week
- LLM Serving Performance Evaluation Harness ☆78 · Updated 3 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis via CPU offloading ☆36 · Updated 3 months ago
- ☆66 · Updated 3 weeks ago
- A model serving framework for various research and production scenarios. Seamlessly built upon the PyTorch and HuggingFace ecosystem. ☆23 · Updated 7 months ago
- A caching framework for microservice applications ☆20 · Updated last year
- ☆62 · Updated 11 months ago
- A collection of reproducible inference engine benchmarks ☆31 · Updated last month
- Modular and structured prompt caching for low-latency LLM inference ☆94 · Updated 6 months ago
- asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated 9 months ago
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 5 months ago