Alternatives and similar repositories for InferCept
InferCept: ☆35, updated Jun 22, 2024
Users interested in InferCept are comparing it to the libraries listed below.
- Stateful LLM Serving (☆97, updated Mar 11, 2025)
- Paper-reading notes for Berkeley OS prelim exam (☆14, updated Aug 28, 2024)
- ☆19 (updated May 4, 2023)
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] (☆65, updated Oct 2, 2025)
- ☆12 (updated Oct 16, 2022)
- ☆131 (updated Nov 11, 2024)
- ☆17 (updated May 10, 2024)
- The official implementation for the paper "mmSampler: Efficient Frame Sampler for Multimodal Video Retrieval" (☆11, updated Aug 23, 2022)
- A Streaming-Native Serving Engine for TTS/STS Models (☆60, updated Feb 22, 2026)
- An LLM agent capable of writing API documentation and functional tests from a file containing the entrypoint code (☆10, updated Sep 7, 2023)
- APEX+ is an LLM Serving Simulator (☆44, updated Jun 16, 2025)
- EuroSys '24: "Trinity: A Fast Compressed Multi-attribute Data Store" (☆19, updated Mar 8, 2025)
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" (☆37, updated Aug 29, 2025)
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) (☆182, updated Jul 10, 2024)
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving (☆75, updated Sep 15, 2025)
- ☆11 (updated Nov 14, 2023)
- Query-Adaptive Vector Search (☆69, updated this week)
- A parallel VAE that avoids OOM for high-resolution image generation (☆89, updated Mar 12, 2026)
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆78, updated Oct 15, 2025)
- ☆13 (updated Dec 9, 2024)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆134, updated Feb 22, 2024)
- Nex Venus Communication Library (☆74, updated Nov 17, 2025)
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) (☆31, updated Jun 14, 2024)
- Source code for Jellyfish, a soft real-time inference serving system (☆15, updated Dec 20, 2022)
- My templates used in OI, all C++ (☆11, updated Jul 17, 2018)
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" (☆16, updated Jul 10, 2025)
- Simple intermediate representation language for learning and research (☆20, updated Mar 27, 2020)
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention (☆53, updated Aug 6, 2025)
- An LLM inference engine, written in C++ (☆19, updated Feb 5, 2026)
- Source code of IPA, https://escholarship.org/uc/item/2p0805dq (☆12, updated Jun 27, 2024)
- A binary representation of JSON values, optimized for parsing and querying (☆25, updated Nov 14, 2025)
- Scripts used to set up a Spark cluster on EC2 (☆21, updated Mar 24, 2016)
- A C++20 coroutine library based on asyncio (☆25, updated Apr 25, 2023)
- ☆19 (updated Dec 4, 2025)
- ☆76 (updated Sep 15, 2025)
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems (☆243, updated this week)
- Smoothing video traffic to make it a friendlier internet neighbor (☆14, updated Apr 23, 2024)
- ☆39 (updated Sep 13, 2025)
- A low-latency & high-throughput serving engine for LLMs (☆484, updated Jan 8, 2026)