godaai / llm-inference
Resources for Large Language Model Inference
☆16 · Updated last year
Alternatives and similar repositories for llm-inference
Users interested in llm-inference are comparing it to the repositories listed below.
- ☆54 · Updated 7 months ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆53 · Updated 7 months ago
- ☆86 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆133 · Updated last year
- ☆194 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- QuIP quantization ☆54 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated last month
- ☆119 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆76 · Updated 2 weeks ago
- Low-bit optimizers for PyTorch ☆128 · Updated last year
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- Fast and memory-efficient exact attention ☆74 · Updated last week
- ☆114 · Updated 3 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆252 · Updated 7 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (see the sketch after this list) ☆95 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆310 · Updated 11 months ago
- A minimal implementation of vllm ☆43 · Updated 10 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆72 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆79 · Updated 9 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆98 · Updated last year
- ☆141 · Updated 3 months ago
- A MoE implementation for PyTorch; [ATC'23] SmartMoE ☆63 · Updated last year
- [ICLR'25] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 6 months ago
- ☆79 · Updated last year
- Summary of system papers/frameworks/code/tools for training or serving large models ☆57 · Updated last year
- KV cache compression for high-throughput LLM inference ☆131 · Updated 4 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline calculation sketched after this list) ☆100 · Updated last year
- ring-attention experiments ☆144 · Updated 8 months ago
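Two of the repositories above implement speculative sampling from DeepMind's "Accelerating Large Language Model Decoding with Speculative Sampling". The core accept/reject rule is small enough to sketch; below is a minimal NumPy illustration in which toy categorical distributions stand in for real GPT-2 logits (the vocabulary size and distributions are assumptions for this example, not taken from either repository).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(probs):
    """Draw one token id from a categorical distribution."""
    return rng.choice(len(probs), p=probs)

def speculative_step(p_target, q_draft):
    """One accept/reject step of speculative sampling.

    p_target and q_draft are probability vectors over the vocabulary for
    the same position, from the target and draft models. Accepting the
    draft token with prob min(1, p/q) and resampling rejections from the
    residual max(0, p - q) reproduces the target distribution exactly.
    """
    x = sample(q_draft)  # the cheap draft model proposes a token
    if rng.random() < min(1.0, p_target[x] / q_draft[x]):
        return x, True   # accepted: the draft token is kept "for free"
    residual = np.maximum(p_target - q_draft, 0.0)
    return sample(residual / residual.sum()), False

# Toy 5-token vocabulary; a real system gets these from model logits.
p = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # target model
q = np.array([0.25, 0.25, 0.20, 0.15, 0.15])  # draft model

draws = [speculative_step(p, q)[0] for _ in range(100_000)]
print(np.bincount(draws, minlength=5) / len(draws))  # ≈ p, not q
```

Running enough draws shows the empirical frequencies converging to the target distribution p rather than the draft q; that correctness guarantee is what lets a draft model propose several tokens ahead without changing the output distribution.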
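The roofline-model entry above rates hardware by the ceiling min(peak compute, bandwidth × arithmetic intensity). As a rough worked example of that calculation, assuming approximate A100-class figures (≈312 FP16 TFLOPS, ≈2 TB/s HBM) and ignoring KV-cache and activation traffic:

```python
def attainable_tflops(intensity, peak_tflops, bw_tbps):
    """Roofline ceiling: min(compute roof, bandwidth * intensity)."""
    return min(peak_tflops, bw_tbps * intensity)

# Approximate A100-class figures, assumed here for illustration.
PEAK_TFLOPS = 312.0  # FP16 tensor-core peak
BW_TBPS = 2.0        # HBM bandwidth in TB/s
print("ridge point:", PEAK_TFLOPS / BW_TBPS, "FLOPs/byte")  # ~156

# Decode GEMV: each FP16 weight (2 bytes) is read once and used in one
# multiply-add (2 FLOPs) per request, so intensity is roughly `batch`
# FLOPs per byte of weights streamed.
for batch in (1, 8, 64, 256):
    intensity = 2 * batch / 2
    print(batch, attainable_tflops(intensity, PEAK_TFLOPS, BW_TBPS), "TFLOPS")
```

At batch 1 the ceiling is about 2 TFLOPS against a 312 TFLOPS peak, which is why single-stream decode is bandwidth-bound and why the quantization and KV-cache-compression projects in this list pay off: moving fewer bytes per token raises the effective arithmetic intensity.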