RahulSChand / gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
☆1,316 · Updated 6 months ago
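Since gpu_poor's core feature is estimating GPU memory requirements, here is a minimal back-of-the-envelope sketch of the kind of calculation involved. This is not gpu_poor's actual formula; the byte widths per quantization level, the KV-cache sizing, and the 20% overhead factor are common rules of thumb assumed for illustration.

```python
# Rough GPU memory estimate for LLM inference.
# NOTE: a back-of-the-envelope sketch, not gpu_poor's exact formula;
# byte widths and the overhead factor are assumed rules of thumb.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_inference_memory_gb(
    n_params_b: float,      # model size in billions of parameters
    quant: str = "fp16",    # one of BYTES_PER_PARAM
    n_layers: int = 32,     # transformer depth (e.g. 32 for a 7B Llama)
    d_model: int = 4096,    # hidden size
    context_len: int = 2048,
    overhead: float = 1.2,  # ~20% for activations/CUDA buffers (assumption)
) -> float:
    weights = n_params_b * 1e9 * BYTES_PER_PARAM[quant]
    # KV cache: 2 tensors (K and V) per layer, one vector per position,
    # stored in fp16 (2 bytes per element)
    kv_cache = 2 * n_layers * context_len * d_model * 2
    return (weights + kv_cache) * overhead / 1e9

if __name__ == "__main__":
    # A 7B model in 4-bit quantization with a 2k context:
    print(f"{estimate_inference_memory_gb(7, 'int4'):.1f} GB")  # ~5.5 GB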
Alternatives and similar repositories for gpu_poor
Users interested in gpu_poor are comparing it to the libraries listed below.
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (see the quantization sketch after this list). ☆2,196 · Updated last month
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,101 · Updated 2 weeks ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,877 · Updated 2 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,337 · Updated 2 weeks ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,551 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,773 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,167 · Updated 8 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,836 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,131 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,942 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,336 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,553 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,501 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,057 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆947 · Updated 6 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,778 · Updated 6 months ago
- Python bindings for Transformer models implemented in C/C++ using the GGML library. ☆1,868 · Updated last year
- ☆942 · Updated 4 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,641 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,239 · Updated this week
- Serving multiple LoRA fine-tuned LLMs as one ☆1,066 · Updated last year
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,028 · Updated last month
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,020 · Updated 3 months ago
- ☆542 · Updated 10 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,591 · Updated this week
- Chat Templates for 🤗 HuggingFace Large Language Models ☆674 · Updated 6 months ago
- Tools for merging pretrained large language models. ☆5,853 · Updated last week
- AllenAI's post-training codebase ☆3,028 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,426 · Updated this week
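Several entries above (AutoAWQ, AutoGPTQ, llm-awq) are weight-quantization toolkits. The sketch below follows the 4-bit quantization flow from AutoAWQ's README; the model path, output directory, and config values are illustrative, and config keys can vary between AutoAWQ releases.

```python
# Quantizing a model to 4-bit with AutoAWQ, following the flow in its README.
# NOTE: model_path/quant_path are placeholders; quant_config values reflect
# recent AutoAWQ releases and may differ in older versions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"   # any HF causal LM
quant_path = "mistral-7b-awq"              # output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantize the weights in place
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized weights alongside the tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The quantized checkpoint can then be loaded with `AutoAWQForCausalLM.from_quantized(quant_path)` for inference, which is where the advertised ~2x speedup over fp16 applies.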