RahulSChand / gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
☆1,383 · Updated last year
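The tagline above boils down to two pieces of arithmetic: how much VRAM the weights plus the KV cache occupy, and a memory-bandwidth bound on decode throughput. A minimal sketch of that estimate follows; every model shape and hardware number in it is an illustrative assumption, not a value taken from the repository:

```python
# Back-of-envelope estimate of the quantities gpu_poor reports.
# All concrete numbers (7B params, 32 layers, 1 TB/s, ...) are assumptions
# chosen for illustration, not values from the gpu_poor codebase.

def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """VRAM to hold the weights alone (ignores activations and overhead)."""
    return n_params * bits_per_param / 8 / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per_val: int = 2) -> float:
    """KV cache: two tensors (K and V) per layer, per cached token."""
    return (2 * n_layers * n_kv_heads * head_dim
            * seq_len * batch * bytes_per_val) / 1e9

def decode_tokens_per_s(weight_gb: float, bandwidth_gb_s: float) -> float:
    """Rough upper bound: batch-1 decoding is memory-bandwidth bound,
    so generating each token streams all weights through memory once."""
    return bandwidth_gb_s / weight_gb

# Example: a Llama-2-7B-shaped model in 4-bit on a ~1 TB/s card.
w = weight_memory_gb(7e9, 4.5)           # ~3.9 GB (4 bits + quantization overhead)
kv = kv_cache_gb(32, 32, 128, 4096, 1)   # ~2.1 GB at 4k context, fp16 cache
print(f"weights ≈ {w:.1f} GB, KV cache ≈ {kv:.1f} GB")
print(f"decode ≤ {decode_tokens_per_s(w, 1000):.0f} tokens/s (bandwidth bound)")
```

In practice a tool like gpu_poor also has to account for activation memory, framework overhead, and quantization-format details, which is why the repository exists rather than a one-line formula.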
Alternatives and similar repositories for gpu_poor
Users interested in gpu_poor are comparing it to the libraries listed below.
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (a minimal sketch of the underlying 4-bit round-trip appears after this list). ☆2,300 · Updated 7 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,654 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers. ☆2,995 · Updated last week
- ☆972 · Updated 10 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,645 · Updated last year
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ☆816 · Updated 5 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,882 · Updated last year
- DataComp for Language Models ☆1,402 · Updated 3 months ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆773 · Updated 2 years ago
- Enforce the output format (JSON Schema, regex, etc.) of a language model ☆1,973 · Updated 4 months ago
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆2,127 · Updated last year
- LLMPerf is a library for validating and benchmarking LLMs ☆1,068 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,312 · Updated 9 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,128 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation ☆1,169 · Updated 2 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆733 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,381 · Updated 2 weeks ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,680 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,788 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,489 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,212 · Updated 2 weeks ago
- Fast, Flexible and Portable Structured Generation ☆1,445 · Updated this week
- A high-performance inference system for large language models, designed for production environments. ☆489 · Updated last week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel platforms ☆2,169 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,083 · Updated 10 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) ☆780 · Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. ☆3,804 · Updated this week
- LLM Inference benchmark ☆430 · Updated last year
- Chat Templates for 🤗 HuggingFace Large Language Models ☆708 · Updated last year
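The 4-bit quantization that several entries above build on (AutoAWQ, the compression toolkits) reduces, at its simplest, to per-group scaling and rounding. Below is a minimal round-to-nearest sketch of that round-trip; it is a plain baseline, not AWQ's activation-aware method, and the group size is an assumed common choice:

```python
import numpy as np

GROUP_SIZE = 128  # assumed group size; a common choice in 4-bit weight schemes

def quantize_4bit(w: np.ndarray):
    """Symmetric per-group int4 quantization of a flat weight array."""
    groups = w.reshape(-1, GROUP_SIZE)
    # One scale per group, mapping the largest magnitude onto the int4 limit 7.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate fp32 weights from int4 values and group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Round-trip a random weight slice and measure the quantization error.
w = np.random.randn(1024).astype(np.float32)
q, s = quantize_4bit(w)
err = np.abs(dequantize_4bit(q, s) - w).mean()
print(f"mean abs round-trip error: {err:.4f}")
```

AWQ's contribution, per the AutoAWQ entry above, is choosing the scales using activation statistics so that salient weight channels lose less precision than under this uniform baseline.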