lyogavin / airllm
AirLLM 70B inference with single 4GB GPU
☆5,798 · Updated last month
Alternatives and similar repositories for airllm
Users interested in airllm are comparing it to the libraries listed below.
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,591 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆15,421 · Updated this week
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. ☆4,877 · Updated 2 months ago
- Python bindings for llama.cpp ☆9,276 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,336 · Updated this week
- Tools for merging pretrained large language models. ☆5,853 · Updated last week
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,421 · Updated 3 weeks ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,196 · Updated last month
- Large Language Model Text Generation Inference ☆10,249 · Updated this week
- A blazing fast inference solution for text embeddings models ☆3,731 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,504 · Updated last year
- Go ahead and axolotl questions ☆9,760 · Updated this week
- g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains ☆4,223 · Updated 5 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,216 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆50,864 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,865 · Updated this week
- PyTorch native post-training library ☆5,287 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,773 · Updated this week
- Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆6,953 · Updated 4 months ago
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆9,709 · Updated last week
- Democratizing Reinforcement Learning for LLMs ☆3,396 · Updated last month
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,028 · Updated last month
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,191 · Updated 3 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,883 · Updated last year
- Tensor library for machine learning ☆12,712 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,224 · Updated 4 months ago
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,064 · Updated 10 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,150 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,589 · Updated last year
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,670 · Updated last year
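Several of the entries above (the GPTQ package, AutoAWQ, bitsandbytes) center on k-bit weight quantization. As a rough illustration of the core idea only, and not any specific library's algorithm (GPTQ and AWQ add calibration and error compensation on top of this), here is a minimal round-to-nearest 4-bit quantizer with a per-tensor scale; the function names are hypothetical:

```python
def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with one per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7 if max_abs else 1.0  # 7 = largest positive 4-bit value
    # Round each weight to the nearest representable level, clamping to range.
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.31, -0.07]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
```

Storing each weight as a 4-bit code plus a shared scale is what shrinks a model roughly 4x versus fp16; the production libraries listed above differ mainly in how they pick scales (per-group rather than per-tensor) and how they minimize the resulting rounding error.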