lyogavin / airllm
AirLLM 70B inference with a single 4GB GPU
☆5,858 · Updated 2 months ago
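As context for the comparisons below, AirLLM's pitch is layer-by-layer weight loading: only one transformer layer's weights are resident on the GPU at a time, which is how a 70B model fits in 4GB of VRAM at the cost of throughput. A minimal usage sketch, following the pattern in the project's README; class and method names may differ between AirLLM versions, and the model ID is illustrative:

```python
# Minimal AirLLM inference sketch (API per the project's README; may
# differ across versions). AirLLM streams one transformer layer at a
# time from disk, so only a single layer occupies GPU memory at once.
from airllm import AutoModel

# Illustrative checkpoint; any supported Llama-family model ID should work.
model = AutoModel.from_pretrained("meta-llama/Llama-2-70b-hf")

input_text = ["What is the capital of the United States?"]
input_tokens = model.tokenizer(
    input_text,
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

# Generation is slow relative to fully GPU-resident inference, since each
# layer's weights are reloaded from disk for every forward pass.
output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)
print(model.tokenizer.decode(output.sequences[0]))
```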
Alternatives and similar repositories for airllm
Users interested in airllm are comparing it to the libraries listed below.
- Python bindings for llama.cpp (see the usage sketch after this list) ☆9,399 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆6,122 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,245 · Updated 3 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,805 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,905 · Updated 3 months ago
- A blazing fast inference solution for text embedding models ☆3,857 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs (see the 4-bit loading sketch after this list) ☆10,583 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,667 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,821 · Updated this week
- Large Language Model Text Generation Inference ☆10,367 · Updated last week
- An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆4,671 · Updated 3 weeks ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,564 · Updated last week
- A state-of-the-art open visual language model | multimodal pretrained model ☆6,626 · Updated last year
- Retrieval and Retrieval-augmented LLMs ☆10,262 · Updated 2 weeks ago
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,436 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,221 · Updated 2 months ago
- High-speed Large Language Model Serving for Local Deployment ☆8,286 · Updated last week
- Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,013 · Updated last week
- Go ahead and axolotl questions ☆10,095 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆16,386 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,333 · Updated 2 months ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,412 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,872 · Updated last year
- Large-scale LLM inference engine ☆1,492 · Updated this week
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,331 · Updated last week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,300 · Updated 4 months ago
- Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization ☆1,336 · Updated 8 months ago
- Run any open-source LLM, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud. ☆11,640 · Updated this week
- A lightweight framework for building LLM-based agents ☆2,173 · Updated last month
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆10,511 · Updated last week
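As referenced next to the first entry above, the llama.cpp Python bindings expose a simple high-level completion API over local GGUF files. A minimal sketch, assuming llama-cpp-python is installed and a quantized model has already been downloaded; the model path is a placeholder:

```python
# Minimal llama-cpp-python completion sketch; the model path is a placeholder.
from llama_cpp import Llama

# n_gpu_layers=-1 asks llama.cpp to offload all layers to the GPU
# (builds without GPU support simply run on the CPU).
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_gpu_layers=-1)

out = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],  # stop sequences end generation early
)
print(out["choices"][0]["text"])
```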
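The QLoRA entry above refers to finetuning adapters on top of 4-bit NF4-quantized base weights; the quantized loading step is commonly done through Hugging Face transformers with bitsandbytes. A loading-only sketch (not the full QLoRA training loop; the model ID is illustrative):

```python
# Sketch of 4-bit NF4 loading with transformers + bitsandbytes, the
# quantization scheme introduced by the QLoRA paper. Loading only;
# QLoRA additionally trains LoRA adapters over the frozen 4-bit base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",        # NormalFloat4 data type from the paper
    bnb_4bit_use_double_quant=True,   # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative model ID
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```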