lyogavin / airllm
AirLLM: 70B inference with a single 4GB GPU
☆6,270 · Updated last month
Alternatives and similar repositories for airllm
Users interested in airllm are comparing it to the libraries listed below.
- Tools for merging pretrained large language models. ☆6,394 · Updated last month
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,970 · Updated 6 months ago
- High-speed Large Language Model Serving for Local Deployment ☆8,369 · Updated 2 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,353 · Updated 2 months ago
- Go ahead and axolotl questions ☆10,673 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,199 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,260 · Updated 5 months ago
- Calculate tokens/s & GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization ☆1,378 · Updated 10 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆12,861 · Updated last week
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆4,949 · Updated this week
- PyTorch-native post-training library ☆5,547 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,719 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,780 · Updated last year
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆2,903 · Updated 2 years ago
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,459 · Updated 4 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,518 · Updated 5 months ago
- Python bindings for llama.cpp ☆9,678 · Updated 2 months ago
- Large Language Model Text Generation Inference ☆10,605 · Updated last month
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,912 · Updated this week
- Run any open-source LLMs, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud. ☆11,883 · Updated this week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,520 · Updated this week
- H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/ ☆4,691 · Updated last month
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference. ☆2,713 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,094 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,318 · Updated 3 months ago
- Curated list of datasets and tools for post-training. ☆3,810 · Updated 3 months ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,876 · Updated last year
- Tensor library for machine learning ☆13,332 · Updated last week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,088 · Updated 3 months ago
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,806 · Updated last year