mlc-ai / mlc-llm
Universal LLM Deployment Engine with ML Compilation
☆21,497 · Updated this week
Alternatives and similar repositories for mlc-llm
Users interested in mlc-llm are comparing it to the libraries listed below.
- Tensor library for machine learning ☆13,302 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,710 · Updated last year
- High-performance In-browser LLM Inference Engine ☆16,670 · Updated last month
- Large Language Model Text Generation Inference ☆10,580 · Updated last month
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,185 · Updated 4 months ago
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,454 · Updated 4 months ago
- LLM inference in C/C++ ☆88,212 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,969 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,181 · Updated last year
- Python bindings for llama.cpp ☆9,678 · Updated 2 months ago
- Inference code for Llama models ☆58,873 · Updated 8 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,775 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,526 · Updated 2 years ago
- Run any open-source LLMs, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud. ☆11,857 · Updated last week
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ☆44,778 · Updated last week
- Inference Llama 2 in one file of pure C ☆18,872 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,461 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,371 · Updated 11 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆12,861 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,820 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆60,980 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,900 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,659 · Updated 3 weeks ago
- A guidance language for controlling large language models. ☆20,864 · Updated last week
- StableLM: Stability AI Language Models ☆15,797 · Updated last year
- Fast and memory-efficient exact attention ☆20,151 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,369 · Updated 2 months ago
- The definitive Web UI for local AI, with powerful features and easy setup. ☆45,225 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,970 · Updated 6 months ago
- Making large AI models cheaper, faster and more accessible ☆41,206 · Updated last week