UbiquitousLearning / mllm
Fast Multimodal LLM on Mobile Devices
☆1,334 · Updated this week
Alternatives and similar repositories for mllm
Users interested in mllm are comparing it to the libraries listed below.
- Low-bit LLM inference on CPU/NPU with lookup table ☆907 · Updated 7 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆659 · Updated last month
- TinyChatEngine: On-Device LLM Inference Library ☆939 · Updated last year
- Awesome Mobile LLMs ☆290 · Updated last month
- High-speed and easy-to-use LLM serving framework for local deployment ☆140 · Updated 5 months ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆355 · Updated last month
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆971 · Updated this week
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an… ☆890 · Updated this week
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆605 · Updated last year
- llm-export can export LLM models to ONNX. ☆340 · Updated 2 months ago
- ☆41 · Updated 9 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,321 · Updated last year
- ☆110 · Updated 2 weeks ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆802 · Updated 10 months ago
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆885 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, computes attention with approximate and dynamic sparsity… ☆1,171 · Updated 3 months ago
- Fast inference from large language models via speculative decoding ☆880 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,414 · Updated 6 months ago
- A lightweight LLM inference framework ☆747 · Updated last year
- LLM inference in C/C++ ☆48 · Updated this week
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,405 · Updated 8 months ago
- Awesome LLM compression research papers and tools. ☆1,757 · Updated 2 months ago
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆4,909 · Updated last month
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,585 · Updated last year
- Demonstration of running a native LLM on an Android device. ☆217 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,580 · Updated this week
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,100 · Updated last year
- ☆65 · Updated last year
- A curated list for Efficient Large Language Models ☆1,929 · Updated 7 months ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,835 · Updated this week