UbiquitousLearning / mllm
Fast Multimodal LLM on Mobile Devices
☆1,370 · Updated this week
Alternatives and similar repositories for mllm
Users interested in mllm are comparing it to the libraries listed below.
- Low-bit LLM inference on CPU/NPU with lookup table☆916 · Updated 8 months ago (a lookup-table sketch appears after this list)
- Awesome Mobile LLMs☆301 · Updated 2 months ago
- High-speed and easy-to-use LLM serving framework for local deployment☆145 · Updated 5 months ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.☆672 · Updated 2 months ago
- TinyChatEngine: On-Device LLM Inference Library☆941 · Updated last year
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi…☆1,007 · Updated this week
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an…☆909 · Updated last week
- ☆42 · Updated 10 months ago
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a…☆369 · Updated last week
- Strong and Open Vision Language Assistant for Mobile Devices☆1,330 · Updated last year
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod…☆615 · Updated last year
- ☆120 · Updated this week
- llm-export can export LLM models to ONNX.☆344 · Updated 3 months ago
- LLM inference in C/C++☆48 · Updated this week
- Demonstration of running a native LLM on an Android device.☆226 · Updated this week
- A lightweight LLM inference framework☆749 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention…☆1,180 · Updated 4 months ago (sketched after this list)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).☆2,169 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se…☆810 · Updated 11 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration☆3,431 · Updated 6 months ago (sketched after this list)
- Fast inference from large language models via speculative decoding☆886 · Updated last year (sketched after this list)
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices☆668 · Updated 8 months ago
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.☆888 · Updated 2 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM☆2,660 · Updated last week
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza…☆839 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs).☆771 · Updated 10 months ago
- A throughput-oriented high-performance serving framework for LLMs☆945 · Updated 3 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.☆1,005 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.☆1,406 · Updated 9 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich…☆1,105 · Updated last year
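
A few of the techniques named above are small enough to sketch. The lookup-table approach in the first entry replaces low-bit multiply-accumulates with table lookups: for each small group of activations, the partial sums for every possible weight bit pattern are precomputed once, and every output row then just indexes that table. Below is a minimal NumPy sketch assuming 1-bit (±1) weights; real kernels of this kind support several bit widths and use SIMD table lookups, and the function name and packing here are invented for illustration.

```python
import numpy as np

def lut_matvec_1bit(w_bits, x, g=4):
    """Matrix-vector product with 1-bit (+/-1) weights via per-group lookup tables.

    w_bits: (out, in) array of 0/1 encoding weights -1/+1 (illustrative packing).
    x: (in,) activations; `in` must be divisible by the group size g.
    """
    out_dim, in_dim = w_bits.shape
    y = np.zeros(out_dim)
    for start in range(0, in_dim, g):
        xs = x[start:start + g]
        # Precompute all 2**g signed sums of this activation group once...
        table = np.empty(2 ** g)
        for pat in range(2 ** g):
            signs = np.where((pat >> np.arange(g)) & 1, 1.0, -1.0)
            table[pat] = signs @ xs
        # ...then every output row just looks up its packed weight pattern.
        pats = (w_bits[:, start:start + g] << np.arange(g)).sum(axis=1)
        y += table[pats]
    return y

# Sanity check against the equivalent dense matvec.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(8, 16))
x = rng.normal(size=16)
assert np.allclose(lut_matvec_1bit(bits, x), np.where(bits, 1.0, -1.0) @ x)
```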
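
The long-context entry builds on dynamic sparse attention: cheaply estimate which key blocks matter for each query, then compute exact attention only inside those blocks. This is a toy dense-NumPy version that uses mean-pooled block summaries as the estimator; the actual system selects patterns per attention head and runs custom kernels, and every name below is invented for the example.

```python
import numpy as np

def dynamic_sparse_attention(q, k, v, block=16, keep=4):
    """Attend only within the top-`keep` key blocks per query, chosen by a
    cheap approximation, instead of over all keys."""
    tq, d = q.shape
    nb = k.shape[0] // block                         # assumes length % block == 0
    k_pool = k.reshape(nb, block, d).mean(axis=1)    # (nb, d) block summaries
    approx = q @ k_pool.T                            # cheap (tq, nb) block scores
    top = np.argsort(-approx, axis=1)[:, :keep]      # blocks kept per query
    out = np.zeros((tq, d))
    for i in range(tq):
        idx = (top[i][:, None] * block + np.arange(block)).ravel()
        s = q[i] @ k[idx].T / np.sqrt(d)             # exact scores, kept keys only
        p = np.exp(s - s.max())
        out[i] = (p / p.sum()) @ v[idx]
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, 32)) for n in (8, 128, 128))
print(dynamic_sparse_attention(q, k, v).shape)  # (8, 32)
```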
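
The AWQ entry rests on the observation that a small fraction of input channels carries outlier activations, so scaling those weight columns up before group-wise quantization, then folding the inverse scale back out, protects them from rounding error. Here is a rough NumPy sketch of that scaling idea with invented helper names; the published method searches the scaling exponent per layer rather than fixing alpha = 0.5.

```python
import numpy as np

def quant_int4_grouped(w, g=8):
    """Symmetric round-to-nearest INT4 with one scale per (row, input group)."""
    rows, cols = w.shape
    wg = w.reshape(rows, cols // g, g)
    s = np.abs(wg).max(axis=2, keepdims=True) / 7.0 + 1e-12
    return (np.clip(np.round(wg / s), -8, 7) * s).reshape(rows, cols)

def awq_like(w, act_magnitude, alpha=0.5):
    # Scale salient input channels up before quantizing, then fold the inverse
    # scale back out (in a full model it would be fused into the previous op).
    s = act_magnitude ** alpha
    return quant_int4_grouped(w * s) / s

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
salient = np.zeros(64, dtype=bool)
salient[::16] = True                                  # a few outlier input channels
x = rng.normal(size=(256, 64)) * np.where(salient, 100.0, 1.0)
act = np.abs(x).mean(axis=0)
ref = x @ w.T
err_rtn = np.linalg.norm(x @ quant_int4_grouped(w).T - ref)
err_awq = np.linalg.norm(x @ awq_like(w, act).T - ref)
# The gain depends on the outlier pattern; AWQ searches alpha per layer.
print(f"round-to-nearest: {err_rtn:.1f}   activation-aware: {err_awq:.1f}")
```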
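
Finally, speculative decoding (the entry marked above) has a cheap draft model propose several tokens that the expensive target model then verifies, keeping the longest agreeing prefix plus one token of its own, so the target's cost is amortized over multiple tokens. The sketch below is the greedy simplification with toy stand-in models; the published algorithm verifies all proposals in a single batched forward pass and uses probabilistic acceptance so sampled outputs match the target distribution exactly.

```python
from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]  # greedy next-token oracle (toy stand-in)

def speculative_decode(target: Model, draft: Model, prompt: List[Token],
                       k: int = 4, max_new: int = 16) -> List[Token]:
    """Greedy speculative decoding: draft proposes k tokens, target keeps the
    longest agreeing prefix and always contributes one token of its own."""
    seq = list(prompt)
    while len(seq) < len(prompt) + max_new:
        ctx = list(seq)
        proposed = []
        for _ in range(k):                  # draft runs k cheap steps
            proposed.append(draft(ctx))
            ctx.append(proposed[-1])
        ctx = list(seq)
        for tok in proposed:                # target verifies the proposals
            if target(ctx) != tok:          # (one batched pass in practice)
                break
            ctx.append(tok)
        ctx.append(target(ctx))             # guarantees progress every round
        seq = ctx
    return seq[:len(prompt) + max_new]

# Toy deterministic "models": the draft agrees with the target most of the time.
target = lambda ctx: (ctx[-1] + 1) % 100
draft = lambda ctx: (ctx[-1] + 1) % 100 if ctx[-1] % 7 else (ctx[-1] + 2) % 100
print(speculative_decode(target, draft, [0]))
```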