UbiquitousLearning / mllm
Fast Multimodal LLM on Mobile Devices
☆781 Updated last week
Alternatives and similar repositories for mllm:
Users interested in mllm are comparing it to the libraries listed below.
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆443 Updated this week
- Low-bit LLM inference on CPU with lookup table ☆705 Updated 2 months ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLM inference, attention is computed with approximate and dynamic sparsity, which r… ☆951 Updated this week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,113 Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆782 Updated 6 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,177 Updated 11 months ago
- Fast inference from large language models via speculative decoding ☆700 Updated 7 months ago
- ☆55 Updated 4 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆617 Updated 3 weeks ago
- Awesome Mobile LLMs ☆156 Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆661 Updated this week
- TinyChatEngine: On-Device LLM Inference Library ☆826 Updated 8 months ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆425 Updated 6 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆240 Updated 3 weeks ago
- Survey Paper List - Efficient LLM and Foundation Models ☆241 Updated 6 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆783 Updated 6 months ago
- Demonstration of running a native LLM on an Android device. ☆127 Updated this week
- llm-export can export LLM models to ONNX. ☆274 Updated 2 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆811 Updated last week
- ☆28 Updated 4 months ago
- FlashInfer: Kernel Library for LLM Serving ☆2,532 Updated this week
- Advanced Quantization Algorithm for LLMs/VLMs. ☆413 Updated this week
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆274 Updated 2 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆678 Updated 2 months ago
- Materials for learning SGLang ☆355 Updated last week
- LLaMA/RWKV ONNX models, quantization, and test cases ☆359 Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆433 Updated 8 months ago
- LLM Inference benchmark ☆405 Updated 8 months ago
- A curated list for Efficient Large Language Models ☆1,575 Updated last week
- ☆311 Updated 11 months ago