XiaoMi / MobileBench
Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents
☆18 · Updated 8 months ago
Alternatives and similar repositories for MobileBench
Users interested in MobileBench are comparing it to the libraries listed below.
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆65 · Updated 5 months ago
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆179 · Updated this week
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" ☆13 · Updated 5 months ago
- Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆24 · Updated 5 months ago
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆48 · Updated last year
- ☆18 · Updated 5 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆92 · Updated 9 months ago
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency ☆100 · Updated this week
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated last week
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆123 · Updated 7 months ago
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆145 · Updated last month
- KV cache compression via sparse coding ☆12 · Updated 3 months ago
- [ACL 2024] Official PyTorch implementation of "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact" ☆47 · Updated last year
- ☆80 · Updated 5 months ago
- [ICML 2024 Oral] This project is the official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆67 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆118 · Updated this week
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆39 · Updated last year
- Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆22 · Updated last month
- PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing ☆19 · Updated 5 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆13 · Updated 11 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆55 · Updated 5 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory (see the zeroth-order sketch after this list) ☆180 · Updated last month
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆32 · Updated 3 months ago
- ☆18 · Updated 2 months ago
- ☆69 · Updated 2 months ago
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆47 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs (see the latent-attention sketch after this list) ☆189 · Updated 2 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆51 · Updated 3 months ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆172 · Updated last week
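Two entries above name techniques concrete enough to sketch. First, the ZO2 entry: zeroth-order methods estimate gradients from forward passes alone, so no backward pass or activation storage is needed on the GPU. Below is a minimal MeZO-style (SPSA) update sketch, assuming a standard PyTorch model; the function name, hyperparameters, and seed-replay trick shown are illustrative, not ZO2's actual API.

```python
import torch

def zo_step(model, loss_fn, batch, eps=1e-3, lr=1e-6):
    """One zeroth-order (SPSA) update: two perturbed forward passes,
    no backward pass. loss_fn(model, batch) -> scalar loss tensor."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        # Replaying the same seed regenerates identical noise, so the
        # random direction z never has to be stored.
        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen).to(p.device, p.dtype)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1.0)                    # theta + eps*z
        loss_plus = loss_fn(model, batch)
        perturb(-2.0)                    # theta - eps*z
        loss_minus = loss_fn(model, batch)
        perturb(+1.0)                    # restore theta

        # Directional-derivative estimate along z.
        g = (loss_plus - loss_minus) / (2 * eps)

        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen).to(p.device, p.dtype)
            p.data.add_(-lr * g * z)
    return loss_plus
```

ZO2's contribution, per its title, is pairing this kind of optimizer with CPU offloading; that offloading machinery is not shown here.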
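Second, the Multi-Head Latent Attention entry: MLA shrinks the KV cache by storing one small latent vector per token and reconstructing per-head keys and values from it at attention time. The module below is a deliberately simplified illustration of that latent-KV idea (it omits DeepSeek's decoupled RoPE and the causal mask, and all dimensions are made up), not code from the linked repository.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Attention whose KV cache is a small shared latent, not full K/V."""

    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress token -> latent
        self.k_up = nn.Linear(d_latent, d_model)     # latent -> per-head keys
        self.v_up = nn.Linear(d_latent, d_model)     # latent -> per-head values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, D = x.shape
        c = self.kv_down(x)                          # (B, T, d_latent)
        if latent_cache is not None:                 # incremental decoding
            c = torch.cat([latent_cache, c], dim=1)
        S = c.shape[1]
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(c).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, D)
        return self.out(y), c                        # cache c, not k and v
```

The memory saving comes from caching d_latent floats per token instead of 2 × d_model for full keys and values; with the toy sizes above, that is 128 versus 2048 per token.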