UbiquitousLearning / PhoneLM
☆65 · Updated last year
Alternatives and similar repositories for PhoneLM
Users interested in PhoneLM are comparing it to the libraries listed below.
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆68 · Updated last year
- ☆101 · Updated last year
- High-speed and easy-to-use LLM serving framework for local deployment ☆139 · Updated 4 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆103 · Updated 7 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆317 · Updated 3 weeks ago
- Efficient Agent Training for Computer Use ☆134 · Updated 3 months ago
- FuseAI Project ☆87 · Updated 10 months ago
- The homepage of the OneBit model quantization framework. ☆196 · Updated 10 months ago
- [NeurIPS 2025] Simple extension to vLLM that helps you speed up reasoning models without training. ☆214 · Updated 6 months ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens. ☆276 · Updated last month
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated 11 months ago
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆182 · Updated 11 months ago
- A repository aimed at pruning DeepSeek V3, R1, and R1-Zero to a usable size ☆81 · Updated 3 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆169 · Updated 3 weeks ago
- Awesome Mobile LLMs ☆282 · Updated 3 weeks ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- ☆204 · Updated last year
- ☆88 · Updated 7 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last month
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆524 · Updated last month
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆182 · Updated last year
- ☆63 · Updated 7 months ago
- ☆80 · Updated last month
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆217 · Updated last month
- [CoLM'25] The official implementation of the paper “MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression” ☆154 · Updated 3 weeks ago
- Official implementation of APB (ACL 2025 Main, Oral) ☆32 · Updated 9 months ago
- ☆38 · Updated last year