UbiquitousLearning / PhoneLM
☆63 · Updated 11 months ago
Alternatives and similar repositories for PhoneLM
Users interested in PhoneLM are comparing it to the repositories listed below
- The homepage of the OneBit model quantization framework. ☆193 · Updated 9 months ago
- Awesome Mobile LLMs ☆267 · Updated 3 weeks ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆309 · Updated 5 months ago
- ☆98 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆97 · Updated 5 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆124 · Updated 9 months ago
- FuseAI Project ☆87 · Updated 9 months ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆130 · Updated 3 months ago
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model ☆250 · Updated 5 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆258 · Updated this week
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆264 · Updated last week
- KV cache compression for high-throughput LLM inference ☆144 · Updated 9 months ago
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆68 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆355 · Updated 11 months ago
- ☆202 · Updated 11 months ago
- A general 2-8 bits quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and export to onnx/onnx-runtime easily. ☆180 · Updated 7 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆175 · Updated last year
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆201 · Updated 5 months ago
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆179 · Updated 10 months ago
- Awesome LLM Plaza: daily tracking of all sorts of awesome LLM topics, e.g. LLMs for coding, robotics, reasoning, multimodal, etc. ☆209 · Updated last week
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last week
- ☆152 · Updated 4 months ago
- llama.cpp tutorial on an Android phone ☆134 · Updated 6 months ago
- Efficient Agent Training for Computer Use ☆132 · Updated 2 months ago
- ☆38 · Updated last year
- Experiments on speculative sampling with Llama models ☆126 · Updated 2 years ago
- Data preparation code for the Amber 7B LLM ☆93 · Updated last year
- Make reasoning models scalable ☆47 · Updated 5 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆128 · Updated last week