facebookresearch / MobileLLM
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
☆1,250 · Updated this week
Alternatives and similar repositories for MobileLLM:
Users interested in MobileLLM are comparing it to the libraries listed below.
- nanoGPT-style version of Llama 3.1 ☆1,316 · Updated 6 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆841 · Updated this week
- MINT-1T: A one-trillion-token multimodal interleaved dataset. ☆797 · Updated 6 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,930 · Updated 6 months ago
- LLaMA-Omni is a low-latency, high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve spee… ☆2,809 · Updated 3 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆748 · Updated this week
- Everything about the SmolLM2 and SmolVLM family of models ☆1,888 · Updated 2 weeks ago
- DataComp for Language Models ☆1,230 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training ☆1,483 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,002 · Updated last month
- Reaching LLaMA2 Performance with 0.1M Dollars ☆973 · Updated 6 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,946 · Updated last month
- Fast, Flexible and Portable Structured Generation ☆704 · Updated this week
- AllenAI's post-training codebase ☆2,657 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,418 · Updated 2 weeks ago
- Open-weights language model from Google DeepMind, based on Griffin. ☆620 · Updated 7 months ago
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,446 · Updated 11 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆982 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆724 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation, which r… ☆917 · Updated last week
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆588 · Updated last week
- An Open Large Reasoning Model for Real-World Solutions ☆1,444 · Updated 2 months ago
- Sky-T1: Train your own O1-preview model within $450 ☆2,641 · Updated this week
- A PyTorch quantization backend for Optimum ☆883 · Updated last month
- Training Large Language Models to Reason in a Continuous Latent Space ☆877 · Updated 3 weeks ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,790 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆610 · Updated 2 months ago
- ☆502 · Updated 5 months ago
- Implementation of the training framework proposed in "Self-Rewarding Language Model", from Meta AI ☆1,361 · Updated 10 months ago