PrimeIntellect-ai / prime
prime is a framework for efficient, globally distributed training of AI models over the internet.
☆626 · Updated this week
Alternatives and similar repositories for prime:
Users interested in prime are comparing it to the libraries listed below:
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training · ☆424 · Updated 2 weeks ago
- Distributed Training Over-The-Internet · ☆866 · Updated last month
- A Self-adaptation Framework that adapts LLMs for unseen tasks in real-time! · ☆831 · Updated 2 weeks ago
- An Open Source Toolkit For LLM Distillation · ☆442 · Updated 3 weeks ago
- VPTQ, A Flexible and Extreme low-bit quantization algorithm · ☆572 · Updated last week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM · ☆894 · Updated this week
- Efficient LLM Inference over Long Sequences · ☆349 · Updated last month
- Training Large Language Model to Reason in a Continuous Latent Space · ☆746 · Updated this week
- ☆497 · Updated 5 months ago
- System 2 Reasoning Link Collection · ☆751 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purpose · ☆670 · Updated this week
- Synthetic Data curation for post-training and structured data extraction · ☆575 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs · ☆714 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training · ☆1,400 · Updated this week
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations · ☆845 · Updated 2 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆737 · Updated 2 weeks ago
- Recipes to scale inference-time compute of open models · ☆975 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models · ☆536 · Updated last month
- GRadient-INformed MoE · ☆261 · Updated 4 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆288 · Updated last month
- veRL: Volcano Engine Reinforcement Learning for LLM · ☆1,135 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ☆1,022 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" · ☆833 · Updated last week
- LLM KV cache compression made easy · ☆356 · Updated this week
- Efficient, Flexible and Portable Structured Generation · ☆619 · Updated this week
- ☆192 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of attention, which r… · ☆890 · Updated last week
- Gemma 2 optimized for your local machine. · ☆357 · Updated 5 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) · ☆394 · Updated 4 months ago
- ☆243 · Updated last month