PrimeIntellect-ai / prime
prime is a framework for efficient, globally distributed training of AI models over the internet.
☆689 · Updated this week
Alternatives and similar repositories for prime:
Users interested in prime are comparing it to the libraries listed below.
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆478 · Updated 2 months ago
- Distributed Training Over-The-Internet ☆893 · Updated 4 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆970 · Updated 3 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆311 · Updated 3 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆622 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,139 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆698 · Updated 2 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆785 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆1,737 · Updated this week
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- Muon is Scalable for LLM Training ☆993 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,048 · Updated last month
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆236 · Updated this week
- An Open Source Toolkit For LLM Distillation ☆562 · Updated 2 months ago
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆406 · Updated this week
- LLM KV cache compression made easy ☆444 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, computes attention with approximate and dynamic sparsity, which r… ☆951 · Updated this week
- Pretraining code for a large-scale depth-recurrent language model ☆709 · Updated 2 weeks ago
- Advanced quantization algorithm for LLMs/VLMs ☆413 · Updated this week
- Fast, Flexible and Portable Structured Generation ☆831 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,358 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆2,532 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆1,927 · Updated this week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,015 · Updated 2 months ago
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆539 · Updated last week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆195 · Updated 8 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆601 · Updated 2 weeks ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆774 · Updated this week
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,020 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago