PrimeIntellect-ai / prime
prime is a framework for efficient, globally distributed training of AI models over the internet.
☆805 · Updated 3 months ago
Alternatives and similar repositories for prime
Users interested in prime are comparing it to the repositories listed below.
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆528 · Updated 7 months ago
- Distributed Training Over-The-Internet ☆955 · Updated 3 months ago
- Decentralized RL Training at Scale ☆472 · Updated this week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆591 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆346 · Updated 8 months ago
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! ☆1,137 · Updated 7 months ago
- VPTQ: A flexible and extreme low-bit quantization algorithm ☆652 · Updated 4 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,693 · Updated last month
- Minimalistic large language model 3D-parallelism training ☆2,164 · Updated this week
- Muon is Scalable for LLM Training ☆1,289 · Updated 3 weeks ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆822 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆845 · Updated 5 months ago
- Procedural reasoning datasets ☆1,069 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆907 · Updated 3 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,861 · Updated this week
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU. Seamlessly integrated with Torchao, Tra… ☆607 · Updated this week
- An open-source toolkit for LLM distillation ☆717 · Updated last month
- noise_step: Training in 1.58b With No Gradient Memory ☆220 · Updated 8 months ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆524 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,112 · Updated 3 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆868 · Updated last week
- Scalable toolkit for efficient model reinforcement ☆796 · Updated this week
- Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs" ☆519 · Updated last month
- ☆561 · Updated last year
- Efficient LLM Inference over Long Sequences ☆390 · Updated 2 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆261 · Updated 3 weeks ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems ☆537 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculate the attention… ☆1,105 · Updated 2 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers ☆319 · Updated 10 months ago
- Testing baseline LLM performance across various models ☆305 · Updated 3 weeks ago