PrimeIntellect-ai / prime-vllm
Modded vLLM to run pipeline parallelism over public networks
☆41 · Updated 7 months ago
Alternatives and similar repositories for prime-vllm
Users interested in prime-vllm are comparing it to the libraries listed below.
- Solidity contracts for the decentralized Prime Network protocol · ☆27 · Updated 5 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP · ☆141 · Updated 3 months ago
- TOPLOC is a novel method for verifiable inference that enables users to verify that LLM providers are using the correct model configurat… · ☆50 · Updated 8 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆108 · Updated 9 months ago
- Official CLI and Python SDK for Prime Intellect: access GPU compute, remote sandboxes, RL environments, and distributed training infrast… · ☆121 · Updated last week
- ☆136 · Updated 9 months ago
- SIMD quantization kernels · ☆93 · Updated 3 months ago
- DeMo: Decoupled Momentum Optimization · ☆198 · Updated last year
- ☆122 · Updated last year
- MoE training for Me and You and maybe other people · ☆298 · Updated last week
- look how they massacred my boy · ☆63 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆59 · Updated 2 months ago
- train entropix like a champ! · ☆20 · Updated last year
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale · ☆135 · Updated last month
- Simple Transformer in JAX · ☆141 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens · ☆28 · Updated 3 months ago
- Plotting (entropy, varentropy) for small LMs · ☆99 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ☆174 · Updated 11 months ago
- smolLM with Entropix sampler on PyTorch · ☆149 · Updated last year
- Storing long contexts in tiny caches with self-study · ☆228 · Updated 3 weeks ago
- Train your own SOTA deductive reasoning model · ☆107 · Updated 9 months ago
- NSA Triton kernels written with GPT-5 and Opus 4.1 · ☆69 · Updated 4 months ago
- A 7B-parameter model for mathematical reasoning · ☆40 · Updated 10 months ago
- ☆22 · Updated 11 months ago
- ☆13 · Updated last year
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. · ☆77 · Updated 10 months ago
- Long-context evaluation for large language models · ☆224 · Updated 9 months ago
- Just a bunch of benchmark logs for different LLMs · ☆119 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers · ☆73 · Updated 8 months ago
- ☆64 · Updated last year