PrimeIntellect-ai / pccl
PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP
★103 · Updated last week
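The one-line description above is all the detail this listing gives. As a rough, hypothetical illustration of what "fault-tolerant collective communication" means in practice, the sketch below averages a value across whichever peers respond in a round instead of aborting when one drops out. The names (`fault_tolerant_all_reduce`, `try_contribute`) are illustrative only and are not PCCL's actual API.

```python
import random

def try_contribute(peer_id: int, value: float):
    """Simulate fetching one peer's local value over the network.
    Returns None when the peer has dropped out (simulated failure)."""
    if random.random() < 0.2:  # simulated peer/network failure
        return None
    return value

def fault_tolerant_all_reduce(local_values: dict) -> float:
    """Average the values of every peer that responds this round.
    Unreachable peers are skipped instead of aborting the collective."""
    survivors = {}
    for peer_id, value in local_values.items():
        result = try_contribute(peer_id, value)
        if result is not None:
            survivors[peer_id] = result
    if not survivors:
        raise RuntimeError("no peers reachable this round")
    return sum(survivors.values()) / len(survivors)

if __name__ == "__main__":
    # Four peers, each holding one local gradient-like value.
    values = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
    print("average over surviving peers:", fault_tolerant_all_reduce(values))
```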
Alternatives and similar repositories for pccl
Users interested in pccl are comparing it to the libraries listed below.
- SIMD quantization kernels ★79 · Updated last week
- Build compute kernels ★106 · Updated last week
- DeMo: Decoupled Momentum Optimization ★190 · Updated 8 months ago
- Modded vLLM to run pipeline parallelism over public networks ★38 · Updated 3 months ago
- train with kittens! ★62 · Updated 9 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ★128 · Updated 8 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ★66 · Updated 5 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ★52 · Updated this week
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning ★47 · Updated 6 months ago
- Storing long contexts in tiny caches with self-study ★140 · Updated this week
- Decentralized RL Training at Scale ★441 · Updated this week
- ★20 · Updated 7 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ★105 · Updated 5 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ★97 · Updated 3 weeks ago
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ★142 · Updated last year
- ★217 · Updated 7 months ago
- ★45 · Updated last year
- Solidity contracts for the decentralized Prime Network protocol ★24 · Updated last month
- ★14 · Updated last year
- ★217 · Updated last month
- PyTorch Single Controller ★361 · Updated last week
- NSA Triton Kernels written with GPT-5 and Opus 4.1 ★63 · Updated last week
- Load compute kernels from the Hub ★244 · Updated this week
- ★27 · Updated last year
- A 7B-parameter model for mathematical reasoning ★40 · Updated 6 months ago
- train entropix like a champ! ★20 · Updated 10 months ago
- NanoGPT (124M) quality in 2.67B tokens ★28 · Updated 2 weeks ago
- ★130 · Updated 5 months ago
- Long-context evaluation for large language models ★220 · Updated 5 months ago
- look how they massacred my boy ★63 · Updated 10 months ago