PlatformNetwork / platform
🧠 Platform is a Bittensor subnet built to advance collaborative AI research through multiple simultaneous challenges powered by sub-subnet technology. It enables miners to compete and cooperate across diverse challenges while ensuring confidentiality, transparent evaluation, and the continuous pursuit of the most efficient and innovative code.
☆39 · Updated this week
Alternatives and similar repositories for platform
Users interested in platform are comparing it to the libraries listed below.
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago
- Coding CUDA every day! ☆73 · Updated this week
- Modded vLLM to run pipeline parallelism over public networks ☆40 · Updated 8 months ago
- SIMD quantization kernels ☆94 · Updated 5 months ago
- Grokking on modular arithmetic in less than 150 epochs in MLX ☆14 · Updated last year
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago
- Anima Machina ☆33 · Updated this week
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- PTX tutorial written purely by AIs (Deep Research from OpenAI and Claude 3.7) ☆66 · Updated 10 months ago
- Solidity contracts for the decentralized Prime Network protocol ☆26 · Updated 7 months ago
- Some mixture-of-experts architecture implementations ☆25 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- Train entropix like a champ! ☆20 · Updated last year
- ☆147 · Updated this week
- Learn CUDA with PyTorch ☆200 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆251 · Updated 9 months ago
- Code for data-aware compression of DeepSeek models ☆70 · Updated last month
- Quantized LLM training in pure CUDA/C++ ☆238 · Updated 3 weeks ago
- 👷 Build compute kernels ☆215 · Updated 2 weeks ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer (see the sketch after this list) ☆233 · Updated 7 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- AccelOpt: Self-improving Agents for AI Accelerator Kernel Optimization ☆20 · Updated last week
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 11 months ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- ☆14 · Updated last year
- Load compute kernels from the Hub ☆397 · Updated this week
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale ☆138 · Updated 3 months ago
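For orientation on the Flash-Muon entry above: the sketch below shows the general shape of a Muon-style update (heavy-ball momentum followed by approximate orthogonalization via a Newton-Schulz iteration) in plain PyTorch. It is an illustrative reference under stated assumptions, not the Flash-Muon repository's implementation; the function names, coefficients, and hyperparameters here are chosen for the example, and the real project presumably focuses on making this step fast on GPUs.

```python
# Minimal, illustrative sketch of a Muon-style optimizer step in plain PyTorch.
# NOT the Flash-Muon implementation; names and defaults are assumptions for the example.
import torch

def newton_schulz_orthogonalize(g: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize g (i.e., approximate U V^T of its SVD)
    using a fixed-coefficient quintic Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315      # illustrative iteration coefficients
    x = g / (g.norm() + 1e-7)               # normalize so the iteration is stable
    transposed = x.shape[0] > x.shape[1]    # iterate on the wide orientation
    if transposed:
        x = x.T
    for _ in range(steps):
        A = x @ x.T
        x = a * x + (b * A + c * A @ A) @ x
    return x.T if transposed else x

@torch.no_grad()
def muon_step(param: torch.Tensor, grad: torch.Tensor, momentum_buf: torch.Tensor,
              lr: float = 0.02, beta: float = 0.95):
    """One Muon-style update for a 2D weight matrix."""
    momentum_buf.mul_(beta).add_(grad)                  # heavy-ball momentum
    update = newton_schulz_orthogonalize(momentum_buf)  # orthogonalized direction
    param.add_(update, alpha=-lr)                       # descend along that direction
    return param, momentum_buf

# Usage sketch: apply one step to a random 2D weight.
if __name__ == "__main__":
    w = torch.randn(256, 128)
    g = torch.randn_like(w)
    buf = torch.zeros_like(w)
    muon_step(w, g, buf)
```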