PrimeIntellect-ai / pi-quant
SIMD quantization kernels
☆79 Updated last week
Alternatives and similar repositories for pi-quant
Users interested in pi-quant are comparing it to the libraries listed below.
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆103 Updated last week
- Simple Transformer in Jax ☆139 Updated last year
- look how they massacred my boy ☆63 Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 Updated 3 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 Updated 5 months ago
- Decentralized RL Training at Scale ☆441 Updated this week
- smol models are fun too ☆92 Updated 9 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆69 Updated 4 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 Updated 5 months ago
- DeMo: Decoupled Momentum Optimization ☆190 Updated 8 months ago
- ☆38 Updated last year
- train entropix like a champ! ☆20 Updated 10 months ago
- rl from zero pretrain, can it be done? yes. ☆250 Updated last week
- smolLM with Entropix sampler on pytorch ☆150 Updated 9 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 Updated this week
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆63 Updated 9 months ago
- Modify Entropy Based Sampling to work with Mac Silicon via MLX ☆49 Updated 9 months ago
- explore token trajectory trees on instruct and base models ☆133 Updated 2 months ago
- ☆98 Updated 2 weeks ago
- PTX-Tutorial Written Purely By AIs (Deep Research of Openai and Claude 3.7) ☆66 Updated 5 months ago
- ☆27 Updated last year
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆63 Updated last week
- Simple & Scalable Pretraining for Neural Architecture Research ☆287 Updated 2 weeks ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆73 Updated 6 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆95 Updated last month
- ☆130 Updated 5 months ago
- An introduction to LLM Sampling ☆79 Updated 8 months ago
- A graph visualization of attention ☆57 Updated 3 months ago
- Modded vLLM to run pipeline parallelism over public networks ☆38 Updated 3 months ago
- ☆66 Updated 3 months ago