PrimeIntellect-ai / pi-quant
SIMD quantization kernels
☆87 · Updated last month
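pi-quant's tagline above is just "SIMD quantization kernels". As a rough illustration of what that family of kernels does, here is a minimal, hypothetical C sketch of symmetric int8 absmax quantization. It is not pi-quant's actual API (the function name and signature are invented), and it leaves vectorization to the compiler (e.g. `-O3 -march=native`) rather than using explicit intrinsics.

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch, not pi-quant's actual API: symmetric int8 "absmax"
 * quantization of n floats with a single scale for the block. Real SIMD
 * kernels do the same math with explicit vector intrinsics; here the plain
 * loops are left for the compiler to auto-vectorize. */
void quantize_i8_absmax(const float *x, size_t n, int8_t *q, float *scale_out) {
    /* Pass 1: find the largest absolute value in the block. */
    float amax = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = fabsf(x[i]);
        if (a > amax) amax = a;
    }

    /* Map [-amax, amax] onto the int8 range [-127, 127]. */
    float scale = (amax > 0.0f) ? amax / 127.0f : 1.0f;
    float inv   = 1.0f / scale;

    /* Pass 2: scale, round to nearest, clamp, and store as int8. */
    for (size_t i = 0; i < n; i++) {
        long v = lrintf(x[i] * inv);
        if (v >  127) v =  127;
        if (v < -127) v = -127;
        q[i] = (int8_t)v;
    }
    *scale_out = scale;  /* dequantize later as x[i] ~= q[i] * scale */
}
```

Dequantization is just `q[i] * scale`, which is why per-block absmax schemes like this are a common building block in int8 inference and training paths.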
Alternatives and similar repositories for pi-quant
Users interested in pi-quant are comparing it to the libraries listed below.
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆133 · Updated last month
- Simple Transformer in Jax ☆139 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆68 · Updated 5 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 7 months ago
- smol models are fun too ☆93 · Updated 11 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 11 months ago
- Storing long contexts in tiny caches with self-study ☆201 · Updated this week
- Quantized LLM training in pure CUDA/C++. ☆206 · Updated this week
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 5 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of Openai and Claude 3.7) ☆66 · Updated 7 months ago
- The Prime Intellect CLI provides a powerful command-line interface for managing GPU resources across various providers ☆100 · Updated this week
- rl from zero pretrain, can it be done? yes. ☆277 · Updated 3 weeks ago
- Training-Ready RL Environments + Evals ☆128 · Updated this week
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆297 · Updated 2 months ago
- train entropix like a champ! ☆20 · Updated last year
- ☆21 · Updated 9 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆103 · Updated 3 weeks ago
- A graph visualization of attention ☆57 · Updated 5 months ago
- ☆105 · Updated this week
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- A really tiny autograd engine ☆95 · Updated 4 months ago
- ☆211 · Updated last week
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆62 · Updated 11 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆65 · Updated 2 months ago