PrimeIntellect-ai / pi-quant
SIMD quantization kernels
☆ 86 · Updated this week
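pi-quant provides SIMD quantization kernels. For context, below is a minimal scalar sketch of symmetric int8 quantization, the kind of inner loop such kernels vectorize. This is illustrative C only, not pi-quant's actual API; the function name and layout are assumptions.

```c
/* Illustrative scalar reference for symmetric int8 quantization.
 * NOT pi-quant's API; a real SIMD kernel vectorizes these loops. */
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Quantize n floats to int8 with one per-tensor scale; returns the scale. */
static float quantize_i8(const float *x, int8_t *q, size_t n) {
    float amax = 0.0f;
    for (size_t i = 0; i < n; i++) {          /* find absolute maximum */
        float a = fabsf(x[i]);
        if (a > amax) amax = a;
    }
    float scale = amax > 0.0f ? amax / 127.0f : 1.0f;
    for (size_t i = 0; i < n; i++) {          /* round and clamp to int8 */
        float v = roundf(x[i] / scale);
        if (v > 127.0f)  v = 127.0f;
        if (v < -128.0f) v = -128.0f;
        q[i] = (int8_t)v;
    }
    return scale;                             /* dequantize via x ~ q * scale */
}

int main(void) {
    float x[4] = {0.1f, -2.5f, 3.3f, 0.0f};
    int8_t q[4];
    float s = quantize_i8(x, q, 4);
    printf("scale=%f q=[%d %d %d %d]\n", s, q[0], q[1], q[2], q[3]);
    return 0;
}
```

A production SIMD kernel would compute the absolute maximum and the round-and-clamp loop with vector instructions (e.g. AVX2 or NEON) over blocks of the tensor, but the arithmetic is the same.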
Alternatives and similar repositories for pi-quant
Users interested in pi-quant are comparing it to the libraries listed below.
- Simple Transformer in Jax ☆ 139 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆ 117 · Updated last week
- look how they massacred my boy ☆ 64 · Updated 10 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆ 105 · Updated 6 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆ 71 · Updated 4 months ago
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆ 67 · Updated 3 months ago
- Storing long contexts in tiny caches with self-study ☆ 179 · Updated last week
- DeMo: Decoupled Momentum Optimization ☆ 190 · Updated 9 months ago
- Plotting (entropy, varentropy) for small LMs ☆ 98 · Updated 3 months ago
- rl from zero pretrain, can it be done? yes. ☆ 265 · Updated 3 weeks ago
- smolLM with Entropix sampler on pytorch ☆ 150 · Updated 10 months ago
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆ 65 · Updated last month
- smol models are fun too ☆ 93 · Updated 10 months ago
- ☆ 27 · Updated last year
- ☆ 39 · Updated last year
- Simple & Scalable Pretraining for Neural Architecture Research ☆ 291 · Updated 3 weeks ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆ 83 · Updated 3 weeks ago
- train entropix like a champ! ☆ 20 · Updated 11 months ago
- A really tiny autograd engine ☆ 95 · Updated 3 months ago
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) ☆ 66 · Updated 5 months ago
- Decentralized RL Training at Scale ☆ 569 · Updated this week
- Compiling useful links, papers, benchmarks, ideas, etc. ☆ 45 · Updated 5 months ago
- This is a zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆ 74 · Updated last week
- Modded vLLM to run pipeline parallelism over public networks ☆ 39 · Updated 3 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆ 96 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆ 96 · Updated last month
- ☆ 21 · Updated 8 months ago
- An introduction to LLM Sampling ☆ 80 · Updated 8 months ago
- Dion optimizer algorithm ☆ 338 · Updated last week
- Training-Ready RL Environments + Evals ☆ 77 · Updated this week