Noumena-Network / NSA-Test — Links
NSA Triton kernels written with GPT5 and Opus 4.1
☆65 · Updated 3 months ago
Alternatives and similar repositories for NSA-Test
Users interested in NSA-Test are comparing it to the libraries listed below.
- Storing long contexts in tiny caches with self-study ☆218 · Updated last month
- look how they massacred my boy ☆63 · Updated last year
- SIMD quantization kernels ☆92 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated last month
- rl from zero pretrain, can it be done? yes. ☆281 · Updated 2 months ago
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated last month
- Simple Transformer in Jax ☆139 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆138 · Updated 2 months ago
- an open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 8 months ago
- train entropix like a champ! ☆20 · Updated last year
- ☆40 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- Lego for GRPO ☆30 · Updated 6 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 7 months ago
- ☆106 · Updated last month
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 8 months ago
- train with kittens! ☆63 · Updated last year
- A reading list of relevant papers and projects on foundation model annotation ☆28 · Updated 9 months ago
- ☆68 · Updated 6 months ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 6 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 8 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 7 months ago
- Long-context evaluation for large language models ☆224 · Updated 9 months ago
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 4 months ago
- Simple repository for training small reasoning models ☆46 · Updated 9 months ago
- ☆13 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 11 months ago