wbrickner / noise_step
noise_step: Training in 1.58b With No Gradient Memory
☆221 · Updated 9 months ago
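For context, noise_step's premise is that training can replace backpropagation with forward-mode directional derivatives taken along random ternary perturbations, so no gradients or activations need to be stored and the optimizer state reduces to RNG seeds. The sketch below is a minimal, hypothetical illustration of that idea in JAX; the function names, hyperparameters (`n_perturb`, `lr`), and the sign-based accumulation rule are illustrative assumptions, not the repository's actual API.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    """Toy regression loss on a single nonlinear layer."""
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

def noise_step_update(w, x, y, key, n_perturb=16, lr=1e-2):
    """One gradient-free update: probe random ternary directions with
    forward-mode JVPs and step against the sign of each alignment.
    (Illustrative reconstruction, not the repo's actual algorithm.)"""
    update = jnp.zeros_like(w)
    for k in jax.random.split(key, n_perturb):
        # Ternary perturbation direction in {-1, 0, +1}, as in 1.58-bit schemes.
        v = jax.random.choice(k, jnp.array([-1.0, 0.0, 1.0]), shape=w.shape)
        # Directional derivative of the loss along v via forward-mode autodiff:
        # no backward pass, so no activations or gradients are stored.
        _, dloss = jax.jvp(lambda w_: loss(w_, x, y), (w,), (v,))
        update -= jnp.sign(dloss) * v
    return w + lr * update / n_perturb

# Tiny smoke test on random data.
key = jax.random.PRNGKey(0)
kx, ky, kw, kstep = jax.random.split(key, 4)
x = jax.random.normal(kx, (32, 8))
y = jax.random.normal(ky, (32, 4))
w = jax.random.normal(kw, (8, 4))
for step_key in jax.random.split(kstep, 50):
    w = noise_step_update(w, x, y, step_key)
print("final loss:", loss(w, x, y))
```

Because each perturbation is reproducible from its seed and each step contributes only a sign per direction, a training run can in principle be replayed from a log of seeds and signs rather than stored gradient tensors.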
Alternatives and similar repositories for noise_step
Users interested in noise_step are comparing it to the repositories listed below.
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆322 · Updated 11 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 7 months ago
- SIMD quantization kernels ☆87 · Updated last month
- RWKV in nanoGPT style ☆192 · Updated last year
- Normalized Transformer (nGPT) ☆190 · Updated 10 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆296 · Updated last month
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆342 · Updated 9 months ago
- ☆146 · Updated 10 months ago
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated last month
- Exploring Applications of GRPO ☆248 · Updated last month
- Inference of Mamba models in pure C ☆191 · Updated last year
- look how they massacred my boy ☆63 · Updated 11 months ago
- Verification of Google DeepMind's AlphaEvolve 48-multiplication matrix algorithm, a breakthrough in matrix multiplication after 56 years. ☆124 · Updated 3 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 5 months ago
- Async RL Training at Scale ☆669 · Updated this week
- ☆124 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- An open-source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆191 · Updated 2 weeks ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated 2 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆247 · Updated 8 months ago
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model ☆243 · Updated 4 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- ☆999 · Updated this week
- Beyond Language Models: Byte Models are Digital World Simulators ☆329 · Updated last year
- Getting crystal-like representations with harmonic loss ☆194 · Updated 6 months ago
- ☆135 · Updated last year
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 4 months ago
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆828 · Updated 4 months ago