wbrickner / noise_step
noise_step: Training in 1.58b With No Gradient Memory
☆221 · Updated 11 months ago
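As a rough illustration of what "1.58b" refers to: ternary weights in {-1, 0, +1} carry log2(3) ≈ 1.58 bits each. A minimal sketch of ternarization (not the repo's actual code; the threshold value is an assumption for illustration):

```python
def ternarize(weights, thresh=0.05):
    # Map each real-valued weight to {-1, 0, +1}; a ternary value carries
    # log2(3) ≈ 1.58 bits of information, hence "1.58b".
    # `thresh` is an illustrative cutoff, not a value taken from noise_step.
    return [1 if w > thresh else -1 if w < -thresh else 0 for w in weights]

print(ternarize([0.3, -0.01, -0.7, 0.04]))  # [1, 0, -1, 0]
```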
Alternatives and similar repositories for noise_step
Users interested in noise_step are comparing it to the repositories listed below.
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 7 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆330 · Updated last year
- RWKV in nanoGPT style ☆196 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- SIMD quantization kernels ☆93 · Updated 3 months ago
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated 3 months ago
- ☆129 · Updated 11 months ago
- look how they massacred my boy ☆63 · Updated last year
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆283 · Updated last year
- Simple & Scalable Pretraining for Neural Architecture Research ☆304 · Updated last month
- GRadient-INformed MoE ☆264 · Updated last year
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆272 · Updated 2 weeks ago
- ☆148 · Updated last year
- Getting crystal-like representations with harmonic loss ☆192 · Updated 8 months ago
- Exploring Applications of GRPO ☆249 · Updated 3 months ago
- Normalized Transformer (nGPT) ☆193 · Updated last year
- ☆136 · Updated last year
- A really tiny autograd engine ☆96 · Updated 6 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆359 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated 10 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated 2 months ago
- An open-source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆197 · Updated last week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- Simple Transformer in Jax ☆139 · Updated last year
- RL from zero pretrain, can it be done? Yes. ☆282 · Updated 2 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆73 · Updated 6 months ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 10 months ago
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆550 · Updated 10 months ago
- Inference of Mamba models in pure C ☆194 · Updated last year