JoeLi12345 / nGPT
An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere).
☆107 · Updated 8 months ago
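The core idea behind nGPT, as the title suggests, is keeping representations on the unit hypersphere: hidden states (and, in the paper, embedding and weight vectors) are L2-normalized, and each residual update moves the state along the sphere before re-projecting. A minimal NumPy sketch of that normalization step is below; `residual_step` and its `alpha` step size are hypothetical simplifications (the paper uses learnable per-dimension "eigen learning rates"), not the repository's actual API.

```python
import numpy as np

def normalize(x, axis=-1, eps=1e-8):
    """Project vectors onto the unit hypersphere (L2 normalization)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def residual_step(h, update, alpha=0.1):
    """Hypothetical nGPT-style residual update: interpolate the normalized
    state toward the normalized update, then re-project onto the sphere."""
    h = normalize(h)
    return normalize(h + alpha * (normalize(update) - h))

# Each row of h stays a unit vector through the update.
h = normalize(np.random.randn(4, 16))
h = residual_step(h, np.random.randn(4, 16))
```

The re-projection after every residual addition is what distinguishes this from a standard pre-norm transformer block, where only the input to each sublayer is normalized.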
Alternatives and similar repositories for nGPT
Users interested in nGPT are comparing it to the repositories listed below.
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 9 months ago
- smolLM with Entropix sampler on PyTorch ☆150 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Train your own SOTA deductive reasoning model ☆108 · Updated 8 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated 3 weeks ago
- DeMo: Decoupled Momentum Optimization ☆197 · Updated 11 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- ☆136 · Updated last year
- An introduction to LLM sampling ☆79 · Updated 11 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆299 · Updated 2 weeks ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆64 · Updated last month
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 6 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- Simple Transformer in JAX ☆139 · Updated last year
- Storing long contexts in tiny caches with self-study ☆213 · Updated 3 weeks ago
- Lego for GRPO ☆30 · Updated 5 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- Working implementation of DeepSeek MLA ☆45 · Updated 10 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 9 months ago
- ☆28 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆100 · Updated 11 months ago
- RL from zero pretrain, can it be done? Yes. ☆280 · Updated last month
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 2 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 5 months ago
- ☆26 · Updated 10 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆98 · Updated 5 months ago