Normalized Transformer (nGPT)
☆201 · Updated Nov 19, 2024
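The core idea behind nGPT (and the reproductions listed below) is representation learning on the hypersphere: embeddings and hidden states are kept at unit L2 norm. A minimal sketch of that projection in NumPy, using a hypothetical `l2_normalize` helper rather than any listed repo's actual API:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project each vector onto the unit hypersphere by dividing by its L2 norm.
    # eps guards against division by zero for (near-)zero vectors.
    norm = np.linalg.norm(x, axis=axis, keepdims=True)
    return x / np.maximum(norm, eps)

# Toy hidden states: a batch of 4 token vectors of width 16.
h = np.random.randn(4, 16)
h_unit = l2_normalize(h)
```

After the projection every row of `h_unit` has norm 1, so dot products between vectors are cosine similarities.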
Alternatives and similar repositories for ngpt
Users interested in ngpt are comparing it to the libraries listed below.
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆293 · Updated Jun 3, 2025
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆248 · Updated Jun 15, 2025
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated Apr 30, 2023
- Code for the paper "Function-Space Learning Rates" ☆25 · Updated Jun 3, 2025
- iQRL: implicitly Quantized Representations for Sample-efficient Reinforcement Learning ☆12 · Updated Jan 8, 2025
- NanoGPT (124M) in 2 minutes ☆5,070 · Updated Mar 29, 2026
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- ☆68 · Updated Mar 21, 2025
- ☆19 · Updated Dec 4, 2025
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Updated Jul 24, 2025
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆111 · Updated Mar 7, 2025
- A curated list of awesome memory-in-reinforcement-learning research materials ☆24 · Updated Sep 5, 2021
- Parallel Associative Scan for Language Models ☆18 · Updated Jan 8, 2024
- ☆25 · Updated May 23, 2025
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆190 · Updated Jan 19, 2026
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆150 · Updated Feb 25, 2026
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated Nov 15, 2025
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated Mar 14, 2025
- ☆35 · Updated Nov 22, 2024
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated May 4, 2025
- Focused on fast experimentation and simplicity ☆80 · Updated Dec 24, 2024
- Evaluating major LLMs on the Abstraction and Reasoning Corpus ☆17 · Updated Nov 9, 2023
- Collection of LLM completions for reasoning-gym task datasets ☆31 · Updated Jul 4, 2025
- ☆10 · Updated Jun 27, 2024
- ☆14 · Updated Dec 19, 2024
- Minimal energy-based transformer ☆43 · Updated Dec 11, 2025
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆132 · Updated Apr 17, 2024
- TiledLower is a dataflow analysis and codegen framework written in Rust ☆13 · Updated Nov 23, 2024
- Fast and memory-efficient exact attention ☆75 · Updated Mar 3, 2025
- Transformer components, but in Triton ☆34 · Updated May 9, 2025
- Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy ☆23 · Updated Oct 28, 2024
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" ☆13 · Updated Jul 18, 2024
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters