idiap / sigma-gpt
σ-GPT: A New Approach to Autoregressive Models
☆64 · Updated 9 months ago
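For context on what this repository implements: the σ-GPT paper trains a standard decoder-only GPT on randomly permuted token orders, using a double positional encoding (the position of the current token plus the position of the token to be predicted next) so the model can generate in any order. Below is a minimal, unofficial sketch of that embedding scheme in PyTorch; the class name, shapes, and dummy-target handling are illustrative assumptions, not the idiap/sigma-gpt API.

```python
import torch
import torch.nn as nn

class SigmaGPTEmbedding(nn.Module):
    """Double positional encoding: current position + position to predict next.

    Hypothetical module illustrating the idea from the sigma-GPT paper;
    not taken from the official repository.
    """
    def __init__(self, vocab_size: int, max_len: int, d_model: int):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos_current = nn.Embedding(max_len, d_model)  # where this token sits
        self.pos_target = nn.Embedding(max_len, d_model)   # which position comes next

    def forward(self, tokens: torch.Tensor, order: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T) token ids already gathered into the permuted order
        # order:  (B, T) original positions of those tokens (a random permutation)
        # The target position at step t is the original position of the token
        # at t+1; the final step repeats its own position as a harmless dummy.
        target = torch.cat([order[:, 1:], order[:, -1:]], dim=1)
        return self.tok(tokens) + self.pos_current(order) + self.pos_target(target)

# Usage sketch: shuffle each sequence, embed, then run any causal transformer
# with the usual next-token cross-entropy loss on the permuted sequence.
B, T, V, D = 2, 8, 100, 32
x = torch.randint(0, V, (B, T))
order = torch.stack([torch.randperm(T) for _ in range(B)])
shuffled = torch.gather(x, 1, order)
emb = SigmaGPTEmbedding(V, T, D)(shuffled, order)
print(emb.shape)  # torch.Size([2, 8, 32])
```

Training then proceeds as ordinary next-token cross-entropy, except "next" means next in the sampled permutation, which is what enables arbitrary-order decoding and the paper's burst sampling.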
Alternatives and similar repositories for sigma-gpt
Users interested in sigma-gpt are comparing it to the repositories listed below.
- Focused on fast experimentation and simplicity ☆73 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆128 · Updated 3 weeks ago
- ☆53 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆100 · Updated 5 months ago
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 5 months ago
- ☆80 · Updated last year
- ☆78 · Updated 11 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆127 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆197 · Updated 2 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆100 · Updated 2 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆126 · Updated 9 months ago
- ☆51 · Updated last year
- ☆71 · Updated 9 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- WIP ☆93 · Updated 9 months ago
- ☆33 · Updated 5 months ago
- Minimal (truly) muP implementation, consistent with the notation of the TP4 and TP5 papers ☆14 · Updated 2 weeks ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 4 months ago
- ☆49 · Updated last year
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆80 · Updated last week
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆50 · Updated this week
- ☆63 · Updated 8 months ago
- Repository for code used in the xVal paper ☆134 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 5 months ago
- The official repository for HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction. ☆36 · Updated 2 months ago
- ☆60 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆190 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆62 · Updated 7 months ago
- Normalized Transformer (nGPT) ☆181 · Updated 6 months ago