erogol / BlaGPT
Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible experimentation and exploration.
☆95 · Updated last week
Alternatives and similar repositories for BlaGPT
Users interested in BlaGPT are comparing it to the libraries listed below.
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆54 · Updated 11 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 10 months ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- DPO, but faster 🚀 ☆46 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆75 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆229 · Updated 7 months ago
- MatFormer repo ☆70 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated last month
- ☆91 · Updated last year
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆53 · Updated 2 weeks ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆82 · Updated 2 months ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch ☆134 · Updated 3 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆86 · Updated 3 weeks ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" ☆54 · Updated last year
- Attempt to make multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆163 · Updated 2 weeks ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Updated last month
- Here we will test various linear attention designs. ☆62 · Updated last year
- [EMNLP Main '25] LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation ☆146 · Updated 8 months ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆25 · Updated 3 months ago
- Implementation of Google's USM speech model in PyTorch ☆34 · Updated 2 weeks ago