Language Modeling with the H3 State Space Model
☆522 · Updated Sep 29, 2023
Alternatives and similar repositories for H3
Users interested in H3 are comparing it to the libraries listed below.
- Convolutions for Sequence Modeling ☆913 · Updated Jun 13, 2024
- Sequence modeling with Mega. ☆303 · Updated Jan 28, 2023
- Accelerated First Order Parallel Associative Scan ☆195 · Updated Jan 7, 2026
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated Dec 28, 2024
- Structured state space sequence models ☆2,854 · Updated Jul 17, 2024
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,363 · Updated Jun 13, 2024
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,585 · Updated Jan 28, 2026
- ☆51 · Updated Jan 28, 2024
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Updated Dec 2, 2023
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Updated Jun 6, 2025
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated Apr 27, 2023
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆343 · Updated Dec 28, 2024
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated May 6, 2023
- ☆316 · Updated Jan 8, 2025
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- Official repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated Jun 2, 2024
- ☆29 · Updated Jul 9, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE benchmark ☆117 · Updated Mar 16, 2024
- ☆164 · Updated Jan 24, 2023
- Implementation of https://srush.github.io/annotated-s4 ☆512 · Updated Jun 20, 2025
- [NeurIPS 2023 spotlight] Official implementation of HGRN from the NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" ☆67 · Updated Apr 24, 2024
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). ☆14,393 · Updated Feb 21, 2026
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated Aug 12, 2023
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated Apr 30, 2023
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆138 · Updated Apr 30, 2024
- Foundation Architecture for (M)LLMs ☆3,135 · Updated Apr 11, 2024
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,382 · Updated Oct 28, 2024
- ☆2,946 · Updated Jan 15, 2026
- Understand and test language model architectures on synthetic tasks. ☆257 · Updated Feb 24, 2026
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,738 · Updated Jan 8, 2024
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on the Annotated S4. ☆89 · Updated Mar 1, 2024
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- RWKV model implementation ☆37 · Updated Jul 15, 2023
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆552 · Updated Oct 30, 2023
- ☆52 · Updated Jan 19, 2023
- Annotated version of the Mamba paper ☆497 · Updated Feb 27, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆7,997 · Updated Feb 26, 2026
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) ☆1,149 · Updated Jan 11, 2024