Convolutions for Sequence Modeling
☆913 (updated Jun 13, 2024)
Alternatives and similar repositories for safari
Users interested in safari are comparing it to the libraries listed below.
- Language Modeling with the H3 State Space Model (☆522, updated Sep 29, 2023)
- Structured state space sequence models (☆2,854, updated Jul 17, 2024)
- An annotated implementation of the Hyena Hierarchy paper (☆34, updated May 28, 2023)
- ☆316 (updated Jan 8, 2025)
- Accelerated First Order Parallel Associative Scan (☆195, updated Jan 7, 2026)
- TART: A plug-and-play Transformer module for task-agnostic reasoning (☆202, updated Jun 22, 2023)
- Understand and test language model architectures on synthetic tasks (☆257, updated Feb 24, 2026)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆562, updated Dec 28, 2024)
- Official implementation of HyenaDNA, a long-range genomic foundation model built with Hyena (☆766, updated Apr 22, 2025)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆249, updated Jun 6, 2025)
- A MAD laboratory to improve AI architecture designs 🧪 (☆138, updated Dec 17, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,393, updated Feb 21, 2026)
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores (☆343, updated Dec 28, 2024)
- Sequence modeling with Mega (☆303, updated Jan 28, 2023)
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … (☆117, updated Mar 16, 2024)
- ☆62 (updated Dec 8, 2023)
- Tile primitives for speedy kernels (☆3,202, updated Feb 24, 2026)
- JAX/Flax implementation of the Hyena Hierarchy (☆34, updated Apr 27, 2023)
- Foundation Architecture for (M)LLMs (☆3,134, updated Apr 11, 2024)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆695, updated Jan 26, 2026)
- ☆164 (updated Jan 24, 2023)
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) (☆127, updated Oct 11, 2023)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,474, updated this week)
- Implementation of https://srush.github.io/annotated-s4 (☆512, updated Jun 20, 2025)
- Cramming the training of a (BERT-type) language model into limited compute (☆1,363, updated Jun 13, 2024)
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" (☆1,064, updated Mar 7, 2024)
- Annotated version of the Mamba paper (☆497, updated Feb 27, 2024)
- This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Da… (☆493, updated Mar 26, 2024)
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… (☆550, updated Feb 26, 2026)
- Fast and memory-efficient exact attention (☆22,460, updated this week)
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) (☆1,151, updated Jan 11, 2024)
- Butterfly matrix multiplication in PyTorch (☆178, updated Oct 5, 2023)
- Repository for StripedHyena, a state-of-the-art beyond-Transformer architecture (☆413, updated Mar 7, 2024)
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆67, updated Apr 24, 2024)
- ☆316 (updated Jun 21, 2024)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… (☆226, updated Sep 18, 2025)
- The repository for the code of the UltraFastBERT paper (☆519, updated Mar 24, 2024)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,083, updated Jul 1, 2025)