HazyResearch / safari
Convolutions for Sequence Modeling
☆911 · Jun 13, 2024 · Updated last year
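Most of the repositories below build on the same primitive that safari explores: replacing attention with a long, depthwise convolution over the sequence dimension, computed with FFTs so the cost is O(L log L) rather than attention's O(L²). The sketch below illustrates that idea in plain PyTorch; the class, parameter names, and shapes are illustrative assumptions, not the safari repository's actual API.

```python
import torch
import torch.nn as nn


class LongConv(nn.Module):
    """Depthwise long convolution over the sequence dimension via FFT.

    Minimal sketch of the long-convolution sequence mixer that models like
    H3 / Hyena build on; names and shapes here are illustrative only.
    """

    def __init__(self, d_model: int, seq_len: int):
        super().__init__()
        # One length-L filter per channel. (In the papers these filters are
        # usually parameterized implicitly, e.g. by a small MLP over positions.)
        self.kernel = nn.Parameter(torch.randn(d_model, seq_len) * 0.02)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, seq_len, d_model)
        L = u.size(1)
        # Zero-pad to length 2L so the circular FFT convolution becomes a
        # causal (linear) convolution, then multiply in the frequency domain.
        k_f = torch.fft.rfft(self.kernel, n=2 * L)            # (d_model, L + 1)
        u_f = torch.fft.rfft(u.transpose(1, 2), n=2 * L)      # (batch, d_model, L + 1)
        y = torch.fft.irfft(u_f * k_f, n=2 * L)[..., :L]      # (batch, d_model, L)
        return y.transpose(1, 2)                              # (batch, seq_len, d_model)


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)
    y = LongConv(d_model=64, seq_len=1024)(x)
    print(y.shape)  # torch.Size([2, 1024, 64])
```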
Alternatives and similar repositories for safari
Users interested in safari are comparing it to the libraries listed below.
- Language Modeling with the H3 State Space Model · ☆522 · Sep 29, 2023 · Updated 2 years ago
- Structured state space sequence models · ☆2,842 · Jul 17, 2024 · Updated last year
- An annotated implementation of the Hyena Hierarchy paper · ☆34 · May 28, 2023 · Updated 2 years ago
- ☆314 · Jan 8, 2025 · Updated last year
- Accelerated First Order Parallel Associative Scan · ☆196 · Jan 7, 2026 · Updated last month
- TART: A plug-and-play Transformer module for task-agnostic reasoning · ☆202 · Jun 22, 2023 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. · ☆252 · Jan 12, 2026 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆562 · Dec 28, 2024 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax · ☆92 · Jun 18, 2024 · Updated last year
- Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena · ☆758 · Apr 22, 2025 · Updated 9 months ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Oct 9, 2022 · Updated 3 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆248 · Jun 6, 2025 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 · ☆138 · Dec 17, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… · ☆14,351 · Updated this week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores · ☆342 · Dec 28, 2024 · Updated last year
- Sequence modeling with Mega. · ☆303 · Jan 28, 2023 · Updated 3 years ago
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆116 · Mar 16, 2024 · Updated last year
- ☆62 · Dec 8, 2023 · Updated 2 years ago
- Tile primitives for speedy kernels · ☆3,139 · Updated this week
- JAX/Flax implementation of the Hyena Hierarchy · ☆34 · Apr 27, 2023 · Updated 2 years ago
- Implementation of Block Recurrent Transformer - Pytorch · ☆224 · Aug 20, 2024 · Updated last year
- Foundation Architecture for (M)LLMs · ☆3,130 · Apr 11, 2024 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆693 · Jan 26, 2026 · Updated 2 weeks ago
- ☆163 · Jan 24, 2023 · Updated 3 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) · ☆126 · Oct 11, 2023 · Updated 2 years ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch · ☆422 · Jan 6, 2025 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,379 · Updated this week
- Implementation of https://srush.github.io/annotated-s4 · ☆513 · Jun 20, 2025 · Updated 7 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT · ☆224 · Aug 20, 2024 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. · ☆1,361 · Jun 13, 2024 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ☆1,066 · Mar 7, 2024 · Updated last year
- Annotated version of the Mamba paper · ☆496 · Feb 27, 2024 · Updated last year
- Fast and memory-efficient exact attention · ☆22,231 · Updated this week
- This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Da… · ☆494 · Mar 26, 2024 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ☆548 · Updated this week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch · ☆655 · Dec 27, 2024 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention · ☆100 · Aug 18, 2024 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 · ☆1,143 · Jan 11, 2024 · Updated 2 years ago
- Butterfly matrix multiplication in PyTorch · ☆178 · Oct 5, 2023 · Updated 2 years ago