Convolutions for Sequence Modeling
☆911 · Updated Jun 13, 2024
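For context on the common thread among the repositories below: most build on long (sequence-length) convolutions computed in O(L log L) with the FFT rather than O(L²) directly. A minimal sketch of that primitive, using NumPy (the function name `fft_long_conv` is illustrative, not from any of the listed repos):

```python
import numpy as np

def fft_long_conv(u, k):
    """Causal long convolution of sequence u with kernel k via FFT.

    Zero-padding to length 2L turns the FFT's circular convolution
    into a linear one; truncating to L keeps the causal outputs.
    """
    L = u.shape[-1]
    n = 2 * L  # pad to avoid circular wraparound
    u_f = np.fft.rfft(u, n=n)
    k_f = np.fft.rfft(k, n=n)
    return np.fft.irfft(u_f * k_f, n=n)[..., :L]

# Agrees with direct convolution truncated to the first L outputs
u = np.random.randn(8)
k = np.random.randn(8)
direct = np.convolve(u, k)[:8]
print(np.allclose(fft_long_conv(u, k), direct))
```

Projects such as FlashFFTConv (listed below) specialize exactly this operation for tensor-core hardware.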
Alternatives and similar repositories for safari
Users interested in safari are comparing it to the libraries listed below.
- Language Modeling with the H3 State Space Model (☆522, updated Sep 29, 2023)
- An annotated implementation of the Hyena Hierarchy paper (☆34, updated May 28, 2023)
- Structured state space sequence models (☆2,875, updated Jul 17, 2024)
- ☆317, updated Jan 8, 2025
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch (☆102, updated Feb 25, 2023)
- Accelerated First Order Parallel Associative Scan (☆197, updated Jan 7, 2026)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆561, updated Dec 28, 2024)
- Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena (☆774, updated Apr 22, 2025)
- Understand and test language model architectures on synthetic tasks. (☆264, updated Mar 22, 2026)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆251, updated Jun 6, 2025)
- Implementation of GateLoop Transformer in Pytorch and Jax (☆92, updated Jun 18, 2024)
- TART: A plug-and-play Transformer module for task-agnostic reasoning (☆202, updated Jun 22, 2023)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,458, updated Mar 30, 2026)
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores (☆350, updated Dec 28, 2024)
- ☆63, updated Dec 8, 2023
- A MAD laboratory to improve AI architecture designs 🧪 (☆141, updated Dec 17, 2024)
- Tile primitives for speedy kernels (☆3,304, updated Mar 28, 2026)
- ☆165, updated Jan 24, 2023
- Official Repository of Pretraining Without Attention (BiGS), BiGS is the first model to achieve BERT-level transfer learning on the GLUE … (☆118, updated Mar 16, 2024)
- Sequence modeling with Mega. (☆303, updated Jan 28, 2023)
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) (☆127, updated Oct 11, 2023)
- JAX/Flax implementation of the Hyena Hierarchy (☆34, updated Apr 27, 2023)
- Implementation of Block Recurrent Transformer - Pytorch (☆224, updated Aug 20, 2024)
- Annotated version of the Mamba paper (☆500, updated Feb 27, 2024)
- 🚀 Efficient implementations for emerging model architectures (☆4,823, updated this week)
- Foundation Architecture for (M)LLMs (☆3,133, updated Apr 11, 2024)
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch (☆422, updated Jan 6, 2025)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆703, updated Jan 26, 2026)
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT (☆226, updated Mar 25, 2026)
- Implementation of https://srush.github.io/annotated-s4 (☆515, updated Jun 20, 2025)
- Fast and memory-efficient exact attention (☆23,185, updated this week)
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch (☆654, updated Dec 27, 2024)
- Cramming the training of a (BERT-type) language model into limited compute. (☆1,360, updated Jun 13, 2024)
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" (☆1,065, updated Mar 7, 2024)
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆68, updated Apr 24, 2024)
- ☆167, updated Jul 5, 2023
- ☆317, updated Jun 21, 2024
- Repository for StripedHyena, a state-of-the-art beyond-Transformer architecture (☆419, updated Mar 7, 2024)