knotgrass / Griffin
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
☆9 · Updated 4 months ago
Alternatives and similar repositories for Griffin:
Users interested in Griffin are comparing it to the repositories listed below.
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆130 · Updated 11 months ago
- PyTorch implementation of Retentive Network: A Successor to Transformer for Large Language Models ☆14 · Updated last year
- Toy genetic algorithm in PyTorch ☆39 · Updated this week
- Implementation of MambaFormer in PyTorch ++ Zeta from the paper: "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learnin…" ☆20 · Updated this week
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆52 · Updated 3 weeks ago
- Implementation of xLSTM in PyTorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆119 · Updated 3 weeks ago
- An implementation of mLSTM and sLSTM in PyTorch. ☆26 · Updated 10 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆192 · Updated 3 weeks ago
- This repository contains a better implementation of Kolmogorov-Arnold networks ☆61 · Updated 11 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http…) ☆105 · Updated last year
- State Space Models ☆68 · Updated 11 months ago
- PyTorch implementation of Structured State Space for Sequence Modeling (S4), based on Annotated S4. ☆80 · Updated last year
- A modified CNN architecture using Kolmogorov-Arnold Networks ☆77 · Updated 11 months ago
- Trying out the Mamba architecture on small examples (CIFAR-10, Shakespeare char level, etc.) ☆45 · Updated last year
- CUDA implementation of Extended Long Short-Term Memory (xLSTM) with C++ and PyTorch ports ☆87 · Updated 10 months ago
- Variations of Kolmogorov-Arnold Networks ☆114 · Updated 11 months ago
- Implementation of a modular, high-performance, and simplistic Mamba for high-speed applications ☆34 · Updated 5 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆47 · Updated 6 months ago
- PyTorch implementation of the xLSTM model by Beck et al. (2024) ☆162 · Updated 8 months ago
- Several types of attention modules written in PyTorch for learning purposes ☆50 · Updated 6 months ago
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation. ☆28 · Updated 10 months ago
- PyTorch (Lightning) implementation of the Mamba model ☆26 · Updated last week
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆50 · Updated last month
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆116 · Updated 11 months ago
- This repository contains the code to replicate the simulations from the paper: "Wav-KAN: Wavelet Kolmogorov-Arnold Networks". It showca… ☆142 · Updated 2 months ago
- Transformer model based on the Kolmogorov–Arnold Network (KAN), an alternative to the Multi-Layer Perceptron (MLP) ☆28 · Updated 3 weeks ago
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models ☆13 · Updated last year