CLAIRE-Labo / StructuredFFN
The official code of "Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers"
☆19 · Updated last year
Alternatives and similar repositories for StructuredFFN
Users interested in StructuredFFN are comparing it to the libraries listed below.
- ☆53 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆79 · Updated last month
- ☆51 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- Accelerated First Order Parallel Associative Scan ☆193 · Updated this week
- ☆91 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆74 · Updated last year
- ☆29 · Updated last year
- ☆62 · Updated last year
- ☆57 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 5 months ago
- A toolkit for scaling law research ⚖ ☆53 · Updated 11 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Combining SOAP and MUON ☆17 · Updated 10 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 6 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- ☆32 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Attention Kernels for Symmetric Power Transformers ☆128 · Updated 3 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 6 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- sigma-MoE layer ☆20 · Updated last year
- ☆50 · Updated last week
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- ☆19 · Updated 3 weeks ago
- Normalized Transformer (nGPT) ☆194 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year