A repository for log-time feedforward networks
☆224 · Apr 9, 2024 · Updated last year
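The repository implements fast feedforward (FFF) networks, whose inference cost grows with the log of the layer width. A minimal sketch of the idea, assuming the tree-routing scheme from the related "Exponentially Faster Language Modeling" work: each input descends a depth-d binary tree of routing neurons and only the single leaf MLP it reaches is evaluated. All sizes and weights below are illustrative placeholders, not the repository's API or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, depth = 16, 8, 3           # 2**3 = 8 leaf MLPs
n_nodes, n_leaves = 2**depth - 1, 2**depth

node_w = rng.normal(size=(n_nodes, d_in))             # routing neurons
leaf_w1 = rng.normal(size=(n_leaves, d_in, d_hidden)) # per-leaf MLP weights
leaf_w2 = rng.normal(size=(n_leaves, d_hidden, d_in))

def fff_forward(x):
    """Route x down the tree, then evaluate only the selected leaf MLP."""
    node = 0
    for _ in range(depth):
        go_right = (x @ node_w[node]) > 0   # hard routing decision
        node = 2 * node + (2 if go_right else 1)
    leaf = node - n_nodes                   # index among the leaves
    h = np.maximum(x @ leaf_w1[leaf], 0)    # ReLU hidden layer
    return h @ leaf_w2[leaf], leaf

x = rng.normal(size=d_in)
y, leaf = fff_forward(x)
print(f"used leaf {leaf} of {n_leaves}; output shape {y.shape}")
```

Per input, only `depth` routing dot products and one leaf MLP run, so cost scales with log2 of the number of leaves rather than with the full width.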
Alternatives and similar repositories for fastfeedforward
Users interested in fastfeedforward are comparing it to the libraries listed below.
- The repository for the code of the UltraFastBERT paper ☆519 · Mar 24, 2024 · Updated 2 years ago
- FastFeedForward Networks ☆20 · Dec 8, 2023 · Updated 2 years ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Nov 11, 2024 · Updated last year
- ☆13 · Aug 23, 2024 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Mar 14, 2022 · Updated 4 years ago
- Yet another LLM ☆10 · Apr 6, 2023 · Updated 2 years ago
- ☆35 · Apr 12, 2024 · Updated last year
- Mamba R1 is a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Oct 13, 2025 · Updated 5 months ago
- ☆19 · Jun 10, 2024 · Updated last year
- Some mixture-of-experts architecture implementations ☆26 · Mar 22, 2024 · Updated 2 years ago
- ☆15 · Apr 26, 2022 · Updated 3 years ago
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆14 · Oct 6, 2025 · Updated 5 months ago
- Beyond Language Models: Byte Models are Digital World Simulators ☆335 · Jun 6, 2024 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Sep 18, 2025 · Updated 6 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Jan 4, 2024 · Updated 2 years ago
- ☆50 · Mar 14, 2024 · Updated 2 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated this week
- Clustered Compositional Embeddings ☆11 · Oct 25, 2023 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Apr 21, 2024 · Updated last year
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [ICLR 2025] ☆28 · Feb 20, 2026 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆281 · Nov 3, 2023 · Updated 2 years ago
- ☆317 · Jun 21, 2024 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆563 · Dec 28, 2024 · Updated last year
- Convolutions for Sequence Modeling ☆911 · Jun 13, 2024 · Updated last year
- ☆83 · Apr 16, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- Cramming the training of a (BERT-type) language model into limited compute ☆1,362 · Jun 13, 2024 · Updated last year
- Triton-based implementation of sparse Mixture of Experts ☆270 · Oct 3, 2025 · Updated 5 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Aug 20, 2024 · Updated last year
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama 3) implementation of "Leave No Context Behind: Efficient Infinite Context Transformers with I…" ☆375 · Apr 23, 2024 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage and no retraining ☆736 · Apr 10, 2024 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆362 · Feb 5, 2026 · Updated last month
- Zero-shot NER fine-tuning ☆14 · Mar 17, 2025 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆374 · Dec 12, 2024 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆344 · Dec 28, 2024 · Updated last year
- Implementation of "Audio xLSTMs: Learning Self-Supervised Audio Representations with xLSTMs" in PyTorch ☆19 · Updated this week
- A simple and minimal open-source implementation of "Introducing LFM2: The Fastest On-Device Foundation Models on the Market" from Liquid … ☆23 · Mar 9, 2026 · Updated 2 weeks ago
- Elevate your language models with insightful diversity metrics ☆11 · Feb 4, 2024 · Updated 2 years ago
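The memory-layer entry above describes a trainable key-value lookup that adds parameters without adding FLOPs. A minimal sketch of that idea under simplifying assumptions: a query scores a learned key table, keeps only the top-k matches, and returns a sparse softmax-weighted sum of their value vectors. Names and sizes are illustrative, not that repository's API; a real design would use product keys to avoid the full key scan shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_keys, top_k = 32, 1024, 4

keys = rng.normal(size=(n_keys, d_model))    # learned key table
values = rng.normal(size=(n_keys, d_model))  # learned value table

def memory_lookup(query):
    """Sparse top-k key-value readout for one query vector."""
    scores = keys @ query                            # similarity to every key
    idx = np.argpartition(scores, -top_k)[-top_k:]   # top-k key indices
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                                     # softmax over the k hits
    return w @ values[idx]                           # weighted value readout

out = memory_lookup(rng.normal(size=d_model))
print(out.shape)
```

Because only `top_k` value vectors are touched per query, the parameter count grows with `n_keys` while per-query compute on the value side stays roughly constant.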