A repository for log-time feedforward networks
☆224 · Updated Apr 9, 2024
Alternatives and similar repositories for fastfeedforward
Users interested in fastfeedforward are comparing it to the repositories listed below.
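The "log-time" idea behind fastfeedforward (and UltraFastBERT, listed below) is a feedforward layer organized as a binary tree: each input is routed through depth-many tiny decision functions and then evaluated by a single small leaf MLP, so inference touches only a logarithmic fraction of the layer's neurons. A minimal NumPy sketch of that routing, with toy sizes and hard routing only (the papers train the routers with softened decisions; names like `fff_forward` are illustrative, not taken from the repositories):

```python
import numpy as np

rng = np.random.default_rng(0)

DEPTH = 3            # tree depth -> 2**DEPTH leaf MLPs
D_IN, D_LEAF = 8, 4  # input width and leaf hidden width (toy sizes)

# one linear decision per internal node (heap layout: children of i are 2i+1, 2i+2)
node_w = rng.normal(size=(2**DEPTH - 1, D_IN))
# each leaf is a tiny two-layer MLP
leaf_w1 = rng.normal(size=(2**DEPTH, D_IN, D_LEAF))
leaf_w2 = rng.normal(size=(2**DEPTH, D_LEAF, D_IN))

def fff_forward(x):
    """Route x down the tree in DEPTH steps, then run only one leaf MLP."""
    node = 0
    for _ in range(DEPTH):
        go_right = float(node_w[node] @ x) > 0.0
        node = 2 * node + (2 if go_right else 1)  # heap-style child index
    leaf = node - (2**DEPTH - 1)                  # leaf index in [0, 2**DEPTH)
    h = np.maximum(leaf_w1[leaf].T @ x, 0.0)      # ReLU hidden layer
    return leaf_w2[leaf].T @ h                    # back to input width

y = fff_forward(rng.normal(size=D_IN))
```

Per input, this evaluates DEPTH dot products plus one leaf MLP instead of all 2**DEPTH leaves, which is where the logarithmic inference cost comes from.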
- The repository for the code of the UltraFastBERT paper ☆518 · Updated Mar 24, 2024
- FastFeedForward Networks ☆20 · Updated Dec 8, 2023
- Zeta implementation of a reusable, plug-and-play feedforward network from the paper "Exponentially Faster Language Modeling" ☆16 · Updated Nov 11, 2024
- ☆13 · Updated Aug 23, 2024
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆88 · Updated Mar 14, 2022
- Yet another LLM ☆10 · Updated Apr 6, 2023
- ☆35 · Updated Apr 12, 2024
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Updated Oct 13, 2025
- ☆19 · Updated Jun 10, 2024
- Some mixture-of-experts architecture implementations ☆27 · Updated Mar 22, 2024
- ☆15 · Updated Apr 26, 2022
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated Mar 14, 2024
- Script for downloading GitHub. ☆13 · Updated Sep 24, 2020
- Beyond Language Models: Byte Models are Digital World Simulators ☆334 · Updated Jun 6, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated Sep 18, 2025
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated Jan 4, 2024
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆14 · Updated Oct 6, 2025
- ☆50 · Updated Mar 14, 2024
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Updated Feb 24, 2024
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆59 · Updated Apr 20, 2026
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Updated Apr 21, 2024
- Clustered Compositional Embeddings ☆12 · Updated Oct 25, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Updated Nov 3, 2023
- ☆317 · Updated Jun 21, 2024
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated Dec 28, 2024
- Convolutions for Sequence Modeling ☆912 · Updated Jun 13, 2024
- Quantization-aware training with spiking neural networks ☆55 · Updated Feb 18, 2022
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated Jun 12, 2024
- ☆83 · Updated Apr 16, 2024
- Minimalistic large language model 3D-parallelism training ☆2,674 · Updated Apr 7, 2026
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,364 · Updated Jun 13, 2024
- Triton-based implementation of Sparse Mixture of Experts. ☆273 · Updated Oct 3, 2025
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Updated Aug 20, 2024
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆376 · Updated Apr 23, 2024
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆736 · Updated Apr 10, 2024
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆368 · Updated Apr 13, 2026
- Zero-shot NER fine-tuning ☆14 · Updated Mar 17, 2025
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆375 · Updated Dec 12, 2024
- Explorations into some recent techniques surrounding speculative decoding ☆300 · Updated Dec 22, 2024
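The memory-layers entry above describes the same sparsity theme from another angle: a trainable key-value store adds parameters without adding per-token FLOPs, because each query reads only a few slots. A minimal NumPy sketch of a sparse top-k memory read, with toy sizes and illustrative names; note that real memory layers use product-key retrieval so they never score every key the way this brute-force version does:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SLOTS, D, TOP_K = 1024, 16, 4  # toy sizes; practical memories use far more slots

keys = rng.normal(size=(N_SLOTS, D))    # trainable keys
values = rng.normal(size=(N_SLOTS, D))  # trainable values

def memory_lookup(q):
    """Score all keys, keep the TOP_K best, softmax-mix their values."""
    scores = keys @ q                              # (N_SLOTS,)
    top = np.argpartition(scores, -TOP_K)[-TOP_K:] # indices of TOP_K largest
    w = np.exp(scores[top] - scores[top].max())    # stable softmax over TOP_K
    w /= w.sum()
    return w @ values[top]                         # (D,) weighted value read

out = memory_lookup(rng.normal(size=D))
```

Only TOP_K value rows contribute to the output (and would receive gradient), so slot count scales parameters while per-token compute stays roughly fixed once retrieval is sublinear.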