A repository for log-time feedforward networks
☆224 · Apr 9, 2024 · Updated 2 years ago
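The "log-time" claim refers to the fast feedforward (FFF) design from the "Exponentially Faster Language Modeling" line of work: a balanced binary tree routes each input to one of many small leaf networks, so inference cost scales with tree depth (logarithmic in the number of leaves) rather than with total layer width. A minimal NumPy sketch of the idea follows; all names, shapes, and the hard-routing rule are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

# Sketch of a fast feedforward (FFF) layer: a depth-d binary tree of routing
# nodes selects one of 2**d tiny leaf MLPs, so a forward pass makes only
# d = log2(n_leaves) routing decisions plus one small leaf evaluation.
rng = np.random.default_rng(0)

depth = 3                       # tree depth -> 2**3 = 8 leaf experts
d_in, d_hidden, d_out = 16, 4, 16

# One routing vector per internal node (2**depth - 1 nodes, heap layout).
node_w = rng.standard_normal((2**depth - 1, d_in))
# One tiny two-layer MLP per leaf.
leaf_w1 = rng.standard_normal((2**depth, d_hidden, d_in)) / np.sqrt(d_in)
leaf_w2 = rng.standard_normal((2**depth, d_out, d_hidden)) / np.sqrt(d_hidden)

def fff_forward(x):
    """Route x down the tree with hard decisions, then apply the chosen leaf."""
    node = 0                               # root of the heap-indexed tree
    for _ in range(depth):
        go_right = node_w[node] @ x > 0    # one scalar decision per level
        node = 2 * node + (2 if go_right else 1)
    leaf = node - (2**depth - 1)           # heap index -> leaf index
    h = np.maximum(leaf_w1[leaf] @ x, 0)   # ReLU hidden layer
    return leaf_w2[leaf] @ h

y = fff_forward(rng.standard_normal(d_in))
print(y.shape)  # (16,)
```

During training the papers soften the routing decisions so gradients flow through the tree; the hard argmax path shown here is the inference-time behavior that yields the logarithmic cost.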
Alternatives and similar repositories for fastfeedforward
Users interested in fastfeedforward are comparing it to the libraries listed below.
- The repository for the code of the UltraFastBERT paper ☆518 · Mar 24, 2024 · Updated 2 years ago
- FastFeedForward Networks ☆20 · Dec 8, 2023 · Updated 2 years ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Nov 11, 2024 · Updated last year
- ☆13 · Aug 23, 2024 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Mar 14, 2022 · Updated 4 years ago
- Memoria is a human-inspired memory architecture for neural networks ☆88 · Oct 20, 2024 · Updated last year
- Yet another LLM ☆10 · Apr 6, 2023 · Updated 3 years ago
- ☆35 · Apr 12, 2024 · Updated 2 years ago
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Oct 13, 2025 · Updated 6 months ago
- ☆19 · Jun 10, 2024 · Updated last year
- Some mixture-of-experts architecture implementations ☆27 · Mar 22, 2024 · Updated 2 years ago
- ☆15 · Apr 26, 2022 · Updated 3 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Mar 14, 2024 · Updated 2 years ago
- Script for downloading GitHub ☆13 · Sep 24, 2020 · Updated 5 years ago
- Beyond Language Models: Byte Models are Digital World Simulators ☆335 · Jun 6, 2024 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Sep 18, 2025 · Updated 6 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Jan 4, 2024 · Updated 2 years ago
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆14 · Oct 6, 2025 · Updated 6 months ago
- ☆50 · Mar 14, 2024 · Updated 2 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Mar 30, 2026 · Updated 2 weeks ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆472 · Apr 21, 2024 · Updated last year
- Clustered Compositional Embeddings ☆12 · Oct 25, 2023 · Updated 2 years ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [ICLR 2025] ☆29 · Feb 20, 2026 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆281 · Nov 3, 2023 · Updated 2 years ago
- ☆317 · Jun 21, 2024 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆561 · Dec 28, 2024 · Updated last year
- Convolutions for Sequence Modeling ☆911 · Jun 13, 2024 · Updated last year
- Quantization-aware training with spiking neural networks ☆53 · Feb 18, 2022 · Updated 4 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆83 · Apr 16, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,644 · Apr 7, 2026 · Updated last week
- Cramming the training of a (BERT-type) language model into limited compute ☆1,360 · Jun 13, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆272 · Oct 3, 2025 · Updated 6 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Aug 20, 2024 · Updated last year
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… ☆375 · Apr 23, 2024 · Updated last year
- Extend existing LLMs way beyond their original training length with constant memory usage and no retraining ☆736 · Apr 10, 2024 · Updated 2 years ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding" (ACL 2024) ☆364 · Updated this week
- Zero-shot NER fine-tuning ☆14 · Mar 17, 2025 · Updated last year