kyegomez / Blockwise-Parallel-Transformer
Context windows up to 32 times longer than vanilla Transformers and up to 4 times longer than memory-efficient Transformers.
☆47 · Updated last year
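The name refers to the blockwise-parallel idea: attention (and the feedforward layer) is computed block by block, so the full seq_len × seq_len attention matrix is never materialized, which is what buys the longer context windows. Below is a minimal PyTorch sketch of the attention half of that idea. It is not the repository's code: for brevity it blocks only the queries (the full method also streams key/value blocks with an online softmax), and the function name and block size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def blockwise_attention(q, k, v, block_size=512):
    """Attention computed one query block at a time (illustrative sketch).

    q, k, v: tensors of shape (seq_len, d_head). Peak score-matrix memory
    is O(block_size * seq_len) instead of O(seq_len ** 2).
    """
    seq_len, d = q.shape
    scale = d ** -0.5
    out = torch.empty_like(q)
    for start in range(0, seq_len, block_size):
        q_blk = q[start:start + block_size]      # (block_size, d)
        scores = (q_blk @ k.T) * scale           # (block_size, seq_len)
        out[start:start + block_size] = F.softmax(scores, dim=-1) @ v
    return out

# Example: an 8k-token sequence without allocating an 8k x 8k score matrix.
q = torch.randn(8192, 64)
out = blockwise_attention(q, torch.randn(8192, 64), torch.randn(8192, 64))
```

Per the paper the repository is based on ("Blockwise Parallel Transformer for Large Context Models"), the same blocking is additionally applied to the feedforward computation, which is where the further savings over attention-only memory-efficient Transformers come from.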
Alternatives and similar repositories for Blockwise-Parallel-Transformer:
Users interested in Blockwise-Parallel-Transformer are comparing it to the libraries listed below.
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆81 · Updated 10 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated 9 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- Transformers components but in Triton ☆32 · Updated 3 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year
- The implementation for the MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆44 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 6 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆64 · Updated 4 months ago
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆43 · Updated 6 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 10 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- DPO, but faster 🚀 ☆40 · Updated 4 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 7 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 10 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆90 · Updated last week
- Stick-breaking attention ☆50 · Updated last month
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism ☆100 · Updated 10 months ago