shreyansh26 / Attention-Mask-Patterns
Using FlexAttention to compute attention with different masking patterns
☆43 · Updated 7 months ago
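For context: FlexAttention (`torch.nn.attention.flex_attention`, available since PyTorch 2.5) lets you express a masking pattern as a small predicate over (batch, head, query index, key index) and compiles it into block-sparse attention. Below is a minimal sketch of the idea using a causal mask; the shapes, dtype, and CUDA device are illustrative assumptions, not taken from this repo:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

def causal_mask(b, h, q_idx, kv_idx):
    # True where query position q_idx may attend to key position kv_idx.
    return q_idx >= kv_idx

B, H, S, D = 2, 4, 1024, 64  # illustrative batch, heads, seq len, head dim
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
           for _ in range(3))

# Compile the predicate into a block-sparse mask; fully masked blocks are skipped.
block_mask = create_block_mask(causal_mask, B=None, H=None, Q_LEN=S, KV_LEN=S)
out = flex_attention(q, k, v, block_mask=block_mask)
```

Other patterns (sliding-window, prefix-LM, document masking) are just different `mask_mod` predicates; no new kernel is required, which is what makes a collection of masking patterns like this repo practical.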
Alternatives and similar repositories for Attention-Mask-Patterns:
Users interested in Attention-Mask-Patterns are comparing it to the libraries listed below.
- Here we will test various linear attention designs. ☆60 · Updated last year
- ☆53 · Updated 9 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆59 · Updated 6 months ago
- DPO, but faster 🚀 ☆41 · Updated 5 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- ☆31 · Updated last year
- ☆44 · Updated 2 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆78 · Updated 8 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆60 · Updated 3 months ago
- ☆47 · Updated last year
- ☆49 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆73 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆29 · Updated last month
- ☆20 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆22 · Updated last week
- Stick-breaking attention ☆52 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆45 · Updated 2 weeks ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆30 · Updated last month
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆91 · Updated this week
- ☆13 · Updated last month
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆33 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers.