kyegomez / AttentionIsOFFByOne
Implementation of "Attention Is Off By One" by Evan Miller
☆197 · Updated 2 years ago
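The idea behind the paper/post the repo implements is to replace softmax in attention with "softmax1", which adds 1 to the denominator so a head can assign (near-)zero total weight and effectively attend to nothing. The sketch below is a minimal, unofficial illustration of that idea under my own naming (`softmax_one`, `quiet_attention`); it is not the repo's actual code.

```python
import torch

def softmax_one(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """softmax1: exp(x_i) / (1 + sum_j exp(x_j)), i.e. softmax with an implicit extra zero logit.

    Subtracting a running max keeps the exponentials stable; after the shift,
    the "+1" in the denominator becomes exp(-x_max).
    """
    x_max = x.max(dim=dim, keepdim=True).values.clamp(min=0)  # include the implicit 0 logit in the max
    exp_x = torch.exp(x - x_max)
    return exp_x / (torch.exp(-x_max) + exp_x.sum(dim=dim, keepdim=True))

def quiet_attention(q, k, v, scale=None):
    # q, k, v: (batch, heads, seq, dim). Standard scaled dot-product attention,
    # except the rows of the attention matrix may sum to less than 1.
    scale = scale if scale is not None else q.shape[-1] ** -0.5
    scores = torch.einsum("bhid,bhjd->bhij", q, k) * scale
    weights = softmax_one(scores, dim=-1)
    return torch.einsum("bhij,bhjd->bhid", weights, v)
```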
Alternatives and similar repositories for AttentionIsOFFByOne
Users interested in AttentionIsOFFByOne are comparing it to the libraries listed below.
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆248 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆185 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 9 months ago
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- ☆200 · Updated 2 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆251 · Updated 3 months ago
- Low-bit optimizers for PyTorch ☆133 · Updated 2 years ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- Rectified Rotary Position Embeddings ☆384 · Updated last year
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- Root Mean Square Layer Normalization ☆258 · Updated 2 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆371 · Updated 2 years ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆337 · Updated 8 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆112 · Updated 2 years ago
- ☆187 · Updated last year
- ☆157 · Updated 2 years ago
- ☆235 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆63 · Updated 2 years ago
- ☆293 · Updated 11 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆384 · Updated 2 years ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆323 · Updated 9 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆230 · Updated last year
- ☆106 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated 10 months ago
- PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers" ☆92 · Updated last month
- Keras implementation of Finite Scalar Quantization ☆83 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆373 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- Implementation of Block Recurrent Transformer in PyTorch ☆223 · Updated last year