kyegomez / AttentionIsOFFByOne
Implementation of "Attention Is Off By One" by Evan Miller
☆190 Updated last year
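For context, Evan Miller's essay argues that softmax forces every attention head to spend a full unit of probability mass even when it has nothing to say; his fix, "softmax1", adds 1 to the denominator so a head can emit (near-)zero total attention. A minimal PyTorch sketch of that idea (the function name and the stability shift are illustrative, not necessarily this repo's exact code):

```python
import torch

def softmax_one(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """softmax1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).

    Equivalent to a regular softmax over the logits plus one extra,
    implicit logit fixed at 0, whose probability mass is then discarded.
    """
    # Shift by max(x, 0) for numerical stability; the implicit zero logit
    # must be shifted by the same amount, hence the exp(-m) term below.
    m = x.max(dim=dim, keepdim=True).values.clamp(min=0)
    e = torch.exp(x - m)
    return e / (torch.exp(-m) + e.sum(dim=dim, keepdim=True))
```

Swapping this in for `torch.softmax` on the attention scores is the entire change; Miller's claim is that the escape hatch tames the outlier activations that make transformers hard to quantize.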
Alternatives and similar repositories for AttentionIsOFFByOne:
Users who are interested in AttentionIsOFFByOne are comparing it to the libraries listed below.
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆242 Updated last year
- ☆189 Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆279 Updated 2 months ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆248 Updated last year
- Rectified Rotary Position Embeddings ☆366 Updated 11 months ago
- Low-bit optimizers for PyTorch ☆128 Updated last year
- ☆192 Updated 6 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆320 Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆100 Updated 10 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆283 Updated 3 weeks ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆165 Updated 11 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the sketch after this list) ☆377 Updated last year
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆190 Updated 2 years ago
- ☆147 Updated last year
- ☆219 Updated 10 months ago
- ☆179 Updated 6 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 Updated 3 months ago
- [EMNLP 2022] Official implementation of TransNormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆60 Updated last year
- A Tight-fisted Optimizer ☆47 Updated 2 years ago
- ☆625 Updated this week
- Lion and Adam optimization comparison ☆61 Updated 2 years ago
- Official PyTorch implementation of QA-LoRA ☆131 Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆641 Updated 3 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆284 Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆152 Updated last week
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting DeepSeek FP8/FP4 ☆803 Updated this week
- ☆185 Updated last week
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆102 Updated last year
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆248 Updated 2 years ago
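On the "Self-attention Does Not Need O(n²) Memory" entry above: Rabe and Staats stream over keys/values in chunks while maintaining a running max, normalizer, and weighted sum, so the full n×n score matrix is never materialized. A rough single-head sketch of that online-softmax idea (unbatched tensors and an illustrative chunk size; the paper's version adds gradient checkpointing for the backward pass):

```python
import torch

def chunked_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                      chunk_size: int = 1024) -> torch.Tensor:
    """Attention for q: (n, d) against k, v: (m, d) in O(n * chunk) memory."""
    scale = q.shape[-1] ** -0.5
    run_max = torch.full((q.shape[0], 1), float("-inf"), device=q.device)
    run_sum = torch.zeros(q.shape[0], 1, device=q.device)  # softmax normalizer
    acc = torch.zeros_like(q)                              # running sum of p @ v
    for i in range(0, k.shape[0], chunk_size):
        kc, vc = k[i:i + chunk_size], v[i:i + chunk_size]
        s = (q @ kc.T) * scale                             # (n, chunk) scores
        new_max = torch.maximum(run_max, s.max(dim=-1, keepdim=True).values)
        p = torch.exp(s - new_max)                         # stabilized weights
        corr = torch.exp(run_max - new_max)                # rescale old stats
        run_sum = run_sum * corr + p.sum(dim=-1, keepdim=True)
        acc = acc * corr + p @ vc
        run_max = new_max
    return acc / run_sum
```

The same running-statistics trick is what FlashAttention fuses into its CUDA kernels, which is why these repositories tend to show up in the same comparisons.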