kyegomez / AttentionIsOFFByOne
Implementation of "Attention Is Off By One" by Evan Miller
☆191 · Updated last year
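The post's core proposal is a one-character change to the attention softmax: add 1 to the denominator so a head can assign near-zero total weight and effectively attend to nothing. A minimal PyTorch sketch of that softmax1 function follows; the name, signature, and stability handling are illustrative assumptions, not necessarily this repo's exact API:

```python
import torch

def softmax_one(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """softmax1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).

    The extra 1 acts like an implicit logit fixed at 0, letting all
    explicit weights shrink toward zero when no key is relevant.
    """
    # Shift by the (non-negative) max for numerical stability; the
    # implicit zero logit must be shifted by the same amount, so the
    # "+1" term becomes exp(-m) after the shift.
    m = x.max(dim=dim, keepdim=True).values.clamp(min=0)
    e = torch.exp(x - m)
    return e / (e.sum(dim=dim, keepdim=True) + torch.exp(-m))

# All large-negative logits -> all weights near 0, unlike standard
# softmax, which would still force the weights to sum to 1.
print(softmax_one(torch.tensor([-10.0, -10.0, -10.0])))
```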
Alternatives and similar repositories for AttentionIsOFFByOne
Users interested in AttentionIsOFFByOne are comparing it to the libraries listed below.
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆243 · Updated last year
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆249 · Updated last year
- Rectified Rotary Position Embeddings ☆370 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆296 · Updated 3 months ago
- ☆191 · Updated last year
- ☆151 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆287 · Updated 3 months ago
- Lion and Adam optimization comparison ☆61 · Updated 2 years ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆327 · Updated 2 years ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" ☆166 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention ☆101 · Updated 11 months ago
- Implementation of a memory efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts Library, Support DeepSeek FP8/FP4 ☆829 · Updated this week
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆294 · Updated 2 months ago
- ☆222 · Updated 11 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 4 months ago
- ☆640 · Updated 2 weeks ago
- Low-bit optimizers for PyTorch ☆128 · Updated last year
- ☆198 · Updated 7 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch ☆643 · Updated 5 months ago
- A Tight-fisted Optimizer ☆48 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks" ☆247 · Updated 2 years ago
- My implementation of "Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆234 · Updated 2 months ago
- This repository contains the implementation for the paper "EMP-SSL: Towards Self-Supervised Learning in One Training Epoch" ☆228 · Updated last year
- Get down and dirty with FlashAttention2.0 in pytorch, plug in and play no complex CUDA kernels ☆105 · Updated last year
- Root Mean Square Layer Normalization ☆241 · Updated 2 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆363 · Updated last year
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆193 · Updated 2 years ago
- The CUDA version of the RWKV language model (https://github.com/BlinkDL/RWKV-LM) ☆220 · Updated 5 months ago
- Reorder-based post-training quantization for large language model ☆190 · Updated 2 years ago