transformer-vq / transformer_vq (☆170, updated 9 months ago)
Related projects:
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆182, updated 4 months ago)
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" (☆120, updated 4 months ago)
- Keras implementation of Finite Scalar Quantization (☆58, updated 10 months ago)
- Official implementation of TransNormerLLM: A Faster and Better LLM (☆223, updated 7 months ago)
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling (☆78, updated last year)
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from …" (☆116, updated 4 months ago)
- Low-bit optimizers for PyTorch (☆109, updated 11 months ago)
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) (☆253, updated last year)
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… (☆95, updated 3 months ago)
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" (☆63, updated 3 months ago)
- A repository for DenseSSMs (☆86, updated 5 months ago)
- Implementation of "Attention Is Off By One" by Evan Miller (☆177, updated last year)
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" (☆114, updated 2 months ago)
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch (☆233, updated 4 months ago)
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (☆244, updated last month)
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆225, updated 2 months ago)
- [EMNLP 2022] Official implementation of Transnormer from the EMNLP 2022 paper "The Devil in Linear Transformer" (☆53, updated last year)
- [ICLR 2024 Spotlight] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" (☆215, updated 8 months ago)
- Lossless Training Speed-Up by Unbiased Dynamic Data Pruning (☆310, updated 2 weeks ago)
- Awesome list of papers that extend Mamba to various applications (☆124, updated 2 weeks ago)
- Reading list for research topics in state-space models (☆209, updated last week)
- Rectified Rotary Position Embeddings (☆329, updated 4 months ago)
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) (☆111, updated 6 months ago)
- Some preliminary explorations of Mamba's context scaling (☆184, updated 7 months ago)
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch (☆278, updated 3 months ago)