OpenNLPLab / TransnormerLLM
Official implementation of TransNormerLLM: A Faster and Better LLM
☆247 · Updated last year
Alternatives and similar repositories for TransnormerLLM
Users interested in TransnormerLLM are comparing it to the repositories listed below.
- Rectified Rotary Position Embeddings ☆375 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆323 · Updated 5 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- Low-bit optimizers for PyTorch ☆130 · Updated last year
- Implementation of "Attention Is Off By One" by Evan Miller ☆193 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism ☆102 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆138 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Updated 10 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆106 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" ☆173 · Updated last year
- Recurrent Memory Transformer ☆150 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆338 · Updated 2 years ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 3 months ago
- Unofficial PyTorch/🤗 Transformers (Gemma/Llama3) implementation of "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆367 · Updated last year
- 🔥 A minimal training framework for scaling FLA models ☆209 · Updated last month
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge ☆81 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆265 · Updated last year
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆225 · Updated 3 months ago
- Some preliminary explorations of Mamba's context scaling ☆216 · Updated last year
- Lion and Adam optimization comparison ☆62 · Updated 2 years ago