OpenNLPLab / TransnormerLLM
Official implementation of TransNormerLLM: A Faster and Better LLM
☆243 · Updated last year
Alternatives and similar repositories for TransnormerLLM:
Users interested in TransnormerLLM are comparing it to the libraries listed below.
- Rectified Rotary Position Embeddings ☆360 · Updated 10 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆272 · Updated last month
- ☆220 · Updated 9 months ago
- Official PyTorch implementation of QA-LoRA ☆129 · Updated last year
- ☆253 · Updated last year
- ☆145 · Updated last year
- ☆189 · Updated last year
- Implementation of "Attention Is Off By One" by Evan Miller ☆190 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 2 months ago
- Low-bit optimizers for PyTorch ☆125 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆223 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆141 · Updated 6 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆597 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆619 · Updated 8 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆260 · Updated 7 months ago
- DSIR large-scale data selection framework for language model training ☆244 · Updated 11 months ago
- Recurrent Memory Transformer ☆149 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆321 · Updated 9 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆499 · Updated 10 months ago
- Some preliminary explorations of Mamba's context scaling. ☆212 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism ☆99 · Updated 9 months ago
- ☆182 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆129 · Updated 9 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆287 · Updated 10 months ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆407 · Updated 2 months ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆448 · Updated 11 months ago
- ☆182 · Updated 5 months ago