OpenNLPLab / TransnormerLLM
Official implementation of TransNormerLLM: A Faster and Better LLM
☆247 · updated last year
Alternatives and similar repositories for TransnormerLLM
Users interested in TransnormerLLM are comparing it to the libraries listed below
- Rectified Rotary Position Embeddings (☆381, updated last year)
- ☆269 (updated last year)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆324, updated 6 months ago)
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… (☆102, updated last year)
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” (☆123, updated 7 months ago)
- Low-bit optimizers for PyTorch (☆130, updated last year)
- ☆225 (updated last year)
- Implementation of "Attention Is Off By One" by Evan Miller (☆195, updated last year)
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" (☆124, updated last year)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) (☆147, updated 11 months ago)
- ☆196 (updated last year)
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" (☆148, updated 5 months ago)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆435, updated 10 months ago)
- Official PyTorch implementation of QA-LoRA (☆138, updated last year)
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆169, updated last year)
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models (☆78, updated last year)
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models (☆136, updated last year)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆206, updated last year)
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction (☆388, updated last year)
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (☆176, updated last year)
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (☆398, updated last year)
- Unofficial PyTorch/🤗Transformers (Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with I… (☆367, updated last year)
- Some preliminary explorations of Mamba's context scaling. (☆216, updated last year)
- Recurrent Memory Transformer (☆150, updated 2 years ago)
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… (☆56, updated 2 years ago)
- ☆194 (updated 2 weeks ago)
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer (☆61, updated 2 years ago)
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… (☆291, updated last year)
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) (☆160, updated 4 months ago)
- ☆154 (updated 2 years ago)