Montinger / Transformer-Workbench
Playground for Transformers
☆53 · Updated 2 years ago
Alternatives and similar repositories for Transformer-Workbench
Users interested in Transformer-Workbench are comparing it to the libraries listed below.
- Several types of attention modules written in PyTorch for learning purposes (see the scaled dot-product attention sketch after this list) ☆53 · Updated 3 weeks ago
- PyTorch implementation of MoE (mixture of experts; see the routing sketch after this list) ☆52 · Updated 4 years ago
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆86 · Updated last year
- LoRA and DoRA from Scratch Implementations (see the LoRA sketch after this list) ☆215 · Updated last year
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆50 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- A curated list of the role of small models in the LLM era ☆111 · Updated last year
- Fast instruction tuning with Llama2 ☆11 · Updated last year
- PyTorch implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆14 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- Code for KaLM-Embedding models ☆112 · Updated 7 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the GQA sketch after this list) ☆189 · Updated last year
- ☆48 · Updated last year
- Code for the DDP tutorial ☆32 · Updated 3 years ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆213 · Updated this week
- 📚 Text Classification with LoRA (Low-Rank Adaptation) of Language Models - Efficiently fine-tune large language models for text classifi… ☆54 · Updated 2 years ago
- ☆31 · Updated last year
- ☆133 · Updated 2 years ago
- Root Mean Square Layer Normalization (see the RMSNorm sketch after this list) ☆261 · Updated 2 years ago
- Code for the NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- Implementation of "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" in PyTorch ☆53 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆121 · Updated 11 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆103 · Updated last year
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆87 · Updated 2 years ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆32 · Updated last year
- A work in progress: trying to write about all interesting or necessary pieces in the current development of LLMs and generative AI. Gra… ☆199 · Updated 2 years ago
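
For readers skimming this list, minimal PyTorch sketches of a few of the techniques named above follow. These are illustrative only, written here for orientation; they are not code from the linked repositories, and all names and hyperparameters are assumptions. First, the scaled dot-product attention that the learning-purposes attention-modules repo revolves around:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (batch, heads, seq, head_dim). Returns (batch, heads, seq, head_dim)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # similarity logits
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block disallowed positions
    weights = torch.softmax(scores, dim=-1)                    # attention distribution per query
    return weights @ v                                         # weighted sum of values
```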
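Grouped-query attention, as implemented (unofficially) in the GQA repo above, shares each key/value head across a group of query heads. A minimal sketch, assuming the query-head count is a multiple of the KV-head count (the function name and shapes are illustrative):

```python
import math
import torch

def grouped_query_attention(q, k, v):
    """q: (batch, n_q_heads, seq, d); k, v: (batch, n_kv_heads, seq, d),
    with n_q_heads a multiple of n_kv_heads."""
    n_q, n_kv = q.size(1), k.size(1)
    assert n_q % n_kv == 0
    group = n_q // n_kv
    # expand each KV head across its group of query heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v
```

With group = 1 this reduces to standard multi-head attention; with n_kv_heads = 1 it becomes multi-query attention, which is why GQA is usually described as the interpolation between the two.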
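The mixture-of-experts idea behind the MoE repo (and SwitchHead's MoE attention) can be sketched as a router that sends each token to its top-k experts. This is a minimal dense-loop version for clarity, not the efficient batched dispatch a real implementation would use; `TopKMoE` and all sizes are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Token-level MoE: a linear router scores experts, each token runs through
    its top-k experts, and outputs are mixed with renormalized router weights."""
    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        gate_logits = self.router(x)                     # (tokens, n_experts)
        weights, idx = gate_logits.topk(self.k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, slot] == e                  # tokens routed to expert e in this slot
                if sel.any():
                    out[sel] += weights[sel, slot, None] * expert(x[sel])
        return out
```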
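LoRA, the subject of the from-scratch LoRA/DoRA repo and the text-classification repo, freezes a pretrained linear layer and learns only a low-rank update. A minimal sketch, assuming the common W x + (alpha/r) · B A x parameterization with illustrative defaults r = 8, alpha = 16:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen nn.Linear plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so training begins exactly at the base model
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

The usual pattern is to wrap the attention projections of an existing model, e.g. `layer.q_proj = LoRALinear(layer.q_proj)`, so only the A/B matrices receive gradients.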
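Finally, RMSNorm, the subject of the Root Mean Square Layer Normalization repo, drops LayerNorm's mean-centering and bias and rescales by the root-mean-square alone. A minimal sketch with an assumed eps of 1e-6:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root Mean Square Layer Normalization: divide by rms(x), apply a learned gain."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # rms(x) = sqrt(mean(x^2)); rsqrt fuses the sqrt and reciprocal
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight
```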