mengxiayu / LLMSuperWeight
Code for studying the super weight in LLMs
☆72 · Updated last month
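The "super weight" is a single scalar weight (typically one entry of an early layer's `mlp.down_proj`) whose removal disproportionately degrades model quality. Below is a minimal sketch of the detection idea, not the repository's actual API: the candidate weight sits where the largest `down_proj` input-activation channel meets the largest output-activation channel. The checkpoint name, prompt, and hook placement are illustrative assumptions.

```python
# Minimal sketch (not the repository's API) of locating a candidate super weight
# via activation spikes around mlp.down_proj in a Llama-style model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "huggyllama/llama-7b"  # assumption: any Llama-style checkpoint works similarly
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)
model.eval()

stats = {}  # layer index -> (max |output|, spiking input channel, spiking output channel)

def make_hook(idx):
    def hook(module, inputs, output):
        x = inputs[0].detach().float()   # down_proj input: (batch, seq, d_ff)
        y = output.detach().float()      # down_proj output: (batch, seq, d_model)
        stats[idx] = (
            y.abs().max().item(),
            int(x.abs().amax(dim=(0, 1)).argmax()),  # channel with the input spike
            int(y.abs().amax(dim=(0, 1)).argmax()),  # channel with the output spike
        )
    return hook

handles = [
    layer.mlp.down_proj.register_forward_hook(make_hook(i))
    for i, layer in enumerate(model.model.layers)
]
with torch.no_grad():
    model(**tok("The quick brown fox jumps over the lazy dog", return_tensors="pt"))
for h in handles:
    h.remove()

# The layer with the largest down_proj output spike hosts the candidate;
# down_proj.weight has shape (d_model, d_ff), so index it [out_ch, in_ch].
layer_idx = max(stats, key=lambda i: stats[i][0])
_, in_ch, out_ch = stats[layer_idx]
w = model.model.layers[layer_idx].mlp.down_proj.weight
print(f"layer {layer_idx}: down_proj.weight[{out_ch}, {in_ch}] = {w[out_ch, in_ch].item():.4f}")
```

Zeroing that single entry and re-measuring perplexity is the quick sanity check for how disproportionate the weight's influence is.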
Alternatives and similar repositories for LLMSuperWeight:
Users interested in LLMSuperWeight are comparing it to the libraries listed below.
- ☆125 · Updated last year
- ☆108 · Updated 4 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆112 · Updated 5 months ago
- Prune transformer layers ☆67 · Updated 8 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆67 · Updated 3 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆56 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆58 · Updated 3 months ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆33 · Updated 7 months ago
- nanoGPT-like codebase for LLM training ☆85 · Updated this week
- A fusion of a linear layer and a cross entropy loss, written for PyTorch in Triton. ☆61 · Updated 5 months ago
- ☆192 · Updated last month
- Explorations into some recent techniques surrounding speculative decoding ☆233 · Updated last month
- PyTorch building blocks for OLMo ☆49 · Updated this week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆64 · Updated 4 months ago
- ☆66 · Updated 6 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆147 · Updated last week
- ☆75 · Updated 6 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆56 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 9 months ago
- ☆74 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆67 · Updated 9 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆49 · Updated 3 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆158 · Updated last month
- Code accompanying the paper "Massive Activations in Large Language Models" ☆138 · Updated 10 months ago
- ☆85 · Updated 8 months ago
- ☆51 · Updated 8 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆221 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆194 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆91 · Updated 2 months ago