hwang595 / Cuttlefish
The implementation of the MLSys 2023 paper "Cuttlefish: Low-Rank Model Training without All the Tuning".
☆43 · Updated last year
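For context, the paper's idea is to train networks whose weight matrices are low-rank factorized, while automatically choosing the factorization ranks and when to switch from full-rank to low-rank training, so the usual per-model tuning is avoided. The sketch below is a minimal, generic illustration of the low-rank factorized linear layer such methods build on, not Cuttlefish's rank-selection algorithm; the class name `LowRankLinear` and the fixed `rank` argument are illustrative assumptions.

```python
# Minimal sketch of a low-rank factorized linear layer (generic idea only,
# NOT the Cuttlefish algorithm): the dense weight W (out x in) is replaced
# by two thin factors U (out x r) and V (r x in), so the layer trains
# r * (out + in) parameters instead of out * in.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):  # hypothetical name, for illustration
    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Thin trainable factors; `rank` sets the compression/accuracy trade-off.
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ (U @ V).T, computed factor by factor.
        return (x @ self.V.t()) @ self.U.t()

layer = LowRankLinear(in_features=512, out_features=512, rank=32)
out = layer(torch.randn(8, 512))  # shape: (8, 512)
```

With `rank=32` this layer stores 2 * 512 * 32 = 32,768 parameters instead of 512 * 512 = 262,144; Cuttlefish's contribution is choosing such ranks (and the full-rank warm-up duration) automatically rather than by hand.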
Alternatives and similar repositories for Cuttlefish:
Users interested in Cuttlefish are comparing it to the repositories listed below.
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- ☆36 · Updated 5 months ago
- ☆49 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 · Updated 7 months ago
- ☆55 · Updated 2 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆67 · Updated 2 months ago
- Stick-breaking attention ☆41 · Updated 2 weeks ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆56 · Updated this week
- ☆98 · Updated 10 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 7 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆18 · Updated 8 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆76 · Updated 2 months ago
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- ☆74 · Updated last year
- ☆29 · Updated 11 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆55 · Updated 3 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆58 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 4 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆63 · Updated last month
- Linear Attention Sequence Parallelism (LASP) ☆76 · Updated 7 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆23 · Updated 7 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆114 · Updated 10 months ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆27 · Updated 6 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆38 · Updated 11 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆42 · Updated 6 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆57 · Updated last month
- ☆14 · Updated last year
- The Efficiency Spectrum of LLM ☆52 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆20 · Updated 7 months ago