UtkarshSaxena1 / EigenAttn
☆16 · Updated 10 months ago
Alternatives and similar repositories for EigenAttn
Users interested in EigenAttn are comparing it to the libraries listed below.
- ☆15 · Updated 9 months ago
- ☆31 · Updated 6 months ago
- ☆30 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- ☆20 · Updated 9 months ago
- ☆34 · Updated 2 years ago
- DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling ☆35 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆11 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Code for merging large language models ☆33 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated last month
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆16 · Updated 8 months ago
- ☆41 · Updated 4 months ago
- ☆10 · Updated 11 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆92 · Updated 2 months ago
- ☆16 · Updated 10 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆54 · Updated 6 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 10 months ago
- ☆19 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 6 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆119 · Updated last month
- ☆55 · Updated last month
- ☆23 · Updated last month
- Code accompanying the paper "Massive Activations in Large Language Models" ☆176 · Updated last year
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆42 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs ☆14 · Updated last year
- Official implementation of "FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation" ☆22 · Updated 3 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ☆39 · Updated last year