VITA-Group / Junk_DNA_Hypothesis
[ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, Souvik Kundu, Zhangyang Wang
☆16 · Updated last month
Alternatives and similar repositories for Junk_DNA_Hypothesis
Users who are interested in Junk_DNA_Hypothesis are comparing it to the repositories listed below
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen ☆16 · Updated 8 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆40 · Updated 3 weeks ago
- ☆15 · Updated 9 months ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆35 · Updated last year
- ☆56 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 11 months ago
- ☆18 · Updated 6 months ago
- Codebase for decoding compressed trust. ☆23 · Updated last year
- A Sober Look at Language Model Reasoning ☆52 · Updated last week
- ☆15 · Updated last month
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated 10 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated last year
- ☆19 · Updated 10 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆45 · Updated 7 months ago
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆25 · Updated 6 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆81 · Updated 3 weeks ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆31 · Updated last year
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆17 · Updated 2 weeks ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated last year
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆18 · Updated 3 months ago
- ☆35 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 3 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆60 · Updated 8 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year