alvin-zyl / CoLA
Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation
☆22 · Updated 3 months ago
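As a rough illustration of the idea in the repository's title (low-rank activation pre-training), the sketch below shows what a CoLA-style layer might look like: a dense projection W·x is replaced by two low-rank factors with a nonlinearity between them, h = B·σ(A·x), which shrinks parameters and pre-training compute. This is a minimal sketch assuming PyTorch; the class name, the choice of SiLU, and the rank are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class LowRankActivationLinear(nn.Module):
    """Hypothetical sketch of a low-rank-activation layer (names are illustrative):
    replace a dense weight W (d_out x d_in) with h = B * sigma(A * x),
    where A and B have rank r << min(d_in, d_out)."""

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)   # down-projection to rank r
        self.act = nn.SiLU()                         # nonlinearity on the low-rank activation
        self.B = nn.Linear(rank, d_out, bias=False)  # up-projection back to d_out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.act(self.A(x)))

# Rough parameter comparison against a dense layer of the same shape.
dense = nn.Linear(4096, 4096, bias=False)
cola = LowRankActivationLinear(4096, 4096, rank=512)
print(sum(p.numel() for p in dense.parameters()))  # 16,777,216
print(sum(p.numel() for p in cola.parameters()))   # 4,194,304
```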
Alternatives and similar repositories for CoLA
Users interested in CoLA are comparing it to the repositories listed below
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated 2 months ago
- Learning adapter weights from task descriptions ☆18 · Updated last year
- ☆49 · Updated last year
- A Sober Look at Language Model Reasoning ☆63 · Updated last week
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated last month
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆35 · Updated last month
- Test-time training on nearest neighbors for large language models ☆41 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆36 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆81 · Updated 3 weeks ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆55 · Updated 8 months ago
- ☆15 · Updated 9 months ago
- ☆131 · Updated 10 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆24 · Updated 3 weeks ago
- Official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆19 · Updated 10 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 3 months ago
- ☆51 · Updated last month
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆60 · Updated 8 months ago
- Bayesian low-rank adaptation for large language models ☆23 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆35 · Updated 3 weeks ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated 2 years ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆162 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- ☆48 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆72 · Updated 2 years ago
- An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length… ☆12 · Updated last month
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models": https://arxiv.org/abs/2406.11233… ☆18 · Updated 9 months ago