YefanZhou / TempBalance
[NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training
☆35 · Updated last month
Alternatives and similar repositories for TempBalance
Users interested in TempBalance are comparing it to the repositories listed below.
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆44 · Updated 7 months ago
- ☆49 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated last year
- A Sober Look at Language Model Reasoning ☆52 · Updated last week
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆31 · Updated 7 months ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 6 months ago
- ☆34 · Updated 5 months ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models". https://arxiv.org/abs/2406.11233… ☆18 · Updated 9 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆45 · Updated 7 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆84 · Updated 7 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated last month
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆22 · Updated 3 months ago
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆162 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆104 · Updated 11 months ago
- An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length… ☆12 · Updated 3 weeks ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆32 · Updated 4 months ago
- ☆15 · Updated 9 months ago
- Test-time training on nearest neighbors for large language models ☆41 · Updated last year
- ☆36 · Updated 2 months ago
- ☆26 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆82 · Updated 7 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated last month
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆16 · Updated 6 months ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆59 · Updated 2 months ago
- ☆67 · Updated 3 years ago