UW-Madison-Lee-Lab / Expressive_Power_of_LoRA
Code for "The Expressive Power of Low-Rank Adaptation".
☆20 · Updated last year
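For context on the repository's topic: the paper studies the standard LoRA parameterization, in which a frozen pretrained weight matrix is augmented with a trainable rank-r product BA. The sketch below is not taken from this repository; it is a minimal, generic illustration of that adapter (the class name `LoRALinear` and the default rank/scaling values are illustrative assumptions).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A starts small and random, B starts at zero, so the adapter is initially a no-op.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction B A x.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```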
Alternatives and similar repositories for Expressive_Power_of_LoRA
Users interested in Expressive_Power_of_LoRA are comparing it to the libraries listed below.
- ☆29 · Updated 2 years ago
- ☆20 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆45 · Updated last year
- ☆20 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 · Updated 8 months ago
- Efficient Scaling laws and collaborative pretraining. ☆16 · Updated 5 months ago
- ☆18 · Updated 8 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated 5 months ago
- ☆87 · Updated last year
- [ACL 2023]: Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆24 · Updated last year
- ☆32 · Updated last year
- ☆32 · Updated last year
- ☆27 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- ☆18 · Updated 2 years ago
- ☆51 · Updated last year
- Code for EMNLP'24 paper - On Diversified Preferences of Large Language Model Alignment ☆16 · Updated 11 months ago
- ☆23 · Updated 9 months ago
- ☆27 · Updated 5 months ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 2 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 2 months ago
- ☆17 · Updated 5 months ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year