UW-Madison-Lee-Lab / Expressive_Power_of_LoRA
Code for "The Expressive Power of Low-Rank Adaptation".
☆20 · Updated last year
Alternatives and similar repositories for Expressive_Power_of_LoRA
Users interested in Expressive_Power_of_LoRA are comparing it to the repositories listed below.
- ☆19 · Updated 10 months ago
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆15 · Updated 6 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated last month
- ☆32 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- ☆29 · Updated last year
- ☆18 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales. ☆41 · Updated 3 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Efficient Scaling laws and collaborative pretraining. ☆16 · Updated 4 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆73 · Updated 7 months ago
- Code for the paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" ☆16 · Updated 2 years ago
- Minimum Description Length probing for neural network representations ☆19 · Updated 4 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated last year
- ☆32 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 3 months ago
- ☆25 · Updated 3 months ago
- ☆20 · Updated last year
- ☆45 · Updated last year
- In-context Example Selection with Influences ☆15 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆76 · Updated last year
- ☆28 · Updated 3 months ago
- ☆26 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- ☆18 · Updated 4 months ago
- ☆23 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year