WeiHuang05 / Awesome_Large_Foundation_Model_Theory
Welcome to the 'In Context Learning Theory' Reading Group
☆28 · Updated 4 months ago
Alternatives and similar repositories for Awesome_Large_Foundation_Model_Theory:
Users interested in Awesome_Large_Foundation_Model_Theory are comparing it to the repositories listed below.
- Welcome to the Awesome Feature Learning in Deep Learning Theory Reading Group! This repository serves as a collaborative platform for sch… ☆172 · Updated 2 months ago
- Neural Tangent Kernel Papers ☆106 · Updated 2 months ago
- This repo contains papers, books, tutorials and resources on Riemannian optimization. ☆31 · Updated 3 months ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆53 · Updated last week
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆34 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 2 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆30 · Updated 4 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆97 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 8 months ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆91 · Updated 8 months ago
- PyTorch code for experiments on Linear Transformers ☆20 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- A curated list of Model Merging methods. ☆90 · Updated 6 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆28 · Updated 11 months ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆25 · Updated last month
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di…☆52Updated 5 months ago
- Bayesian Low-Rank Adaptation for Large Language Models☆29Updated 9 months ago
- This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR2023).☆47Updated 9 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024]☆42Updated 4 months ago
- ☆65Updated 3 months ago
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms.☆68Updated last month
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models☆21Updated this week
- An implementation of the penalty-based bilevel gradient descent (PBGD) algorithm and the iterative differentiation (ITD/RHG) methods.☆17Updated 2 years ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆23 · Updated 9 months ago
- ☆50 · Updated last year
- [ICLR 2022] Training L_inf-dist-net with faster acceleration and better training strategies ☆22 · Updated 3 years ago
- Efficient empirical NTKs in PyTorch ☆18 · Updated 2 years ago