quanta-fine-tuning / quanta
(NeurIPS 2024) QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation
☆33 · Updated 11 months ago
Alternatives and similar repositories for quanta
Users interested in quanta are comparing it to the libraries listed below.
- A thorough survey of tensorial neural networks.☆140 · Updated 9 months ago
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective☆64 · Updated 7 months ago
- Official implementation of the Stochastic Taylor Derivative Estimator (STDE), NeurIPS 2024☆122 · Updated 11 months ago
- 😎 A curated list of tensor decomposition resources for model compression.☆84 · Updated last week
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models☆37 · Updated 9 months ago
- ☆33 · Updated last year
- PyTorch implementation of KFAC; a port of https://github.com/tensorflow/kfac/☆26 · Updated last year
- Tensor-Train decomposition in PyTorch☆74 · Updated 8 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral)☆13 · Updated last year
- Summer school materials☆46 · Updated 2 years ago
- ☆13 · Updated 9 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di…☆66 · Updated last year
- Neural Tangent Kernel Papers☆118 · Updated 9 months ago
- Implementation of the LPLR algorithm for matrix compression☆31 · Updated last year
- Official code for Energy Transformer, an efficient Energy-based Transformer variant for graph classificatio…☆25 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024)☆35 · Updated 11 months ago
- Source code for (quasi-)Givens Orthogonal Fine-Tuning, integrated with the peft library☆17 · Updated 7 months ago
- Welcome to the 'In Context Learning Theory' Reading Group☆30 · Updated 11 months ago
- Unofficial implementation of the Selective Attention Transformer☆17 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning☆35 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data☆62 · Updated 2 years ago
- [NeurIPS '25] Multi-Token Prediction Needs Registers☆23 · Updated last month
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule☆63 · Updated 2 years ago
- PyTorch code for experiments on Linear Transformers☆23 · Updated last year
- A collection of optimizer-related papers, data, and repositories☆99 · Updated 11 months ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models"☆32 · Updated last year
- Parallelizing non-linear sequential models over the sequence length☆54 · Updated 4 months ago
- ☆19 · Updated 7 months ago
- Distributed K-FAC preconditioner for PyTorch☆91 · Updated last week
- ☆71 · Updated 10 months ago
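
Many of the fine-tuning repositories above (QuanTA, LoRETTA, SLTrain, Riemannian Preconditioned LoRA) follow the same basic pattern: freeze the pretrained weight and train a small factorized update on top of it. As a point of reference, here is a minimal LoRA-style sketch in PyTorch (illustrative only, with hypothetical class and parameter names, not the official code of any repo listed here); roughly speaking, the tensorized methods above replace the two dense low-rank factors with tensor-train or other tensor-network factorizations.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter factors are trained
        # A is small random, B is zero, so the adapter starts as a no-op.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())


# Usage: wrap a pretrained projection and train only ~2 * r * d adapter parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))  # torch.Size([4, 768])
```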