yifanycc / loretta
[NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
☆39 · Updated Jan 9, 2025
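For context on what the repository's topic looks like in code, below is a minimal sketch of the general idea behind tensor-train (TT) adapters: a frozen linear layer plus a trainable weight update factored into small TT cores, so only a few thousand parameters are fine-tuned. This is not the loretta implementation itself; the two-core factorization, mode splits, rank, and class name are illustrative assumptions, and the actual method's adapters differ in detail.

```python
# Minimal sketch (illustrative assumptions, not the loretta code): a frozen nn.Linear
# plus a weight update reconstructed from two small tensor-train cores.
import torch
import torch.nn as nn


class TTLinearAdapter(nn.Module):
    """Frozen base layer plus a trainable delta-W built from two TT cores."""

    def __init__(self, base: nn.Linear, in_modes=(32, 24), out_modes=(32, 24), tt_rank=4):
        super().__init__()
        assert in_modes[0] * in_modes[1] == base.in_features
        assert out_modes[0] * out_modes[1] == base.out_features
        self.base = base
        for p in self.base.parameters():            # only the tiny TT cores are trained
            p.requires_grad_(False)
        self.in_modes, self.out_modes = in_modes, out_modes
        # Core 1: (o1*i1) x rank, Core 2: rank x (o2*i2). The second core starts at zero
        # so the adapter is a no-op at initialization, as in LoRA-style methods.
        self.core1 = nn.Parameter(1e-3 * torch.randn(out_modes[0] * in_modes[0], tt_rank))
        self.core2 = nn.Parameter(torch.zeros(tt_rank, out_modes[1] * in_modes[1]))

    def delta_weight(self) -> torch.Tensor:
        # Contract the cores, then fold the 4-way tensor back into an (out, in) matrix.
        full = self.core1 @ self.core2                                  # (o1*i1, o2*i2)
        full = full.view(self.out_modes[0], self.in_modes[0],
                         self.out_modes[1], self.in_modes[1])
        full = full.permute(0, 2, 1, 3).contiguous()                    # o1, o2, i1, i2
        return full.view(self.base.out_features, self.base.in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_weight().t()


base = nn.Linear(768, 768)
adapted = TTLinearAdapter(base)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(trainable)  # 6,400 trainable parameters vs. ~590k for the full layer
```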
Alternatives and similar repositories for loretta
Users interested in loretta are comparing it to the repositories listed below.
- ☆43 · Updated Jul 22, 2024
- [EMNLP 24] Source code for the paper "AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning" ☆12 · Updated Dec 15, 2024
- Metamodeling, sensitivity analysis and visualization using the tensor train format ☆21 · Updated Sep 8, 2022
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated Apr 28, 2024
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆29 · Updated Dec 20, 2024
- ☆34 · Updated Aug 23, 2023
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation ☆13 · Updated Sep 2, 2024
- ☆15 · Updated Nov 7, 2024
- (NeurIPS 2024) QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation ☆35 · Updated Nov 18, 2025
- This repo demonstrates the concept of lossless compression with Transformers as encoder and decoder. ☆14 · Updated May 2, 2024
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated Jan 15, 2024
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆32 · Updated Nov 28, 2025
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated Jul 1, 2023
- static-dir files for a simple-server demo with ReactJS visualizations ☆16 · Updated Nov 28, 2018
- ☆129 · Updated Jan 22, 2024
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning ☆19 · Updated May 31, 2025
- [ICLR 2022] Code for paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated May 24, 2023
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆281 · Updated Aug 28, 2025
- [ICLR 2025] Official implementation of paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆23 · Updated Mar 16, 2025
- Compressing Large Language Models using Low Precision and Low Rank Decomposition ☆106 · Updated Nov 24, 2025
- ICLR 2025 ☆30 · Updated May 21, 2025
- ☆26 · Updated Nov 23, 2023
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆61 · Updated Aug 26, 2025
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆25 · Updated Sep 13, 2024
- ☆63 · Updated Oct 17, 2023
- Code for ACL24 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆33 · Updated Feb 19, 2025
- Code and resources for the Lorenz et al. (2021) QNLP paper ☆29 · Updated Jul 20, 2023
- Contains the codebase for the Quantum Natural Language Generation project ☆24 · Updated Nov 2, 2022
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆27 · Updated Apr 9, 2024
- Source code for paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆34 · Updated Jun 20, 2024
- ☆30 · Updated Jul 22, 2024
- ☆65 · Updated Dec 16, 2020
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated Jul 15, 2025
- ☆31 · Updated Mar 23, 2024
- Conic10K: A large-scale dataset for closed-vocabulary math problem understanding. Accepted to EMNLP 2023 Findings. ☆31 · Updated Dec 6, 2023
- ☆125 · Updated Jul 6, 2024
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated Sep 24, 2024
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆67 · Updated Apr 15, 2024
- Research in compressing convolutional layers of CNNs using low-rank Tucker tensor decomposition ☆11 · Updated Nov 1, 2023