SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024)
☆39 · Updated Nov 1, 2024
Alternatives and similar repositories for SLTrain
Users that are interested in SLTrain are comparing it to the libraries listed below
- Welcome to the 'In Context Learning Theory' Reading Group ☆30 · Updated Nov 8, 2024
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆17 · Updated Feb 27, 2025
- ☆18 · Updated Jun 4, 2024
- A Dual Framework for Low-rank Tensor Completion ☆17 · Updated Jun 3, 2019
- [NeurIPS 2024] This is the official repository for our paper: "Expanding Sparse Tuning for Low Memory Usage". ☆24 · Updated Nov 8, 2025
- [NeurIPS'23] Binary Classification with Confidence Difference ☆10 · Updated May 13, 2024
- [ICLR 2025 Spotlight] Code release for "Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late In Training" ☆18 · Updated Feb 20, 2025
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆124 · Updated Jul 6, 2025
- [ICCV 2023] Black Box Few-Shot Adaptation for Vision-Language models ☆26 · Updated May 14, 2024
- [ICML 2024] Code release for "On the Emergence of Cross-Task Linearity in Pretraining-Finetuning Paradigm" ☆11 · Updated Feb 20, 2025
- [NeurIPS 2022] "Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks" ☆13 · Updated Nov 11, 2022
- ☆18 · Updated Apr 16, 2025
- ☆20 · Updated Jul 19, 2024
- Riemannian Optimization Using JAX ☆54 · Updated Oct 30, 2023
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated Jun 12, 2024
- BESA, a differentiable weight pruning technique for large language models ☆17 · Updated Mar 4, 2024
- Software package for Hankel structured low-rank approximation ☆37 · Updated Feb 28, 2019
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆118 · Updated Oct 21, 2024
- A curated list of awesome Deep Learning theories that shed light on the mysteries of DL ☆10 · Updated Jul 20, 2018
- ☆70 · Updated Nov 15, 2024
- ☆71 · Updated Jul 11, 2024
- Efficient Riemannian Optimization on Stiefel Manifold via Cayley Transform ☆45 · Updated Apr 26, 2019
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆39 · Updated Jul 18, 2025
- Structured Pruning Adapters in PyTorch ☆19 · Updated Aug 30, 2023
- Implementations of the algorithms described in the paper "On the Convergence Theory for Hessian-Free Bilevel Algorithms" ☆11 · Updated Nov 1, 2024
- Minimalistic port of NanoGUI, claimed to work with the SDL API without external dependencies ☆12 · Updated Sep 4, 2019
- Implementation of the Regularized Nonlinear Acceleration algorithm ☆12 · Updated Oct 4, 2018
- [CVPR2022 Oral] The official code for "TransRank: Self-supervised Video Representation Learning via Ranking-based Transformation Recognit… ☆18 · Updated Aug 1, 2022
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,684 · Updated Oct 28, 2024
- This paper list focuses on the theoretical and empirical analysis of language models, especially large language models (LLMs). The papers… ☆98 · Updated Dec 2, 2024
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Updated Oct 30, 2024
- ☆16 · Updated Oct 14, 2020
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆37 · Updated Sep 19, 2024
- ☆25 · Updated Mar 14, 2026
- ☆63 · Updated Oct 17, 2023
- ☆27 · Updated Apr 11, 2023
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆57 · Updated Nov 20, 2024
- ☆12 · Updated Sep 26, 2019
- ☆14 · Updated Mar 18, 2022