S2FT (☆19, updated Jan 3, 2025)
Alternatives and similar repositories for S2FT
Users interested in S2FT are comparing it to the repositories listed below.
- (no description) (☆19, updated Dec 23, 2024)
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning (☆36, updated Apr 4, 2024)
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely (☆24, updated Jun 26, 2024)
- Code repository for the paper "Predicting the Performance of Black-Box LLMs through Self-Queries" (☆12, updated Jan 9, 2025)
- Code for the paper "FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024" (☆13, updated Feb 14, 2025)
- Code for the EMNLP 2024 paper "Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis" (☆12, updated Nov 17, 2024)
- (no description) (☆13, updated Jan 22, 2025)
- BlockRank makes LLMs efficient and scalable for RAG and in-context ranking (☆41, updated Dec 12, 2025)
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning (☆13, updated Sep 2, 2024)
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" (☆14, updated Dec 27, 2023)
- Official implementation of the paper "A deeper look at depth pruning of LLMs" (☆15, updated Jul 24, 2024)
- (no description) (☆40, updated Nov 22, 2025)
- Implementation code for the ACL 2024 paper "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" (☆15, updated Apr 20, 2024)
- Code for the paper "Resources and Evaluations for Multi-Distribution Dense Information Retrieval" (☆16, updated Jan 16, 2024)
- Official PyTorch implementation of "DarwinLM: Evolutionary Structured Pruning of Large Language Models" (☆20, updated Feb 21, 2025)
- Control LLM (☆22, updated Apr 6, 2025)
- (no description) (☆32, updated Oct 13, 2025)
- Exploring Model Kinship for Merging Large Language Models (☆27, updated Apr 16, 2025)
- Code for "Counterfactual Token Generation in Large Language Models", arXiv 2024 (☆32, updated Nov 7, 2024)
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… (☆31, updated Jan 28, 2026)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- (no description) (☆33, updated Jul 8, 2024)
- Quantize transformers to any learned arbitrary 4-bit numeric format (☆51, updated Jan 25, 2026)
- Checkpointable dataset utilities for foundation model training (☆32, updated Jan 29, 2024)
- ICLR 2025 (☆31, updated May 21, 2025)
- Work in progress (☆79, updated Nov 25, 2025)
- Official PyTorch implementation of "Outlier-weighed Layerwise Sampling for LLM Fine-tuning" by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei … (☆35, updated Jun 3, 2025)
- (no description) (☆32, updated Nov 11, 2024)
- [ICML 2025] LoRA fine-tuning directly on quantized models (☆39, updated Nov 25, 2024)
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" (☆32, updated Nov 4, 2024)
- (no description) (☆59, updated Nov 17, 2025)
- Official implementation of "Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning" (☆12, updated Jun 20, 2025)
- Repository for DEER, a dynamic early-exit-in-reasoning method for large reasoning language models (☆179, updated Jul 7, 2025)
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) (☆35, updated Mar 7, 2025)
- Agentic learning powered by AWorld (☆90, updated Feb 13, 2026)
- This work has been accepted to Findings of EMNLP 2025 (☆48, updated Sep 5, 2025)
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" (☆45, updated Apr 22, 2025)
- (no description) (☆42, updated Apr 22, 2025)
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" (☆30, updated Mar 28, 2024)