MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
☆361 · updated Aug 7, 2024
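For context on what "high-rank updating" means here: LoRA constrains the weight update to a product of two low-rank factors, while MoRA spends the same parameter budget on one square matrix wrapped in non-trainable compress/decompress maps, which allows a higher-rank update. The following is a minimal NumPy sketch, not the repository's code; the truncation/zero-pad compress and decompress used below is only the simplest of the variants discussed in the paper, chosen for illustration.

```python
import numpy as np

# Toy sizes: hidden dimension d and LoRA rank r.
d, r = 16, 4
rng = np.random.default_rng(0)

# LoRA: delta_W = B @ A uses 2*d*r trainable parameters,
# and rank(delta_W) can never exceed r.
A = rng.normal(size=(r, d))
B = rng.normal(size=(d, r))
delta_lora = B @ A
rank_lora = np.linalg.matrix_rank(delta_lora)

# MoRA: one square trainable matrix M of size r_hat x r_hat,
# with r_hat chosen so that r_hat**2 <= 2*d*r (same budget).
# Compress f and decompress g are fixed (non-trainable) maps;
# here: keep the first r_hat coordinates, then zero-pad back.
r_hat = int(np.sqrt(2 * d * r))   # 11 when d=16, r=4 (121 <= 128 params)
M = rng.normal(size=(r_hat, r_hat))
f = np.eye(r_hat, d)              # compress: R^d -> R^r_hat (truncation)
g = np.eye(d, r_hat)              # decompress: R^r_hat -> R^d (zero-pad)
delta_mora = g @ M @ f            # effective d x d weight update

rank_mora = np.linalg.matrix_rank(delta_mora)
print(rank_lora, rank_mora)       # 4 11
```

With the same number of trainable parameters, the square-matrix update reaches rank 11 where the LoRA update is capped at 4; the paper argues this helps on memory-intensive tasks.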
Alternatives and similar repositories for MoRA
Users interested in MoRA are comparing it to the repositories listed below.
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆472 · updated Apr 21, 2024
- KURE: an embedding model specialized for Korean retrieval, developed at Korea University ☆210 · updated Apr 4, 2026
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆419 · updated Jun 30, 2025
- ☆232 · updated Jun 24, 2024
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆958 · updated Mar 24, 2026
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆205 · updated Jul 17, 2024
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,564 · updated Mar 5, 2026
- Code that blocks an LLM from generating foreign-language tokens ☆85 · updated Aug 7, 2025
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation ☆10 · updated May 19, 2025
- Code for the paper "Patch-Level Training for Large Language Models" ☆98 · updated Nov 10, 2025
- [KO-Platy🥮] A KO-platypus model built by fine-tuning llama-2-ko on the Korean-Open-platypus dataset ☆73 · updated Aug 24, 2025
- '내마리' is a chatbot that empathizes with you by listening to your story, grasps its context, and asks deeper follow-up questions ☆14 · updated Sep 9, 2023
- GRadient-INformed MoE ☆264 · updated Sep 25, 2024
- Code repository for the CURLoRA research paper: stable LLM continual fine-tuning and catastrophic-forgetting mitigation ☆53 · updated Aug 28, 2024
- A collection of public Korean instruction datasets for training language models ☆456 · updated Apr 13, 2025
- ICLR 2025 ☆31 · updated May 21, 2025
- Modified beam search with periodic restarts ☆12 · updated Sep 12, 2024
- ☆125 · updated Jul 6, 2024
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆164 · updated Apr 13, 2025
- A multi-domain reasoning benchmark for Korean language models ☆206 · updated Oct 17, 2024
- Implementation of DoRA ☆307 · updated Jun 7, 2024
- Evolves LLM training instructions from English into any language ☆120 · updated Sep 15, 2023
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · updated Sep 2, 2024
- ☆220 · updated Nov 25, 2025
- Tools for merging pretrained large language models ☆6,973 · updated Mar 15, 2026
- ☆67 · updated Mar 21, 2024
- ☆20 · updated Jul 24, 2024
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆455 · updated May 13, 2025
- Benchmark in Korean Context ☆138 · updated Sep 26, 2023
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆510 · updated Aug 26, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,683 · updated Oct 28, 2024
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning ☆19 · updated May 31, 2025
- DeMo: Decoupled Momentum Optimization ☆198 · updated Dec 2, 2024
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · updated Dec 4, 2025
- A Korean LLM built by fine-tuning KoAlpaca with the IA3 method ☆69 · updated Aug 21, 2023
- ☆71 · updated Jul 11, 2024
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,236 · updated May 8, 2024
- Gugugo: an open-source Korean translation model project ☆84 · updated Apr 7, 2024
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆390 · updated Jul 9, 2024