TUDB-Labs / mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
☆305 · Updated last month
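For orientation: the pattern mLoRA is built to accelerate is training many LoRA adapters against a single shared, frozen base model. The sketch below illustrates that general multi-adapter pattern with Hugging Face PEFT; it is a minimal illustration of the concept, not mLoRA's own API, and the model name and LoRA hyperparameters are placeholder choices.

```python
# Illustrative sketch only: the multi-adapter LoRA pattern that mLoRA
# parallelizes, expressed with Hugging Face PEFT rather than mLoRA's API.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# One frozen base model shared by every adapter (placeholder model name).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)

# Two task-specific LoRA configs; ranks and target modules are arbitrary examples.
cfg_a = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
cfg_b = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])

model = get_peft_model(base, cfg_a, adapter_name="task_a")
model.add_adapter("task_b", cfg_b)

# Train adapters one batch at a time by switching the active adapter;
# mLoRA's "factory" runs such per-adapter batches concurrently instead.
model.set_adapter("task_a")
# ... optimizer step on a task-A batch ...
model.set_adapter("task_b")
# ... optimizer step on a task-B batch ...
```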
Alternatives and similar repositories for mLoRA:
Users who are interested in mLoRA are comparing it to the libraries listed below.
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆545 · Updated 3 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆333 · Updated last month
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆314 · Updated 11 months ago
- A repository sharing the literature on long-context large language models, including the methodologies and the evaluation benchmarks ☆260 · Updated 8 months ago
- Collection of training data management explorations for large language models ☆321 · Updated 8 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆152 · Updated 7 months ago
- LongBench v2 and LongBench (ACL 2024) ☆824 · Updated 2 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆424 · Updated 5 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆216 · Updated last week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆395 · Updated 5 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models ☆355 · Updated 6 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆317 · Updated last year
- Related works and background techniques for OpenAI o1 ☆217 · Updated 2 months ago
- This repository has moved to https://github.com/TUDB-Labs/MoE-PEFT ☆22 · Updated 7 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆247 · Updated 3 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆248 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆615 · Updated this week
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆357 · Updated 2 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆255 · Updated 2 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆181 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 6 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆181 · Updated 5 months ago
- A highly capable 2.4B-parameter lightweight LLM trained on only 1T tokens of pre-training data, with all details released ☆167 · Updated 2 weeks ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight); see the sketch after this list ☆337 · Updated 2 months ago
- Fast inference from large language models via speculative decoding ☆700 · Updated 7 months ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆216 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆603 · Updated 2 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆945 · Updated 3 months ago
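Because PiSSA (listed above) comes down to a concrete initialization rule, a short sketch can make the idea precise: factor the pretrained weight with an SVD, turn the top-r singular triplets into the trainable low-rank pair, and freeze the residual. The snippet below is a minimal reading of the paper's idea in plain PyTorch, not the official implementation; `pissa_init` and all shapes are illustrative.

```python
import torch

def pissa_init(W: torch.Tensor, r: int):
    """Split W into trainable low-rank factors (A, B) and a frozen residual."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = torch.sqrt(S[:r])
    A = U[:, :r] * sqrt_S            # (out, r): scaled principal left vectors
    B = sqrt_S[:, None] * Vh[:r, :]  # (r, in):  scaled principal right vectors
    W_res = W - A @ B                # frozen residual carries the rest of W
    return A, B, W_res

W = torch.randn(1024, 1024)          # stand-in for a pretrained weight
A, B, W_res = pissa_init(W, r=16)
# The forward pass becomes W_res @ x + A @ (B @ x), with only A and B trainable.
print(torch.dist(W, W_res + A @ B))  # ~0: the split reconstructs W exactly
```

Unlike standard LoRA, which starts the adapter contribution at zero, this initialization starts training from the most energetic directions of W, which is the motivation the paper gives for faster convergence.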