TemporaryLoRA / Temp-LoRA
☆105 · Updated last year
Alternatives and similar repositories for Temp-LoRA
Users interested in Temp-LoRA are comparing it to the libraries listed below.
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆250 · Updated 6 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆28 · Updated 10 months ago
- ☆48 · Updated last year
- Repository of LV-Eval Benchmark ☆67 · Updated 9 months ago
- [SIGIR'24] The official implementation code of MOELoRA ☆168 · Updated 11 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆87 · Updated 4 months ago
- ☆63 · Updated 7 months ago
- Counting-Stars (★) ☆83 · Updated 3 weeks ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆122 · Updated 7 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆160 · Updated this week
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆264 · Updated 9 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆185 · Updated 8 months ago
- ☆203 · Updated 4 months ago
- Official repository of “Training on the Benchmark Is Not All You Need” ☆34 · Updated 5 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆152 · Updated 3 weeks ago
- ☆101 · Updated 8 months ago
- Reformatted Alignment ☆113 · Updated 9 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆176 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆113 · Updated 2 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆149 · Updated 4 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆67 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆133 · Updated last year
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆127 · Updated this week
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated 3 weeks ago
- Collection of papers for scalable automated alignment ☆91 · Updated 8 months ago
- The official repository of the Omni-MATH benchmark ☆84 · Updated 6 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 5 months ago