mst272 / simple-lora-plus
A simple implementation of LoRA+: Efficient Low Rank Adaptation of Large Models
☆9 · Updated last year
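LoRA+ keeps the usual LoRA update (W + BA) but trains the B matrices with a larger learning rate than the A matrices, which the paper argues speeds up convergence and improves final quality. Below is a minimal sketch of that idea in PyTorch, assuming PEFT-style parameter names (`lora_A` / `lora_B`); the function name, `base_lr`, and the `lr_ratio` default are illustrative and not taken from this repository:

```python
import torch

def build_lora_plus_optimizer(model, base_lr=2e-4, lr_ratio=16.0, weight_decay=0.0):
    """Assign a higher learning rate to LoRA B matrices than to LoRA A matrices,
    following the core LoRA+ idea (B gets base_lr * lr_ratio).

    Note: this sketch ignores any trainable parameters that are not LoRA A/B
    matrices (e.g. modules_to_save); a full setup would add a group for them.
    """
    a_params, b_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if "lora_A" in name:      # naming convention assumed from PEFT-style modules
            a_params.append(param)
        elif "lora_B" in name:
            b_params.append(param)

    return torch.optim.AdamW(
        [
            {"params": a_params, "lr": base_lr},
            {"params": b_params, "lr": base_lr * lr_ratio},
        ],
        weight_decay=weight_decay,
    )
```

The ratio (often set around 16 in the paper's experiments) is the only extra hyperparameter over plain LoRA.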
Alternatives and similar repositories for simple-lora-plus
Users interested in simple-lora-plus are comparing it to the libraries listed below
- ☆15 · Updated last year
- Pretrain, decay, SFT a CodeLLM from scratch 🧙‍♂️ ☆35 · Updated last year
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆95 · Updated last month
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆50 · Updated 10 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆120 · Updated 7 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆167 · Updated 10 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆147 · Updated 2 months ago
- ☆23 · Updated 2 weeks ago
- ☆89 · Updated last week
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting ☆14 · Updated 3 months ago
- This is a repo for showcasing using MCTS with LLMs to solve gsm8k problems ☆82 · Updated 2 months ago
- Reproduction of the complete process of DeepSeek-R1 on small-scale models, including Pre-training, SFT, and RL. ☆26 · Updated 2 months ago
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆18 · Updated 2 weeks ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆117 · Updated last month
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆186 · Updated this week
- This is an implementation of the paper Improve Mathematical Reasoning in Language Models by Automated Process Supervision from google de… ☆32 · Updated 2 months ago
- Blog posts, reading reports, and code examples for AGI/LLM-related knowledge. ☆39 · Updated 4 months ago
- ☆60 · Updated 2 weeks ago
- ☆105 · Updated 11 months ago
- Efficient Mixture of Experts for LLM Paper List ☆68 · Updated 5 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 3 months ago
- ☆108 · Updated 6 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆166 · Updated last week
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆49 · Updated 3 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆73 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆69 · Updated 3 months ago
- ☆36 · Updated last month
- ☆29 · Updated 3 months ago
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆21 · Updated 9 months ago
- ☆138 · Updated 10 months ago