pjlab-sys4nlp / llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
☆967 · Updated 6 months ago
Alternatives and similar repositories for llama-moe
Users interested in llama-moe are comparing it to the libraries listed below.
- LongBench v2 and LongBench (ACL '25 & '24) ☆903 · Updated 5 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- Fast inference from large language models via speculative decoding ☆762 · Updated 10 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆897 · Updated 4 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,026 · Updated 8 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆731 · Updated 8 months ago
- Best practice for training LLaMA models in Megatron-LM ☆656 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆263 · Updated 10 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 · Updated 6 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,325 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate and dynamic sparse attention calculation… ☆1,055 · Updated last week
- Codebase for Merging Language Models (ICML 2024) ☆832 · Updated last year
- ☆942 · Updated 4 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆456 · Updated 8 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆332 · Updated 2 years ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆800 · Updated last week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆639 · Updated 5 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,180 · Updated last year
- A simple and effective LLM pruning approach. ☆763 · Updated 10 months ago
- A series of technical reports on Slow Thinking with LLMs ☆699 · Updated 2 weeks ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆348 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆717 · Updated 3 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- Distributed trainer for LLMs ☆577 · Updated last year
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,172 · Updated this week
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆332 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆356 · Updated last week
- ☆319 · Updated 11 months ago