pjlab-sys4nlp / llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
☆962 · Updated 5 months ago
Alternatives and similar repositories for llama-moe
Users interested in llama-moe are comparing it to the repositories listed below
- LongBench v2 and LongBench (ACL 2024) ☆888 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆611 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆727 · Updated 8 months ago
- Fast inference from large language models via speculative decoding ☆745 · Updated 9 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆896 · Updated 3 months ago
- Best practice for training LLaMA models in Megatron-LM ☆654 · Updated last year
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,277 · Updated this week
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆554 · Updated 5 months ago
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆263 · Updated 10 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate, dynamic sparse computation of the attention… ☆1,040 · Updated this week
- Codebase for Merging Language Models (ICML 2024) ☆824 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆679 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,249 · Updated 2 months ago
- O1 Replication Journey ☆1,990 · Updated 4 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆450 · Updated 7 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆339 · Updated last year
- ☆939 · Updated 3 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 7 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,014 · Updated 7 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 9 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆755 · Updated last week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆705 · Updated 2 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,534 · Updated last year
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training developed by Alibaba Cloud. ☆1,113 · Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆631 · Updated 4 months ago
- ☆319 · Updated 10 months ago
- Large Reasoning Models ☆804 · Updated 6 months ago
- Rectified Rotary Position Embeddings ☆367 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆356 · Updated 4 months ago