microsoft / WizardLM2
☆59 · Updated last year
Alternatives and similar repositories for WizardLM2
Users interested in WizardLM2 are comparing it to the repositories listed below.
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 8 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 10 months ago
- ☆315 · Updated 8 months ago
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆97 · Updated last year
- An open-source LLM based on an MoE structure. ☆58 · Updated 11 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆138 · Updated last year
- ☆223 · Updated last week
- ☆121 · Updated 11 months ago
- Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder ☆45 · Updated last year
- A toolkit for running on-device large language models (LLMs) in apps ☆72 · Updated 11 months ago
- ☆51 · Updated 10 months ago
- GLM Series Edge Models ☆141 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 11 months ago
- ☆40 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- ☆83 · Updated 2 weeks ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆99 · Updated last week
- [ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents ☆202 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆248 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 6 months ago
- ☆157 · Updated 10 months ago
- ☆76 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆127 · Updated 4 months ago
- ☆157 · Updated 9 months ago
- ☆94 · Updated 6 months ago
- Deploys a lightweight, full OpenAI-compatible API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆42 · Updated 10 months ago
- FuseAI Project ☆87 · Updated 4 months ago
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆135 · Updated 2 weeks ago