victorsungo / WizardLM
Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder
☆44 · Updated last year
Alternatives and similar repositories for WizardLM
Users interested in WizardLM are comparing it to the libraries listed below.
- FuseAI Project ☆87 · Updated 9 months ago
- ☆95 · Updated 11 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆191 · Updated last year
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆98 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Langchain implementation of HuggingGPT ☆133 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆122 · Updated last year
- ☆126 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- A fine-tuned LLaMA that is good at arithmetic tasks ☆178 · Updated 2 years ago
- Open Implementations of LLM Analyses ☆107 · Updated last year
- ☆320 · Updated last year
- Fast LLM training codebase with dynamic strategy choosing [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆259 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- ☆129 · Updated last year
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆42 · Updated last year
- [ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents ☆219 · Updated 4 months ago
- Code and data for CoachLM, an automatic instruction revision approach for LLM instruction tuning ☆60 · Updated last year
- ☆78 · Updated last year
- Reformatted Alignment ☆112 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- An implementation of Everything of Thoughts (XoT) ☆152 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year