victorsungo / WizardLM
Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder
☆45 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for WizardLM
- FuseAI Project ☆76 · Updated 2 months ago
- Expert Specialized Fine-Tuning ☆144 · Updated last month
- Reformatted Alignment ☆112 · Updated last month
- ☆77 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆127 · Updated 5 months ago
- Data preparation code for CrystalCoder 7B LLM ☆42 · Updated 6 months ago
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆92 · Updated last year
- Implementation of the LongRoPE paper: "Extending LLM Context Window Beyond 2 Million Tokens" ☆124 · Updated 3 months ago
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆66 · Updated 11 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆34 · Updated 10 months ago
- ☆116 · Updated 5 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆75 · Updated 9 months ago
- ☆53 · Updated 5 months ago
- ☆283 · Updated last month
- ☆78 · Updated 6 months ago
- Data preparation code for Amber 7B LLM ☆82 · Updated 6 months ago
- Official repo of Rephrase-and-Respond: data, code, and evaluation ☆96 · Updated 3 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆73 · Updated 9 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆61 · Updated 11 months ago
- A Fine-tuned LLaMA that is Good at Arithmetic Tasks ☆174 · Updated last year
- ☆41 · Updated 2 months ago
- Pre-training code for CrystalCoder 7B LLM ☆53 · Updated 6 months ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama (https://arxiv.org/abs/2408.09333v2) ☆98 · Updated 2 months ago
- Code and data for CoachLM, an automatic instruction-revision approach for LLM instruction tuning ☆58 · Updated 7 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆96 · Updated 4 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆190 · Updated 3 weeks ago
- ☆57 · Updated last month
- A pipeline parallel training script for LLMs ☆83 · Updated this week
- ☆51 · Updated 3 months ago
- ☆73 · Updated 10 months ago