nickrosh / evol-teacher
Open Source WizardCoder Dataset
☆158 · Updated last year
Alternatives and similar repositories for evol-teacher
Users interested in evol-teacher are comparing it to the repositories listed below.
- ☆270 · Updated 2 years ago
- Evol-augment any dataset online ☆59 · Updated last year
- ☆84 · Updated last year
- Run evaluation on LLMs using the HumanEval benchmark ☆411 · Updated last year
- ☆308 · Updated 11 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆249 · Updated 5 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- ☆179 · Updated 2 years ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆243 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆163 · Updated 9 months ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆127 · Updated 7 months ago
- Unofficial implementation of AlpaGasus ☆91 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆377 · Updated 10 months ago
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆87 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆240 · Updated last year
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆261 · Updated 8 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆258 · Updated last year
- ☆106 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆464 · Updated 3 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated last week
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆219 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆53 · Updated 6 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆324 · Updated 7 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 11 months ago
- ☆44 · Updated 11 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆140 · Updated 9 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆261 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆222 · Updated 6 months ago