nickrosh / evol-teacher
Open Source WizardCoder Dataset
☆156 · Updated last year
Alternatives and similar repositories for evol-teacher:
Users interested in evol-teacher are comparing it to the repositories listed below.
- ☆268 · Updated last year
- Evol-augment any dataset online ☆59 · Updated last year
- ☆307 · Updated 9 months ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆126 · Updated 5 months ago
- Run evaluation on LLMs using the HumanEval benchmark ☆400 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 5 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆247 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆246 · Updated 3 months ago
- ☆120 · Updated 9 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆137 · Updated 9 months ago
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- ☆104 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆335 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆218 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆206 · Updated 10 months ago
- Generative Judge for Evaluating Alignment ☆230 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆238 · Updated 4 months ago
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages. ☆51 · Updated 5 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆179 · Updated 5 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ☆374 · Updated 8 months ago
- A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters… ☆186 · Updated 9 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- ☆263 · Updated 7 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆129 · Updated 8 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆147 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆148 · Updated 7 months ago
- ☆84 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆117 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆296 · Updated 6 months ago