bigcode-project / selfcodealign
[NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation
☆323 · Updated 11 months ago
Alternatives and similar repositories for selfcodealign
Users interested in selfcodealign are comparing it to the libraries listed below.
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- Run evaluation on LLMs using human-eval benchmark ☆426 · Updated 2 years ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆479 · Updated 11 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆164 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆254 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆137 · Updated last year
- ☆313 · Updated last year
- ☆278 · Updated 2 years ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆238 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- Fine-tune SantaCoder for Code/Text Generation. ☆196 · Updated 2 years ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆170 · Updated 5 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆186 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆139 · Updated 9 months ago
- ☆131 · Updated 7 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 3 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆157 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆249 · Updated last year
- ☆85 · Updated 2 years ago
- Open Source WizardCoder Dataset ☆163 · Updated 2 years ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆238 · Updated 5 months ago
- evol augment any dataset online ☆61 · Updated 2 years ago
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆473 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆241 · Updated last week
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- ☆159 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Length (ICLR 2024) ☆205 · Updated last year