bigcode-project / selfcodealign
[NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation
☆306 · Updated 4 months ago
Alternatives and similar repositories for selfcodealign
Users interested in selfcodealign are comparing it to the libraries listed below.
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆469 · Updated 5 months ago
- Run evaluation on LLMs using the human-eval benchmark ☆415 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆109 · Updated 8 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆168 · Updated 10 months ago
- Scaling Data for SWE-agents ☆283 · Updated this week
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆148 · Updated 9 months ago
- ☆104 · Updated 2 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 5 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models. ☆229 · Updated 8 months ago
- ☆310 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆221 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago
- ☆319 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆221 · Updated 2 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 3 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer. ☆156 · Updated last year
- Evol-augment any dataset online ☆59 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆582 · Updated 3 weeks ago
- ☆270 · Updated 2 years ago
- Accepted by Transactions on Machine Learning Research (TMLR) ☆129 · Updated 9 months ago
- A compact LLM pretrained in 9 days by using high quality data ☆317 · Updated 3 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆259 · Updated 4 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆498 · Updated 2 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆203 · Updated 2 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆132 · Updated last year