bigcode-project / selfcodealign
[NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation
★316 · Updated 7 months ago
Alternatives and similar repositories for selfcodealign
Users interested in selfcodealign are comparing it to the libraries listed below.
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ★310 · Updated last year
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ★472 · Updated 7 months ago
- RepoQA: Evaluating Long-Context Code Understanding ★117 · Updated 11 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ★172 · Updated 8 months ago
- Run evaluation on LLMs using the HumanEval benchmark ★419 · Updated 2 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ★171 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ★246 · Updated 10 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ★153 · Updated 11 months ago
- Fine-tune SantaCoder for Code/Text Generation. ★195 · Updated 2 years ago
- ★312 · Updated last year
- Experiments on speculative sampling with Llama models ★128 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ★158 · Updated last year
- ★275 · Updated 2 years ago
- A simple unified framework for evaluating LLMs ★248 · Updated 5 months ago
- ★84 · Updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ★240 · Updated 11 months ago
- ★320 · Updated last year
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ★224 · Updated last year
- A compact LLM pretrained in 9 days by using high quality data ★327 · Updated 5 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ★470 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ★241 · Updated 11 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ★215 · Updated this week
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ★206 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ★157 · Updated last month
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ★547 · Updated 2 months ago
- This is work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI. ★130 · Updated 10 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ★254 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ★224 · Updated 2 weeks ago
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ★602 · Updated 6 months ago
- ★160 · Updated last year