xyliu-cs / RISE
Official Implementation of RISE (Reinforcing Reasoning with Self-Verification)
☆27 · Updated last week
Alternatives and similar repositories for RISE
Users who are interested in RISE are comparing it to the libraries listed below.
- ☆31 · Updated this week
- ☆26 · Updated this week
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆60 · Updated last month
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆65 · Updated 9 months ago
- Training and Benchmarking LLMs for Code Preference. ☆33 · Updated 7 months ago
- ☆34 · Updated 2 years ago
- [ICSE'25] Aligning the Objective of LLM-based Program Repair ☆15 · Updated 3 months ago
- 🚀 SWE-bench Goes Live! ☆80 · Updated this week
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆99 · Updated last month
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆55 · Updated 8 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆65 · Updated 8 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆33 · Updated 11 months ago
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs ☆14 · Updated 9 months ago
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆33 · Updated 6 months ago
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆49 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆77 · Updated 11 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆63 · Updated 8 months ago
- Benchmarking LLMs' Emotional Alignment with Humans ☆104 · Updated 4 months ago
- ☆34 · Updated last month
- Code and Results of the Paper: On the Resilience of Multi-Agent Systems with Malicious Agents ☆21 · Updated 4 months ago
- ☆27 · Updated 5 months ago
- ☆36 · Updated 2 weeks ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 11 months ago
- ☆53 · Updated last week
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆57 · Updated 10 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆39 · Updated 3 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated 11 months ago
- Evaluate the Quality of Critique ☆35 · Updated last year
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆29 · Updated 7 months ago