uiuc-kang-lab / leap
LEAP is an end-to-end library designed to support social science research by automatically analyzing user-collected unstructured data in response to their natural language queries. [VLDB 2025]
☆15 · Updated 4 months ago
Alternatives and similar repositories for leap
Users who are interested in leap are comparing it to the libraries listed below.
- Reproducing R1 for Code with Reliable Rewards ☆208 · Updated last month
- Course information for CS598: Topics in LLM Agents (Spring '25), under the direction of Prof. Jiaxuan You (jiaxuan@illinois.edu). ☆30 · Updated last month
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆77 · Updated 10 months ago
- ☆16 · Updated 6 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆19 · Updated last week
- The course website for Large Language Models Methods and Applications ☆28 · Updated last year
- Awesome agents in the era of large language models ☆64 · Updated last year
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆29 · Updated this week
- ☆43 · Updated last year
- ☆52 · Updated last week
- ☆64 · Updated last month
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆125 · Updated last month
- Repo-level code generation papers ☆178 · Updated 2 months ago
- The repo for In-context Autoencoder ☆127 · Updated last year
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆89 · Updated last week
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! ☆54 · Updated 2 months ago
- Making code editing up to 7.7x faster using multi-layer speculation ☆21 · Updated 3 months ago
- Must-read papers on improving efficiency for LLM serving clusters ☆29 · Updated last week
- A new tool-learning benchmark aiming at a balance between stability and realism, based on ToolBench. ☆149 · Updated last month
- GenRM-CoT: Data release for verification rationales ☆61 · Updated 7 months ago
- ☆38 · Updated 2 months ago
- ☆69 · Updated 6 months ago
- A benchmark list for the evaluation of large language models. ☆119 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆244 · Updated 7 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- ☆229 · Updated 9 months ago
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆89 · Updated this week
- A Comprehensive Benchmark for Software Development. ☆106 · Updated last year
- ☆17 · Updated last year