uiuc-kang-lab / leap
LEAP is an end-to-end library that supports social science research by automatically analyzing user-collected unstructured data in response to natural language queries. [VLDB'2025]
☆14 · Updated last month
Alternatives and similar repositories for leap:
Users interested in leap are also comparing it to the repositories listed below.
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications — ☆71 · Updated 2 weeks ago
- The repo for In-context Autoencoder — ☆112 · Updated 10 months ago
- The awesome agents in the era of large language models — ☆59 · Updated last year
- A new tool-learning benchmark aiming at well-balanced stability and reality, based on ToolBench — ☆132 · Updated last week
- The course website for Large Language Models: Methods and Applications — ☆29 · Updated 10 months ago
- The repository containing the source code for Self-Evaluation Guided MCTS for online DPO — ☆293 · Updated 7 months ago
- A list of awesome papers on LLM tool learning — ☆22 · Updated 7 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions — ☆105 · Updated 6 months ago
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback — ☆506 · Updated 4 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models — ☆91 · Updated 7 months ago
- [COLM'24] "Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning" — ☆20 · Updated 8 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" — ☆106 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" — ☆81 · Updated 8 months ago
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang*, Ziha… — ☆115 · Updated 9 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning — ☆89 · Updated 9 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages (https://arxiv.org/abs/2310.19156) — ☆30 · Updated last year
- A survey on harmful fine-tuning attacks for large language models — ☆147 · Updated last week
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization — ☆18 · Updated 7 months ago
- Direct preference optimization with f-divergences — ☆13 · Updated 4 months ago