nuochenpku / COMEDY
This is the official repository for the paper "Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations".
☆18 · Updated 5 months ago
Alternatives and similar repositories for COMEDY:
Users interested in COMEDY are comparing it to the libraries listed below.
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆83 · Updated 2 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 3 months ago
- [EMNLP 2024] Ask-before-Plan: Proactive Language Agents for Real-World Planning ☆20 · Updated 5 months ago
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation ☆27 · Updated last year
- This is the implementation of LeCo ☆32 · Updated 3 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆45 · Updated 4 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- ☆36 · Updated 3 months ago
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Code for the ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆53 · Updated 10 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 4 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models" ☆48 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆110 · Updated 9 months ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimization" ☆16 · Updated 4 months ago
- ☆69 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆33 · Updated last month
- ☆34 · Updated last year
- A framework for editing the CoTs for better factuality ☆50 · Updated last year
- Code for the 2024 arXiv publication "Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models" ☆24 · Updated 9 months ago
- L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? ☆23 · Updated 5 months ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆23 · Updated 8 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 4 months ago
- [ICLR 2025] This is the code repo for our ICLR'25 paper "RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards" ☆33 · Updated 2 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆92 · Updated this week
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 6 months ago
- ☆55 · Updated 5 months ago
- ☆12 · Updated last year
- Merging Generated and Retrieved Knowledge for Open-Domain QA (EMNLP 2023) ☆22 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year