jianghoucheng / AnyEdit
☆15 · Updated last month
Alternatives and similar repositories for AnyEdit
Users interested in AnyEdit are comparing it to the repositories listed below.
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆16 · Updated last week
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆53 · Updated last month
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆63 · Updated last month
- ☆14 · Updated 8 months ago
- [NeurIPS 2024] The implementation of paper "On Softmax Direct Preference Optimization for Recommendation" ☆74 · Updated 5 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆214 · Updated 3 weeks ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆63 · Updated this week
- The latest progress of Personalized Large Language Models (LLMs). ☆18 · Updated last week
- ☆10 · Updated 3 weeks ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆42 · Updated 5 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆18 · Updated 2 weeks ago
- ☆18 · Updated 2 months ago
- ☆17 · Updated last year
- TrustEval: A modular and extensible toolkit for comprehensive trust evaluation of generative foundation models (GenFMs) ☆100 · Updated last week
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 4 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆54 · Updated this week
- Official repository for "Safety Challenges in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large… ☆27 · Updated 2 weeks ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆116 · Updated 7 months ago
- ☆25 · Updated 11 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆35 · Updated 8 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆46 · Updated 4 months ago
- A curated list of resources for activation engineering ☆74 · Updated last week
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆22 · Updated last month
- Repo of "Large Language Model-based Human-Agent Collaboration for Complex Task Solving" (EMNLP 2024 Findings) ☆32 · Updated 7 months ago
- The implementation for ICLR 2025 Oral: From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions. ☆33 · Updated last month
- SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆14 · Updated last month
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆119 · Updated last month
- Code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆32 · Updated 6 months ago
- ☆23 · Updated 6 months ago
- Awesome things about generative recommendation models. ☆35 · Updated 2 weeks ago