ZhangYiqun018 / StickerConv
☆59 · Updated last year
Alternatives and similar repositories for StickerConv
Users that are interested in StickerConv are comparing it to the libraries listed below
- ☆85 · Updated last year
- [EMNLP 2024] "ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models" ☆25 · Updated last year
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆89 · Updated 10 months ago
- ☆129 · Updated last month
- ☆27 · Updated last year
- ☆17 · Updated 2 months ago
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆34 · Updated last year
- Code and data for the paper "Steering Conversational Large Language Models for Long Emotional Support Conversations" along with a UI to v… ☆13 · Updated 8 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆35 · Updated 8 months ago
- MMoE: Multimodal Mixture-of-Experts (EMNLP 2024) ☆12 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆129 · Updated last week
- [ACL24] EmoBench: Evaluating the Emotional Intelligence of Large Language Models ☆100 · Updated 7 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆118 · Updated 6 months ago
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆56 · Updated 6 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆104 · Updated last year
- [ACL'2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆74 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆88 · Updated 11 months ago
- A collection of multimodal dialogue system papers I have read (with some notes) ☆22 · Updated 2 years ago
- This is the repository of our ACL 2024 paper "ESCoT: Towards Interpretable Emotional Support Dialogue Systems". ☆35 · Updated 7 months ago
- ☆111 · Updated 3 months ago
- Code for ACL 2024 paper "Soft Self-Consistency Improves Language Model Agents" ☆25 · Updated last year
- ☆27 · Updated last year
- Code for "CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models" ☆19 · Updated 9 months ago
- [ACL 2024 Oral] This is the code repo for our ACL'24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Mo… ☆39 · Updated last year
- The demo, code, and data of FollowRAG ☆75 · Updated 5 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆71 · Updated 9 months ago
- [ACL'25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models ☆86 · Updated 10 months ago
- [SIGIR'24] The official implementation code of MOELoRA ☆186 · Updated last year
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆150 · Updated last year