xjywhu / Awesome-Multimodal-LLM-for-Code
Multimodal Large Language Model for Code Generation under Multimodal Scenarios
☆42 · Updated this week
Alternatives and similar repositories for Awesome-Multimodal-LLM-for-Code:
Users interested in Awesome-Multimodal-LLM-for-Code are comparing it to the repositories listed below.
- Accepted LLM Papers in NeurIPS 2024 ☆33 · Updated 3 months ago
- A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating Large Multimodal Models through Coding Tasks ☆17 · Updated 2 months ago
- Code for ACL (main) paper "JumpCoder: Go Beyond Autoregressive Coder via Online Modification" ☆24 · Updated 8 months ago
- ☆37 · Updated 7 months ago
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆33 · Updated 6 months ago
- Must-read papers and blogs about parametric knowledge mechanism in LLMs. ☆11 · Updated last week
- ☆18 · Updated 3 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆109 · Updated 2 months ago
- This paper list focuses on the theoretical and empirical analysis of language models, especially large language models (LLMs). The papers… ☆71 · Updated last month
- ☆28 · Updated 3 months ago
- ☆21 · Updated 2 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆44 · Updated last year
- [NeurIPS 2024] GITA: Graph to Image-Text Integration for Vision-Language Graph Reasoning ☆45 · Updated 2 months ago
- The course website for Large Language Models Methods and Applications ☆28 · Updated 8 months ago
- SEA is an automated paper review framework capable of generating comprehensive and high-quality review feedback with high consistency for… ☆58 · Updated 2 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆65 · Updated 3 months ago
- ☆13 · Updated 11 months ago
- This is the repo for the survey of Bias and Fairness in IR with LLMs. ☆47 · Updated 3 months ago
- A survey on harmful fine-tuning attacks for large language models ☆129 · Updated last week
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆88 · Updated 8 months ago
- ☆28 · Updated 7 months ago
- ☆26 · Updated 3 months ago
- Reading notes on papers related to OOD generalization ☆29 · Updated last month
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆34 · Updated 3 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆58 · Updated this week
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆65 · Updated 6 months ago
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆65 · Updated this week
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆51 · Updated 2 months ago
- The awesome agents in the era of large language models ☆59 · Updated last year
- This is the repo for the paper "OS Agents: A Survey on MLLM-based Agents for General Computing Devices Use". ☆189 · Updated this week