coderonion / awesome-llm-and-aigc
🚀🚀🚀 A collection of awesome public projects about Large Language Models (LLM), Vision Language Models (VLM), Vision Language Action (VLA), AI-Generated Content (AIGC), and related datasets and applications.
☆579 · Updated this week
Alternatives and similar repositories for awesome-llm-and-aigc:
Users interested in awesome-llm-and-aigc are comparing it to the libraries listed below.
- A list of awesome AIGC works ☆559 · Updated last year
- An open-source, commercially usable multimodal model supporting bilingual (Chinese and English) visual-text dialogue. ☆361 · Updated last year
- ☆764 · Updated 5 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆506 · Updated last year
- Hugging Face mirror download ☆557 · Updated 2 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆858 · Updated last month
- LLMs interview notes and answers: this repository mainly records interview questions and reference answers for large language model (LLM) algorithm engineers ☆1,186 · Updated last year
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆780 · Updated last year
- Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary) ☆613 · Updated last month
- 🤖 Awesome list of AGI Agents; a curated collection of Agent resources. ☆334 · Updated last year
- Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Support [video/image/multi-image] {sft/conv… ☆395 · Updated last month
- Multimodal Chinese LLaMA & Alpaca large language model (VisualCLA) ☆436 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆587 · Updated last week
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆485 · Updated 9 months ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆255 · Updated last month
- [ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint), based on the CPM foundation models ☆1,052 · Updated 7 months ago
- ☆902 · Updated last year
- Codes for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆271 · Updated last year
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆648 · Updated 9 months ago
- The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision M… ☆494 · Updated 10 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,401 · Updated last week
- Multimodal-GPT ☆1,487 · Updated last year
- ☆757 · Updated 6 months ago
- A collection of multimodal (MM) + Chat resources ☆238 · Updated 3 weeks ago
- Llama3-Tutorial (XTuner, LMDeploy, OpenCompass) ☆500 · Updated 8 months ago
- A curated collection of AGI learning resources (mainly LLM and AIGC), continuously updated ☆324 · Updated last week
- Mainly records multimodal knowledge relevant to large language model (LLM) algorithm (application) engineers ☆118 · Updated 8 months ago
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆287 · Updated last month
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆599 · Updated 2 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆758 · Updated 6 months ago