LC1332 / awesome-colab-project
Awesome Colab Projects Collection
☆29 · Updated last year
Alternatives and similar repositories for awesome-colab-project
Users interested in awesome-colab-project are comparing it to the repositories listed below.
- A simple MLLM that surpasses QwenVL-Max using open-source data only, built on a 14B LLM. ☆38 · Updated last year
- Flow mirror models from JZX AI Labs ☆43 · Updated last year
- GLM Series Edge Models ☆156 · Updated 6 months ago
- Our 2nd-gen LMM ☆34 · Updated last year
- Follow the rapid development of AIGC models and applications. 🚀 ☆81 · Updated 2 years ago
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- Baichuan and Baichuan2 fine-tuning, plus Alpaca fine-tuning ☆33 · Updated 9 months ago
- PICA: a multi-turn empathetic dialogue model ☆97 · Updated 2 years ago
- ☆79 · Updated last year
- Copies the MLP of Llama3 8 times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8b Mo… ☆27 · Updated last year
- The complete training code of the open-source high-performance Llama model, covering the full process from pre-training to RLHF. ☆68 · Updated 2 years ago
- A personal reimplementation of Google's Infini-Transformer using a small 2b model. The project includes both model and train… ☆58 · Updated last year
- ☆235 · Updated last year
- Agent tasks built on LangChain, covering planning of session-scene resources and subtask construction; the task executor includes MCTS. ☆32 · Updated last month
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems ☆84 · Updated last year
- ☆106 · Updated 2 years ago
- Kanchil (the chevrotain) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also align with human preferences. ☆113 · Updated 2 years ago
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- Fast instruction tuning with Llama2 ☆11 · Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆132 · Updated last year
- The simplest reproduction of R1-style results on small models, illustrating what is most essential in O1-like models and DeepSeek R1: "Think is all you need." Experiments suggest that, for strong reasoning ability, the explicit "think" process content is central to AGI/ASI. ☆44 · Updated 10 months ago
- An open-source multimodal large language model based on baichuan-7b ☆72 · Updated 2 years ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Updated last year
- Chinese CLIP models with SOTA performance ☆59 · Updated 2 years ago
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆39 · Updated last year
- A lightweight proxy solution for the Hugging Face Hub ☆48 · Updated 2 years ago
- ☆170 · Updated last year
- The first Chinese Llama2 13B model (base + Chinese dialogue SFT, enabling fluent multi-turn natural-language interaction) ☆91 · Updated 2 years ago
- Deep Reasoning Translation (DRT) Project ☆240 · Updated 3 months ago
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year