PKU-RL / LLaMA-Rider
☆28 · Updated last year

Alternatives and similar repositories for LLaMA-Rider:
Users interested in LLaMA-Rider are comparing it to the libraries listed below.
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆23 · Updated last year
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆60 · Updated 5 months ago
- ☆36 · Updated 6 months ago
- kimi-chat test data ☆7 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- ☆31 · Updated 3 months ago
- ☆17 · Updated last year
- ☆44 · Updated last year
- The official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction" ☆34 · Updated last year
- The code repo for Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization ☆104 · Updated 6 months ago
- A production tool for embodied AI ☆29 · Updated 8 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆102 · Updated 11 months ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama (https://arxiv.org/abs/2408.09333v2) ☆118 · Updated 3 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆36 · Updated 10 months ago
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆26 · Updated 3 months ago
- An implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆98 · Updated last year
- FuseAI Project ☆83 · Updated last month
- An open-source multimodal large language model based on baichuan-7b ☆73 · Updated last year
- zero: training and tuning an LLM from scratch ☆31 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- ☆42 · Updated 2 months ago
- Official implementation for "OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities" (kept updated) ☆57 · Updated 11 months ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆36 · Updated last year
- Connecting humans and agents ☆76 · Updated 3 months ago
- An open-source LLM based on an MoE structure ☆58 · Updated 8 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆36 · Updated 6 months ago
- ☆65 · Updated last year
- Reformatted Alignment ☆114 · Updated 5 months ago
- Code for the NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆37 · Updated 4 months ago
- ☆18 · Updated 7 months ago