PKU-RL / LLaMA-Rider
☆28 · Updated 2 years ago
Alternatives and similar repositories for LLaMA-Rider
Users interested in LLaMA-Rider are comparing it to the repositories listed below.
- The official implementation of the paper "Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction" ☆34 · Updated last year
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- The code repo for Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization ☆128 · Updated last year
- ☆46 · Updated 2 years ago
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆30 · Updated last year
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆135 · Updated last year
- A Production Tool for Embodied AI ☆29 · Updated last year
- LLM Dynamic Planner - Combining LLMs with PDDL planners to solve an embodied task ☆48 · Updated last year
- ☆37 · Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆134 · Updated last year
- ☆99 · Updated last year
- Empirical Study Towards Building an Effective Multi-Modal Large Language Model ☆22 · Updated 2 years ago
- ☆36 · Updated last year
- Official implementation for "OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities" (continually updated) ☆60 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- Fine-tuning LLaMA to follow instructions within 1 hour with 1.2M parameters ☆91 · Updated 2 years ago
- ☆66 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24, Spotlight) ☆67 · Updated 2 years ago
- ☆46 · Updated 7 months ago
- ☆96 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- FuseAI Project ☆87 · Updated last year
- ☆106 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- A reimplementation of KOSMOS-1 from "Language Is Not All You Need: Aligning Perception with Language Models" ☆27 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming ☆294 · Updated last year
- ☆123 · Updated last year
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control ☆66 · Updated 3 weeks ago