opendilab / LLMRiddles
Open-Source Reproduction/Demo of the LLM Riddles Game
☆545Updated last year
Alternatives and similar repositories for LLMRiddles
Users interested in LLMRiddles are comparing it to the libraries listed below
- PsyDI: Towards a Personalized and Progressively In-depth Chatbot for Psychological Measurements. (e.g. MBTI Measurement Agent)☆178Updated 3 months ago
- A Game Demo Powered by ChatGPT Agents☆272Updated 2 years ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a…☆346Updated last year
- Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memo…☆634Updated 2 years ago
- 羊了个羊 + Deep Reinforcement Learning (3 Tiles Game)☆469Updated 8 months ago
- ☆731Updated 2 years ago
- Building an open-ended embodied agent in a battle royale FPS game☆37Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs☆1,471Updated 2 years ago
- A curated list of awesome UI agent resources, encompassing Web, App, OS, and beyond (continually updated)☆247Updated 2 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models.☆695Updated 10 months ago
- TextStarCraft2, a pure-language environment that lets LLMs play StarCraft II☆292Updated 7 months ago
- AgentSims is an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in.☆907Updated 2 years ago
- A collection of recent papers on building autonomous agents, covering two topics: RL-based and LLM-based agents.☆735Updated 11 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmark, demos, leaderboard, papers, docs and models, mainly for Evaluation on LLMs…☆580Updated last week
- ☆121Updated 2 years ago
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models☆513Updated last year
- FlagEval is an evaluation toolkit for AI large foundation models.☆339Updated 7 months ago
- Crowdfunding open-source projects: use OpenReview's high-quality review data to fine-tune a professional review and response LLM.☆202Updated 2 years ago
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models☆484Updated last month
- Research on evaluating and aligning the values of Chinese large language models☆544Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024)☆422Updated last month
- TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles.☆158Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training.☆268Updated 9 months ago
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]☆1,785Updated 4 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese☆794Updated 11 months ago
- Official Pytorch Implementation for MathGLM☆328Updated 2 years ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback☆1,556Updated 2 months ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI☆773Updated last year
- ☆111Updated last month
- The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆446Updated last year