aiwaves-cn / Dive-into-LLMs
The official GitHub repo for the open online course "Dive into LLMs".
☆10Updated last year
Alternatives and similar repositories for Dive-into-LLMs
Users interested in Dive-into-LLMs are comparing it to the repositories listed below.
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs☆39Updated 6 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators☆42Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch☆30Updated this week
- A framework for evolving and testing question-answering datasets with various models.☆16Updated last year
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data"☆17Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling☆50Updated 3 weeks ago
- Codebase for Instruction Following without Instruction Tuning☆34Updated 9 months ago
- ☆33Updated this week
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper☆32Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH☆22Updated 6 months ago
- This is the implementation for the paper "LARGE LANGUAGE MODEL CASCADES WITH MIXTURE OF THOUGHT REPRESENTATIONS FOR COST-EFFICIENT REA…☆23Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024)☆37Updated 5 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization☆14Updated 4 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models☆24Updated 7 months ago
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding☆27Updated last year
- Extensive Self-Contrast Enables Feedback-Free Language Model Alignment☆21Updated last year
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs☆24Updated 9 months ago
- The open source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers"☆20Updated last year
- Code for the paper <SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning>☆48Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…☆50Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning☆99Updated last month
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models…☆34Updated last year
- ☆40Updated 2 weeks ago
- Code for ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets"☆57Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location.☆81Updated 10 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning?☆25Updated 3 months ago
- [NAACL'25] "Revealing the Barriers of Language Agents in Planning"☆12Updated this week
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor☆29Updated last year
- ☆24Updated 5 months ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"☆57Updated 8 months ago