jd-coderepos / awesome-llms
🤖 A collection of AWESOME structured summaries of Large Language Models (LLMs)
☆27 · Updated last year
Alternatives and similar repositories for awesome-llms
Users interested in awesome-llms are comparing it to the libraries listed below.
- Reward Model framework for LLM RLHF ☆61 · Updated last year
- ☆20 · Updated 3 years ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆34 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper ☆78 · Updated 8 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆106 · Updated 8 months ago
- CoNLI: a plug-and-play framework for ungrounded hallucination detection and reduction ☆31 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses" ☆29 · Updated 9 months ago
- Code and dataset for "Learning to Solve Complex Tasks by Talking to Agents" ☆24 · Updated 3 years ago
- ReBase: Training Task Experts through Retrieval-Based Distillation ☆29 · Updated 4 months ago
- ZYN: Zero-Shot Reward Models with Yes-No Questions ☆34 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆54 · Updated 8 months ago
- Evaluation on Logical Reasoning and Abstract Reasoning Challenges ☆27 · Updated last month
- ☆49 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- Code for the paper "Resources and Evaluations for Multi-Distribution Dense Information Retrieval" ☆14 · Updated last year
- An open-source implementation of the base model behind GPT-4 from OpenAI [Language + Multi-Modal] ☆10 · Updated last year
- Minimal LLM scripts for 24 GB VRAM GPUs: training, inference, whatever ☆39 · Updated 2 weeks ago
- LangChain, Llama2-Chat, and zero- and few-shot prompting are used to generate synthetic datasets for IR and RAG system evaluation ☆37 · Updated last year
- EMNLP 2024 "Re-reading Improves Reasoning in Large Language Models": simply repeating the question to get bidirectional understanding… ☆26 · Updated 5 months ago
- Code repository for the blog post "How to Productionize Large Language Models (LLMs)" ☆11 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- Code repo for MathAgent ☆16 · Updated last year
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs ☆36 · Updated last year
- Supervised instruction fine-tuning for LLMs with the HF Trainer and DeepSpeed ☆35 · Updated last year
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆46 · Updated last year
- ☆21 · Updated 3 months ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Tree Prompting: an easy-to-use scikit-learn interface for improved prompting ☆37 · Updated last year