jd-coderepos / awesome-llms
A collection of AWESOME structured summaries of Large Language Models (LLMs)
★ 27 · Updated last year
Alternatives and similar repositories for awesome-llms
Users interested in awesome-llms are comparing it to the libraries listed below:
- ★ 20 · Updated 3 years ago
- Code for our paper Resources and Evaluations for Multi-Distribution Dense Information Retrieval ★ 14 · Updated last year
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ★ 34 · Updated last year
- ★ 43 · Updated 9 months ago
- A curated and updated list of relevant articles and repositories on Reinforcement Learning from AI Feedback (RLAIF) ★ 12 · Updated last year
- Repository containing awesome resources regarding Hugging Face tooling. ★ 47 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ★ 78 · Updated 7 months ago
- Verifiers for LLM Reinforcement Learning ★ 50 · Updated last month
- Code and Dataset for Learning to Solve Complex Tasks by Talking to Agents ★ 24 · Updated 2 years ago
- ★ 15 · Updated last month
- ★ 38 · Updated 10 months ago
- ★ 24 · Updated last year
- The official github repo for the open online courses: "Dive into LLMs". ★ 10 · Updated last year
- Explore the use of DSPy for extracting features from PDFs ★ 39 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ★ 27 · Updated last year
- Reward Model framework for LLM RLHF ★ 61 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ★ 64 · Updated last year
- Hugging Face and Pyserini interoperability ★ 20 · Updated last year
- Code repo for MathAgent ★ 16 · Updated last year
- This repo contains code for the paper "Psychologically-informed chain-of-thought prompts for metaphor understanding in large language mod… ★ 14 · Updated 2 years ago
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs. EMNLP 2024 ★ 20 · Updated 6 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ★ 49 · Updated 10 months ago
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset ★ 16 · Updated last year
- ★ 13 · Updated 6 months ago
- ★ 12 · Updated 7 months ago
- Collection of resources for RL and Reasoning ★ 25 · Updated 3 months ago
- FBI: Finding Blindspots in LLM Evaluations with Interpretable Checklists ★ 29 · Updated 2 months ago
- Code Repository for Blog - How to Productionize Large Language Models (LLMs) ★ 11 · Updated last year
- ★ 14 · Updated last year
- ★ 25 · Updated 2 years ago