lxx0628 / Prompting-Framework-Survey
A curated list of awesome publications and researchers on prompting frameworks, updated and maintained by the Intelligent System Security (IS2) group.
☆85 · Updated 6 months ago
Alternatives and similar repositories for Prompting-Framework-Survey
Users interested in Prompting-Framework-Survey are comparing it to the libraries listed below.
- Open Implementations of LLM Analyses ☆105 · Updated 10 months ago
- Codebase accompanying the "Summary of a Haystack" paper. ☆79 · Updated 10 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 7 months ago
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆135 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆111 · Updated 9 months ago
- A curated list on the role of small models in the LLM era ☆103 · Updated 10 months ago
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆118 · Updated 6 months ago
- Official repo of Respond-and-Respond: data, code, and evaluation ☆103 · Updated last year
- Google DeepMind's PromptBreeder for automated prompt engineering, implemented in LangChain Expression Language ☆135 · Updated last year
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆73 · Updated 11 months ago
- LangChain implementation of HuggingGPT ☆132 · Updated 2 years ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆99 · Updated last week
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Reward Model framework for LLM RLHF ☆61 · Updated 2 years ago
- An LLM reads a paper and produces a working prototype ☆58 · Updated 4 months ago
- Learning to Program with Natural Language ☆6 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆114 · Updated last month
- A list of LLM benchmark frameworks. ☆68 · Updated last year
- Experimental code for StructuredRAG: JSON Response Formatting with Large Language Models ☆111 · Updated 4 months ago
- Interactive coding assistant for data scientists and machine learning developers, powered by large language models. ☆95 · Updated 10 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆108 · Updated 2 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating repository-level test generation with LLMs ☆52 · Updated last week