lxx0628 / Prompting-Framework-Survey
A curated list of awesome publications and researchers on prompting frameworks, updated and maintained by the Intelligent System Security (IS2) group.
☆86 · Updated 10 months ago
Alternatives and similar repositories for Prompting-Framework-Survey
Users interested in Prompting-Framework-Survey are comparing it to the libraries listed below.
- Open Implementations of LLM Analyses ☆108 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated last year
- Official repo of Respond-and-Respond: data, code, and evaluation ☆104 · Updated last year
- ☆43 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆124 · Updated last month
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- A curated list on the role of small models in the LLM era ☆110 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆194 · Updated 7 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- Reward Model framework for LLM RLHF ☆61 · Updated 2 years ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆100 · Updated 2 years ago
- Score LLM pretraining data with classifiers ☆55 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Backtracing: Retrieving the Cause of the Query, EACL 2024 Long Paper, Findings. ☆92 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
- Listing all reported open-source LLMs that achieve a higher score than proprietary, paid OpenAI models (ChatGPT, GPT-4) ☆68 · Updated last year
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Official implementation of InstructZero, the first framework to optimize bad prompts for ChatGPT (API LLMs) and finally obtain good prompts… ☆197 · Updated last year
- ☆81 · Updated last month
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆117 · Updated last month
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 4 months ago
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆137 · Updated 2 years ago
- ☆86 · Updated last year
- LangCode - Improving alignment and reasoning of large language models (LLMs) with natural language embedded program (NLEP). ☆48 · Updated 2 years ago