Saibo-creator / Awesome-LLM-Constrained-Decoding
A curated list of papers related to constrained decoding of LLM, along with their relevant code and resources.
⭐287 · Updated last month
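For readers new to the topic, here is a minimal, hypothetical sketch of the core idea behind constrained decoding: a Hugging Face `LogitsProcessor` masks out every token the constraint does not allow before sampling, so the model can only emit tokens that keep the output valid. The `AllowListLogitsProcessor` class and the digit-only "grammar" below are illustrative assumptions, not code from any repository in this list; real grammar-constrained decoders (such as the CFG/EBNF library listed first below) track a full parser state instead of a fixed allow-list.

```python
# Hypothetical sketch: vocabulary-constrained decoding via logit masking.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class AllowListLogitsProcessor(LogitsProcessor):
    """Illustrative processor: only tokens in `allowed_token_ids` may be generated."""
    def __init__(self, allowed_token_ids):
        self.allowed = torch.tensor(sorted(allowed_token_ids))

    def __call__(self, input_ids, scores):
        # Push every disallowed token's logit to -inf so softmax gives it zero mass.
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = 0.0
        return scores + mask

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy "grammar": only digit-only tokens plus EOS are permitted.
allowed = [tokenizer.eos_token_id] + [
    tid for tok, tid in tokenizer.get_vocab().items() if tok.isdigit()
]

inputs = tokenizer("The answer is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=8,
    logits_processor=LogitsProcessorList([AllowListLogitsProcessor(allowed)]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```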
Alternatives and similar repositories for Awesome-LLM-Constrained-Decoding
Users interested in Awesome-LLM-Constrained-Decoding are comparing it to the libraries listed below.
- 🤗 A specialized library for integrating context-free grammars (CFG) in EBNF with the Hugging Face Transformers library ⭐128 · Updated 7 months ago
- Reproducible, flexible LLM evaluations ⭐264 · Updated 3 weeks ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ⭐318 · Updated last year
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ⭐355 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ⭐263 · Updated last year
- ⭐239 · Updated last year
- A simple unified framework for evaluating LLMs ⭐254 · Updated 7 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ⭐215 · Updated 2 months ago
- Open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality ⭐218 · Updated last year
- Awesome LLM Self-Consistency: a curated list of work on self-consistency in Large Language Models ⭐111 · Updated 3 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ⭐219 · Updated 5 months ago
- RewardBench: the first evaluation tool for reward models. ⭐653 · Updated 5 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ⭐144 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ⭐218 · Updated 2 weeks ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ⭐258 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ⭐362 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ⭐189 · Updated last year
- A Survey on Data Selection for Language Models ⭐252 · Updated 6 months ago
- The HELMET Benchmark ⭐182 · Updated 3 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ⭐292 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ⭐477 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ⭐237 · Updated 2 months ago
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ⭐284 · Updated 7 months ago
- A benchmark list for the evaluation of large language models. ⭐149 · Updated 2 months ago
- A Comprehensive Benchmark for Software Development. ⭐118 · Updated last year
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ⭐218 · Updated 2 years ago
- A curated collection of LLM reasoning and planning resources, including key papers, limitations, benchmarks, and additional learning mate… ⭐305 · Updated 8 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ⭐157 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ⭐441 · Updated last year
- Automatic evals for LLMs ⭐557 · Updated 4 months ago