Saibo-creator / Awesome-LLM-Constrained-Decoding
A curated list of papers related to constrained decoding of LLMs, along with their relevant code and resources.
⭐280 · Updated 2 weeks ago
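The theme shared by the list below, constrained decoding, can be summed up in a few lines: at each generation step the decoder masks out tokens the constraint forbids (e.g., tokens that would violate a grammar) before choosing the next token. A minimal pure-Python sketch of the idea, with all names (`constrained_greedy_decode`, `toy_logits`, `toy_allowed`) illustrative rather than taken from any library listed here:

```python
EOS = 2  # illustrative end-of-sequence token id

def constrained_greedy_decode(logits_fn, allowed_fn, max_steps=10):
    """Greedily pick the highest-scoring token among those the
    constraint (allowed_fn) permits given the prefix so far."""
    prefix = []
    for _ in range(max_steps):
        scores = logits_fn(prefix)    # dict: token_id -> score
        allowed = allowed_fn(prefix)  # set of permitted token ids
        # Mask: keep only tokens the constraint allows at this step.
        masked = {t: s for t, s in scores.items() if t in allowed}
        if not masked:                # no valid continuation
            break
        tok = max(masked, key=masked.get)
        prefix.append(tok)
        if tok == EOS:
            break
    return prefix

# Toy example: vocabulary {0: "a", 1: "b", 2: "<eos>"}; the constraint
# forces the pattern a b a b, then <eos> after four tokens.
def toy_logits(prefix):
    return {0: 1.0, 1: 2.0, 2: 0.5}  # unconstrained, "b" always wins

def toy_allowed(prefix):
    if len(prefix) >= 4:
        return {EOS}                                  # must end now
    return {0} if len(prefix) % 2 == 0 else {1}       # alternate a/b

print(constrained_greedy_decode(toy_logits, toy_allowed))  # -> [0, 1, 0, 1, 2]
```

Real implementations (such as grammar-constrained decoding over EBNF grammars) replace `toy_allowed` with an incremental parser that reports which tokens keep the output inside the grammar.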
Alternatives and similar repositories for Awesome-LLM-Constrained-Decoding
Users interested in Awesome-LLM-Constrained-Decoding are comparing it to the libraries listed below.
- 🤗 A specialized library for integrating context-free grammars (CFG) in EBNF with Hugging Face Transformers ⭐125 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ⭐254 · Updated 6 months ago
- ⭐239 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ⭐317 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ⭐256 · Updated 11 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ⭐217 · Updated last year
- Reproducible, flexible LLM evaluations ⭐257 · Updated last week
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach ⭐215 · Updated last month
- Reproducing R1 for Code with Reliable Rewards ⭐262 · Updated 5 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks 🧮✨ ⭐259 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ⭐218 · Updated 4 months ago
- Async pipelined version of Verl ⭐121 · Updated 6 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ⭐159 · Updated 2 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ⭐361 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency in large language models ⭐109 · Updated 3 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ⭐353 · Updated last year
- A benchmark list for evaluation of large language models ⭐145 · Updated last month
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ⭐210 · Updated last month
- The HELMET Benchmark ⭐178 · Updated 2 months ago
- Data and code for Program of Thoughts [TMLR 2023] ⭐289 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ⭐154 · Updated last year
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ⭐61 · Updated 4 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ⭐174 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ⭐477 · Updated last year
- A comprehensive benchmark for software development ⭐115 · Updated last year
- Automatic evals for LLMs ⭐550 · Updated 4 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ⭐263 · Updated 3 months ago
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" ⭐441 · Updated last year
- A Comprehensive Survey on Long Context Language Modeling ⭐197 · Updated 3 months ago
- AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ⭐296 · Updated this week