xlang-ai / batch-prompting
[EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches.
☆72 · Updated last year
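For orientation, here is a minimal sketch of the batch-prompting idea: pack K questions into a single prompt and parse K indexed answers out of one completion, so one LLM call amortizes the per-request overhead across the whole batch. The `call_llm` stand-in, the `Q[i]`/`A[i]` format, and the helper names are illustrative assumptions, not the repo's actual API.

```python
from typing import Callable, List, Tuple

def build_batch_prompt(few_shot: List[Tuple[str, str]], questions: List[str]) -> str:
    """Pack few-shot exemplars and K test questions into one prompt."""
    lines = []
    # Few-shot block: exemplars demonstrate the indexed Q/A format the model should follow.
    for i, (q, _) in enumerate(few_shot, start=1):
        lines.append(f"Q[{i}]: {q}")
    for i, (_, a) in enumerate(few_shot, start=1):
        lines.append(f"A[{i}]: {a}")
    # Test block: K new questions; answers are expected in the same indexed format.
    for i, q in enumerate(questions, start=1):
        lines.append(f"Q[{i}]: {q}")
    return "\n".join(lines) + "\n"

def parse_batch_answers(completion: str, k: int) -> List[str]:
    """Pull the K indexed answers back out of a single completion."""
    answers = [""] * k
    for line in completion.splitlines():
        line = line.strip()
        if line.startswith("A[") and "]:" in line:
            idx_str, _, rest = line.partition("]:")
            try:
                idx = int(idx_str[2:])
            except ValueError:
                continue
            if 1 <= idx <= k:
                answers[idx - 1] = rest.strip()
    return answers

def batch_prompt(call_llm: Callable[[str], str],
                 few_shot: List[Tuple[str, str]],
                 questions: List[str]) -> List[str]:
    prompt = build_batch_prompt(few_shot, questions)
    completion = call_llm(prompt)  # one API call covers all K questions
    return parse_batch_answers(completion, len(questions))
```

The trade-off is that a single completion must stay well-formed for all K items, so batch size is typically kept small enough that the model reliably reproduces the indexed answer format.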
Alternatives and similar repositories for batch-prompting
Users interested in batch-prompting are comparing it to the libraries listed below.
- Long Context Extension and Generalization in LLMs ☆55 · Updated 7 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆77 · Updated last year
- ☆64 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated last year
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆38 · Updated last week
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch… ☆55 · Updated 3 weeks ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆59 · Updated 7 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆52 · Updated 2 months ago
- ☆73 · Updated 6 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆140 · Updated 6 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 8 months ago
- ☆35 · Updated last year
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 11 months ago
- ☆72 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆36 · Updated 2 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆81 · Updated 9 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 2 months ago
- ☆38 · Updated last year
- ☆65 · Updated 2 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆134 · Updated 7 months ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆44 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆82 · Updated last week
- List of papers on Self-Correction of LLMs ☆72 · Updated 4 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 6 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 7 months ago
- Code for paper "Data-Efficient FineTuning" ☆29 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆45 · Updated 5 months ago
- Codes for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆41 · Updated last year