imagination-research / sot
[ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation
☆167 · Updated last year
Alternatives and similar repositories for sot:
Users interested in sot are comparing it to the libraries listed below.
- Simple extension on vLLM to help you speed up reasoning models without training. ☆148 · Updated this week
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 10 months ago
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆118 · Updated 10 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆301 · Updated last year
- ☆125 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated 11 months ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆115 · Updated 5 months ago
- ☆198 · Updated 5 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆121 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆134 · Updated 5 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 2 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆199 · Updated 9 months ago
- ☆78 · Updated 3 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs. ☆83 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 7 months ago
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆107 · Updated 7 months ago
- A list of LLM benchmark frameworks. ☆66 · Updated last year
- ☆121 · Updated 11 months ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 4 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆274 · Updated last year
- ☆119 · Updated 8 months ago
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆142 · Updated last month
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆161 · Updated 9 months ago
- FuseAI Project ☆85 · Updated 3 months ago
- Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆94 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆172 · Updated last month