gkamradt / LLMTest_NeedleInAHaystack
Simple retrieval from LLMs at various context lengths to measure accuracy
☆2,167 · Updated last year
Alternatives and similar repositories for LLMTest_NeedleInAHaystack
Users interested in LLMTest_NeedleInAHaystack are comparing it to the libraries listed below.
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,668 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,940 · Updated 6 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,293 · Updated 2 weeks ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,550 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated 2 years ago
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,657 · Updated last year
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,667 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,084 · Updated 2 weeks ago
- A library for advanced large language model reasoning ☆2,328 · Updated 8 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,555 · Updated 3 weeks ago
- AllenAI's post-training codebase ☆3,562 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,877 · Updated last week
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,460 · Updated 11 months ago
- Evaluate your LLM's responses with Prometheus and GPT-4 💯 ☆1,043 · Updated 9 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,315 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Updated last year
- Arena-Hard-Auto: An automatic LLM benchmark. ☆994 · Updated 7 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,769 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,699 · Updated last year
- Enforce the output format (JSON Schema, regex, etc.) of a language model ☆1,986 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆826 · Updated 10 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆905 · Updated 4 months ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,404 · Updated 3 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,312 · Updated 8 months ago
- Minimalistic large language model 3D-parallelism training ☆2,544 · Updated last month
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated 11 months ago
- This repo contains the source code for RULER: What's the Real Context Size of Your Long-Context Language Models? ☆1,445 · Updated 2 months ago
- ☆1,033 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆3,151 · Updated 2 months ago