gkamradt / LLMTest_NeedleInAHaystack
Doing simple retrieval from LLM models at various context lengths to measure accuracy
☆2,068 · Updated last year
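The repository's core idea is easy to sketch: hide a short "needle" sentence at a chosen depth inside long filler context, ask the model to retrieve it, and record pass/fail across context lengths and insertion depths. The snippet below is a minimal illustration of that idea only, not the repository's actual code; the needle text, the character-based context sizing, the `gpt-4o-mini` model name, and the crude substring check are placeholder assumptions, and the real project differs in how it sizes contexts and scores answers.

```python
# Minimal, self-contained sketch of a needle-in-a-haystack check
# (illustrative only; not the repository's actual code).
from openai import OpenAI  # assumes the `openai` package and an API key are configured

NEEDLE = "The special magic number mentioned in this document is 42."   # placeholder needle
QUESTION = "What is the special magic number mentioned in the document?"
FILLER = "The quick brown fox jumps over the lazy dog. " * 20000        # stand-in haystack text


def build_haystack(context_chars: int, depth: float) -> str:
    """Return roughly `context_chars` characters of filler with the needle
    inserted at a fractional depth (0.0 = start, 1.0 = end)."""
    haystack = FILLER[:context_chars]
    pos = int(len(haystack) * depth)
    return haystack[:pos] + " " + NEEDLE + " " + haystack[pos:]


def run_case(client: OpenAI, model: str, context_chars: int, depth: float) -> bool:
    """Ask the model to retrieve the needle and apply a crude substring check."""
    prompt = build_haystack(context_chars, depth) + "\n\n" + QUESTION
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": prompt},
        ],
    )
    answer = (resp.choices[0].message.content or "").lower()
    return "42" in answer


if __name__ == "__main__":
    client = OpenAI()
    for chars in (8_000, 32_000, 120_000):      # rough stand-ins for different context lengths
        for depth in (0.0, 0.5, 1.0):           # needle placed at the start, middle, and end
            ok = run_case(client, "gpt-4o-mini", chars, depth)  # placeholder model name
            print(f"context≈{chars} chars, depth={depth}: {'PASS' if ok else 'FAIL'}")
```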
Alternatives and similar repositories for LLMTest_NeedleInAHaystack
Users interested in LLMTest_NeedleInAHaystack are comparing it to the libraries listed below
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,896 · Updated 3 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,634 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,927 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,078 · Updated last week
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,514 · Updated 2 years ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,627 · Updated last year
- A library for advanced large language model reasoning ☆2,300 · Updated 5 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,867 · Updated last year
- AllenAI's post-training codebase ☆3,284 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,526 · Updated 9 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,235 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆970 · Updated last year
- ☆1,035 · Updated 10 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆786 · Updated 7 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,009 · Updated 6 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,388 · Updated 8 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,715 · Updated this week
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. ☆785 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,925 · Updated 3 weeks ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,533 · Updated 3 weeks ago
- Arena-Hard-Auto: An automatic LLM benchmark. ☆956 · Updated 4 months ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,340 · Updated last week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆894 · Updated last month
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,756 · Updated last year
- MTEB: Massive Text Embedding Benchmark ☆2,955 · Updated this week
- ☆1,309 · Updated 8 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,576 · Updated 5 months ago
- A collection of benchmarks and datasets for evaluating LLMs. ☆525 · Updated last year
- ☆552 · Updated 11 months ago