gkamradt / LLMTest_NeedleInAHaystack
Doing simple retrieval from LLMs at various context lengths to measure accuracy
☆1,762 · Updated 7 months ago
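As background on the benchmark this repo implements, here is a minimal sketch of the needle-in-a-haystack idea: bury a "needle" fact at a chosen depth inside filler text of a chosen length, ask the model to retrieve it, and score the answer. `llm_complete` is a placeholder for whatever completion client you use, the needle/question strings are illustrative, and the exact-match grader below is a simplification of the LLM-based grading the original project uses.

```python
# Minimal needle-in-a-haystack sketch (not the repo's actual code).
# Assumptions: `llm_complete(prompt) -> str` is a hypothetical LLM client
# supplied by the caller; needle, question, and filler text are illustrative.
import random

NEEDLE = "The best thing to do in San Francisco is to eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"

def build_haystack(filler_sentences, context_chars, needle_depth):
    """Pad filler text to roughly `context_chars` characters and insert the
    needle at a relative depth (0.0 = start of context, 1.0 = end)."""
    text = ""
    while len(text) < context_chars:
        text += random.choice(filler_sentences) + " "
    insert_at = int(len(text) * needle_depth)
    return text[:insert_at] + NEEDLE + " " + text[insert_at:]

def score(answer):
    """Crude exact-match check; the original project grades answers with an LLM judge."""
    return 1.0 if "dolores park" in answer.lower() else 0.0

def run_case(llm_complete, filler_sentences, context_chars, needle_depth):
    """Run one (context length, depth) cell of the retrieval-accuracy grid."""
    haystack = build_haystack(filler_sentences, context_chars, needle_depth)
    prompt = f"{haystack}\n\n{QUESTION} Answer using only the text above."
    return score(llm_complete(prompt))
```

Sweeping `context_chars` and `needle_depth` over a grid and averaging the scores gives the familiar length-vs-depth retrieval heatmap.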
Alternatives and similar repositories for LLMTest_NeedleInAHaystack:
Users interested in LLMTest_NeedleInAHaystack are comparing it to the libraries listed below.
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,568 · Updated this week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,695 · Updated 2 months ago
- A library for advanced large language model reasoning ☆2,060 · Updated last month
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,009 · Updated 9 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,313 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,450 · Updated 11 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,372 · Updated 11 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,312 · Updated this week
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,344 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆922 · Updated 5 months ago
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… ☆2,131 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,801 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,437 · Updated last month
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,402 · Updated last year
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,087 · Updated last month
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,481 · Updated last year
- ☆1,011 · Updated 3 months ago
- Minimalistic large language model 3D-parallelism training ☆1,701 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,466 · Updated 8 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆883 · Updated this week
- Official repository for ORPO ☆445 · Updated 9 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,952 · Updated last year
- AllenAI's post-training codebase ☆2,827 · Updated this week
- Enforce the output format (JSON Schema, Regex etc) of a language model ☆1,742 · Updated 3 weeks ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆659 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,216 · Updated 2 weeks ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,921 · Updated 2 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,694 · Updated 7 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,496 · Updated 9 months ago
- Efficient Retrieval Augmentation and Generation Framework ☆1,489 · Updated 2 months ago