gkamradt / LLMTest_NeedleInAHaystack
Doing simple retrieval from LLMs at various context lengths to measure accuracy
☆1,889 · Updated 10 months ago
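The technique is simple enough to sketch: hide a short "needle" fact at a controlled depth inside filler text of a target length, ask the model to retrieve it, and score the answer. Below is a minimal illustration of that idea, assuming a hypothetical `complete(prompt) -> str` wrapper around whatever model is being tested; it is a sketch of the approach, not the repository's actual implementation.

```python
# Minimal needle-in-a-haystack sketch (illustrative, not the repo's code).
# `complete` is a hypothetical stand-in for any LLM call that takes a prompt
# string and returns the model's text response.

def build_haystack(filler: str, needle: str, context_len: int, depth: float) -> str:
    """Build a context of roughly `context_len` characters from repeated
    filler text, with `needle` inserted at fractional `depth`
    (0.0 = start of context, 1.0 = end)."""
    haystack = (filler * (context_len // len(filler) + 1))[:context_len]
    pos = int(len(haystack) * depth)
    return haystack[:pos] + " " + needle + " " + haystack[pos:]

def run_test(complete, needle: str, expected: str, question: str,
             filler: str, context_len: int, depth: float) -> bool:
    """Ask the model to retrieve the needle; pass if the expected
    answer appears in its response."""
    prompt = (build_haystack(filler, needle, context_len, depth)
              + "\n\n" + question + " Answer using only the context above.")
    return expected.lower() in complete(prompt).lower()
```

Sweeping `context_len` and `depth` over a grid and recording pass/fail at each point is what produces the retrieval-accuracy heatmaps this benchmark is known for.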
Alternatives and similar repositories for LLMTest_NeedleInAHaystack
Users interested in LLMTest_NeedleInAHaystack are comparing it to the repositories listed below
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,771 · Updated 5 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends. ☆1,629 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,757 · Updated last week
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,106 · Updated last year
- Minimalistic large language model 3D-parallelism training. ☆1,926 · Updated last week
- YaRN: Efficient Context Window Extension of Large Language Models. ☆1,497 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24). ☆2,618 · Updated 4 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI. ☆1,387 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆713 · Updated 3 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters. ☆1,835 · Updated last year
- AllenAI's post-training codebase. ☆3,018 · Updated this week
- A library for advanced large language model reasoning. ☆2,144 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding