allenai / DataDecide
☆35 · Updated 2 months ago
Alternatives and similar repositories for DataDecide
Users that are interested in DataDecide are comparing it to the libraries listed below
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- ☆124 · Updated 8 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" · ☆218 · Updated 2 weeks ago
- Functional Benchmarks and the Reasoning Gap · ☆89 · Updated last year
- ☆75 · Updated last year
- ☆88 · Updated last week
- ☆81 · Updated last week
- Evaluating LLMs with fewer examples · ☆167 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers · ☆73 · Updated 4 months ago
- ☆88 · Updated last year
- Replicating O1 inference-time scaling laws · ☆90 · Updated 11 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆92 · Updated last year
- ☆108 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models · ☆94 · Updated 6 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) · ☆144 · Updated last year
- ☆103 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation · ☆69 · Updated 6 months ago
- Official repository for Inheritune · ☆115 · Updated 9 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆246 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language · ☆151 · Updated 9 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆82 · Updated 8 months ago
- Code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System · ☆150 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… · ☆75 · Updated last year
- Language models scale reliably with over-training and on downstream tasks · ☆100 · Updated last year
- Archon: a modular framework for combining different inference-time techniques and LMs with just a JSON config file · ☆189 · Updated 8 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ☆110 · Updated 6 months ago
- Source code for "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated last year
- Learning to Retrieve by Trying: source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" · ☆51 · Updated last year
- Code for Adaptive Data Optimization · ☆28 · Updated 11 months ago