lm-sys / llm-decontaminator
Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples"
☆298 · Updated last year
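The paper behind this repository describes a two-stage check for rephrased contamination: retrieve the training samples most similar to each benchmark sample by embedding similarity, then ask a strong LLM to judge whether any retrieved candidate is a rephrasing. The sketch below only illustrates that idea; it is not the repository's actual API, and the embedding model name, `find_rephrased_contamination`, and `judge_fn` are illustrative assumptions.

```python
# Illustrative two-stage decontamination sketch (NOT the llm-decontaminator API):
# 1) retrieve the top-k training samples most similar to each test sample by embeddings,
# 2) ask a judge (in practice, a strong LLM) whether any candidate rephrases the test sample.
from typing import Callable, List

from sentence_transformers import SentenceTransformer, util  # assumed dependency


def find_rephrased_contamination(
    test_samples: List[str],
    train_samples: List[str],
    judge_fn: Callable[[str, str], bool],   # returns True if the train sample rephrases the test sample
    top_k: int = 4,
    model_name: str = "all-MiniLM-L6-v2",   # illustrative embedding model
) -> List[dict]:
    model = SentenceTransformer(model_name)
    test_emb = model.encode(test_samples, convert_to_tensor=True)
    train_emb = model.encode(train_samples, convert_to_tensor=True)

    # Similarity matrix of shape [len(test_samples), len(train_samples)]
    sims = util.cos_sim(test_emb, train_emb)

    flagged = []
    for i, test in enumerate(test_samples):
        top = sims[i].topk(k=min(top_k, len(train_samples)))
        for score, j in zip(top.values.tolist(), top.indices.tolist()):
            if judge_fn(test, train_samples[j]):
                flagged.append({"test": test, "train": train_samples[j], "similarity": score})
                break  # one confirmed rephrase is enough to flag this test sample
    return flagged


if __name__ == "__main__":
    # Trivial stand-in judge so the example runs without an API key;
    # a real run would prompt an LLM with "is B a rephrasing of A?".
    naive_judge = lambda test, train: test.lower().split()[:5] == train.lower().split()[:5]
    hits = find_rephrased_contamination(
        ["What is the capital of France?"],
        ["what is the capital of france? answer briefly."],
        judge_fn=naive_judge,
    )
    print(hits)
```

In a real pipeline, `judge_fn` would wrap a call to a strong chat model; the stand-in heuristic here only keeps the example self-contained.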
Alternatives and similar repositories for llm-decontaminator:
Users interested in llm-decontaminator are comparing it to the repositories listed below.
- Scaling Data-Constrained Language Models ☆333 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- A project to improve skills of large language models ☆256 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. ☆233 · Updated 4 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆218 · Updated 4 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Context Lengths (ICLR 2024) ☆206 · Updated 10 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Official repository for ORPO ☆444 · Updated 9 months ago
- Pre-training code for Amber 7B LLM ☆165 · Updated 10 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆453 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆186 · Updated 3 months ago
- ☆501 · Updated 4 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆264 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆180 · Updated this week
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated last year
- Evaluating LLMs with fewer examples ☆147 · Updated 11 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆382 · Updated 8 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆117 · Updated last year
- DSIR large-scale data selection framework for language model training ☆243 · Updated 11 months ago
- batched loras ☆340 · Updated last year
- Reproducible, flexible LLM evaluations ☆176 · Updated 3 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer ☆150 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆234 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆405 · Updated 11 months ago
- A repository for research on medium sized language models. ☆493 · Updated 2 months ago
- ☆268 · Updated last year
- A bagel, with everything. ☆317 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year