lm-sys / llm-decontaminator
Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples"
☆306 · Updated last year
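For reference, the paper's pipeline embeds training and benchmark samples, finds each test item's nearest training neighbor, and sends high-similarity pairs to an LLM judge for a rephrase check. Below is a minimal sketch of the candidate-flagging step; the embedding model name, threshold, and function name are illustrative assumptions, not the repo's exact configuration.

```python
from sentence_transformers import SentenceTransformer  # assumed embedding backend

def flag_candidates(train_texts, test_texts, threshold=0.9,
                    model_name="all-MiniLM-L6-v2"):  # hypothetical default model
    """Return (test, train, similarity) triples worth sending to an LLM judge."""
    model = SentenceTransformer(model_name)
    # normalize_embeddings=True makes the dot product below a cosine similarity
    train_emb = model.encode(train_texts, normalize_embeddings=True)
    test_emb = model.encode(test_texts, normalize_embeddings=True)
    sims = test_emb @ train_emb.T      # cosine similarity matrix: tests x trains
    best = sims.argmax(axis=1)         # nearest training sample per test item
    return [(test_texts[i], train_texts[j], float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= threshold]
```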
Alternatives and similar repositories for llm-decontaminator
Users interested in llm-decontaminator are comparing it to the repositories listed below
- Manage scalable open LLM inference endpoints in Slurm clusters ☆262 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- ☆523 · Updated 7 months ago
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆194 · Updated 11 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆204 · Updated 3 weeks ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆306 · Updated 4 months ago
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆463 · Updated last year
- The repo for the paper Shepherd: A Critic for Language Model Generation ☆219 · Updated last year
- A project to improve the skills of large language models ☆456 · Updated this week
- Official repository for ORPO ☆457 · Updated last year
- Scaling Data-Constrained Language Models ☆337 · Updated 2 weeks ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Reproducible, flexible LLM evaluations ☆215 · Updated 2 months ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆229 · Updated 8 months ago
- DSIR large-scale data selection framework for language model training ☆252 · Updated last year
- ☆310 · Updated last year
- A repository for research on medium sized language models. ☆502 · Updated last month
- A simple unified framework for evaluating LLMs ☆221 · Updated 2 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆203 · Updated 2 months ago
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- A bagel, with everything. ☆322 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆227 · Updated 4 months ago
- PyTorch building blocks for the OLMo ecosystem ☆258 · Updated this week
- Evaluation suite for LLMs ☆352 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆138 · Updated 8 months ago
- Spherical merging (SLERP) of PyTorch/HF-format language models with minimal feature loss (see the sketch after this list) ☆132 · Updated last year
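On the last item: spherical merging interpolates each pair of weight tensors along the unit sphere rather than linearly, which better preserves weight magnitudes. A minimal sketch of standard SLERP applied per tensor follows; the function name and the lerp fallback are illustrative, not that repo's actual implementation.

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float,
          eps: float = 1e-8) -> torch.Tensor:
    """Interpolate a fraction t of the way from w0 to w1 along the unit sphere."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / sin_omega) * v0 \
               + (torch.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape).to(w0.dtype)
```

Applied tensor-by-tensor over two checkpoints' state dicts, this yields a merged model whose per-layer weights lie on the geodesic between the parents.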