swj0419 / detect-pretrain-code-contamination
☆78 · Updated 2 years ago
Alternatives and similar repositories for detect-pretrain-code-contamination
Users interested in detect-pretrain-code-contamination are comparing it to the libraries listed below.
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss (a minimal SLERP sketch appears after this list). ☆144 · Updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆250 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆94 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆159 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach… ☆169 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆81 · Updated 2 years ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Updated last year
- ☆142 · Updated 5 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆263 · Updated last year
- ☆120 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆280 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- A pipeline for LLM knowledge distillation ☆112 · Updated 10 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆150 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆121 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆209 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 11 months ago
- Merge Transformers language models by use of gradient parameters. ☆213 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated 2 years ago
- ☆161 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Updated last year
- Pre-training code for Amber 7B LLM ☆170 · Updated last year
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 8 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
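
The first entry above refers to spherical (SLERP) model merging. As a rough, hedged sketch only, and not that repository's actual code: applying spherical linear interpolation key by key to two same-architecture PyTorch/HF checkpoints might look like the following. The function names `slerp` and `slerp_merge_state_dicts`, the per-tensor flattening, and the near-parallel fallback to linear interpolation are illustrative assumptions.

```python
import torch

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(torch.dot(v0_unit, v1_unit), -1.0, 1.0)
    omega = torch.acos(dot)                 # angle between the two weight directions
    if omega.abs() < 1e-4:                  # nearly parallel: fall back to plain lerp
        merged = (1 - t) * v0_flat + t * v1_flat
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / so) * v0_flat \
               + (torch.sin(t * omega) / so) * v1_flat
    return merged.reshape(v0.shape).to(v0.dtype)

def slerp_merge_state_dicts(sd_a, sd_b, t=0.5):
    """Merge two checkpoints with identical keys/shapes, tensor by tensor."""
    return {k: slerp(t, sd_a[k], sd_b[k]) for k in sd_a}
```

In practice, merge tools expose per-layer interpolation schedules rather than a single global `t`; the sketch keeps one scalar for brevity.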