arcee-ai / EvolKit
EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models (LLMs).
☆245 · Updated last year
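The repository description above names the core idea: iteratively rewriting seed instructions into more complex ones before fine-tuning. As a rough illustration of how such a loop works, here is a minimal Evol-Instruct-style sketch; the `complete` callable, the prompt template, and the strategy list are all illustrative assumptions, not EvolKit's actual API.

```python
# Minimal sketch of automatic instruction-complexity evolution
# (Evol-Instruct style). All names here are illustrative assumptions,
# not EvolKit's actual API.
import random

EVOLVE_TEMPLATE = """Rewrite the instruction below into a more complex
version that is still reasonable to answer. Apply this strategy: {strategy}.

Instruction: {instruction}

Rewritten instruction:"""

STRATEGIES = [
    "add one explicit constraint or requirement",
    "deepen the question with an extra reasoning step",
    "replace a general concept with a more specific one",
    "require a concrete input example to be handled",
]

def evolve(instruction: str, complete, rounds: int = 3) -> str:
    """Iteratively increase instruction complexity.

    `complete` is any callable that sends a prompt to an LLM and
    returns its text response (hypothetical; plug in your own client).
    """
    for _ in range(rounds):
        strategy = random.choice(STRATEGIES)
        prompt = EVOLVE_TEMPLATE.format(strategy=strategy,
                                        instruction=instruction)
        instruction = complete(prompt).strip()
    return instruction
```

Each evolved instruction would then be answered by a strong model to produce (instruction, response) pairs for supervised fine-tuning.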
Alternatives and similar repositories for EvolKit
Users interested in EvolKit are comparing it to the repositories listed below.
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- A compact LLM pretrained in 9 days on high-quality data ☆337 · Updated 8 months ago
- ☆120 · Updated last year
- A pipeline for LLM knowledge distillation ☆111 · Updated 9 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆141 · Updated 2 years ago
- ☆138 · Updated 4 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆253 · Updated last year
- Automated identification of redundant layer blocks for pruning in large language models ☆259 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆260 · Updated last year
- awesome synthetic (text) datasets ☆315 · Updated last month
- ☆559 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- ☆78 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆118 · Updated 2 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆151 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding ☆174 · Updated 11 months ago
- Complex function calling benchmark ☆160 · Updated 11 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆500 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆274 · Updated this week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- Pre-training code for the Amber 7B LLM ☆170 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models