OFA-Sys / InsTag
InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning
☆284, updated 2 years ago
Alternatives and similar repositories for InsTag
Users interested in InsTag are comparing it to the repositories listed below.
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning (☆184, updated 6 months ago)
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… (☆412, updated 6 months ago)
- ☆318, updated last year
- ☆147, updated last year
- [ICLR 2024] Deita: Data-Efficient Instruction Tuning for Alignment (☆580, updated last year)
- ☆182, updated 2 years ago
- ☆147, updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation (☆138, updated 8 months ago)
- Generative Judge for Evaluating Alignment (☆248, updated last year)
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models (☆269, updated last year)
- Collection of training-data management explorations for large language models (☆336, updated last year)
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation (☆90, updated last year)
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs (☆257, updated last year)
- [NeurIPS 2024 Datasets and Benchmarks Track] Benchmarking Complex Instruction-Following with Multiple Constraints Composition (☆99, updated 10 months ago)
- [ICLR 2025 Spotlight] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (☆181, updated 10 months ago)
- A large-scale, fine-grained, diverse preference dataset (and models) (☆359, updated 2 years ago)
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" (☆136, updated last year)
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models (☆211, updated last year)
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale (☆264, updated 6 months ago)
- [ICML 2024] Selecting High-Quality Data for Training Language Models (☆198, updated last month)
- [ACL 2024 Demo] Official GitHub repo for UltraEval, an open-source framework for evaluating foundation models (☆253, updated last year)
- Naive Bayes-based Context Extension (☆326, updated last year)
- a-m-team's explorations in large language modeling (☆195, updated 7 months ago)
- A repository collecting the literature on long-context large language models, including methodologies and evaluation benchmarks (☆271, updated last year)
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (☆118, updated 6 months ago)
- ☆129, updated 2 years ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues (☆135, updated last year)
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step (☆302, updated last year)
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models (☆193, updated last year)
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models (☆45, updated last year)