yiyepiaoling0715 / codellm-data-preprocess-pipeline
Data processing for code LLMs (pretraining, fine-tuning & DPO): a state-of-the-art industry data-processing pipeline
☆44 · Updated last year
Alternatives and similar repositories for codellm-data-preprocess-pipeline
Users who are interested in codellm-data-preprocess-pipeline are comparing it to the libraries listed below.
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆249 · Updated 11 months ago
- ☆147 · Updated last year
- ☆49 · Updated last year
- ☆25 · Updated this week
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆42 · Updated last year
- ☆83 · Updated last year
- Heuristic filtering framework for RefineCode ☆75 · Updated 6 months ago
- ☆168 · Updated 5 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- ☆125 · Updated last year
- Inference code of Lingma SWE-GPT ☆243 · Updated 10 months ago
- Official Repository for SIGIR2024 Demo Paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆82 · Updated last year
- ☆40 · Updated last year
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- ☆51 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 9 months ago
- The code repository of paper "TransferTOD: A Generalizable Chinese Multi-Domain Task-Oriented Dialogue System with Transfer Capabilities" ☆20 · Updated 9 months ago
- ☆73 · Updated 8 months ago
- ☆145 · Updated last year
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆101 · Updated 3 months ago
- ☆96 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- ☆307 · Updated last year
- SuperCLUE-Agent: A benchmark for evaluating the core capabilities of AI agents on native Chinese tasks ☆92 · Updated last year
- ☆98 · Updated last year
- ☆36 · Updated last year
- ☆177 · Updated last year
- WritingBench: A Comprehensive Benchmark for Generative Writing ☆121 · Updated last month
- ☆33 · Updated 4 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆59 · Updated last year