Azure99 / BlossomData
A fluent, scalable, and easy-to-use LLM data processing framework.
☆20 · Updated this week
Alternatives and similar repositories for BlossomData
Users interested in BlossomData are comparing it to the libraries listed below.
- Imitate OpenAI with Local Models ☆88 · Updated 8 months ago
- Text deduplication ☆71 · Updated 11 months ago
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- ☆143 · Updated 10 months ago
- ☆169 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs ☆46 · Updated 4 months ago
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct ☆126 · Updated 4 months ago
- Light local website for displaying performances from different chat models ☆86 · Updated last year
- ☆46 · Updated 10 months ago
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- ☆24 · Updated 6 months ago
- Fast encoding detection and conversion for large numbers of text files, assisting data cleaning for the MNBVC corpus project ☆61 · Updated 6 months ago
- ☆29 · Updated 8 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆186 · Updated 8 months ago
- Just for debug ☆56 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆135 · Updated 5 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆77 · Updated 6 months ago
- Uses langchain for task planning and builds conversational context resources for each subtask; an MCTS task executor lets each subtask draw on in-context resources and self-reflective exploration to find its optimal answer. This approach relies on the model's alignment preferences, with an engineering framework designed per preference to implement a reward-sampling strategy over candidate answers ☆29 · Updated last week
- Code LLM pretraining, fine-tuning, and DPO data processing; state-of-the-art industry pipeline ☆38 · Updated 9 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- ☆36 · Updated 8 months ago
- A Chinese instruction dataset for fine-tuning LLMs ☆26 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated 10 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 5 months ago
- ☆40 · Updated last year
- Qwen models fine-tuning ☆97 · Updated 2 months ago