Azure99 / BlossomData
A fluent, scalable, and easy-to-use LLM data processing framework.
☆28 · Updated last week
Alternatives and similar repositories for BlossomData
Users interested in BlossomData are comparing it to the libraries listed below:
- Mixture-of-Experts (MoE) Language Model ☆195 · Updated last year
- Imitate OpenAI with Local Models ☆90 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- ☆42 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆137 · Updated last year
- ☆96 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆140 · Updated last year
- A project for fast encoding detection and conversion of large numbers of text files, supporting data cleaning for the MNBVC corpus project ☆69 · Updated 3 months ago
- ☆29 · Updated last year
- Lightweight local website for displaying the performance of different chat models. ☆87 · Updated 2 years ago
- Just for debug ☆56 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Updated 2 years ago
- The official code for "Aurora: Activating Chinese Chat Capability for Mixtral-8x7B Sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- ☆234 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆48 · Updated last year
- FuseAI Project ☆87 · Updated last year
- Text deduplication ☆77 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆46 · Updated last year
- ☆16 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- GLM Series Edge Models ☆158 · Updated 7 months ago
- open-o1: Using GPT-4o with CoT to Create o1-like Reasoning Chains ☆116 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆58 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆256 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year