Azure99 / BlossomData
A fluent, scalable, and easy-to-use LLM data processing framework.
☆22 · Updated last week
Alternatives and similar repositories for BlossomData
Users interested in BlossomData are comparing it to the libraries listed below.
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 10 months ago
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆129 · Updated 6 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 8 months ago
- ☆30 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆139 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- ☆230 · Updated last year
- Imitate OpenAI with Local Models ☆87 · Updated 10 months ago
- Uses LangChain for task planning and builds conversational scene resources for each subtask; an MCTS task executor lets each subtask draw on in-context resources and self-reflective exploration to reach its own optimal answer. The approach relies on the model's alignment preferences, with an engineering framework designed per preference to sample rewards over the candidate answers ☆29 · Updated this week
- Fast encoding detection and conversion of large numbers of text files, to assist data cleaning for the MNBVC corpus project ☆61 · Updated 8 months ago
- ☆48 · Updated last year
- ☆94 · Updated 7 months ago
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆46 · Updated 6 months ago
- ☆40 · Updated last year
- ☆149 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 7 months ago
- GLM Series Edge Models ☆144 · Updated last month
- Just for debug ☆56 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- open-o1: Using GPT-4o with CoT to Create o1-like Reasoning Chains ☆116 · Updated 6 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open-source framework for evaluating foundation models. ☆244 · Updated 8 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- ☆172 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆147 · Updated 11 months ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆263 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- 🔥 Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆52 · Updated this week