THUDM / icetk
A unified tokenization tool for Images, Chinese and English.
☆152 · Updated 2 years ago
Alternatives and similar repositories for icetk:
Users interested in icetk are comparing it to the libraries listed below.
- ☆172 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed Pipeline Parallelism ☆219 · Updated last year
- NTK-scaled version of ALiBi position encoding in Transformer. ☆67 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Chinese instruction-tuning datasets ☆130 · Updated last year
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- ☆128 · Updated last year
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B parameters) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- Deep learning ☆149 · Updated this week
- ☆459 · Updated 10 months ago
- A simple implementation using LoRA from the peft library to fine-tune ChatGLM-6B ☆84 · Updated 2 years ago
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- Lightweight local website for displaying the performance of different chat models. ☆86 · Updated last year
- Text deduplication ☆71 · Updated 11 months ago
- ☆169 · Updated last year
- ☆59 · Updated last year
- A Chinese large language model base generated through incremental pre-training on Chinese datasets ☆236 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- Implementation of Chinese ChatGPT ☆287 · Updated last year
- The complete training code for an open-source high-performance Llama model, including the full pipeline from pre-training to RLHF. ☆65 · Updated 2 years ago
- ☆84 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆47 · Updated last year
- This project performs fast encoding detection and conversion on large numbers of text files to assist data cleaning for the MNBVC corpus project ☆61 · Updated 6 months ago
- Chinese large language model evaluation, phase 1 ☆108 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year
- OPD: Chinese Open-Domain Pre-trained Dialogue Model ☆75 · Updated last year
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆55 · Updated last year
- ☆308 · Updated 2 years ago