secsilm / chinese-tokens-in-tiktoken
Chinese tokens in tiktoken tokenizers.
☆32 · Updated last year
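Judging from the one-line description, the repo enumerates which entries in a tiktoken vocabulary decode to Chinese text. Below is a minimal sketch of such a scan, assuming only the public tiktoken Python API; the `find_cjk_tokens` helper, the choice of the `cl100k_base` encoding, and the CJK range check are illustrative, not the repo's actual code:

```python
# Minimal sketch: list tokens in a tiktoken vocabulary whose decoded
# text contains CJK (Chinese) characters. Assumes the public tiktoken
# Python API; the repo's own script may work differently.
import tiktoken


def find_cjk_tokens(encoding_name: str = "cl100k_base") -> list[tuple[int, str]]:
    enc = tiktoken.get_encoding(encoding_name)
    cjk_tokens = []
    for token_id in range(enc.n_vocab):
        try:
            raw = enc.decode_single_token_bytes(token_id)
        except KeyError:
            # Some ids in [0, n_vocab) are unassigned or special.
            continue
        # Many BPE tokens are partial UTF-8 sequences; drop invalid bytes.
        text = raw.decode("utf-8", errors="ignore")
        # The CJK Unified Ideographs block covers most common Chinese characters.
        if any("\u4e00" <= ch <= "\u9fff" for ch in text):
            cjk_tokens.append((token_id, text))
    return cjk_tokens


if __name__ == "__main__":
    tokens = find_cjk_tokens()
    print(f"Found {len(tokens)} tokens containing CJK characters")
    for tid, text in tokens[:10]:
        print(tid, repr(text))
```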
Alternatives and similar repositories for chinese-tokens-in-tiktoken
Users interested in chinese-tokens-in-tiktoken are comparing it to the libraries listed below.
- Evaluation for AI apps and agents ☆43 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- A lightweight script for converting HTML pages to Markdown, with support for code blocks ☆80 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆68 · Updated 2 years ago
- Token-level visualization tools for large language models ☆90 · Updated 10 months ago
- A power tool for prompt engineers: compare the performance of multiple prompts across multiple LLM models at once ☆96 · Updated 2 years ago
- 🔥 Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆54 · Updated last week
- ☆95 · Updated 11 months ago
- A fluent, scalable, and easy-to-use LLM data processing framework. ☆26 · Updated 3 months ago
- The simplest reproduction of R1-style results on small models, illustrating the most important essence of O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the think-process content is the core of AGI/ASI. ☆44 · Updated 9 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆201 · Updated 2 weeks ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- ☆299 · Updated 5 months ago
- A synthetic-data application for LLM fine-tuning, plus "From Data to AGI: Unlocking the Secrets of Large Language Model" ☆16 · Updated last year
- Deep Reasoning Translation (DRT) Project ☆236 · Updated 2 months ago
- ☆83 · Updated last year
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆39 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated last year
- A Baidu QA dataset with 1 million entries ☆47 · Updated last year
- The first character (role-play) large model fully available for commercial use. ☆40 · Updated last year
- ☆109 · Updated last month
- ☆53 · Updated 8 months ago
- Chinese version of CodeLLaMA - a code generation assistant with 20k+ cumulative downloads on Hugging Face ☆45 · Updated 2 years ago
- GLM Series Edge Models ☆152 · Updated 4 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆191 · Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- ☆36 · Updated last year