secsilm / chinese-tokens-in-tiktoken
Chinese tokens in tiktoken tokenizers.
☆32 · Updated last year
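A minimal sketch of the idea behind the repo — scanning a tiktoken vocabulary for tokens that contain Chinese (CJK) characters — using only the public tiktoken API. This is illustrative, not the repository's actual code:

```python
import tiktoken

def contains_chinese(text: str) -> bool:
    # CJK Unified Ideographs cover the vast majority of Chinese characters.
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-3.5/GPT-4

chinese_tokens = []
for token_id in range(enc.n_vocab):
    try:
        token_bytes = enc.decode_single_token_bytes(token_id)
    except KeyError:
        continue  # skip special or unassigned token ids
    # Many BPE tokens are partial UTF-8 sequences; ignore undecodable bytes.
    token_text = token_bytes.decode("utf-8", errors="ignore")
    if contains_chinese(token_text):
        chinese_tokens.append((token_id, token_text))

print(f"{len(chinese_tokens)} of {enc.n_vocab} tokens contain Chinese characters")
```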
Alternatives and similar repositories for chinese-tokens-in-tiktoken
Users interested in chinese-tokens-in-tiktoken are comparing it to the libraries listed below.
- A fluent, scalable, and easy-to-use LLM data processing framework. ☆26 · Updated 2 months ago
- ☆95 · Updated 10 months ago
- The simplest reproduction of R1-style results on small models, explaining the most important essence of o1-like models and DeepSeek R1. "Think is all you need": experiments suggest that, for strong reasoning ability, the think-process content is the core of AGI/ASI. ☆44 · Updated 8 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆68 · Updated 2 years ago
- Evaluation for AI apps and agents ☆43 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- The first fully commercially usable role-play large language model. ☆40 · Updated last year
- 🔥 Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆54 · Updated last week
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- ☆296 · Updated 4 months ago
- GLM Series Edge Models ☆149 · Updated 4 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆191 · Updated last week
- Token-level visualization tools for large language models ☆89 · Updated 9 months ago
- Deep Reasoning Translation (DRT) Project ☆233 · Updated last month
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆39 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CUDA fusion kernels+compiler] ☆41 · Updated last year
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- ☆40 · Updated last year
- Fine-Tune LLM Synthetic-Data application and "From Data to AGI: Unlocking the Secrets of Large Language Model" ☆16 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- MNBVC text quality classification using fastText ☆16 · Updated 2 years ago
- ☆36 · Updated last year
- ☆109 · Updated last week
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- ☆35 · Updated 2 months ago
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- Abacus Code LLM (珠算代码大模型) ☆56 · Updated last year
- open-o1: Using GPT-4o with CoT to Create o1-like Reasoning Chains ☆116 · Updated 9 months ago
- A lightweight script for converting HTML pages to Markdown format, with support for code blocks ☆80 · Updated last year