secsilm / chinese-tokens-in-tiktoken
Chinese tokens in tiktoken tokenizers.
☆32 · Updated last year
Alternatives and similar repositories for chinese-tokens-in-tiktoken
Users interested in chinese-tokens-in-tiktoken are comparing it to the libraries listed below.
- A power tool for prompt engineers: compare multiple prompts across multiple LLM models at once ☆96 · Updated 2 years ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆176 · Updated last week
- A lightweight script for converting HTML pages to Markdown, with support for code blocks ☆80 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- 🔥Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆54 · Updated this week
- Token-level visualization tools for large language models ☆88 · Updated 8 months ago
- ☆95 · Updated 9 months ago
- ☆110 · Updated 9 months ago
- Evaluation for AI apps and agents ☆43 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- The simplest reproduction of R1-style results on small models, illustrating the essential nature of O1-like models and DeepSeek R1. "Think is all you need": experiments support that the thinking-process content behind strong reasoning is the core of AGI/ASI. ☆44 · Updated 7 months ago
- ☆292 · Updated 3 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- The complete training code of the open-source high-performance Llama model, covering the full process from pre-training to RLHF. ☆68 · Updated 2 years ago
- Chinese edition of CodeLLaMA: a code-generation assistant with 20k+ cumulative downloads on Hugging Face ☆45 · Updated 2 years ago
- GLM Series Edge Models ☆149 · Updated 3 months ago
- Deep Reasoning Translation (DRT) Project ☆231 · Updated 3 weeks ago
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- A toolkit for running on-device large language models (LLMs) in apps ☆80 · Updated last year
- A fluent, scalable, and easy-to-use LLM data processing framework ☆26 · Updated 2 months ago
- Enable tool-use ability for any LLM model (DeepSeek V3/R1, etc.) ☆53 · Updated 4 months ago
- ☆24 · Updated 5 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 10 months ago
- ☆34 · Updated last month
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆33 · Updated last year
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙♂️ ☆39 · Updated last year
- Abacus (珠算) code LLM ☆56 · Updated last year
- LLM fine-tuning with synthetic data, plus "From Data to AGI: Unlocking the Secrets of Large Language Models" ☆16 · Updated last year
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra-Long Sequence Generation ☆113 · Updated 4 months ago