secsilm / chinese-tokens-in-tiktoken
Chinese tokens in tiktoken tokenizers.
☆32 · Updated last year
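For context, here is a minimal sketch of the idea behind the repository: scanning a tiktoken vocabulary for tokens whose bytes decode to CJK ideographs. This is an illustrative sketch under stated assumptions, not the repository's actual code; it assumes the `tiktoken` package and its built-in `cl100k_base` encoding, and uses the CJK Unified Ideographs range as a simplified definition of "Chinese".

```python
import tiktoken  # assumes the tiktoken package is installed

def has_cjk(text: str) -> bool:
    # CJK Unified Ideographs block: a simplified test for "Chinese" characters.
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

# cl100k_base is one of tiktoken's built-in encodings (used by GPT-4-era models).
enc = tiktoken.get_encoding("cl100k_base")

chinese_tokens = []
for token_id in range(enc.n_vocab):
    try:
        raw = enc.decode_single_token_bytes(token_id)
    except KeyError:
        continue  # ids with no assigned token (e.g. reserved gaps)
    # Tokens holding only partial UTF-8 sequences are skipped by errors="ignore".
    piece = raw.decode("utf-8", errors="ignore")
    if has_cjk(piece):
        chinese_tokens.append((token_id, piece))

print(f"{len(chinese_tokens)} of {enc.n_vocab} tokens contain CJK ideographs")
```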
Alternatives and similar repositories for chinese-tokens-in-tiktoken
Users interested in chinese-tokens-in-tiktoken are comparing it to the libraries listed below.
- A power tool for prompt engineers: compare the performance of multiple prompts across multiple LLM models side by side ☆96 · Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆68 · Updated 2 years ago
- A lightweight script for converting HTML pages to markdown format, with support for code blocks ☆82 · Updated last year
- Evaluation for AI apps and agents ☆44 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- 🔥 Your Daily Dose of AI Research from Hugging Face 🔥 Stay updated with the latest AI breakthroughs! This bot automatically collects and… ☆56 · Updated this week
- SUS-Chat: Instruction tuning done right ☆49 · Updated 2 years ago
- Deep Reasoning Translation (DRT) Project ☆240 · Updated 4 months ago
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙‍♂️ ☆40 · Updated last year
- Gaokao Benchmark for AI ☆109 · Updated 3 years ago
- TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles. ☆162 · Updated last year
- The first fully commercially usable role-play large language model. ☆39 · Updated last year
- The simplest reproduction of R1-style results on small models, illustrating the essential nature of o1-like models and DeepSeek R1. "Think is all you need": experiments support that, for strong reasoning ability, the think-process content is the core of AGI/ASI. ☆45 · Updated 11 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- Chinese version of CodeLLaMA: a code-generation assistant with 20,000+ cumulative downloads on Hugging Face ☆45 · Updated 2 years ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆265 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quant, Unsloth. Also possible to train LoRA over… ☆225 · Updated this week
- A Chinese distillation dataset built from the full-strength DeepSeek-R1 ☆63 · Updated 11 months ago
- [AAAI 2026] The Avengers: A Simple Recipe for Uniting Smaller Language Models to Challenge Proprietary Giants ☆43 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆194 · Updated last year
- An application for fine-tuning LLMs with synthetic data, plus "From Data to AGI: Unlocking the Secrets of Large Language Model" ☆16 · Updated last year
- A toolkit for running on-device large language models (LLMs) in apps ☆79 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆40 · Updated 2 years ago
- GLM Series Edge Models ☆156 · Updated 7 months ago
- As the name suggests: a hand-rolled RAG ☆131 · Updated last year
- Token-level visualization tools for large language models ☆91 · Updated last year
- CodeGPT: A Code-Related Dialogue Dataset Generated by GPT and for GPT ☆114 · Updated 2 years ago
- open-o1: Using GPT-4o with CoT to Create o1-like Reasoning Chains ☆116 · Updated last year